From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.9 commit in: /
Date: Thu, 5 Nov 2020 17:54:27 +0000 (UTC)
Message-ID: <1604598844.fe1318ebbf61e581f94a611f60560e0e0d63eba8.mpagano@gentoo>
commit: fe1318ebbf61e581f94a611f60560e0e0d63eba8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 5 17:54:04 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 5 17:54:04 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fe1318eb
Linux patches 5.9.5 and 5.9.6
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 12 +
1004_linux-5.9.5.patch | 18521 +++++++++++++++++++++++++++++++++++++++++++++++
1005_linux-5.9.6.patch | 29 +
3 files changed, 18562 insertions(+)
diff --git a/0000_README b/0000_README
index 85e9d90..95528ee 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,18 @@ Patch: 1003_linux-5.9.4.patch
From: http://www.kernel.org
Desc: Linux 5.9.4
+Patch: 1004_linux-5.9.5.patch
+From: http://www.kernel.org
+Desc: Linux 5.9.5
+
+Patch: 1005_linux-5.9.6.patch
+From: http://www.kernel.org
+Desc: Linux 5.9.6
+
+Patch: 1006_linux-5.9.7.patch
+From: http://www.kernel.org
+Desc: Linux 5.9.7
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-5.9.5.patch b/1004_linux-5.9.5.patch
new file mode 100644
index 0000000..e545ae3
--- /dev/null
+++ b/1004_linux-5.9.5.patch
@@ -0,0 +1,18521 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index ffe864390c5ac..dca917ac21d93 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5828,6 +5828,14 @@
+ improve timer resolution at the expense of processing
+ more timer interrupts.
+
++ xen.event_eoi_delay= [XEN]
++ How long to delay EOI handling in case of event
++ storms (jiffies). Default is 10.
++
++ xen.event_loop_timeout= [XEN]
++ After which time (jiffies) the event handling loop
++ should start to delay EOI handling. Default is 2.
++
+ nopv= [X86,XEN,KVM,HYPER_V,VMWARE]
+ Disables the PV optimizations forcing the guest to run
+ as generic guest with no PV drivers. Currently support
+diff --git a/Documentation/devicetree/bindings/soc/ti/k3-ringacc.yaml b/Documentation/devicetree/bindings/soc/ti/k3-ringacc.yaml
+index ae33fc957141f..c3c595e235a86 100644
+--- a/Documentation/devicetree/bindings/soc/ti/k3-ringacc.yaml
++++ b/Documentation/devicetree/bindings/soc/ti/k3-ringacc.yaml
+@@ -62,11 +62,6 @@ properties:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description: TI-SCI device id of the ring accelerator
+
+- ti,dma-ring-reset-quirk:
+- $ref: /schemas/types.yaml#definitions/flag
+- description: |
+- enable ringacc/udma ring state interoperability issue software w/a
+-
+ required:
+ - compatible
+ - reg
+@@ -94,7 +89,6 @@ examples:
+ reg-names = "rt", "fifos", "proxy_gcfg", "proxy_target";
+ ti,num-rings = <818>;
+ ti,sci-rm-range-gp-rings = <0x2>; /* GP ring range */
+- ti,dma-ring-reset-quirk;
+ ti,sci = <&dmsc>;
+ ti,sci-dev-id = <187>;
+ msi-parent = <&inta_main_udmass>;
+diff --git a/Documentation/userspace-api/media/v4l/colorspaces-defs.rst b/Documentation/userspace-api/media/v4l/colorspaces-defs.rst
+index 01404e1f609a7..4089f426258d6 100644
+--- a/Documentation/userspace-api/media/v4l/colorspaces-defs.rst
++++ b/Documentation/userspace-api/media/v4l/colorspaces-defs.rst
+@@ -36,8 +36,7 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+ :c:type:`v4l2_hsv_encoding` specifies which encoding is used.
+
+ .. note:: The default R'G'B' quantization is full range for all
+- colorspaces except for BT.2020 which uses limited range R'G'B'
+- quantization.
++ colorspaces. HSV formats are always full range.
+
+ .. tabularcolumns:: |p{6.7cm}|p{10.8cm}|
+
+@@ -169,8 +168,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+ - Details
+ * - ``V4L2_QUANTIZATION_DEFAULT``
+ - Use the default quantization encoding as defined by the
+- colorspace. This is always full range for R'G'B' (except for the
+- BT.2020 colorspace) and HSV. It is usually limited range for Y'CbCr.
++ colorspace. This is always full range for R'G'B' and HSV.
++ It is usually limited range for Y'CbCr.
+ * - ``V4L2_QUANTIZATION_FULL_RANGE``
+ - Use the full range quantization encoding. I.e. the range [0…1] is
+ mapped to [0…255] (with possible clipping to [1…254] to avoid the
+@@ -180,4 +179,4 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+ * - ``V4L2_QUANTIZATION_LIM_RANGE``
+ - Use the limited range quantization encoding. I.e. the range [0…1]
+ is mapped to [16…235]. Cb and Cr are mapped from [-0.5…0.5] to
+- [16…240].
++ [16…240]. Limited Range cannot be used with HSV.
+diff --git a/Documentation/userspace-api/media/v4l/colorspaces-details.rst b/Documentation/userspace-api/media/v4l/colorspaces-details.rst
+index 300c5d2e7d0f0..cf1b825ec34a7 100644
+--- a/Documentation/userspace-api/media/v4l/colorspaces-details.rst
++++ b/Documentation/userspace-api/media/v4l/colorspaces-details.rst
+@@ -377,9 +377,8 @@ Colorspace BT.2020 (V4L2_COLORSPACE_BT2020)
+ The :ref:`itu2020` standard defines the colorspace used by Ultra-high
+ definition television (UHDTV). The default transfer function is
+ ``V4L2_XFER_FUNC_709``. The default Y'CbCr encoding is
+-``V4L2_YCBCR_ENC_BT2020``. The default R'G'B' quantization is limited
+-range (!), and so is the default Y'CbCr quantization. The chromaticities
+-of the primary colors and the white reference are:
++``V4L2_YCBCR_ENC_BT2020``. The default Y'CbCr quantization is limited range.
++The chromaticities of the primary colors and the white reference are:
+
+
+
+diff --git a/Makefile b/Makefile
+index 0c8f0ba8c34f4..27d4fe12da24c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 9
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/Kconfig b/arch/Kconfig
+index af14a567b493f..94821e3f94d16 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -414,6 +414,13 @@ config MMU_GATHER_NO_GATHER
+ bool
+ depends on MMU_GATHER_TABLE_FREE
+
++config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
++ bool
++ help
++ Temporary select until all architectures can be converted to have
++ irqs disabled over activate_mm. Architectures that do IPI based TLB
++ shootdowns should enable this.
++
+ config ARCH_HAVE_NMI_SAFE_CMPXCHG
+ bool
+
+diff --git a/arch/arc/boot/dts/axc001.dtsi b/arch/arc/boot/dts/axc001.dtsi
+index 79ec27c043c1d..2a151607b0805 100644
+--- a/arch/arc/boot/dts/axc001.dtsi
++++ b/arch/arc/boot/dts/axc001.dtsi
+@@ -91,7 +91,7 @@
+ * avoid duplicating the MB dtsi file given that IRQ from
+ * this intc to cpu intc are different for axs101 and axs103
+ */
+- mb_intc: dw-apb-ictl@e0012000 {
++ mb_intc: interrupt-controller@e0012000 {
+ #interrupt-cells = <1>;
+ compatible = "snps,dw-apb-ictl";
+ reg = < 0x0 0xe0012000 0x0 0x200 >;
+diff --git a/arch/arc/boot/dts/axc003.dtsi b/arch/arc/boot/dts/axc003.dtsi
+index ac8e1b463a709..cd1edcf4f95ef 100644
+--- a/arch/arc/boot/dts/axc003.dtsi
++++ b/arch/arc/boot/dts/axc003.dtsi
+@@ -129,7 +129,7 @@
+ * avoid duplicating the MB dtsi file given that IRQ from
+ * this intc to cpu intc are different for axs101 and axs103
+ */
+- mb_intc: dw-apb-ictl@e0012000 {
++ mb_intc: interrupt-controller@e0012000 {
+ #interrupt-cells = <1>;
+ compatible = "snps,dw-apb-ictl";
+ reg = < 0x0 0xe0012000 0x0 0x200 >;
+diff --git a/arch/arc/boot/dts/axc003_idu.dtsi b/arch/arc/boot/dts/axc003_idu.dtsi
+index 9da21e7fd246f..70779386ca796 100644
+--- a/arch/arc/boot/dts/axc003_idu.dtsi
++++ b/arch/arc/boot/dts/axc003_idu.dtsi
+@@ -135,7 +135,7 @@
+ * avoid duplicating the MB dtsi file given that IRQ from
+ * this intc to cpu intc are different for axs101 and axs103
+ */
+- mb_intc: dw-apb-ictl@e0012000 {
++ mb_intc: interrupt-controller@e0012000 {
+ #interrupt-cells = <1>;
+ compatible = "snps,dw-apb-ictl";
+ reg = < 0x0 0xe0012000 0x0 0x200 >;
+diff --git a/arch/arc/boot/dts/vdk_axc003.dtsi b/arch/arc/boot/dts/vdk_axc003.dtsi
+index f8be7ba8dad49..c21d0eb07bf67 100644
+--- a/arch/arc/boot/dts/vdk_axc003.dtsi
++++ b/arch/arc/boot/dts/vdk_axc003.dtsi
+@@ -46,7 +46,7 @@
+
+ };
+
+- mb_intc: dw-apb-ictl@e0012000 {
++ mb_intc: interrupt-controller@e0012000 {
+ #interrupt-cells = <1>;
+ compatible = "snps,dw-apb-ictl";
+ reg = < 0xe0012000 0x200 >;
+diff --git a/arch/arc/boot/dts/vdk_axc003_idu.dtsi b/arch/arc/boot/dts/vdk_axc003_idu.dtsi
+index 0afa3e53a4e39..4d348853ac7c5 100644
+--- a/arch/arc/boot/dts/vdk_axc003_idu.dtsi
++++ b/arch/arc/boot/dts/vdk_axc003_idu.dtsi
+@@ -54,7 +54,7 @@
+
+ };
+
+- mb_intc: dw-apb-ictl@e0012000 {
++ mb_intc: interrupt-controller@e0012000 {
+ #interrupt-cells = <1>;
+ compatible = "snps,dw-apb-ictl";
+ reg = < 0xe0012000 0x200 >;
+diff --git a/arch/arc/kernel/perf_event.c b/arch/arc/kernel/perf_event.c
+index 79849f37e782c..145722f80c9b7 100644
+--- a/arch/arc/kernel/perf_event.c
++++ b/arch/arc/kernel/perf_event.c
+@@ -562,7 +562,7 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
+ {
+ struct arc_reg_pct_build pct_bcr;
+ struct arc_reg_cc_build cc_bcr;
+- int i, has_interrupts, irq;
++ int i, has_interrupts, irq = -1;
+ int counter_size; /* in bits */
+
+ union cc_name {
+@@ -637,19 +637,28 @@ static int arc_pmu_device_probe(struct platform_device *pdev)
+ .attr_groups = arc_pmu->attr_groups,
+ };
+
+- if (has_interrupts && (irq = platform_get_irq(pdev, 0) >= 0)) {
++ if (has_interrupts) {
++ irq = platform_get_irq(pdev, 0);
++ if (irq >= 0) {
++ int ret;
+
+- arc_pmu->irq = irq;
++ arc_pmu->irq = irq;
+
+- /* intc map function ensures irq_set_percpu_devid() called */
+- request_percpu_irq(irq, arc_pmu_intr, "ARC perf counters",
+- this_cpu_ptr(&arc_pmu_cpu));
++ /* intc map function ensures irq_set_percpu_devid() called */
++ ret = request_percpu_irq(irq, arc_pmu_intr, "ARC perf counters",
++ this_cpu_ptr(&arc_pmu_cpu));
++
++ if (!ret)
++ on_each_cpu(arc_cpu_pmu_irq_init, &irq, 1);
++ else
++ irq = -1;
++ }
+
+- on_each_cpu(arc_cpu_pmu_irq_init, &irq, 1);
+- } else {
+- arc_pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
+ }
+
++ if (irq == -1)
++ arc_pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT;
++
+ /*
+ * perf parser doesn't really like '-' symbol in events name, so let's
+ * use '_' in arc pct name as it goes to kernel PMU event prefix.
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index e00d94b166587..23e2c0dc85c1e 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -506,8 +506,10 @@ config ARCH_S3C24XX
+ select HAVE_S3C2410_WATCHDOG if WATCHDOG
+ select HAVE_S3C_RTC if RTC_CLASS
+ select NEED_MACH_IO_H
++ select S3C2410_WATCHDOG
+ select SAMSUNG_ATAGS
+ select USE_OF
++ select WATCHDOG
+ help
+ Samsung S3C2410, S3C2412, S3C2413, S3C2416, S3C2440, S3C2442, S3C2443
+ and S3C2450 SoCs based systems, such as the Simtec Electronics BAST
+diff --git a/arch/arm/boot/dts/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed-g5.dtsi
+index 9c91afb2b4042..a93009aa2f040 100644
+--- a/arch/arm/boot/dts/aspeed-g5.dtsi
++++ b/arch/arm/boot/dts/aspeed-g5.dtsi
+@@ -425,7 +425,6 @@
+ interrupts = <8>;
+ clocks = <&syscon ASPEED_CLK_APB>;
+ no-loopback-test;
+- aspeed,sirq-polarity-sense = <&syscon 0x70 25>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
+index 2b760f90f38c8..5375c6699843f 100644
+--- a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
++++ b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
+@@ -192,6 +192,7 @@
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
++ pause;
+ };
+ };
+ };
+diff --git a/arch/arm/boot/dts/omap4.dtsi b/arch/arm/boot/dts/omap4.dtsi
+index 0282b9de3384f..52e8298275050 100644
+--- a/arch/arm/boot/dts/omap4.dtsi
++++ b/arch/arm/boot/dts/omap4.dtsi
+@@ -410,7 +410,7 @@
+ status = "disabled";
+ };
+
+- target-module@56000000 {
++ sgx_module: target-module@56000000 {
+ compatible = "ti,sysc-omap4", "ti,sysc";
+ reg = <0x5600fe00 0x4>,
+ <0x5600fe10 0x4>;
+diff --git a/arch/arm/boot/dts/omap443x.dtsi b/arch/arm/boot/dts/omap443x.dtsi
+index 8ed510ab00c52..cb309743de5da 100644
+--- a/arch/arm/boot/dts/omap443x.dtsi
++++ b/arch/arm/boot/dts/omap443x.dtsi
+@@ -74,3 +74,13 @@
+ };
+
+ /include/ "omap443x-clocks.dtsi"
++
++/*
++ * Use dpll_per for sgx at 153.6MHz like droid4 stock v3.0.8 Android kernel
++ */
++&sgx_module {
++ assigned-clocks = <&l3_gfx_clkctrl OMAP4_GPU_CLKCTRL 24>,
++ <&dpll_per_m7x2_ck>;
++ assigned-clock-rates = <0>, <153600000>;
++ assigned-clock-parents = <&dpll_per_m7x2_ck>;
++};
+diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi
+index 822207f63ee0a..bd4450dbdcb61 100644
+--- a/arch/arm/boot/dts/s5pv210-aries.dtsi
++++ b/arch/arm/boot/dts/s5pv210-aries.dtsi
+@@ -47,6 +47,18 @@
+ };
+ };
+
++ pmic_ap_clk: clock-0 {
++ /* Workaround for missing clock on PMIC */
++ compatible = "fixed-clock";
++ #clock-cells = <0>;
++ clock-frequency = <32768>;
++ };
++
++ bt_codec: bt_sco {
++ compatible = "linux,bt-sco";
++ #sound-dai-cells = <0>;
++ };
++
+ vibrator_pwr: regulator-fixed-0 {
+ compatible = "regulator-fixed";
+ regulator-name = "vibrator-en";
+@@ -54,7 +66,7 @@
+ gpio = <&gpj1 1 GPIO_ACTIVE_HIGH>;
+
+ pinctrl-names = "default";
+- pinctr-0 = <&vibrator_ena>;
++ pinctrl-0 = <&vibrator_ena>;
+ };
+
+ touchkey_vdd: regulator-fixed-1 {
+@@ -533,7 +545,7 @@
+ value = <0x5200>;
+ };
+
+- spi_lcd: spi-gpio-0 {
++ spi_lcd: spi-2 {
+ compatible = "spi-gpio";
+ #address-cells = <1>;
+ #size-cells = <0>;
+@@ -624,6 +636,11 @@
+ };
+ };
+
++&i2s0 {
++ dmas = <&pdma0 9>, <&pdma0 10>, <&pdma0 11>;
++ status = "okay";
++};
++
+ &mfc {
+ memory-region = <&mfc_left>, <&mfc_right>;
+ };
+@@ -815,6 +832,11 @@
+ samsung,pwm-outputs = <1>;
+ };
+
++&rtc {
++ clocks = <&clocks CLK_RTC>, <&pmic_ap_clk>;
++ clock-names = "rtc", "rtc_src";
++};
++
+ &sdhci1 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+diff --git a/arch/arm/boot/dts/s5pv210-fascinate4g.dts b/arch/arm/boot/dts/s5pv210-fascinate4g.dts
+index 65eed01cfced1..ca064359dd308 100644
+--- a/arch/arm/boot/dts/s5pv210-fascinate4g.dts
++++ b/arch/arm/boot/dts/s5pv210-fascinate4g.dts
+@@ -35,6 +35,80 @@
+ linux,code = <KEY_VOLUMEUP>;
+ };
+ };
++
++ headset_micbias_reg: regulator-fixed-3 {
++ compatible = "regulator-fixed";
++ regulator-name = "Headset_Micbias";
++ gpio = <&gpj2 5 GPIO_ACTIVE_HIGH>;
++ enable-active-high;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&headset_micbias_ena>;
++ };
++
++ main_micbias_reg: regulator-fixed-4 {
++ compatible = "regulator-fixed";
++ regulator-name = "Main_Micbias";
++ gpio = <&gpj4 2 GPIO_ACTIVE_HIGH>;
++ enable-active-high;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&main_micbias_ena>;
++ };
++
++ sound {
++ compatible = "samsung,fascinate4g-wm8994";
++
++ model = "Fascinate4G";
++
++ extcon = <&fsa9480>;
++
++ main-micbias-supply = <&main_micbias_reg>;
++ headset-micbias-supply = <&headset_micbias_reg>;
++
++ earpath-sel-gpios = <&gpj2 6 GPIO_ACTIVE_HIGH>;
++
++ io-channels = <&adc 3>;
++ io-channel-names = "headset-detect";
++ headset-detect-gpios = <&gph0 6 GPIO_ACTIVE_HIGH>;
++ headset-key-gpios = <&gph3 6 GPIO_ACTIVE_HIGH>;
++
++ samsung,audio-routing =
++ "HP", "HPOUT1L",
++ "HP", "HPOUT1R",
++
++ "SPK", "SPKOUTLN",
++ "SPK", "SPKOUTLP",
++
++ "RCV", "HPOUT2N",
++ "RCV", "HPOUT2P",
++
++ "LINE", "LINEOUT2N",
++ "LINE", "LINEOUT2P",
++
++ "IN1LP", "Main Mic",
++ "IN1LN", "Main Mic",
++
++ "IN1RP", "Headset Mic",
++ "IN1RN", "Headset Mic",
++
++ "Modem Out", "Modem TX",
++ "Modem RX", "Modem In",
++
++ "Bluetooth SPK", "TX",
++ "RX", "Bluetooth Mic";
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&headset_det &earpath_sel>;
++
++ cpu {
++ sound-dai = <&i2s0>, <&bt_codec>;
++ };
++
++ codec {
++ sound-dai = <&wm8994>;
++ };
++ };
+ };
+
+ &fg {
+@@ -51,6 +125,12 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&sleep_cfg>;
+
++ headset_det: headset-det {
++ samsung,pins = "gph0-6", "gph3-6";
++ samsung,pin-function = <EXYNOS_PIN_FUNC_F>;
++ samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
++ };
++
+ fg_irq: fg-irq {
+ samsung,pins = "gph3-3";
+ samsung,pin-function = <EXYNOS_PIN_FUNC_F>;
+@@ -58,6 +138,24 @@
+ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
+ };
+
++ headset_micbias_ena: headset-micbias-ena {
++ samsung,pins = "gpj2-5";
++ samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
++ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
++ };
++
++ earpath_sel: earpath-sel {
++ samsung,pins = "gpj2-6";
++ samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
++ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
++ };
++
++ main_micbias_ena: main-micbias-ena {
++ samsung,pins = "gpj4-2";
++ samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
++ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
++ };
++
+ /* Based on vendor kernel v2.6.35.7 */
+ sleep_cfg: sleep-cfg {
+ PIN_SLP(gpa0-0, PREV, NONE);
+diff --git a/arch/arm/boot/dts/s5pv210-galaxys.dts b/arch/arm/boot/dts/s5pv210-galaxys.dts
+index 5d10dd67eacc5..560f830b6f6be 100644
+--- a/arch/arm/boot/dts/s5pv210-galaxys.dts
++++ b/arch/arm/boot/dts/s5pv210-galaxys.dts
+@@ -72,6 +72,73 @@
+ pinctrl-0 = <&fm_irq &fm_rst>;
+ };
+ };
++
++ micbias_reg: regulator-fixed-3 {
++ compatible = "regulator-fixed";
++ regulator-name = "MICBIAS";
++ gpio = <&gpj4 2 GPIO_ACTIVE_HIGH>;
++ enable-active-high;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&micbias_reg_ena>;
++ };
++
++ sound {
++ compatible = "samsung,aries-wm8994";
++
++ model = "Aries";
++
++ extcon = <&fsa9480>;
++
++ main-micbias-supply = <&micbias_reg>;
++ headset-micbias-supply = <&micbias_reg>;
++
++ earpath-sel-gpios = <&gpj2 6 GPIO_ACTIVE_HIGH>;
++
++ io-channels = <&adc 3>;
++ io-channel-names = "headset-detect";
++ headset-detect-gpios = <&gph0 6 GPIO_ACTIVE_LOW>;
++ headset-key-gpios = <&gph3 6 GPIO_ACTIVE_HIGH>;
++
++ samsung,audio-routing =
++ "HP", "HPOUT1L",
++ "HP", "HPOUT1R",
++
++ "SPK", "SPKOUTLN",
++ "SPK", "SPKOUTLP",
++
++ "RCV", "HPOUT2N",
++ "RCV", "HPOUT2P",
++
++ "LINE", "LINEOUT2N",
++ "LINE", "LINEOUT2P",
++
++ "IN1LP", "Main Mic",
++ "IN1LN", "Main Mic",
++
++ "IN1RP", "Headset Mic",
++ "IN1RN", "Headset Mic",
++
++ "IN2LN", "FM In",
++ "IN2RN", "FM In",
++
++ "Modem Out", "Modem TX",
++ "Modem RX", "Modem In",
++
++ "Bluetooth SPK", "TX",
++ "RX", "Bluetooth Mic";
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&headset_det &earpath_sel>;
++
++ cpu {
++ sound-dai = <&i2s0>, <&bt_codec>;
++ };
++
++ codec {
++ sound-dai = <&wm8994>;
++ };
++ };
+ };
+
+ &aliases {
+@@ -88,6 +155,12 @@
+ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
+ };
+
++ headset_det: headset-det {
++ samsung,pins = "gph0-6", "gph3-6";
++ samsung,pin-function = <EXYNOS_PIN_FUNC_F>;
++ samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
++ };
++
+ fm_irq: fm-irq {
+ samsung,pins = "gpj2-4";
+ samsung,pin-function = <EXYNOS_PIN_FUNC_INPUT>;
+@@ -102,6 +175,12 @@
+ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
+ };
+
++ earpath_sel: earpath-sel {
++ samsung,pins = "gpj2-6";
++ samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
++ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
++ };
++
+ massmemory_en: massmemory-en {
+ samsung,pins = "gpj2-7";
+ samsung,pin-function = <EXYNOS_PIN_FUNC_OUTPUT>;
+@@ -109,6 +188,12 @@
+ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
+ };
+
++ micbias_reg_ena: micbias-reg-ena {
++ samsung,pins = "gpj4-2";
++ samsung,pin-pud = <S3C64XX_PIN_PULL_NONE>;
++ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
++ };
++
+ /* Based on CyanogenMod 3.0.101 kernel */
+ sleep_cfg: sleep-cfg {
+ PIN_SLP(gpa0-0, PREV, NONE);
+diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi
+index 1b0ee884e91db..2871351ab9074 100644
+--- a/arch/arm/boot/dts/s5pv210.dtsi
++++ b/arch/arm/boot/dts/s5pv210.dtsi
+@@ -52,34 +52,26 @@
+ };
+ };
+
++ xxti: oscillator-0 {
++ compatible = "fixed-clock";
++ clock-frequency = <0>;
++ clock-output-names = "xxti";
++ #clock-cells = <0>;
++ };
++
++ xusbxti: oscillator-1 {
++ compatible = "fixed-clock";
++ clock-frequency = <0>;
++ clock-output-names = "xusbxti";
++ #clock-cells = <0>;
++ };
++
+ soc {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+- external-clocks {
+- compatible = "simple-bus";
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- xxti: oscillator@0 {
+- compatible = "fixed-clock";
+- reg = <0>;
+- clock-frequency = <0>;
+- clock-output-names = "xxti";
+- #clock-cells = <0>;
+- };
+-
+- xusbxti: oscillator@1 {
+- compatible = "fixed-clock";
+- reg = <1>;
+- clock-frequency = <0>;
+- clock-output-names = "xusbxti";
+- #clock-cells = <0>;
+- };
+- };
+-
+ onenand: onenand@b0600000 {
+ compatible = "samsung,s5pv210-onenand";
+ reg = <0xb0600000 0x2000>,
+@@ -100,19 +92,16 @@
+ };
+
+ clocks: clock-controller@e0100000 {
+- compatible = "samsung,s5pv210-clock", "simple-bus";
++ compatible = "samsung,s5pv210-clock";
+ reg = <0xe0100000 0x10000>;
+ clock-names = "xxti", "xusbxti";
+ clocks = <&xxti>, <&xusbxti>;
+ #clock-cells = <1>;
+- #address-cells = <1>;
+- #size-cells = <1>;
+- ranges;
++ };
+
+- pmu_syscon: syscon@e0108000 {
+- compatible = "samsung-s5pv210-pmu", "syscon";
+- reg = <0xe0108000 0x8000>;
+- };
++ pmu_syscon: syscon@e0108000 {
++ compatible = "samsung-s5pv210-pmu", "syscon";
++ reg = <0xe0108000 0x8000>;
+ };
+
+ pinctrl0: pinctrl@e0200000 {
+@@ -128,35 +117,28 @@
+ };
+ };
+
+- amba {
+- #address-cells = <1>;
+- #size-cells = <1>;
+- compatible = "simple-bus";
+- ranges;
+-
+- pdma0: dma@e0900000 {
+- compatible = "arm,pl330", "arm,primecell";
+- reg = <0xe0900000 0x1000>;
+- interrupt-parent = <&vic0>;
+- interrupts = <19>;
+- clocks = <&clocks CLK_PDMA0>;
+- clock-names = "apb_pclk";
+- #dma-cells = <1>;
+- #dma-channels = <8>;
+- #dma-requests = <32>;
+- };
++ pdma0: dma@e0900000 {
++ compatible = "arm,pl330", "arm,primecell";
++ reg = <0xe0900000 0x1000>;
++ interrupt-parent = <&vic0>;
++ interrupts = <19>;
++ clocks = <&clocks CLK_PDMA0>;
++ clock-names = "apb_pclk";
++ #dma-cells = <1>;
++ #dma-channels = <8>;
++ #dma-requests = <32>;
++ };
+
+- pdma1: dma@e0a00000 {
+- compatible = "arm,pl330", "arm,primecell";
+- reg = <0xe0a00000 0x1000>;
+- interrupt-parent = <&vic0>;
+- interrupts = <20>;
+- clocks = <&clocks CLK_PDMA1>;
+- clock-names = "apb_pclk";
+- #dma-cells = <1>;
+- #dma-channels = <8>;
+- #dma-requests = <32>;
+- };
++ pdma1: dma@e0a00000 {
++ compatible = "arm,pl330", "arm,primecell";
++ reg = <0xe0a00000 0x1000>;
++ interrupt-parent = <&vic0>;
++ interrupts = <20>;
++ clocks = <&clocks CLK_PDMA1>;
++ clock-names = "apb_pclk";
++ #dma-cells = <1>;
++ #dma-channels = <8>;
++ #dma-requests = <32>;
+ };
+
+ adc: adc@e1700000 {
+@@ -241,43 +223,36 @@
+ status = "disabled";
+ };
+
+- audio-subsystem {
+- compatible = "samsung,s5pv210-audss", "simple-bus";
+- #address-cells = <1>;
+- #size-cells = <1>;
+- ranges;
+-
+- clk_audss: clock-controller@eee10000 {
+- compatible = "samsung,s5pv210-audss-clock";
+- reg = <0xeee10000 0x1000>;
+- clock-names = "hclk", "xxti",
+- "fout_epll",
+- "sclk_audio0";
+- clocks = <&clocks DOUT_HCLKP>, <&xxti>,
+- <&clocks FOUT_EPLL>,
+- <&clocks SCLK_AUDIO0>;
+- #clock-cells = <1>;
+- };
++ clk_audss: clock-controller@eee10000 {
++ compatible = "samsung,s5pv210-audss-clock";
++ reg = <0xeee10000 0x1000>;
++ clock-names = "hclk", "xxti",
++ "fout_epll",
++ "sclk_audio0";
++ clocks = <&clocks DOUT_HCLKP>, <&xxti>,
++ <&clocks FOUT_EPLL>,
++ <&clocks SCLK_AUDIO0>;
++ #clock-cells = <1>;
++ };
+
+- i2s0: i2s@eee30000 {
+- compatible = "samsung,s5pv210-i2s";
+- reg = <0xeee30000 0x1000>;
+- interrupt-parent = <&vic2>;
+- interrupts = <16>;
+- dma-names = "rx", "tx", "tx-sec";
+- dmas = <&pdma1 9>, <&pdma1 10>, <&pdma1 11>;
+- clock-names = "iis",
+- "i2s_opclk0",
+- "i2s_opclk1";
+- clocks = <&clk_audss CLK_I2S>,
+- <&clk_audss CLK_I2S>,
+- <&clk_audss CLK_DOUT_AUD_BUS>;
+- samsung,idma-addr = <0xc0010000>;
+- pinctrl-names = "default";
+- pinctrl-0 = <&i2s0_bus>;
+- #sound-dai-cells = <0>;
+- status = "disabled";
+- };
++ i2s0: i2s@eee30000 {
++ compatible = "samsung,s5pv210-i2s";
++ reg = <0xeee30000 0x1000>;
++ interrupt-parent = <&vic2>;
++ interrupts = <16>;
++ dma-names = "rx", "tx", "tx-sec";
++ dmas = <&pdma1 9>, <&pdma1 10>, <&pdma1 11>;
++ clock-names = "iis",
++ "i2s_opclk0",
++ "i2s_opclk1";
++ clocks = <&clk_audss CLK_I2S>,
++ <&clk_audss CLK_I2S>,
++ <&clk_audss CLK_DOUT_AUD_BUS>;
++ samsung,idma-addr = <0xc0010000>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2s0_bus>;
++ #sound-dai-cells = <0>;
++ status = "disabled";
+ };
+
+ i2s1: i2s@e2100000 {
+diff --git a/arch/arm/configs/aspeed_g4_defconfig b/arch/arm/configs/aspeed_g4_defconfig
+index 303f75a3baec9..58d293b635818 100644
+--- a/arch/arm/configs/aspeed_g4_defconfig
++++ b/arch/arm/configs/aspeed_g4_defconfig
+@@ -160,7 +160,8 @@ CONFIG_SENSORS_TMP421=y
+ CONFIG_SENSORS_W83773G=y
+ CONFIG_WATCHDOG_SYSFS=y
+ CONFIG_MEDIA_SUPPORT=y
+-CONFIG_MEDIA_CAMERA_SUPPORT=y
++CONFIG_MEDIA_SUPPORT_FILTER=y
++CONFIG_MEDIA_PLATFORM_SUPPORT=y
+ CONFIG_V4L_PLATFORM_DRIVERS=y
+ CONFIG_VIDEO_ASPEED=y
+ CONFIG_DRM=y
+diff --git a/arch/arm/configs/aspeed_g5_defconfig b/arch/arm/configs/aspeed_g5_defconfig
+index b0d056d49abe1..cc2449ed6e6d3 100644
+--- a/arch/arm/configs/aspeed_g5_defconfig
++++ b/arch/arm/configs/aspeed_g5_defconfig
+@@ -175,7 +175,8 @@ CONFIG_SENSORS_TMP421=y
+ CONFIG_SENSORS_W83773G=y
+ CONFIG_WATCHDOG_SYSFS=y
+ CONFIG_MEDIA_SUPPORT=y
+-CONFIG_MEDIA_CAMERA_SUPPORT=y
++CONFIG_MEDIA_SUPPORT_FILTER=y
++CONFIG_MEDIA_PLATFORM_SUPPORT=y
+ CONFIG_V4L_PLATFORM_DRIVERS=y
+ CONFIG_VIDEO_ASPEED=y
+ CONFIG_DRM=y
+diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
+index 7a4853b1213a8..08660ae9dcbce 100644
+--- a/arch/arm/kernel/hw_breakpoint.c
++++ b/arch/arm/kernel/hw_breakpoint.c
+@@ -683,6 +683,40 @@ static void disable_single_step(struct perf_event *bp)
+ arch_install_hw_breakpoint(bp);
+ }
+
++/*
++ * Arm32 hardware does not always report a watchpoint hit address that matches
++ * one of the watchpoints set. It can also report an address "near" the
+ * watchpoint if a single instruction accesses both watched and unwatched
+ * addresses. There is no straightforward way, short of disassembling the
+ * offending instruction, to map that address back to the watchpoint. This
+ * function computes the distance of the memory access from the watchpoint as a
+ * heuristic for the likelihood that a given access triggered the watchpoint.
++ *
++ * See this same function in the arm64 platform code, which has the same
++ * problem.
++ *
++ * The function returns the distance of the address from the bytes watched by
++ * the watchpoint. In case of an exact match, it returns 0.
++ */
++static u32 get_distance_from_watchpoint(unsigned long addr, u32 val,
++ struct arch_hw_breakpoint_ctrl *ctrl)
++{
++ u32 wp_low, wp_high;
++ u32 lens, lene;
++
++ lens = __ffs(ctrl->len);
++ lene = __fls(ctrl->len);
++
++ wp_low = val + lens;
++ wp_high = val + lene;
++ if (addr < wp_low)
++ return wp_low - addr;
++ else if (addr > wp_high)
++ return addr - wp_high;
++ else
++ return 0;
++}
++
+ static int watchpoint_fault_on_uaccess(struct pt_regs *regs,
+ struct arch_hw_breakpoint *info)
+ {
+@@ -692,23 +726,25 @@ static int watchpoint_fault_on_uaccess(struct pt_regs *regs,
+ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
+ struct pt_regs *regs)
+ {
+- int i, access;
+- u32 val, ctrl_reg, alignment_mask;
++ int i, access, closest_match = 0;
++ u32 min_dist = -1, dist;
++ u32 val, ctrl_reg;
+ struct perf_event *wp, **slots;
+ struct arch_hw_breakpoint *info;
+ struct arch_hw_breakpoint_ctrl ctrl;
+
+ slots = this_cpu_ptr(wp_on_reg);
+
++ /*
+ * Find all watchpoints that match the reported address. If no exact
+ * match is found, attribute the hit to the closest watchpoint.
++ */
++ rcu_read_lock();
+ for (i = 0; i < core_num_wrps; ++i) {
+- rcu_read_lock();
+-
+ wp = slots[i];
+-
+ if (wp == NULL)
+- goto unlock;
++ continue;
+
+- info = counter_arch_bp(wp);
+ /*
+ * The DFAR is an unknown value on debug architectures prior
+ * to 7.1. Since we only allow a single watchpoint on these
+@@ -717,33 +753,31 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
+ */
+ if (debug_arch < ARM_DEBUG_ARCH_V7_1) {
+ BUG_ON(i > 0);
++ info = counter_arch_bp(wp);
+ info->trigger = wp->attr.bp_addr;
+ } else {
+- if (info->ctrl.len == ARM_BREAKPOINT_LEN_8)
+- alignment_mask = 0x7;
+- else
+- alignment_mask = 0x3;
+-
+- /* Check if the watchpoint value matches. */
+- val = read_wb_reg(ARM_BASE_WVR + i);
+- if (val != (addr & ~alignment_mask))
+- goto unlock;
+-
+- /* Possible match, check the byte address select. */
+- ctrl_reg = read_wb_reg(ARM_BASE_WCR + i);
+- decode_ctrl_reg(ctrl_reg, &ctrl);
+- if (!((1 << (addr & alignment_mask)) & ctrl.len))
+- goto unlock;
+-
+ /* Check that the access type matches. */
+ if (debug_exception_updates_fsr()) {
+ access = (fsr & ARM_FSR_ACCESS_MASK) ?
+ HW_BREAKPOINT_W : HW_BREAKPOINT_R;
+ if (!(access & hw_breakpoint_type(wp)))
+- goto unlock;
++ continue;
+ }
+
++ val = read_wb_reg(ARM_BASE_WVR + i);
++ ctrl_reg = read_wb_reg(ARM_BASE_WCR + i);
++ decode_ctrl_reg(ctrl_reg, &ctrl);
++ dist = get_distance_from_watchpoint(addr, val, &ctrl);
++ if (dist < min_dist) {
++ min_dist = dist;
++ closest_match = i;
++ }
++ /* Is this an exact match? */
++ if (dist != 0)
++ continue;
++
+ /* We have a winner. */
++ info = counter_arch_bp(wp);
+ info->trigger = addr;
+ }
+
+@@ -765,13 +799,23 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
+ * we can single-step over the watchpoint trigger.
+ */
+ if (!is_default_overflow_handler(wp))
+- goto unlock;
+-
++ continue;
+ step:
+ enable_single_step(wp, instruction_pointer(regs));
+-unlock:
+- rcu_read_unlock();
+ }
++
++ if (min_dist > 0 && min_dist != -1) {
++ /* No exact match found. */
++ wp = slots[closest_match];
++ info = counter_arch_bp(wp);
++ info->trigger = addr;
++ pr_debug("watchpoint fired: address = 0x%x\n", info->trigger);
++ perf_bp_event(wp, regs);
++ if (is_default_overflow_handler(wp))
++ enable_single_step(wp, instruction_pointer(regs));
++ }
++
++ rcu_read_unlock();
+ }
+
+ static void watchpoint_single_step_handler(unsigned long pc)
+diff --git a/arch/arm/plat-samsung/Kconfig b/arch/arm/plat-samsung/Kconfig
+index 301e572651c0f..790c87ee72716 100644
+--- a/arch/arm/plat-samsung/Kconfig
++++ b/arch/arm/plat-samsung/Kconfig
+@@ -241,6 +241,7 @@ config SAMSUNG_PM_DEBUG
+ depends on PM && DEBUG_KERNEL
+ depends on PLAT_S3C24XX || ARCH_S3C64XX || ARCH_S5PV210
+ depends on DEBUG_EXYNOS_UART || DEBUG_S3C24XX_UART || DEBUG_S3C2410_UART
++ depends on DEBUG_LL && MMU
+ help
+ Say Y here if you want verbose debugging from the PM Suspend and
+ Resume code. See <file:Documentation/arm/samsung-s3c24xx/suspend.rst>
+diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
+index cd58f8495c458..5b433a7f975b0 100644
+--- a/arch/arm64/Kconfig.platforms
++++ b/arch/arm64/Kconfig.platforms
+@@ -54,6 +54,7 @@ config ARCH_BCM_IPROC
+ config ARCH_BERLIN
+ bool "Marvell Berlin SoC Family"
+ select DW_APB_ICTL
++ select DW_APB_TIMER_OF
+ select GPIOLIB
+ select PINCTRL
+ help
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7-emmc.dts b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7-emmc.dts
+index 03733fd92732d..215d2f7026233 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7-emmc.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7-emmc.dts
+@@ -20,17 +20,23 @@
+ compatible = "globalscale,espressobin-v7-emmc", "globalscale,espressobin-v7",
+ "globalscale,espressobin", "marvell,armada3720",
+ "marvell,armada3710";
++
++ aliases {
++ /* ethernet1 is wan port */
++ ethernet1 = &switch0port3;
++ ethernet3 = &switch0port1;
++ };
+ };
+
+ &switch0 {
+ ports {
+- port@1 {
++ switch0port1: port@1 {
+ reg = <1>;
+ label = "lan1";
+ phy-handle = <&switch0phy0>;
+ };
+
+- port@3 {
++ switch0port3: port@3 {
+ reg = <3>;
+ label = "wan";
+ phy-handle = <&switch0phy2>;
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7.dts b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7.dts
+index 8570c5f47d7d8..b6f4af8ebafbb 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-v7.dts
+@@ -19,17 +19,23 @@
+ model = "Globalscale Marvell ESPRESSOBin Board V7";
+ compatible = "globalscale,espressobin-v7", "globalscale,espressobin",
+ "marvell,armada3720", "marvell,armada3710";
++
++ aliases {
++ /* ethernet1 is wan port */
++ ethernet1 = &switch0port3;
++ ethernet3 = &switch0port1;
++ };
+ };
+
+ &switch0 {
+ ports {
+- port@1 {
++ switch0port1: port@1 {
+ reg = <1>;
+ label = "lan1";
+ phy-handle = <&switch0phy0>;
+ };
+
+- port@3 {
++ switch0port3: port@3 {
+ reg = <3>;
+ label = "wan";
+ phy-handle = <&switch0phy2>;
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+index b97218c727277..0775c16e0ec80 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+@@ -13,6 +13,10 @@
+ / {
+ aliases {
ethernet0 = &eth0;
++ /* for dsa slave device */
++ ethernet1 = &switch0port1;
++ ethernet2 = &switch0port2;
++ ethernet3 = &switch0port3;
+ serial0 = &uart0;
+ serial1 = &uart1;
+ };
+@@ -120,7 +124,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+- port@0 {
++ switch0port0: port@0 {
+ reg = <0>;
+ label = "cpu";
+ ethernet = <&eth0>;
+@@ -131,19 +135,19 @@
+ };
+ };
+
+- port@1 {
++ switch0port1: port@1 {
+ reg = <1>;
+ label = "wan";
+ phy-handle = <&switch0phy0>;
+ };
+
+- port@2 {
++ switch0port2: port@2 {
+ reg = <2>;
+ label = "lan0";
+ phy-handle = <&switch0phy1>;
+ };
+
+- port@3 {
++ switch0port3: port@3 {
+ reg = <3>;
+ label = "lan1";
+ phy-handle = <&switch0phy2>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi b/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi
+index 4032b7478f044..791f254ac3f87 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994-sony-xperia-kitakami.dtsi
+@@ -221,7 +221,12 @@
+ };
+
+ &sdhc1 {
+- status = "okay";
++ /* There is an issue with the eMMC causing permanent
++ * damage to the card if a quirk isn't addressed.
++ * Until it's fixed, disable the MMC so as not to brick
++ * devices.
++ */
++ status = "disabled";
+
+ /* Downstream pushes 2.95V to the sdhci device,
+ * but upstream driver REALLY wants to make vmmc 1.8v
+diff --git a/arch/arm64/boot/dts/renesas/ulcb.dtsi b/arch/arm64/boot/dts/renesas/ulcb.dtsi
+index ff88af8e39d3f..a2e085db87c53 100644
+--- a/arch/arm64/boot/dts/renesas/ulcb.dtsi
++++ b/arch/arm64/boot/dts/renesas/ulcb.dtsi
+@@ -469,6 +469,7 @@
+ mmc-hs200-1_8v;
+ mmc-hs400-1_8v;
+ non-removable;
++ full-pwr-cycle-in-suspend;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 905c2b87e05ac..34675109921e7 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -231,6 +231,7 @@ enum vcpu_sysreg {
+ #define cp14_DBGWCR0 (DBGWCR0_EL1 * 2)
+ #define cp14_DBGWVR0 (DBGWVR0_EL1 * 2)
+ #define cp14_DBGDCCINT (MDCCINT_EL1 * 2)
++#define cp14_DBGVCR (DBGVCR32_EL2 * 2)
+
+ #define NR_COPRO_REGS (NR_SYS_REGS * 2)
+
+diff --git a/arch/arm64/include/asm/numa.h b/arch/arm64/include/asm/numa.h
+index 626ad01e83bf0..dd870390d639f 100644
+--- a/arch/arm64/include/asm/numa.h
++++ b/arch/arm64/include/asm/numa.h
+@@ -25,6 +25,9 @@ const struct cpumask *cpumask_of_node(int node);
+ /* Returns a pointer to the cpumask of CPUs on Node 'node'. */
+ static inline const struct cpumask *cpumask_of_node(int node)
+ {
++ if (node == NUMA_NO_NODE)
++ return cpu_all_mask;
++
+ return node_to_cpumask_map[node];
+ }
+ #endif
+diff --git a/arch/arm64/kernel/efi-header.S b/arch/arm64/kernel/efi-header.S
+index df67c0f2a077e..a71844fb923ee 100644
+--- a/arch/arm64/kernel/efi-header.S
++++ b/arch/arm64/kernel/efi-header.S
+@@ -147,6 +147,6 @@ efi_debug_entry:
+ * correctly at this alignment, we must ensure that .text is
+ * placed at a 4k boundary in the Image to begin with.
+ */
+- .align 12
++ .balign SEGMENT_ALIGN
+ efi_header_end:
+ .endm
+diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
+index 0801a0f3c156a..ff1dd1dbfe641 100644
+--- a/arch/arm64/kernel/topology.c
++++ b/arch/arm64/kernel/topology.c
+@@ -36,21 +36,23 @@ void store_cpu_topology(unsigned int cpuid)
+ if (mpidr & MPIDR_UP_BITMASK)
+ return;
+
+- /* Create cpu topology mapping based on MPIDR. */
+- if (mpidr & MPIDR_MT_BITMASK) {
+- /* Multiprocessor system : Multi-threads per core */
+- cpuid_topo->thread_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+- cpuid_topo->core_id = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+- cpuid_topo->package_id = MPIDR_AFFINITY_LEVEL(mpidr, 2) |
+- MPIDR_AFFINITY_LEVEL(mpidr, 3) << 8;
+- } else {
+- /* Multiprocessor system : Single-thread per core */
+- cpuid_topo->thread_id = -1;
+- cpuid_topo->core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+- cpuid_topo->package_id = MPIDR_AFFINITY_LEVEL(mpidr, 1) |
+- MPIDR_AFFINITY_LEVEL(mpidr, 2) << 8 |
+- MPIDR_AFFINITY_LEVEL(mpidr, 3) << 16;
+- }
++ /*
++ * This would be the place to create cpu topology based on MPIDR.
++ *
++ * However, it cannot be trusted to depict the actual topology; some
++ * pieces of the architecture enforce an artificial cap on Aff0 values
++ * (e.g. GICv3's ICC_SGI1R_EL1 limits it to 15), leading to an
++ * artificial cycling of Aff1, Aff2 and Aff3 values. IOW, these end up
++ * having absolutely no relationship to the actual underlying system
++ * topology, and cannot be reasonably used as core / package ID.
++ *
++ * If the MT bit is set, Aff0 *could* be used to define a thread ID, but
++ * we still wouldn't be able to obtain a sane core ID. This means we
++ * need to entirely ignore MPIDR for any topology deduction.
++ */
++ cpuid_topo->thread_id = -1;
++ cpuid_topo->core_id = cpuid;
++ cpuid_topo->package_id = cpu_to_node(cpuid);
+
+ pr_debug("CPU%u: cluster %d core %d thread %d mpidr %#016llx\n",
+ cpuid, cpuid_topo->package_id, cpuid_topo->core_id,
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index 077293b5115fa..de5a5a80ae99a 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1881,9 +1881,9 @@ static const struct sys_reg_desc cp14_regs[] = {
+ { Op1( 0), CRn( 0), CRm( 1), Op2( 0), trap_raz_wi },
+ DBG_BCR_BVR_WCR_WVR(1),
+ /* DBGDCCINT */
+- { Op1( 0), CRn( 0), CRm( 2), Op2( 0), trap_debug32 },
++ { Op1( 0), CRn( 0), CRm( 2), Op2( 0), trap_debug32, NULL, cp14_DBGDCCINT },
+ /* DBGDSCRext */
+- { Op1( 0), CRn( 0), CRm( 2), Op2( 2), trap_debug32 },
++ { Op1( 0), CRn( 0), CRm( 2), Op2( 2), trap_debug32, NULL, cp14_DBGDSCRext },
+ DBG_BCR_BVR_WCR_WVR(2),
+ /* DBGDTR[RT]Xint */
+ { Op1( 0), CRn( 0), CRm( 3), Op2( 0), trap_raz_wi },
+@@ -1898,7 +1898,7 @@ static const struct sys_reg_desc cp14_regs[] = {
+ { Op1( 0), CRn( 0), CRm( 6), Op2( 2), trap_raz_wi },
+ DBG_BCR_BVR_WCR_WVR(6),
+ /* DBGVCR */
+- { Op1( 0), CRn( 0), CRm( 7), Op2( 0), trap_debug32 },
++ { Op1( 0), CRn( 0), CRm( 7), Op2( 0), trap_debug32, NULL, cp14_DBGVCR },
+ DBG_BCR_BVR_WCR_WVR(7),
+ DBG_BCR_BVR_WCR_WVR(8),
+ DBG_BCR_BVR_WCR_WVR(9),
+diff --git a/arch/arm64/lib/memcpy.S b/arch/arm64/lib/memcpy.S
+index e0bf83d556f23..dc8d2a216a6e6 100644
+--- a/arch/arm64/lib/memcpy.S
++++ b/arch/arm64/lib/memcpy.S
+@@ -56,9 +56,8 @@
+ stp \reg1, \reg2, [\ptr], \val
+ .endm
+
+- .weak memcpy
+ SYM_FUNC_START_ALIAS(__memcpy)
+-SYM_FUNC_START_PI(memcpy)
++SYM_FUNC_START_WEAK_PI(memcpy)
+ #include "copy_template.S"
+ ret
+ SYM_FUNC_END_PI(memcpy)
+diff --git a/arch/arm64/lib/memmove.S b/arch/arm64/lib/memmove.S
+index 02cda2e33bde2..1035dce4bdaf4 100644
+--- a/arch/arm64/lib/memmove.S
++++ b/arch/arm64/lib/memmove.S
+@@ -45,9 +45,8 @@ C_h .req x12
+ D_l .req x13
+ D_h .req x14
+
+- .weak memmove
+ SYM_FUNC_START_ALIAS(__memmove)
+-SYM_FUNC_START_PI(memmove)
++SYM_FUNC_START_WEAK_PI(memmove)
+ cmp dstin, src
+ b.lo __memcpy
+ add tmp1, src, count
+diff --git a/arch/arm64/lib/memset.S b/arch/arm64/lib/memset.S
+index 77c3c7ba00842..a9c1c9a01ea90 100644
+--- a/arch/arm64/lib/memset.S
++++ b/arch/arm64/lib/memset.S
+@@ -42,9 +42,8 @@ dst .req x8
+ tmp3w .req w9
+ tmp3 .req x9
+
+- .weak memset
+ SYM_FUNC_START_ALIAS(__memset)
+-SYM_FUNC_START_PI(memset)
++SYM_FUNC_START_WEAK_PI(memset)
+ mov dst, dstin /* Preserve return value. */
+ and A_lw, val, #255
+ orr A_lw, A_lw, A_lw, lsl #8
+diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
+index 73f8b49d485c2..88e51aade0da0 100644
+--- a/arch/arm64/mm/numa.c
++++ b/arch/arm64/mm/numa.c
+@@ -46,7 +46,11 @@ EXPORT_SYMBOL(node_to_cpumask_map);
+ */
+ const struct cpumask *cpumask_of_node(int node)
+ {
+- if (WARN_ON(node >= nr_node_ids))
++
++ if (node == NUMA_NO_NODE)
++ return cpu_all_mask;
++
++ if (WARN_ON(node < 0 || node >= nr_node_ids))
+ return cpu_none_mask;
+
+ if (WARN_ON(node_to_cpumask_map[node] == NULL))
+diff --git a/arch/ia64/kernel/Makefile b/arch/ia64/kernel/Makefile
+index 1a8df6669eee6..18d6008b151fd 100644
+--- a/arch/ia64/kernel/Makefile
++++ b/arch/ia64/kernel/Makefile
+@@ -41,7 +41,7 @@ obj-y += esi_stub.o # must be in kernel proper
+ endif
+ obj-$(CONFIG_INTEL_IOMMU) += pci-dma.o
+
+-obj-$(CONFIG_BINFMT_ELF) += elfcore.o
++obj-$(CONFIG_ELF_CORE) += elfcore.o
+
+ # fp_emulate() expects f2-f5,f16-f31 to contain the user-level state.
+ CFLAGS_traps.o += -mfixed-range=f2-f5,f16-f31
+diff --git a/arch/ia64/kernel/kprobes.c b/arch/ia64/kernel/kprobes.c
+index 7a7df944d7986..fc1ff8a4d7de6 100644
+--- a/arch/ia64/kernel/kprobes.c
++++ b/arch/ia64/kernel/kprobes.c
+@@ -396,83 +396,9 @@ static void kretprobe_trampoline(void)
+ {
+ }
+
+-/*
+- * At this point the target function has been tricked into
+- * returning into our trampoline. Lookup the associated instance
+- * and then:
+- * - call the handler function
+- * - cleanup by marking the instance as unused
+- * - long jump back to the original return address
+- */
+ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
+ {
+- struct kretprobe_instance *ri = NULL;
+- struct hlist_head *head, empty_rp;
+- struct hlist_node *tmp;
+- unsigned long flags, orig_ret_address = 0;
+- unsigned long trampoline_address =
+- ((struct fnptr *)kretprobe_trampoline)->ip;
+-
+- INIT_HLIST_HEAD(&empty_rp);
+- kretprobe_hash_lock(current, &head, &flags);
+-
+- /*
+- * It is possible to have multiple instances associated with a given
+- * task either because an multiple functions in the call path
+- * have a return probe installed on them, and/or more than one return
+- * return probe was registered for a target function.
+- *
+- * We can handle this because:
+- * - instances are always inserted at the head of the list
+- * - when multiple return probes are registered for the same
+- * function, the first instance's ret_addr will point to the
+- * real return address, and all the rest will point to
+- * kretprobe_trampoline
+- */
+- hlist_for_each_entry_safe(ri, tmp, head, hlist) {
+- if (ri->task != current)
+- /* another task is sharing our hash bucket */
+- continue;
+-
+- orig_ret_address = (unsigned long)ri->ret_addr;
+- if (orig_ret_address != trampoline_address)
+- /*
+- * This is the real return address. Any other
+- * instances associated with this task are for
+- * other calls deeper on the call stack
+- */
+- break;
+- }
+-
+- regs->cr_iip = orig_ret_address;
+-
+- hlist_for_each_entry_safe(ri, tmp, head, hlist) {
+- if (ri->task != current)
+- /* another task is sharing our hash bucket */
+- continue;
+-
+- if (ri->rp && ri->rp->handler)
+- ri->rp->handler(ri, regs);
+-
+- orig_ret_address = (unsigned long)ri->ret_addr;
+- recycle_rp_inst(ri, &empty_rp);
+-
+- if (orig_ret_address != trampoline_address)
+- /*
+- * This is the real return address. Any other
+- * instances associated with this task are for
+- * other calls deeper on the call stack
+- */
+- break;
+- }
+- kretprobe_assert(ri, orig_ret_address, trampoline_address);
+-
+- kretprobe_hash_unlock(current, &flags);
+-
+- hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
+- hlist_del(&ri->hlist);
+- kfree(ri);
+- }
++ regs->cr_iip = __kretprobe_trampoline_handler(regs, kretprobe_trampoline, NULL);
+ /*
+ * By returning a non-zero value, we are telling
+ * kprobe_handler() that we don't want the post_handler
+@@ -485,6 +411,7 @@ void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
+ struct pt_regs *regs)
+ {
+ ri->ret_addr = (kprobe_opcode_t *)regs->b0;
++ ri->fp = NULL;
+
+ /* Replace the return addr with trampoline addr */
+ regs->b0 = ((struct fnptr *)kretprobe_trampoline)->ip;
+diff --git a/arch/mips/configs/qi_lb60_defconfig b/arch/mips/configs/qi_lb60_defconfig
+index 81bfbee72b0c3..9c2c183085d11 100644
+--- a/arch/mips/configs/qi_lb60_defconfig
++++ b/arch/mips/configs/qi_lb60_defconfig
+@@ -8,6 +8,7 @@ CONFIG_EMBEDDED=y
+ # CONFIG_COMPAT_BRK is not set
+ CONFIG_SLAB=y
+ CONFIG_MACH_INGENIC=y
++CONFIG_JZ4740_QI_LB60=y
+ CONFIG_HZ_100=y
+ # CONFIG_SECCOMP is not set
+ CONFIG_MODULES=y
+diff --git a/arch/mips/dec/setup.c b/arch/mips/dec/setup.c
+index d4e868b828e58..eaad0ed4b523b 100644
+--- a/arch/mips/dec/setup.c
++++ b/arch/mips/dec/setup.c
+@@ -6,7 +6,7 @@
+ * for more details.
+ *
+ * Copyright (C) 1998 Harald Koerfgen
+- * Copyright (C) 2000, 2001, 2002, 2003, 2005 Maciej W. Rozycki
++ * Copyright (C) 2000, 2001, 2002, 2003, 2005, 2020 Maciej W. Rozycki
+ */
+ #include <linux/console.h>
+ #include <linux/export.h>
+@@ -15,6 +15,7 @@
+ #include <linux/ioport.h>
+ #include <linux/irq.h>
+ #include <linux/irqnr.h>
++#include <linux/memblock.h>
+ #include <linux/param.h>
+ #include <linux/percpu-defs.h>
+ #include <linux/sched.h>
+@@ -22,6 +23,7 @@
+ #include <linux/types.h>
+ #include <linux/pm.h>
+
++#include <asm/addrspace.h>
+ #include <asm/bootinfo.h>
+ #include <asm/cpu.h>
+ #include <asm/cpu-features.h>
+@@ -29,7 +31,9 @@
+ #include <asm/irq.h>
+ #include <asm/irq_cpu.h>
+ #include <asm/mipsregs.h>
++#include <asm/page.h>
+ #include <asm/reboot.h>
++#include <asm/sections.h>
+ #include <asm/time.h>
+ #include <asm/traps.h>
+ #include <asm/wbflush.h>
+@@ -146,6 +150,9 @@ void __init plat_mem_setup(void)
+
+ ioport_resource.start = ~0UL;
+ ioport_resource.end = 0UL;
++
++ /* Stay away from the firmware working memory area for now. */
++ memblock_reserve(PHYS_OFFSET, __pa_symbol(&_text) - PHYS_OFFSET);
+ }
+
+ /*
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 2b15b4870565d..aaf069c72aa1b 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -148,6 +148,7 @@ config PPC
+ select ARCH_USE_QUEUED_RWLOCKS if PPC_QUEUED_SPINLOCKS
+ select ARCH_USE_QUEUED_SPINLOCKS if PPC_QUEUED_SPINLOCKS
+ select ARCH_WANT_IPC_PARSE_VERSION
++ select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
+ select ARCH_WEAK_RELEASE_ACQUIRE
+ select BINFMT_ELF
+ select BUILDTIME_TABLE_SORT
+@@ -1000,6 +1001,19 @@ config PPC_SECVAR_SYSFS
+ read/write operations on these variables. Say Y if you have
+ secure boot enabled and want to expose variables to userspace.
+
++config PPC_RTAS_FILTER
++ bool "Enable filtering of RTAS syscalls"
++ default y
++ depends on PPC_RTAS
++ help
++ The RTAS syscall API has security issues that could be used to
++ compromise system integrity. This option enforces restrictions on the
++ RTAS calls and arguments passed by userspace programs to mitigate
++ these issues.
++
++ Say Y unless you know what you are doing and the filter is causing
++ problems for you.
++
+ endmenu
+
+ config ISA_DMA_API
+diff --git a/arch/powerpc/include/asm/drmem.h b/arch/powerpc/include/asm/drmem.h
+index 030a19d922132..bf2402fed3e03 100644
+--- a/arch/powerpc/include/asm/drmem.h
++++ b/arch/powerpc/include/asm/drmem.h
+@@ -20,7 +20,7 @@ struct drmem_lmb {
+ struct drmem_lmb_info {
+ struct drmem_lmb *lmbs;
+ int n_lmbs;
+- u32 lmb_size;
++ u64 lmb_size;
+ };
+
+ extern struct drmem_lmb_info *drmem_info;
+@@ -80,7 +80,7 @@ struct of_drconf_cell_v2 {
+ #define DRCONF_MEM_RESERVED 0x00000080
+ #define DRCONF_MEM_HOTREMOVABLE 0x00000100
+
+-static inline u32 drmem_lmb_size(void)
++static inline u64 drmem_lmb_size(void)
+ {
+ return drmem_info->lmb_size;
+ }
+diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
+index 7f3658a973846..e02aa793420b8 100644
+--- a/arch/powerpc/include/asm/mmu_context.h
++++ b/arch/powerpc/include/asm/mmu_context.h
+@@ -244,7 +244,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+ */
+ static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
+ {
+- switch_mm(prev, next, current);
++ switch_mm_irqs_off(prev, next, current);
+ }
+
+ /* We don't currently use enter_lazy_tlb() for anything */
+diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
+index f3ab94d73936d..a5a612deef66e 100644
+--- a/arch/powerpc/kernel/head_32.S
++++ b/arch/powerpc/kernel/head_32.S
+@@ -274,14 +274,8 @@ __secondary_hold_acknowledge:
+ DO_KVM 0x200
+ MachineCheck:
+ EXCEPTION_PROLOG_0
+-#ifdef CONFIG_VMAP_STACK
+- li r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
+- mtmsr r11
+- isync
+-#endif
+ #ifdef CONFIG_PPC_CHRP
+ mfspr r11, SPRN_SPRG_THREAD
+- tovirt_vmstack r11, r11
+ lwz r11, RTAS_SP(r11)
+ cmpwi cr1, r11, 0
+ bne cr1, 7f
+@@ -1002,7 +996,7 @@ BEGIN_MMU_FTR_SECTION
+ END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
+ blr
+
+-load_segment_registers:
++_GLOBAL(load_segment_registers)
+ li r0, NUM_USER_SEGMENTS /* load up user segment register values */
+ mtctr r0 /* for context 0 */
+ li r3, 0 /* Kp = 0, Ks = 0, VSID = 0 */
+diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
+index 9abec6cd099c6..cc36998c55416 100644
+--- a/arch/powerpc/kernel/head_32.h
++++ b/arch/powerpc/kernel/head_32.h
+@@ -40,48 +40,52 @@
+
+ .macro EXCEPTION_PROLOG_1 for_rtas=0
+ #ifdef CONFIG_VMAP_STACK
+- .ifeq \for_rtas
+- li r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
+- mtmsr r11
+- isync
+- .endif
+- subi r11, r1, INT_FRAME_SIZE /* use r1 if kernel */
++ mr r11, r1
++ subi r1, r1, INT_FRAME_SIZE /* use r1 if kernel */
++ beq 1f
++ mfspr r1,SPRN_SPRG_THREAD
++ lwz r1,TASK_STACK-THREAD(r1)
++ addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
+ #else
+- tophys(r11,r1) /* use tophys(r1) if kernel */
+- subi r11, r11, INT_FRAME_SIZE /* alloc exc. frame */
+-#endif
++ subi r11, r1, INT_FRAME_SIZE /* use r1 if kernel */
+ beq 1f
+ mfspr r11,SPRN_SPRG_THREAD
+- tovirt_vmstack r11, r11
+ lwz r11,TASK_STACK-THREAD(r11)
+ addi r11, r11, THREAD_SIZE - INT_FRAME_SIZE
+- tophys_novmstack r11, r11
++#endif
+ 1:
++ tophys_novmstack r11, r11
+ #ifdef CONFIG_VMAP_STACK
+- mtcrf 0x7f, r11
++ mtcrf 0x7f, r1
+ bt 32 - THREAD_ALIGN_SHIFT, stack_overflow
+ #endif
+ .endm
+
+ .macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0
+-#if defined(CONFIG_VMAP_STACK) && defined(CONFIG_PPC_BOOK3S)
+-BEGIN_MMU_FTR_SECTION
++#ifdef CONFIG_VMAP_STACK
+ mtcr r10
+-FTR_SECTION_ELSE
+- stw r10, _CCR(r11)
+-ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
++ li r10, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
++ mtmsr r10
++ isync
+ #else
+ stw r10,_CCR(r11) /* save registers */
+ #endif
+ mfspr r10, SPRN_SPRG_SCRATCH0
++#ifdef CONFIG_VMAP_STACK
++ stw r11,GPR1(r1)
++ stw r11,0(r1)
++ mr r11, r1
++#else
++ stw r1,GPR1(r11)
++ stw r1,0(r11)
++ tovirt(r1, r11) /* set new kernel sp */
++#endif
+ stw r12,GPR12(r11)
+ stw r9,GPR9(r11)
+ stw r10,GPR10(r11)
+-#if defined(CONFIG_VMAP_STACK) && defined(CONFIG_PPC_BOOK3S)
+-BEGIN_MMU_FTR_SECTION
++#ifdef CONFIG_VMAP_STACK
+ mfcr r10
+ stw r10, _CCR(r11)
+-END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
+ #endif
+ mfspr r12,SPRN_SPRG_SCRATCH1
+ stw r12,GPR11(r11)
+@@ -97,19 +101,12 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
+ stw r10, _DSISR(r11)
+ .endif
+ lwz r9, SRR1(r12)
+-#if defined(CONFIG_VMAP_STACK) && defined(CONFIG_PPC_BOOK3S)
+-BEGIN_MMU_FTR_SECTION
+ andi. r10, r9, MSR_PR
+-END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
+-#endif
+ lwz r12, SRR0(r12)
+ #else
+ mfspr r12,SPRN_SRR0
+ mfspr r9,SPRN_SRR1
+ #endif
+- stw r1,GPR1(r11)
+- stw r1,0(r11)
+- tovirt_novmstack r1, r11 /* set new kernel sp */
+ #ifdef CONFIG_40x
+ rlwinm r9,r9,0,14,12 /* clear MSR_WE (necessary?) */
+ #else
+@@ -327,20 +324,19 @@ label:
+ .macro vmap_stack_overflow_exception
+ #ifdef CONFIG_VMAP_STACK
+ #ifdef CONFIG_SMP
+- mfspr r11, SPRN_SPRG_THREAD
+- tovirt(r11, r11)
+- lwz r11, TASK_CPU - THREAD(r11)
+- slwi r11, r11, 3
+- addis r11, r11, emergency_ctx@ha
++ mfspr r1, SPRN_SPRG_THREAD
++ lwz r1, TASK_CPU - THREAD(r1)
++ slwi r1, r1, 3
++ addis r1, r1, emergency_ctx@ha
+ #else
+- lis r11, emergency_ctx@ha
++ lis r1, emergency_ctx@ha
+ #endif
+- lwz r11, emergency_ctx@l(r11)
+- cmpwi cr1, r11, 0
++ lwz r1, emergency_ctx@l(r1)
++ cmpwi cr1, r1, 0
+ bne cr1, 1f
+- lis r11, init_thread_union@ha
+- addi r11, r11, init_thread_union@l
+-1: addi r11, r11, THREAD_SIZE - INT_FRAME_SIZE
++ lis r1, init_thread_union@ha
++ addi r1, r1, init_thread_union@l
++1: addi r1, r1, THREAD_SIZE - INT_FRAME_SIZE
+ EXCEPTION_PROLOG_2
+ SAVE_NVGPRS(r11)
+ addi r3, r1, STACK_FRAME_OVERHEAD
+diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
+index ada59f6c4298f..63702c0badb97 100644
+--- a/arch/powerpc/kernel/mce.c
++++ b/arch/powerpc/kernel/mce.c
+@@ -591,12 +591,11 @@ EXPORT_SYMBOL_GPL(machine_check_print_event_info);
+ long notrace machine_check_early(struct pt_regs *regs)
+ {
+ long handled = 0;
+- bool nested = in_nmi();
+ u8 ftrace_enabled = this_cpu_get_ftrace_enabled();
+
+ this_cpu_set_ftrace_enabled(0);
+-
+- if (!nested)
++ /* Do not use nmi_enter/exit for pseries hpte guest */
++ if (radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR))
+ nmi_enter();
+
+ hv_nmi_check_nonrecoverable(regs);
+@@ -607,7 +606,7 @@ long notrace machine_check_early(struct pt_regs *regs)
+ if (ppc_md.machine_check_early)
+ handled = ppc_md.machine_check_early(regs);
+
+- if (!nested)
++ if (radix_enabled() || !firmware_has_feature(FW_FEATURE_LPAR))
+ nmi_exit();
+
+ this_cpu_set_ftrace_enabled(ftrace_enabled);
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 73a57043ee662..3f2dc0675ea7a 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -1256,15 +1256,17 @@ struct task_struct *__switch_to(struct task_struct *prev,
+ restore_math(current->thread.regs);
+
+ /*
+- * The copy-paste buffer can only store into foreign real
+- * addresses, so unprivileged processes can not see the
+- * data or use it in any way unless they have foreign real
+- * mappings. If the new process has the foreign real address
+- * mappings, we must issue a cp_abort to clear any state and
+- * prevent snooping, corruption or a covert channel.
++ * On POWER9 the copy-paste buffer can only paste into
++ * foreign real addresses, so unprivileged processes can not
++ * see the data or use it in any way unless they have
++ * foreign real mappings. If the new process has the foreign
++ * real address mappings, we must issue a cp_abort to clear
++ * any state and prevent snooping, corruption or a covert
++ * channel. ISA v3.1 supports paste into local memory.
+ */
+ if (current->mm &&
- atomic_read(&current->mm->context.vas_windows))
++ (cpu_has_feature(CPU_FTR_ARCH_31) ||
++ atomic_read(&current->mm->context.vas_windows)))
+ asm volatile(PPC_CP_ABORT);
+ }
+ #endif /* CONFIG_PPC_BOOK3S_64 */
+diff --git a/arch/powerpc/kernel/ptrace/ptrace-noadv.c b/arch/powerpc/kernel/ptrace/ptrace-noadv.c
+index 8bd8d8de5c40b..a570782e954be 100644
+--- a/arch/powerpc/kernel/ptrace/ptrace-noadv.c
++++ b/arch/powerpc/kernel/ptrace/ptrace-noadv.c
+@@ -217,7 +217,7 @@ long ppc_set_hwdebug(struct task_struct *child, struct ppc_hw_breakpoint *bp_inf
+ return -EIO;
+
+ brk.address = ALIGN_DOWN(bp_info->addr, HW_BREAKPOINT_SIZE);
+- brk.type = HW_BRK_TYPE_TRANSLATE;
++ brk.type = HW_BRK_TYPE_TRANSLATE | HW_BRK_TYPE_PRIV_ALL;
+ brk.len = DABR_MAX_LEN;
+ brk.hw_len = DABR_MAX_LEN;
+ if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_READ)
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 806d554ce3577..954f41676f692 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -992,6 +992,147 @@ struct pseries_errorlog *get_pseries_errorlog(struct rtas_error_log *log,
+ return NULL;
+ }
+
++#ifdef CONFIG_PPC_RTAS_FILTER
++
++/*
++ * The sys_rtas syscall, as originally designed, allows root to pass
++ * arbitrary physical addresses to RTAS calls. A number of RTAS calls
++ * can be abused to write to arbitrary memory and do other things that
++ * are potentially harmful to system integrity, and thus should only
++ * be used inside the kernel and not exposed to userspace.
++ *
++ * All known legitimate users of the sys_rtas syscall will only ever
++ * pass addresses that fall within the RMO buffer, and use a known
++ * subset of RTAS calls.
++ *
++ * Accordingly, we filter RTAS requests to check that the call is
++ * permitted, and that provided pointers fall within the RMO buffer.
++ * The rtas_filters list contains an entry for each permitted call,
++ * with the indexes of the parameters which are expected to contain
++ * addresses and sizes of buffers allocated inside the RMO buffer.
++ */
++struct rtas_filter {
++ const char *name;
++ int token;
++ /* Indexes into the args buffer, -1 if not used */
++ int buf_idx1;
++ int size_idx1;
++ int buf_idx2;
++ int size_idx2;
++
++ int fixed_size;
++};
++
++static struct rtas_filter rtas_filters[] __ro_after_init = {
++ { "ibm,activate-firmware", -1, -1, -1, -1, -1 },
++ { "ibm,configure-connector", -1, 0, -1, 1, -1, 4096 }, /* Special cased */
++ { "display-character", -1, -1, -1, -1, -1 },
++ { "ibm,display-message", -1, 0, -1, -1, -1 },
++ { "ibm,errinjct", -1, 2, -1, -1, -1, 1024 },
++ { "ibm,close-errinjct", -1, -1, -1, -1, -1 },
++ { "ibm,open-errinct", -1, -1, -1, -1, -1 },
++ { "ibm,get-config-addr-info2", -1, -1, -1, -1, -1 },
++ { "ibm,get-dynamic-sensor-state", -1, 1, -1, -1, -1 },
++ { "ibm,get-indices", -1, 2, 3, -1, -1 },
++ { "get-power-level", -1, -1, -1, -1, -1 },
++ { "get-sensor-state", -1, -1, -1, -1, -1 },
++ { "ibm,get-system-parameter", -1, 1, 2, -1, -1 },
++ { "get-time-of-day", -1, -1, -1, -1, -1 },
++ { "ibm,get-vpd", -1, 0, -1, 1, 2 },
++ { "ibm,lpar-perftools", -1, 2, 3, -1, -1 },
++ { "ibm,platform-dump", -1, 4, 5, -1, -1 },
++ { "ibm,read-slot-reset-state", -1, -1, -1, -1, -1 },
++ { "ibm,scan-log-dump", -1, 0, 1, -1, -1 },
++ { "ibm,set-dynamic-indicator", -1, 2, -1, -1, -1 },
++ { "ibm,set-eeh-option", -1, -1, -1, -1, -1 },
++ { "set-indicator", -1, -1, -1, -1, -1 },
++ { "set-power-level", -1, -1, -1, -1, -1 },
++ { "set-time-for-power-on", -1, -1, -1, -1, -1 },
++ { "ibm,set-system-parameter", -1, 1, -1, -1, -1 },
++ { "set-time-of-day", -1, -1, -1, -1, -1 },
++ { "ibm,suspend-me", -1, -1, -1, -1, -1 },
++ { "ibm,update-nodes", -1, 0, -1, -1, -1, 4096 },
++ { "ibm,update-properties", -1, 0, -1, -1, -1, 4096 },
++ { "ibm,physical-attestation", -1, 0, 1, -1, -1 },
++};
++
++static bool in_rmo_buf(u32 base, u32 end)
++{
++ return base >= rtas_rmo_buf &&
++ base < (rtas_rmo_buf + RTAS_RMOBUF_MAX) &&
++ base <= end &&
++ end >= rtas_rmo_buf &&
++ end < (rtas_rmo_buf + RTAS_RMOBUF_MAX);
++}
++
++static bool block_rtas_call(int token, int nargs,
++ struct rtas_args *args)
++{
++ int i;
++
++ for (i = 0; i < ARRAY_SIZE(rtas_filters); i++) {
++ struct rtas_filter *f = &rtas_filters[i];
++ u32 base, size, end;
++
++ if (token != f->token)
++ continue;
++
++ if (f->buf_idx1 != -1) {
++ base = be32_to_cpu(args->args[f->buf_idx1]);
++ if (f->size_idx1 != -1)
++ size = be32_to_cpu(args->args[f->size_idx1]);
++ else if (f->fixed_size)
++ size = f->fixed_size;
++ else
++ size = 1;
++
++ end = base + size - 1;
++ if (!in_rmo_buf(base, end))
++ goto err;
++ }
++
++ if (f->buf_idx2 != -1) {
++ base = be32_to_cpu(args->args[f->buf_idx2]);
++ if (f->size_idx2 != -1)
++ size = be32_to_cpu(args->args[f->size_idx2]);
++ else if (f->fixed_size)
++ size = f->fixed_size;
++ else
++ size = 1;
++ end = base + size - 1;
++
++ /*
++ * Special case for ibm,configure-connector where the
++ * address can be 0
++ */
++ if (!strcmp(f->name, "ibm,configure-connector") &&
++ base == 0)
++ return false;
++
++ if (!in_rmo_buf(base, end))
++ goto err;
++ }
++
++ return false;
++ }
++
++err:
++ pr_err_ratelimited("sys_rtas: RTAS call blocked - exploit attempt?\n");
++ pr_err_ratelimited("sys_rtas: token=0x%x, nargs=%d (called by %s)\n",
++ token, nargs, current->comm);
++ return true;
++}
++
++#else
++
++static bool block_rtas_call(int token, int nargs,
++ struct rtas_args *args)
++{
++ return false;
++}
++
++#endif /* CONFIG_PPC_RTAS_FILTER */
++
+ /* We assume to be passed big endian arguments */
+ SYSCALL_DEFINE1(rtas, struct rtas_args __user *, uargs)
+ {
+@@ -1029,6 +1170,9 @@ SYSCALL_DEFINE1(rtas, struct rtas_args __user *, uargs)
+ args.rets = &args.args[nargs];
+ memset(args.rets, 0, nret * sizeof(rtas_arg_t));
+
++ if (block_rtas_call(token, nargs, &args))
++ return -EINVAL;
++
+ /* Need to handle ibm,suspend_me call specially */
+ if (token == ibm_suspend_me_token) {
+
+@@ -1090,6 +1234,9 @@ void __init rtas_initialize(void)
+ unsigned long rtas_region = RTAS_INSTANTIATE_MAX;
+ u32 base, size, entry;
+ int no_base, no_size, no_entry;
++#ifdef CONFIG_PPC_RTAS_FILTER
++ int i;
++#endif
+
+ /* Get RTAS dev node and fill up our "rtas" structure with infos
+ * about it.
+@@ -1129,6 +1276,12 @@ void __init rtas_initialize(void)
+ #ifdef CONFIG_RTAS_ERROR_LOGGING
+ rtas_last_error_token = rtas_token("rtas-last-error");
+ #endif
++
++#ifdef CONFIG_PPC_RTAS_FILTER
++ for (i = 0; i < ARRAY_SIZE(rtas_filters); i++) {
++ rtas_filters[i].token = rtas_token(rtas_filters[i].name);
++ }
++#endif
+ }
+
+ int __init early_init_dt_scan_rtas(unsigned long node,
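The filter machinery above boils down to an interval check against the RMO window: every buffer argument must lie wholly inside it. A standalone userspace sketch of the same in_rmo_buf() logic, with made-up values for rtas_rmo_buf and RTAS_RMOBUF_MAX (both are kernel-internal), shows how the base <= end clause also rejects a size that wraps past 2^32:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-ins for the kernel's RMO window; values are illustrative only. */
    static const uint32_t rtas_rmo_buf = 0x10000;
    static const uint32_t RTAS_RMOBUF_MAX = 0x8000;

    static bool in_rmo_buf(uint32_t base, uint32_t end)
    {
        return base >= rtas_rmo_buf &&
               base < (rtas_rmo_buf + RTAS_RMOBUF_MAX) &&
               base <= end &&   /* false if base + size - 1 wrapped around */
               end >= rtas_rmo_buf &&
               end < (rtas_rmo_buf + RTAS_RMOBUF_MAX);
    }

    int main(void)
    {
        printf("%d\n", in_rmo_buf(0x10100, 0x10100 + 4096 - 1)); /* 1: inside  */
        printf("%d\n", in_rmo_buf(0x20000, 0x20fff));            /* 0: outside */
        printf("%d\n", in_rmo_buf(0x17000, 0x15fff));            /* 0: wrapped */
        return 0;
    }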
+diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
+index 46b4ebc33db77..5dea98fa2f938 100644
+--- a/arch/powerpc/kernel/sysfs.c
++++ b/arch/powerpc/kernel/sysfs.c
+@@ -32,29 +32,27 @@
+
+ static DEFINE_PER_CPU(struct cpu, cpu_devices);
+
+-/*
+- * SMT snooze delay stuff, 64-bit only for now
+- */
+-
+ #ifdef CONFIG_PPC64
+
+-/* Time in microseconds we delay before sleeping in the idle loop */
+-static DEFINE_PER_CPU(long, smt_snooze_delay) = { 100 };
++/*
++ * Snooze delay has not been hooked up since 3fa8cad82b94 ("powerpc/pseries/cpuidle:
++ * smt-snooze-delay cleanup.") and has been broken even longer. As was foretold in
++ * 2014:
++ *
++ * "ppc64_util currently utilises it. Once we fix ppc64_util, propose to clean
++ * up the kernel code."
++ *
++ * powerpc-utils stopped using it as of 1.3.8. At some point in the future this
++ * code should be removed.
++ */
+
+ static ssize_t store_smt_snooze_delay(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf,
+ size_t count)
+ {
+- struct cpu *cpu = container_of(dev, struct cpu, dev);
+- ssize_t ret;
+- long snooze;
+-
+- ret = sscanf(buf, "%ld", &snooze);
+- if (ret != 1)
+- return -EINVAL;
+-
+- per_cpu(smt_snooze_delay, cpu->dev.id) = snooze;
++ pr_warn_once("%s (%d) stored to unsupported smt_snooze_delay, which has no effect.\n",
++ current->comm, current->pid);
+ return count;
+ }
+
+@@ -62,9 +60,9 @@ static ssize_t show_smt_snooze_delay(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- struct cpu *cpu = container_of(dev, struct cpu, dev);
+-
+- return sprintf(buf, "%ld\n", per_cpu(smt_snooze_delay, cpu->dev.id));
++ pr_warn_once("%s (%d) read from unsupported smt_snooze_delay\n",
++ current->comm, current->pid);
++ return sprintf(buf, "100\n");
+ }
+
+ static DEVICE_ATTR(smt_snooze_delay, 0644, show_smt_snooze_delay,
+@@ -72,16 +70,10 @@ static DEVICE_ATTR(smt_snooze_delay, 0644, show_smt_snooze_delay,
+
+ static int __init setup_smt_snooze_delay(char *str)
+ {
+- unsigned int cpu;
+- long snooze;
+-
+ if (!cpu_has_feature(CPU_FTR_SMT))
+ return 1;
+
+- snooze = simple_strtol(str, NULL, 10);
+- for_each_possible_cpu(cpu)
+- per_cpu(smt_snooze_delay, cpu) = snooze;
+-
++ pr_warn("smt-snooze-delay command line option has no effect\n");
+ return 1;
+ }
+ __setup("smt-snooze-delay=", setup_smt_snooze_delay);
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index d1ebe152f2107..8bcbf632e95a5 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -889,7 +889,7 @@ static void p9_hmi_special_emu(struct pt_regs *regs)
+ {
+ unsigned int ra, rb, t, i, sel, instr, rc;
+ const void __user *addr;
+- u8 vbuf[16], *vdst;
++ u8 vbuf[16] __aligned(16), *vdst;
+ unsigned long ea, msr, msr_mask;
+ bool swap;
+
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 4ba06a2a306cf..e2b476d76506a 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3530,6 +3530,13 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit,
+ */
+ asm volatile("eieio; tlbsync; ptesync");
+
++ /*
++ * cp_abort is required if the processor supports local copy-paste
++ * to clear the copy buffer that was under control of the guest.
++ */
++ if (cpu_has_feature(CPU_FTR_ARCH_31))
++ asm volatile(PPC_CP_ABORT);
++
+ mtspr(SPRN_LPID, vcpu->kvm->arch.host_lpid); /* restore host LPID */
+ isync();
+
+@@ -5250,6 +5257,12 @@ static long kvm_arch_vm_ioctl_hv(struct file *filp,
+ case KVM_PPC_ALLOCATE_HTAB: {
+ u32 htab_order;
+
++ /* If we're a nested hypervisor, we currently only support radix */
++ if (kvmhv_on_pseries()) {
++ r = -EOPNOTSUPP;
++ break;
++ }
++
+ r = -EFAULT;
+ if (get_user(htab_order, (u32 __user *)argp))
+ break;
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 799d6d0f4eade..cd9995ee84419 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -1830,6 +1830,14 @@ END_FTR_SECTION_IFSET(CPU_FTR_P9_RADIX_PREFETCH_BUG)
+ 2:
+ #endif /* CONFIG_PPC_RADIX_MMU */
+
++ /*
++ * cp_abort is required if the processor supports local copy-paste
++ * to clear the copy buffer that was under control of the guest.
++ */
++BEGIN_FTR_SECTION
++ PPC_CP_ABORT
++END_FTR_SECTION_IFSET(CPU_FTR_ARCH_31)
++
+ /*
+ * POWER7/POWER8 guest -> host partition switch code.
+ * We don't have to lock against tlbies but we do
+diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
+index 26292544630fb..e7ae2a2c45450 100644
+--- a/arch/powerpc/mm/hugetlbpage.c
++++ b/arch/powerpc/mm/hugetlbpage.c
+@@ -330,10 +330,24 @@ static void free_hugepd_range(struct mmu_gather *tlb, hugepd_t *hpdp, int pdshif
+ get_hugepd_cache_index(pdshift - shift));
+ }
+
+-static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr)
++static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
++ unsigned long addr, unsigned long end,
++ unsigned long floor, unsigned long ceiling)
+ {
++ unsigned long start = addr;
+ pgtable_t token = pmd_pgtable(*pmd);
+
++ start &= PMD_MASK;
++ if (start < floor)
++ return;
++ if (ceiling) {
++ ceiling &= PMD_MASK;
++ if (!ceiling)
++ return;
++ }
++ if (end - 1 > ceiling - 1)
++ return;
++
+ pmd_clear(pmd);
+ pte_free_tlb(tlb, token, addr);
+ mm_dec_nr_ptes(tlb->mm);
+@@ -363,7 +377,7 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
+ */
+ WARN_ON(!IS_ENABLED(CONFIG_PPC_8xx));
+
+- hugetlb_free_pte_range(tlb, pmd, addr);
++ hugetlb_free_pte_range(tlb, pmd, addr, end, floor, ceiling);
+
+ continue;
+ }
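The floor/ceiling guard grafted onto hugetlb_free_pte_range() follows the same rule as the generic free_pmd_range(): the PTE page may only be freed when the whole PMD-aligned span falls inside the region being torn down. A minimal sketch of just that guard, assuming an illustrative 2 MiB PMD size:

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative 2 MiB PMD span; the real size depends on the MMU config. */
    #define PMD_SIZE (1UL << 21)
    #define PMD_MASK (~(PMD_SIZE - 1))

    /* True when the unmap range plus the floor/ceiling limits permit
     * freeing the page-table page covering addr's PMD entry. */
    static bool can_free_pte_page(unsigned long addr, unsigned long end,
                                  unsigned long floor, unsigned long ceiling)
    {
        unsigned long start = addr & PMD_MASK;

        if (start < floor)
            return false;
        if (ceiling) {
            ceiling &= PMD_MASK;
            if (!ceiling)
                return false;
        }
        if (end - 1 > ceiling - 1)  /* ceiling == 0 means "no limit" */
            return false;
        return true;
    }

    int main(void)
    {
        /* The unmap covers the full PMD: the page-table page can go. */
        printf("%d\n", can_free_pte_page(0x40000000, 0x40200000,
                                         0x40000000, 0x40200000)); /* 1 */
        /* The floor cuts into the PMD: the page must be kept. */
        printf("%d\n", can_free_pte_page(0x40100000, 0x40200000,
                                         0x40100000, 0x40200000)); /* 0 */
        return 0;
    }

Without such a guard the PTE page could be freed while a neighbouring mapping still needs it, which is the failure mode the extra end/floor/ceiling parameters guard against.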
+diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
+index 8459056cce671..2ae42c2a5cf04 100644
+--- a/arch/powerpc/mm/init_64.c
++++ b/arch/powerpc/mm/init_64.c
+@@ -162,16 +162,16 @@ static __meminit struct vmemmap_backing * vmemmap_list_alloc(int node)
+ return next++;
+ }
+
+-static __meminit void vmemmap_list_populate(unsigned long phys,
+- unsigned long start,
+- int node)
++static __meminit int vmemmap_list_populate(unsigned long phys,
++ unsigned long start,
++ int node)
+ {
+ struct vmemmap_backing *vmem_back;
+
+ vmem_back = vmemmap_list_alloc(node);
+ if (unlikely(!vmem_back)) {
+- WARN_ON(1);
+- return;
++ pr_debug("vmemap list allocation failed\n");
++ return -ENOMEM;
+ }
+
+ vmem_back->phys = phys;
+@@ -179,6 +179,7 @@ static __meminit void vmemmap_list_populate(unsigned long phys,
+ vmem_back->list = vmemmap_list;
+
+ vmemmap_list = vmem_back;
++ return 0;
+ }
+
+ static bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long start,
+@@ -199,6 +200,7 @@ static bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long star
+ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
+ struct vmem_altmap *altmap)
+ {
++ bool altmap_alloc;
+ unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
+
+ /* Align to the page size of the linear mapping. */
+@@ -228,13 +230,32 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
+ p = vmemmap_alloc_block_buf(page_size, node, altmap);
+ if (!p)
+ pr_debug("altmap block allocation failed, falling back to system memory");
++ else
++ altmap_alloc = true;
+ }
+- if (!p)
++ if (!p) {
+ p = vmemmap_alloc_block_buf(page_size, node, NULL);
++ altmap_alloc = false;
++ }
+ if (!p)
+ return -ENOMEM;
+
+- vmemmap_list_populate(__pa(p), start, node);
++ if (vmemmap_list_populate(__pa(p), start, node)) {
++ /*
++ * If we don't populate vmemap list, we don't have
++ * the ability to free the allocated vmemmap
++ * pages in section_deactivate. Hence free them
++ * here.
++ */
++ int nr_pfns = page_size >> PAGE_SHIFT;
++ unsigned long page_order = get_order(page_size);
++
++ if (altmap_alloc)
++ vmem_altmap_free(altmap, nr_pfns);
++ else
++ free_pages((unsigned long)p, page_order);
++ return -ENOMEM;
++ }
+
+ pr_debug(" * %016lx..%016lx allocated at %p\n",
+ start, start + page_size, p);
+diff --git a/arch/powerpc/platforms/powermac/sleep.S b/arch/powerpc/platforms/powermac/sleep.S
+index f9a680fdd9c4b..51bfdfe85058c 100644
+--- a/arch/powerpc/platforms/powermac/sleep.S
++++ b/arch/powerpc/platforms/powermac/sleep.S
+@@ -294,14 +294,7 @@ grackle_wake_up:
+ * we do any r1 memory access as we are not sure they
+ * are in a sane state above the first 256Mb region
+ */
+- li r0,16 /* load up segment register values */
+- mtctr r0 /* for context 0 */
+- lis r3,0x2000 /* Ku = 1, VSID = 0 */
+- li r4,0
+-3: mtsrin r3,r4
+- addi r3,r3,0x111 /* increment VSID */
+- addis r4,r4,0x1000 /* address of next segment */
+- bdnz 3b
++ bl load_segment_registers
+ sync
+ isync
+
+diff --git a/arch/powerpc/platforms/powernv/opal-elog.c b/arch/powerpc/platforms/powernv/opal-elog.c
+index 62ef7ad995da3..5e33b1fc67c2b 100644
+--- a/arch/powerpc/platforms/powernv/opal-elog.c
++++ b/arch/powerpc/platforms/powernv/opal-elog.c
+@@ -179,14 +179,14 @@ static ssize_t raw_attr_read(struct file *filep, struct kobject *kobj,
+ return count;
+ }
+
+-static struct elog_obj *create_elog_obj(uint64_t id, size_t size, uint64_t type)
++static void create_elog_obj(uint64_t id, size_t size, uint64_t type)
+ {
+ struct elog_obj *elog;
+ int rc;
+
+ elog = kzalloc(sizeof(*elog), GFP_KERNEL);
+ if (!elog)
+- return NULL;
++ return;
+
+ elog->kobj.kset = elog_kset;
+
+@@ -219,18 +219,37 @@ static struct elog_obj *create_elog_obj(uint64_t id, size_t size, uint64_t type)
+ rc = kobject_add(&elog->kobj, NULL, "0x%llx", id);
+ if (rc) {
+ kobject_put(&elog->kobj);
+- return NULL;
++ return;
+ }
+
++ /*
++ * As soon as the sysfs file for this elog is created/activated there is
++ * a chance the opal_errd daemon (or any userspace) might read and
++ * acknowledge the elog before kobject_uevent() is called. If that
++ * happens then there is a potential race between
++ * elog_ack_store->kobject_put() and kobject_uevent() which leads to a
++ * use-after-free of a kernfs object resulting in a kernel crash.
++ *
++ * To avoid that, we need to take a reference on behalf of the bin file,
++ * so that our reference remains valid while we call kobject_uevent().
++ * We then drop our reference before exiting the function, leaving the
++ * bin file to drop the last reference (if it hasn't already).
++ */
++
++ /* Take a reference for the bin file */
++ kobject_get(&elog->kobj);
+ rc = sysfs_create_bin_file(&elog->kobj, &elog->raw_attr);
+- if (rc) {
++ if (rc == 0) {
++ kobject_uevent(&elog->kobj, KOBJ_ADD);
++ } else {
++ /* Drop the reference taken for the bin file */
+ kobject_put(&elog->kobj);
+- return NULL;
+ }
+
+- kobject_uevent(&elog->kobj, KOBJ_ADD);
++ /* Drop our reference */
++ kobject_put(&elog->kobj);
+
+- return elog;
++ return;
+ }
+
+ static irqreturn_t elog_event(int irq, void *data)
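The long comment in this hunk describes a classic kobject lifetime race, and the fix is the standard extra-reference pattern. A toy demo with a plain integer standing in for the kref (not the kernel API) shows why the extra get keeps kobject_uevent() safe even if userspace acknowledges the elog and drops the bin file's reference first:

    #include <stdio.h>

    /* Toy refcount standing in for the elog kobject's kref. */
    static int refs = 1;    /* reference held by the creator */

    static void get(void) { refs++; }
    static void put(void) { if (--refs == 0) puts("freed"); }

    int main(void)
    {
        get();  /* reference taken on behalf of the bin file */
        /* sysfs_create_bin_file() succeeds; userspace may ack the elog
         * at any moment and drop the bin file's reference: */
        put();  /* early ack: refs drops to 1, the object stays alive */
        /* kobject_uevent() runs safely here, pinned by our reference */
        put();  /* creator drops its reference last: "freed" prints   */
        return 0;
    }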
+diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
+index b2ba3e95bda73..bbf361f23ae86 100644
+--- a/arch/powerpc/platforms/powernv/smp.c
++++ b/arch/powerpc/platforms/powernv/smp.c
+@@ -43,7 +43,7 @@
+ #include <asm/udbg.h>
+ #define DBG(fmt...) udbg_printf(fmt)
+ #else
+-#define DBG(fmt...)
++#define DBG(fmt...) do { } while (0)
+ #endif
+
+ static void pnv_smp_setup_cpu(int cpu)
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index 0ea976d1cac47..843db91e39aad 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -277,7 +277,7 @@ static int dlpar_offline_lmb(struct drmem_lmb *lmb)
+ return dlpar_change_lmb_state(lmb, false);
+ }
+
+-static int pseries_remove_memblock(unsigned long base, unsigned int memblock_size)
++static int pseries_remove_memblock(unsigned long base, unsigned long memblock_size)
+ {
+ unsigned long block_sz, start_pfn;
+ int sections_per_block;
+@@ -308,10 +308,11 @@ out:
+
+ static int pseries_remove_mem_node(struct device_node *np)
+ {
+- const __be32 *regs;
++ const __be32 *prop;
+ unsigned long base;
+- unsigned int lmb_size;
++ unsigned long lmb_size;
+ int ret = -EINVAL;
++ int addr_cells, size_cells;
+
+ /*
+ * Check to see if we are actually removing memory
+@@ -322,12 +323,19 @@ static int pseries_remove_mem_node(struct device_node *np)
+ /*
+ * Find the base address and size of the memblock
+ */
+- regs = of_get_property(np, "reg", NULL);
+- if (!regs)
++ prop = of_get_property(np, "reg", NULL);
++ if (!prop)
+ return ret;
+
+- base = be64_to_cpu(*(unsigned long *)regs);
+- lmb_size = be32_to_cpu(regs[3]);
++ addr_cells = of_n_addr_cells(np);
++ size_cells = of_n_size_cells(np);
++
++ /*
++ * "reg" property represents (addr,size) tuple.
++ */
++ base = of_read_number(prop, addr_cells);
++ prop += addr_cells;
++ lmb_size = of_read_number(prop, size_cells);
+
+ pseries_remove_memblock(base, lmb_size);
+ return 0;
+@@ -564,7 +572,7 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
+
+ #else
+ static inline int pseries_remove_memblock(unsigned long base,
+- unsigned int memblock_size)
++ unsigned long memblock_size)
+ {
+ return -EOPNOTSUPP;
+ }
+@@ -886,10 +894,11 @@ int dlpar_memory(struct pseries_hp_errorlog *hp_elog)
+
+ static int pseries_add_mem_node(struct device_node *np)
+ {
+- const __be32 *regs;
++ const __be32 *prop;
+ unsigned long base;
+- unsigned int lmb_size;
++ unsigned long lmb_size;
+ int ret = -EINVAL;
++ int addr_cells, size_cells;
+
+ /*
+ * Check to see if we are actually adding memory
+@@ -900,12 +909,18 @@ static int pseries_add_mem_node(struct device_node *np)
+ /*
+ * Find the base and size of the memblock
+ */
+- regs = of_get_property(np, "reg", NULL);
+- if (!regs)
++ prop = of_get_property(np, "reg", NULL);
++ if (!prop)
+ return ret;
+
+- base = be64_to_cpu(*(unsigned long *)regs);
+- lmb_size = be32_to_cpu(regs[3]);
++ addr_cells = of_n_addr_cells(np);
++ size_cells = of_n_size_cells(np);
++ /*
++ * "reg" property represents (addr,size) tuple.
++ */
++ base = of_read_number(prop, addr_cells);
++ prop += addr_cells;
++ lmb_size = of_read_number(prop, size_cells);
+
+ /*
+ * Update memory region to represent the memory add
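Rather than hard-coding a 64-bit address followed by a 32-bit size at regs[3], the fixed code derives the cell counts from the device tree. A userspace sketch of the of_read_number()-style parsing, assuming a 2/2 address/size cell layout (the real counts come from the parent node's #address-cells and #size-cells):

    #include <stdint.h>
    #include <stdio.h>

    /* Fold 'cells' big-endian 32-bit cells into one value, like of_read_number(). */
    static uint64_t of_read_number(const uint32_t *cell, int cells)
    {
        uint64_t r = 0;

        while (cells--)
            r = (r << 32) | __builtin_bswap32(*cell++);
        return r;
    }

    int main(void)
    {
        /* "reg" = <0x0 0x40000000 0x0 0x10000000>: a 1 GiB base and a
         * 256 MiB LMB, stored as big-endian cells (bswap builds the BE
         * image on a little-endian host). */
        const uint32_t reg[] = {
            __builtin_bswap32(0x0), __builtin_bswap32(0x40000000),
            __builtin_bswap32(0x0), __builtin_bswap32(0x10000000),
        };
        const uint32_t *prop = reg;
        int addr_cells = 2, size_cells = 2;  /* assumed; queried from the DT */

        uint64_t base = of_read_number(prop, addr_cells);
        prop += addr_cells;
        uint64_t lmb_size = of_read_number(prop, size_cells);

        printf("base=0x%jx size=0x%jx\n", (uintmax_t)base, (uintmax_t)lmb_size);
        return 0;
    }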
+diff --git a/arch/riscv/include/uapi/asm/auxvec.h b/arch/riscv/include/uapi/asm/auxvec.h
+index d86cb17bbabe6..22e0ae8884061 100644
+--- a/arch/riscv/include/uapi/asm/auxvec.h
++++ b/arch/riscv/include/uapi/asm/auxvec.h
+@@ -10,4 +10,7 @@
+ /* vDSO location */
+ #define AT_SYSINFO_EHDR 33
+
++/* entries in ARCH_DLINFO */
++#define AT_VECTOR_SIZE_ARCH 1
++
+ #endif /* _UAPI_ASM_RISCV_AUXVEC_H */
+diff --git a/arch/s390/boot/head.S b/arch/s390/boot/head.S
+index dae10961d0724..1a2c2b1ed9649 100644
+--- a/arch/s390/boot/head.S
++++ b/arch/s390/boot/head.S
+@@ -360,22 +360,23 @@ ENTRY(startup_kdump)
+ # the save area and does disabled wait with a faulty address.
+ #
+ ENTRY(startup_pgm_check_handler)
+- stmg %r0,%r15,__LC_SAVE_AREA_SYNC
+- la %r1,4095
+- stctg %c0,%c15,__LC_CREGS_SAVE_AREA-4095(%r1)
+- mvc __LC_GPREGS_SAVE_AREA-4095(128,%r1),__LC_SAVE_AREA_SYNC
+- mvc __LC_PSW_SAVE_AREA-4095(16,%r1),__LC_PGM_OLD_PSW
++ stmg %r8,%r15,__LC_SAVE_AREA_SYNC
++ la %r8,4095
++ stctg %c0,%c15,__LC_CREGS_SAVE_AREA-4095(%r8)
++ stmg %r0,%r7,__LC_GPREGS_SAVE_AREA-4095(%r8)
++ mvc __LC_GPREGS_SAVE_AREA-4095+64(64,%r8),__LC_SAVE_AREA_SYNC
++ mvc __LC_PSW_SAVE_AREA-4095(16,%r8),__LC_PGM_OLD_PSW
+ mvc __LC_RETURN_PSW(16),__LC_PGM_OLD_PSW
+ ni __LC_RETURN_PSW,0xfc # remove IO and EX bits
+ ni __LC_RETURN_PSW+1,0xfb # remove MCHK bit
+ oi __LC_RETURN_PSW+1,0x2 # set wait state bit
+- larl %r2,.Lold_psw_disabled_wait
+- stg %r2,__LC_PGM_NEW_PSW+8
+- l %r15,.Ldump_info_stack-.Lold_psw_disabled_wait(%r2)
++ larl %r9,.Lold_psw_disabled_wait
++ stg %r9,__LC_PGM_NEW_PSW+8
++ l %r15,.Ldump_info_stack-.Lold_psw_disabled_wait(%r9)
+ brasl %r14,print_pgm_check_info
+ .Lold_psw_disabled_wait:
+- la %r1,4095
+- lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1)
++ la %r8,4095
++ lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r8)
+ lpswe __LC_RETURN_PSW # disabled wait
+ .Ldump_info_stack:
+ .long 0x5000 + PAGE_SIZE - STACK_FRAME_OVERHEAD
+diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
+index 513e59d08a55c..270f5e9d5a224 100644
+--- a/arch/s390/kernel/time.c
++++ b/arch/s390/kernel/time.c
+@@ -345,8 +345,9 @@ static DEFINE_PER_CPU(atomic_t, clock_sync_word);
+ static DEFINE_MUTEX(clock_sync_mutex);
+ static unsigned long clock_sync_flags;
+
+-#define CLOCK_SYNC_HAS_STP 0
+-#define CLOCK_SYNC_STP 1
++#define CLOCK_SYNC_HAS_STP 0
++#define CLOCK_SYNC_STP 1
++#define CLOCK_SYNC_STPINFO_VALID 2
+
+ /*
+ * The get_clock function for the physical clock. It will get the current
+@@ -583,6 +584,22 @@ void stp_queue_work(void)
+ queue_work(time_sync_wq, &stp_work);
+ }
+
++static int __store_stpinfo(void)
++{
++ int rc = chsc_sstpi(stp_page, &stp_info, sizeof(struct stp_sstpi));
++
++ if (rc)
++ clear_bit(CLOCK_SYNC_STPINFO_VALID, &clock_sync_flags);
++ else
++ set_bit(CLOCK_SYNC_STPINFO_VALID, &clock_sync_flags);
++ return rc;
++}
++
++static int stpinfo_valid(void)
++{
++ return stp_online && test_bit(CLOCK_SYNC_STPINFO_VALID, &clock_sync_flags);
++}
++
+ static int stp_sync_clock(void *data)
+ {
+ struct clock_sync_data *sync = data;
+@@ -604,8 +621,7 @@ static int stp_sync_clock(void *data)
+ if (rc == 0) {
+ sync->clock_delta = clock_delta;
+ clock_sync_global(clock_delta);
+- rc = chsc_sstpi(stp_page, &stp_info,
+- sizeof(struct stp_sstpi));
++ rc = __store_stpinfo();
+ if (rc == 0 && stp_info.tmd != 2)
+ rc = -EAGAIN;
+ }
+@@ -650,7 +666,7 @@ static void stp_work_fn(struct work_struct *work)
+ if (rc)
+ goto out_unlock;
+
+- rc = chsc_sstpi(stp_page, &stp_info, sizeof(struct stp_sstpi));
++ rc = __store_stpinfo();
+ if (rc || stp_info.c == 0)
+ goto out_unlock;
+
+@@ -687,10 +703,14 @@ static ssize_t ctn_id_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- if (!stp_online)
+- return -ENODATA;
+- return sprintf(buf, "%016llx\n",
+- *(unsigned long long *) stp_info.ctnid);
++ ssize_t ret = -ENODATA;
++
++ mutex_lock(&stp_work_mutex);
++ if (stpinfo_valid())
++ ret = sprintf(buf, "%016llx\n",
++ *(unsigned long long *) stp_info.ctnid);
++ mutex_unlock(&stp_work_mutex);
++ return ret;
+ }
+
+ static DEVICE_ATTR_RO(ctn_id);
+@@ -699,9 +719,13 @@ static ssize_t ctn_type_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- if (!stp_online)
+- return -ENODATA;
+- return sprintf(buf, "%i\n", stp_info.ctn);
++ ssize_t ret = -ENODATA;
++
++ mutex_lock(&stp_work_mutex);
++ if (stpinfo_valid())
++ ret = sprintf(buf, "%i\n", stp_info.ctn);
++ mutex_unlock(&stp_work_mutex);
++ return ret;
+ }
+
+ static DEVICE_ATTR_RO(ctn_type);
+@@ -710,9 +734,13 @@ static ssize_t dst_offset_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- if (!stp_online || !(stp_info.vbits & 0x2000))
+- return -ENODATA;
+- return sprintf(buf, "%i\n", (int)(s16) stp_info.dsto);
++ ssize_t ret = -ENODATA;
++
++ mutex_lock(&stp_work_mutex);
++ if (stpinfo_valid() && (stp_info.vbits & 0x2000))
++ ret = sprintf(buf, "%i\n", (int)(s16) stp_info.dsto);
++ mutex_unlock(&stp_work_mutex);
++ return ret;
+ }
+
+ static DEVICE_ATTR_RO(dst_offset);
+@@ -721,9 +749,13 @@ static ssize_t leap_seconds_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- if (!stp_online || !(stp_info.vbits & 0x8000))
+- return -ENODATA;
+- return sprintf(buf, "%i\n", (int)(s16) stp_info.leaps);
++ ssize_t ret = -ENODATA;
++
++ mutex_lock(&stp_work_mutex);
++ if (stpinfo_valid() && (stp_info.vbits & 0x8000))
++ ret = sprintf(buf, "%i\n", (int)(s16) stp_info.leaps);
++ mutex_unlock(&stp_work_mutex);
++ return ret;
+ }
+
+ static DEVICE_ATTR_RO(leap_seconds);
+@@ -732,9 +764,13 @@ static ssize_t stratum_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- if (!stp_online)
+- return -ENODATA;
+- return sprintf(buf, "%i\n", (int)(s16) stp_info.stratum);
++ ssize_t ret = -ENODATA;
++
++ mutex_lock(&stp_work_mutex);
++ if (stpinfo_valid())
++ ret = sprintf(buf, "%i\n", (int)(s16) stp_info.stratum);
++ mutex_unlock(&stp_work_mutex);
++ return ret;
+ }
+
+ static DEVICE_ATTR_RO(stratum);
+@@ -743,9 +779,13 @@ static ssize_t time_offset_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- if (!stp_online || !(stp_info.vbits & 0x0800))
+- return -ENODATA;
+- return sprintf(buf, "%i\n", (int) stp_info.tto);
++ ssize_t ret = -ENODATA;
++
++ mutex_lock(&stp_work_mutex);
++ if (stpinfo_valid() && (stp_info.vbits & 0x0800))
++ ret = sprintf(buf, "%i\n", (int) stp_info.tto);
++ mutex_unlock(&stp_work_mutex);
++ return ret;
+ }
+
+ static DEVICE_ATTR_RO(time_offset);
+@@ -754,9 +794,13 @@ static ssize_t time_zone_offset_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- if (!stp_online || !(stp_info.vbits & 0x4000))
+- return -ENODATA;
+- return sprintf(buf, "%i\n", (int)(s16) stp_info.tzo);
++ ssize_t ret = -ENODATA;
++
++ mutex_lock(&stp_work_mutex);
++ if (stpinfo_valid() && (stp_info.vbits & 0x4000))
++ ret = sprintf(buf, "%i\n", (int)(s16) stp_info.tzo);
++ mutex_unlock(&stp_work_mutex);
++ return ret;
+ }
+
+ static DEVICE_ATTR_RO(time_zone_offset);
+@@ -765,9 +809,13 @@ static ssize_t timing_mode_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- if (!stp_online)
+- return -ENODATA;
+- return sprintf(buf, "%i\n", stp_info.tmd);
++ ssize_t ret = -ENODATA;
++
++ mutex_lock(&stp_work_mutex);
++ if (stpinfo_valid())
++ ret = sprintf(buf, "%i\n", stp_info.tmd);
++ mutex_unlock(&stp_work_mutex);
++ return ret;
+ }
+
+ static DEVICE_ATTR_RO(timing_mode);
+@@ -776,9 +824,13 @@ static ssize_t timing_state_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- if (!stp_online)
+- return -ENODATA;
+- return sprintf(buf, "%i\n", stp_info.tst);
++ ssize_t ret = -ENODATA;
++
++ mutex_lock(&stp_work_mutex);
++ if (stpinfo_valid())
++ ret = sprintf(buf, "%i\n", stp_info.tst);
++ mutex_unlock(&stp_work_mutex);
++ return ret;
+ }
+
+ static DEVICE_ATTR_RO(timing_state);
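Every show handler now takes stp_work_mutex and consults a validity bit instead of reading stp_info while it may be stale or mid-update. A compact userspace sketch of the pattern, with a pthread mutex standing in for the kernel mutex and -61 standing in for -ENODATA:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-ins for stp_work_mutex and CLOCK_SYNC_STPINFO_VALID. */
    static pthread_mutex_t stp_work_mutex = PTHREAD_MUTEX_INITIALIZER;
    static bool stpinfo_is_valid;   /* set only after a good chsc_sstpi() */
    static int stp_stratum;         /* one cached field of stp_info */

    /* sysfs-style read: lock, then trust the cache only if marked valid. */
    static int stratum_show(char *buf, size_t len)
    {
        int ret = -61;  /* -ENODATA on Linux */

        pthread_mutex_lock(&stp_work_mutex);
        if (stpinfo_is_valid)
            ret = snprintf(buf, len, "%i\n", stp_stratum);
        pthread_mutex_unlock(&stp_work_mutex);
        return ret;
    }

    int main(void)
    {
        char buf[16];

        printf("%d\n", stratum_show(buf, sizeof(buf))); /* -61: no data yet */
        stpinfo_is_valid = true;
        stp_stratum = 1;
        if (stratum_show(buf, sizeof(buf)) > 0)
            printf("%s", buf);                          /* "1" */
        return 0;
    }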
+diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
+index e286e2badc8a4..e38d8bf454e86 100644
+--- a/arch/sparc/kernel/smp_64.c
++++ b/arch/sparc/kernel/smp_64.c
+@@ -1039,38 +1039,9 @@ void smp_fetch_global_pmu(void)
+ * are flush_tlb_*() routines, and these run after flush_cache_*()
+ * which performs the flushw.
+ *
+- * The SMP TLB coherency scheme we use works as follows:
+- *
+- * 1) mm->cpu_vm_mask is a bit mask of which cpus an address
+- * space has (potentially) executed on, this is the heuristic
+- * we use to avoid doing cross calls.
+- *
+- * Also, for flushing from kswapd and also for clones, we
+- * use cpu_vm_mask as the list of cpus to make run the TLB.
+- *
+- * 2) TLB context numbers are shared globally across all processors
+- * in the system, this allows us to play several games to avoid
+- * cross calls.
+- *
+- * One invariant is that when a cpu switches to a process, and
+- * that processes tsk->active_mm->cpu_vm_mask does not have the
+- * current cpu's bit set, that tlb context is flushed locally.
+- *
+- * If the address space is non-shared (ie. mm->count == 1) we avoid
+- * cross calls when we want to flush the currently running process's
+- * tlb state. This is done by clearing all cpu bits except the current
+- * processor's in current->mm->cpu_vm_mask and performing the
+- * flush locally only. This will force any subsequent cpus which run
+- * this task to flush the context from the local tlb if the process
+- * migrates to another cpu (again).
+- *
+- * 3) For shared address spaces (threads) and swapping we bite the
+- * bullet for most cases and perform the cross call (but only to
+- * the cpus listed in cpu_vm_mask).
+- *
+- * The performance gain from "optimizing" away the cross call for threads is
+- * questionable (in theory the big win for threads is the massive sharing of
+- * address space state across processors).
++ * mm->cpu_vm_mask is a bit mask of which cpus an address
++ * space has (potentially) executed on, this is the heuristic
++ * we use to limit cross calls.
+ */
+
+ /* This currently is only used by the hugetlb arch pre-fault
+@@ -1080,18 +1051,13 @@ void smp_fetch_global_pmu(void)
+ void smp_flush_tlb_mm(struct mm_struct *mm)
+ {
+ u32 ctx = CTX_HWBITS(mm->context);
+- int cpu = get_cpu();
+
+- if (atomic_read(&mm->mm_users) == 1) {
+- cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
+- goto local_flush_and_out;
+- }
++ get_cpu();
+
+ smp_cross_call_masked(&xcall_flush_tlb_mm,
+ ctx, 0, 0,
+ mm_cpumask(mm));
+
+-local_flush_and_out:
+ __flush_tlb_mm(ctx, SECONDARY_CONTEXT);
+
+ put_cpu();
+@@ -1114,17 +1080,15 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
+ {
+ u32 ctx = CTX_HWBITS(mm->context);
+ struct tlb_pending_info info;
+- int cpu = get_cpu();
++
++ get_cpu();
+
+ info.ctx = ctx;
+ info.nr = nr;
+ info.vaddrs = vaddrs;
+
+- if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
+- cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
+- else
+- smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
+- &info, 1);
++ smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
++ &info, 1);
+
+ __flush_tlb_pending(ctx, nr, vaddrs);
+
+@@ -1134,14 +1098,13 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
+ void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr)
+ {
+ unsigned long context = CTX_HWBITS(mm->context);
+- int cpu = get_cpu();
+
+- if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
+- cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
+- else
+- smp_cross_call_masked(&xcall_flush_tlb_page,
+- context, vaddr, 0,
+- mm_cpumask(mm));
++ get_cpu();
++
++ smp_cross_call_masked(&xcall_flush_tlb_page,
++ context, vaddr, 0,
++ mm_cpumask(mm));
++
+ __flush_tlb_page(context, vaddr);
+
+ put_cpu();
+diff --git a/arch/um/kernel/sigio.c b/arch/um/kernel/sigio.c
+index 10c99e058fcae..d1cffc2a7f212 100644
+--- a/arch/um/kernel/sigio.c
++++ b/arch/um/kernel/sigio.c
+@@ -35,14 +35,14 @@ int write_sigio_irq(int fd)
+ }
+
+ /* These are called from os-Linux/sigio.c to protect its pollfds arrays. */
+-static DEFINE_SPINLOCK(sigio_spinlock);
++static DEFINE_MUTEX(sigio_mutex);
+
+ void sigio_lock(void)
+ {
+- spin_lock(&sigio_spinlock);
++ mutex_lock(&sigio_mutex);
+ }
+
+ void sigio_unlock(void)
+ {
+- spin_unlock(&sigio_spinlock);
++ mutex_unlock(&sigio_mutex);
+ }
+diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
+index dde7cb3724df3..9bd966ef7d19e 100644
+--- a/arch/x86/boot/compressed/kaslr.c
++++ b/arch/x86/boot/compressed/kaslr.c
+@@ -87,8 +87,11 @@ static unsigned long get_boot_seed(void)
+ static bool memmap_too_large;
+
+
+-/* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */
+-static unsigned long long mem_limit = ULLONG_MAX;
++/*
++ * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
++ * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
++ */
++static unsigned long long mem_limit;
+
+ /* Number of immovable memory regions */
+ static int num_immovable_mem;
+@@ -214,7 +217,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
+
+ if (start == 0) {
+ /* Store the specified memory limit if size > 0 */
+- if (size > 0)
++ if (size > 0 && size < mem_limit)
+ mem_limit = size;
+
+ continue;
+@@ -302,7 +305,8 @@ static void handle_mem_options(void)
+ if (mem_size == 0)
+ goto out;
+
+- mem_limit = mem_size;
++ if (mem_size < mem_limit)
++ mem_limit = mem_size;
+ } else if (!strcmp(param, "efi_fake_mem")) {
+ mem_avoid_memmap(PARSE_EFI, val);
+ }
+@@ -314,7 +318,9 @@ out:
+ }
+
+ /*
+- * In theory, KASLR can put the kernel anywhere in the range of [16M, 64T).
++ * In theory, KASLR can put the kernel anywhere in the range of [16M, MAXMEM)
++ * on 64-bit, and [16M, KERNEL_IMAGE_SIZE) on 32-bit.
++ *
+ * The mem_avoid array is used to store the ranges that need to be avoided
+ * when KASLR searches for an appropriate random address. We must avoid any
+ * regions that are unsafe to overlap with during decompression, and other
+@@ -614,10 +620,6 @@ static void __process_mem_region(struct mem_vector *entry,
+ unsigned long start_orig, end;
+ struct mem_vector cur_entry;
+
+- /* On 32-bit, ignore entries entirely above our maximum. */
+- if (IS_ENABLED(CONFIG_X86_32) && entry->start >= KERNEL_IMAGE_SIZE)
+- return;
+-
+ /* Ignore entries entirely below our minimum. */
+ if (entry->start + entry->size < minimum)
+ return;
+@@ -650,11 +652,6 @@ static void __process_mem_region(struct mem_vector *entry,
+ /* Reduce size by any delta from the original address. */
+ region.size -= region.start - start_orig;
+
+- /* On 32-bit, reduce region size to fit within max size. */
+- if (IS_ENABLED(CONFIG_X86_32) &&
+- region.start + region.size > KERNEL_IMAGE_SIZE)
+- region.size = KERNEL_IMAGE_SIZE - region.start;
+-
+ /* Return if region can't contain decompressed kernel */
+ if (region.size < image_size)
+ return;
+@@ -839,15 +836,16 @@ static void process_e820_entries(unsigned long minimum,
+ static unsigned long find_random_phys_addr(unsigned long minimum,
+ unsigned long image_size)
+ {
++ /* Bail out early if it's impossible to succeed. */
++ if (minimum + image_size > mem_limit)
++ return 0;
++
+ /* Check if we had too many memmaps. */
+ if (memmap_too_large) {
+ debug_putstr("Aborted memory entries scan (more than 4 memmap= args)!\n");
+ return 0;
+ }
+
+- /* Make sure minimum is aligned. */
+- minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
+-
+ if (process_efi_entries(minimum, image_size))
+ return slots_fetch_random();
+
+@@ -860,8 +858,6 @@ static unsigned long find_random_virt_addr(unsigned long minimum,
+ {
+ unsigned long slots, random_addr;
+
+- /* Make sure minimum is aligned. */
+- minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
+ /* Align image_size for easy slot calculations. */
+ image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
+
+@@ -908,6 +904,11 @@ void choose_random_location(unsigned long input,
+ /* Prepare to add new identity pagetables on demand. */
+ initialize_identity_maps();
+
++ if (IS_ENABLED(CONFIG_X86_32))
++ mem_limit = KERNEL_IMAGE_SIZE;
++ else
++ mem_limit = MAXMEM;
++
+ /* Record the various known unsafe memory ranges. */
+ mem_avoid_init(input, input_size, *output);
+
+@@ -917,6 +918,8 @@ void choose_random_location(unsigned long input,
+ * location:
+ */
+ min_addr = min(*output, 512UL << 20);
++ /* Make sure minimum is aligned. */
++ min_addr = ALIGN(min_addr, CONFIG_PHYSICAL_ALIGN);
+
+ /* Walk available memory entries to find a random address. */
+ random_addr = find_random_phys_addr(min_addr, output_size);
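With mem_limit now seeded to the architectural maximum and only ever lowered by mem= / memmap=, the new early bail in find_random_phys_addr() reduces to a plain range check. A sketch with an illustrative 512 MiB limit:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative limit; the real value is MAXMEM on 64-bit or
     * KERNEL_IMAGE_SIZE on 32-bit, possibly lowered by mem=/memmap=. */
    static uint64_t mem_limit = 512ULL << 20;

    static bool kaslr_can_possibly_fit(uint64_t minimum, uint64_t image_size)
    {
        /* Mirrors the bail-out: nothing above mem_limit is usable, so if
         * even the lowest allowed placement ends past it, give up early. */
        return minimum + image_size <= mem_limit;
    }

    int main(void)
    {
        printf("%d\n", kaslr_can_possibly_fit(16ULL << 20, 64ULL << 20));  /* 1 */
        printf("%d\n", kaslr_can_possibly_fit(500ULL << 20, 64ULL << 20)); /* 0 */
        return 0;
    }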
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index a023cbe21230a..39169885adfa8 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -335,11 +335,15 @@ static u64 get_ibs_op_count(u64 config)
+ {
+ u64 count = 0;
+
++ /*
++ * If the internal 27-bit counter rolled over, the count is MaxCnt
++ * and the lower 7 bits of CurCnt are randomized.
++ * Otherwise CurCnt has the full 27-bit current counter value.
++ */
+ if (config & IBS_OP_VAL)
+- count += (config & IBS_OP_MAX_CNT) << 4; /* cnt rolled over */
+-
+- if (ibs_caps & IBS_CAPS_RDWROPCNT)
+- count += (config & IBS_OP_CUR_CNT) >> 32;
++ count = (config & IBS_OP_MAX_CNT) << 4;
++ else if (ibs_caps & IBS_CAPS_RDWROPCNT)
++ count = (config & IBS_OP_CUR_CNT) >> 32;
+
+ return count;
+ }
+@@ -632,18 +636,24 @@ fail:
+ perf_ibs->offset_max,
+ offset + 1);
+ } while (offset < offset_max);
++ /*
++ * Read IbsBrTarget, IbsOpData4, and IbsExtdCtl separately
++ * depending on their availability.
++ * Can't add to offset_max as they are staggered
++ */
+ if (event->attr.sample_type & PERF_SAMPLE_RAW) {
+- /*
+- * Read IbsBrTarget and IbsOpData4 separately
+- * depending on their availability.
+- * Can't add to offset_max as they are staggered
+- */
+- if (ibs_caps & IBS_CAPS_BRNTRGT) {
+- rdmsrl(MSR_AMD64_IBSBRTARGET, *buf++);
+- size++;
++ if (perf_ibs == &perf_ibs_op) {
++ if (ibs_caps & IBS_CAPS_BRNTRGT) {
++ rdmsrl(MSR_AMD64_IBSBRTARGET, *buf++);
++ size++;
++ }
++ if (ibs_caps & IBS_CAPS_OPDATA4) {
++ rdmsrl(MSR_AMD64_IBSOPDATA4, *buf++);
++ size++;
++ }
+ }
+- if (ibs_caps & IBS_CAPS_OPDATA4) {
+- rdmsrl(MSR_AMD64_IBSOPDATA4, *buf++);
++ if (perf_ibs == &perf_ibs_fetch && (ibs_caps & IBS_CAPS_FETCHCTLEXTD)) {
++ rdmsrl(MSR_AMD64_ICIBSEXTDCTL, *buf++);
+ size++;
+ }
+ }
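The corrected get_ibs_op_count() treats MaxCnt and CurCnt as alternatives instead of summing them. A sketch with illustrative bit layouts (the real masks live in the kernel's perf headers, and the CurCnt path is additionally gated on IBS_CAPS_RDWROPCNT, omitted here):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative IbsOpCtl field encodings, assumed for this sketch. */
    #define IBS_OP_VAL      (1ULL << 18)          /* counter rolled over    */
    #define IBS_OP_MAX_CNT  0x0000FFFFULL         /* MaxCnt, scaled by 16   */
    #define IBS_OP_CUR_CNT  (0x7FFFFFFULL << 32)  /* 27-bit CurCnt snapshot */

    static uint64_t get_ibs_op_count(uint64_t config)
    {
        /* Rolled over: the count is the programmed period (MaxCnt << 4)
         * and CurCnt's low 7 bits are randomized, so ignore CurCnt.
         * Otherwise CurCnt holds the live 27-bit counter value. */
        if (config & IBS_OP_VAL)
            return (config & IBS_OP_MAX_CNT) << 4;
        return (config & IBS_OP_CUR_CNT) >> 32;
    }

    int main(void)
    {
        printf("0x%llx\n", (unsigned long long)
               get_ibs_op_count(IBS_OP_VAL | 0x1000));  /* 0x10000: period  */
        printf("0x%llx\n", (unsigned long long)
               get_ibs_op_count(0x1234ULL << 32));      /* 0x1234: live cnt */
        return 0;
    }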
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 76400c052b0eb..e7e61c8b56bd6 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -181,28 +181,16 @@ static void amd_uncore_del(struct perf_event *event, int flags)
+ }
+
+ /*
+- * Convert logical CPU number to L3 PMC Config ThreadMask format
++ * Return a full thread and slice mask until per-CPU is
++ * properly supported.
+ */
+-static u64 l3_thread_slice_mask(int cpu)
++static u64 l3_thread_slice_mask(void)
+ {
+- u64 thread_mask, core = topology_core_id(cpu);
+- unsigned int shift, thread = 0;
++ if (boot_cpu_data.x86 <= 0x18)
++ return AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK;
+
+- if (topology_smt_supported() && !topology_is_primary_thread(cpu))
+- thread = 1;
+-
+- if (boot_cpu_data.x86 <= 0x18) {
+- shift = AMD64_L3_THREAD_SHIFT + 2 * (core % 4) + thread;
+- thread_mask = BIT_ULL(shift);
+-
+- return AMD64_L3_SLICE_MASK | thread_mask;
+- }
+-
+- core = (core << AMD64_L3_COREID_SHIFT) & AMD64_L3_COREID_MASK;
+- shift = AMD64_L3_THREAD_SHIFT + thread;
+- thread_mask = BIT_ULL(shift);
+-
+- return AMD64_L3_EN_ALL_SLICES | core | thread_mask;
++ return AMD64_L3_EN_ALL_SLICES | AMD64_L3_EN_ALL_CORES |
++ AMD64_L3_F19H_THREAD_MASK;
+ }
+
+ static int amd_uncore_event_init(struct perf_event *event)
+@@ -232,7 +220,7 @@ static int amd_uncore_event_init(struct perf_event *event)
+ * For other events, the two fields do not affect the count.
+ */
+ if (l3_mask && is_llc_event(event))
+- hwc->config |= l3_thread_slice_mask(event->cpu);
++ hwc->config |= l3_thread_slice_mask();
+
+ uncore = event_to_amd_uncore(event);
+ if (!uncore)
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 11bbc6590f904..82bb0b716e49a 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -1286,11 +1286,11 @@ int x86_perf_event_set_period(struct perf_event *event)
+ wrmsrl(hwc->event_base, (u64)(-left) & x86_pmu.cntval_mask);
+
+ /*
+- * Clear the Merge event counter's upper 16 bits since
++ * Sign extend the Merge event counter's upper 16 bits since
+ * we currently declare a 48-bit counter width
+ */
+ if (is_counter_pair(hwc))
+- wrmsrl(x86_pmu_event_addr(idx + 1), 0);
++ wrmsrl(x86_pmu_event_addr(idx + 1), 0xffff);
+
+ /*
+ * Due to erratum on certain cpu we need
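Writing 0xffff rather than 0 into the paired counter lets the combined value read back as a proper negative number under the declared 48-bit width. A quick userspace demonstration of the sign-extension arithmetic, with illustrative values:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t cntval_mask = (1ULL << 48) - 1;  /* declared 48-bit width */
        int64_t left = 0x10000;                   /* events left in period */

        /* Program the low counter with -left truncated to 48 bits,
         * exactly as x86_perf_event_set_period() does. */
        uint64_t lo = (uint64_t)(-left) & cntval_mask;

        /* Upper 16 bits zeroed: reads back as a huge positive count. */
        uint64_t cleared = lo;
        /* Upper 16 bits all ones: reads back as -left, sign-extended. */
        uint64_t extended = (0xffffULL << 48) | lo;

        printf("cleared:  %lld\n", (long long)cleared);   /* 281474976645120 */
        printf("extended: %lld\n", (long long)extended);  /* -65536 */
        return 0;
    }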
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 31e6887d24f1a..34b21ba666378 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -243,7 +243,7 @@ static struct extra_reg intel_skl_extra_regs[] __read_mostly = {
+
+ static struct event_constraint intel_icl_event_constraints[] = {
+ FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
+- INTEL_UEVENT_CONSTRAINT(0x1c0, 0), /* INST_RETIRED.PREC_DIST */
++ FIXED_EVENT_CONSTRAINT(0x01c0, 0), /* INST_RETIRED.PREC_DIST */
+ FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
+ FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */
+ FIXED_EVENT_CONSTRAINT(0x0400, 3), /* SLOTS */
+diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
+index 5a42f92061387..51e2bf27cc9b0 100644
+--- a/arch/x86/include/asm/asm-prototypes.h
++++ b/arch/x86/include/asm/asm-prototypes.h
+@@ -5,6 +5,7 @@
+ #include <asm/string.h>
+ #include <asm/page.h>
+ #include <asm/checksum.h>
++#include <asm/mce.h>
+
+ #include <asm-generic/asm-prototypes.h>
+
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 2859ee4f39a83..b08c8a2afc0eb 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -464,6 +464,7 @@
+ #define MSR_AMD64_IBSOP_REG_MASK ((1UL<<MSR_AMD64_IBSOP_REG_COUNT)-1)
+ #define MSR_AMD64_IBSCTL 0xc001103a
+ #define MSR_AMD64_IBSBRTARGET 0xc001103b
++#define MSR_AMD64_ICIBSEXTDCTL 0xc001103c
+ #define MSR_AMD64_IBSOPDATA4 0xc001103d
+ #define MSR_AMD64_IBS_REG_COUNT_MAX 8 /* includes MSR_AMD64_IBSBRTARGET */
+ #define MSR_AMD64_SEV 0xc0010131
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index cdaab30880b91..cd6be6f143e85 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -807,6 +807,15 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
+ temp_mm_state_t temp_state;
+
+ lockdep_assert_irqs_disabled();
++
++ /*
++ * Make sure not to be in TLB lazy mode, as otherwise we'll end up
++ * with a stale address space WITHOUT being in lazy mode after
++ * restoring the previous mm.
++ */
++ if (this_cpu_read(cpu_tlbstate.is_lazy))
++ leave_mm(smp_processor_id());
++
+ temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
+ switch_mm_irqs_off(NULL, mm, current);
+
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index ec88bbe08a328..4a96aa3de7d8a 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -320,19 +320,12 @@ EXPORT_SYMBOL_GPL(unwind_get_return_address);
+
+ unsigned long *unwind_get_return_address_ptr(struct unwind_state *state)
+ {
+- struct task_struct *task = state->task;
+-
+ if (unwind_done(state))
+ return NULL;
+
+ if (state->regs)
+ return &state->regs->ip;
+
+- if (task != current && state->sp == task->thread.sp) {
+- struct inactive_task_frame *frame = (void *)task->thread.sp;
+- return &frame->ret_addr;
+- }
+-
+ if (state->sp)
+ return (unsigned long *)state->sp - 1;
+
+@@ -662,7 +655,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ } else {
+ struct inactive_task_frame *frame = (void *)task->thread.sp;
+
+- state->sp = task->thread.sp;
++ state->sp = task->thread.sp + sizeof(*frame);
+ state->bp = READ_ONCE_NOCHECK(frame->bp);
+ state->ip = READ_ONCE_NOCHECK(frame->ret_addr);
+ state->signal = (void *)state->ip == ret_from_fork;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index ce856e0ece844..bacfc9e94a62b 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -259,13 +259,13 @@ static int kvm_msr_ignored_check(struct kvm_vcpu *vcpu, u32 msr,
+
+ if (ignore_msrs) {
+ if (report_ignored_msrs)
+- vcpu_unimpl(vcpu, "ignored %s: 0x%x data 0x%llx\n",
+- op, msr, data);
++ kvm_pr_unimpl("ignored %s: 0x%x data 0x%llx\n",
++ op, msr, data);
+ /* Mask the error */
+ return 0;
+ } else {
+- vcpu_debug_ratelimited(vcpu, "unhandled %s: 0x%x data 0x%llx\n",
+- op, msr, data);
++ kvm_debug_ratelimited("unhandled %s: 0x%x data 0x%llx\n",
++ op, msr, data);
+ return 1;
+ }
+ }
+diff --git a/block/bio.c b/block/bio.c
+index e865ea55b9f9a..58d7654002261 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1046,6 +1046,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
+ ssize_t size, left;
+ unsigned len, i;
+ size_t offset;
++ int ret = 0;
+
+ if (WARN_ON_ONCE(!max_append_sectors))
+ return 0;
+@@ -1068,15 +1069,17 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
+
+ len = min_t(size_t, PAGE_SIZE - offset, left);
+ if (bio_add_hw_page(q, bio, page, len, offset,
+- max_append_sectors, &same_page) != len)
+- return -EINVAL;
++ max_append_sectors, &same_page) != len) {
++ ret = -EINVAL;
++ break;
++ }
+ if (same_page)
+ put_page(page);
+ offset = 0;
+ }
+
+- iov_iter_advance(iter, size);
+- return 0;
++ iov_iter_advance(iter, size - left);
++ return ret;
+ }
+
+ /**
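The bio fix advances the iterator by the bytes actually consumed (size - left) and keeps the pages already appended, rather than failing the whole request when the append limit is hit mid-way. A toy model of that bookkeeping, with a made-up 6000-byte capacity standing in for the max_append_sectors limit:

    #include <stddef.h>
    #include <stdio.h>

    /* Toy "add page": refuses anything past a hard capacity, roughly as
     * bio_add_hw_page() is bounded by max_append_sectors. */
    static size_t capacity = 6000, used;

    static size_t add_chunk(size_t len)
    {
        if (used + len > capacity)
            return 0;   /* would exceed the zone-append limit */
        used += len;
        return len;
    }

    int main(void)
    {
        size_t size = 10000;        /* bytes handed to us by the iterator */
        size_t left, chunk = 4096;  /* page-sized pieces */

        for (left = size; left > 0; left -= chunk) {
            if (chunk > left)
                chunk = left;
            if (add_chunk(chunk) != chunk)
                break;  /* stop here, but keep what was already added */
        }
        /* Advance the iterator by what was consumed, not by 'size'. */
        printf("consumed %zu of %zu\n", size - left, size);  /* 4096 */
        return 0;
    }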
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 94a53d779c12b..ca2fdb58e7af5 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -105,7 +105,7 @@ static bool blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx,
+ {
+ struct mq_inflight *mi = priv;
+
+- if (rq->part == mi->part)
++ if (rq->part == mi->part && blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT)
+ mi->inflight[rq_data_dir(rq)]++;
+
+ return true;
+diff --git a/drivers/acpi/acpi_configfs.c b/drivers/acpi/acpi_configfs.c
+index 88c8af455ea3f..cf91f49101eac 100644
+--- a/drivers/acpi/acpi_configfs.c
++++ b/drivers/acpi/acpi_configfs.c
+@@ -228,6 +228,7 @@ static void acpi_table_drop_item(struct config_group *group,
+
+ ACPI_INFO(("Host-directed Dynamic ACPI Table Unload"));
+ acpi_unload_table(table->index);
++ config_item_put(cfg);
+ }
+
+ static struct configfs_group_operations acpi_table_group_ops = {
+diff --git a/drivers/acpi/acpi_dbg.c b/drivers/acpi/acpi_dbg.c
+index 6041974c76271..fb72903385933 100644
+--- a/drivers/acpi/acpi_dbg.c
++++ b/drivers/acpi/acpi_dbg.c
+@@ -749,6 +749,9 @@ static int __init acpi_aml_init(void)
+ {
+ int ret;
+
++ if (acpi_disabled)
++ return -ENODEV;
++
+ /* Initialize AML IO interface */
+ mutex_init(&acpi_aml_io.lock);
+ init_waitqueue_head(&acpi_aml_io.wait);
+diff --git a/drivers/acpi/acpi_extlog.c b/drivers/acpi/acpi_extlog.c
+index f138e12b7b823..72f1fb77abcd0 100644
+--- a/drivers/acpi/acpi_extlog.c
++++ b/drivers/acpi/acpi_extlog.c
+@@ -222,9 +222,9 @@ static int __init extlog_init(void)
+ u64 cap;
+ int rc;
+
+- rdmsrl(MSR_IA32_MCG_CAP, cap);
+-
+- if (!(cap & MCG_ELOG_P) || !extlog_get_l1addr())
++ if (rdmsrl_safe(MSR_IA32_MCG_CAP, &cap) ||
++ !(cap & MCG_ELOG_P) ||
++ !extlog_get_l1addr())
+ return -ENODEV;
+
+ rc = -EINVAL;
+diff --git a/drivers/acpi/button.c b/drivers/acpi/button.c
+index a4eda7fe50d31..da4b125ab4c3e 100644
+--- a/drivers/acpi/button.c
++++ b/drivers/acpi/button.c
+@@ -153,6 +153,7 @@ struct acpi_button {
+ int last_state;
+ ktime_t last_time;
+ bool suspended;
++ bool lid_state_initialized;
+ };
+
+ static struct acpi_device *lid_device;
+@@ -383,6 +384,8 @@ static int acpi_lid_update_state(struct acpi_device *device,
+
+ static void acpi_lid_initialize_state(struct acpi_device *device)
+ {
++ struct acpi_button *button = acpi_driver_data(device);
++
+ switch (lid_init_state) {
+ case ACPI_BUTTON_LID_INIT_OPEN:
+ (void)acpi_lid_notify_state(device, 1);
+@@ -394,13 +397,14 @@ static void acpi_lid_initialize_state(struct acpi_device *device)
+ default:
+ break;
+ }
++
++ button->lid_state_initialized = true;
+ }
+
+ static void acpi_button_notify(struct acpi_device *device, u32 event)
+ {
+ struct acpi_button *button = acpi_driver_data(device);
+ struct input_dev *input;
+- int users;
+
+ switch (event) {
+ case ACPI_FIXED_HARDWARE_EVENT:
+@@ -409,10 +413,7 @@ static void acpi_button_notify(struct acpi_device *device, u32 event)
+ case ACPI_BUTTON_NOTIFY_STATUS:
+ input = button->input;
+ if (button->type == ACPI_BUTTON_TYPE_LID) {
+- mutex_lock(&button->input->mutex);
+- users = button->input->users;
+- mutex_unlock(&button->input->mutex);
+- if (users)
++ if (button->lid_state_initialized)
+ acpi_lid_update_state(device, true);
+ } else {
+ int keycode;
+@@ -457,7 +458,7 @@ static int acpi_button_resume(struct device *dev)
+ struct acpi_button *button = acpi_driver_data(device);
+
+ button->suspended = false;
+- if (button->type == ACPI_BUTTON_TYPE_LID && button->input->users) {
++ if (button->type == ACPI_BUTTON_TYPE_LID) {
+ button->last_state = !!acpi_lid_evaluate_state(device);
+ button->last_time = ktime_get();
+ acpi_lid_initialize_state(device);
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index fcddda3d67128..e0cb1bcfffb29 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -2011,20 +2011,16 @@ bool acpi_ec_dispatch_gpe(void)
+ if (acpi_any_gpe_status_set(first_ec->gpe))
+ return true;
+
+- if (ec_no_wakeup)
+- return false;
+-
+ /*
+ * Dispatch the EC GPE in-band, but do not report wakeup in any case
+ * to allow the caller to process events properly after that.
+ */
+ ret = acpi_dispatch_gpe(NULL, first_ec->gpe);
+- if (ret == ACPI_INTERRUPT_HANDLED) {
++ if (ret == ACPI_INTERRUPT_HANDLED)
+ pm_pr_dbg("ACPI EC GPE dispatched\n");
+
+- /* Flush the event and query workqueues. */
+- acpi_ec_flush_work();
+- }
++ /* Flush the event and query workqueues. */
++ acpi_ec_flush_work();
+
+ return false;
+ }
+diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
+index 2c32cfb723701..6a91a55229aee 100644
+--- a/drivers/acpi/numa/hmat.c
++++ b/drivers/acpi/numa/hmat.c
+@@ -424,7 +424,8 @@ static int __init hmat_parse_proximity_domain(union acpi_subtable_headers *heade
+ pr_info("HMAT: Memory Flags:%04x Processor Domain:%u Memory Domain:%u\n",
+ p->flags, p->processor_PD, p->memory_PD);
+
+- if (p->flags & ACPI_HMAT_MEMORY_PD_VALID && hmat_revision == 1) {
++ if ((hmat_revision == 1 && p->flags & ACPI_HMAT_MEMORY_PD_VALID) ||
++ hmat_revision > 1) {
+ target = find_mem_target(p->memory_PD);
+ if (!target) {
+ pr_debug("HMAT: Memory Domain missing from SRAT\n");
+diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c
+index 15bbaab8500b9..1fb486f46ee20 100644
+--- a/drivers/acpi/numa/srat.c
++++ b/drivers/acpi/numa/srat.c
+@@ -31,7 +31,7 @@ int acpi_numa __initdata;
+
+ int pxm_to_node(int pxm)
+ {
+- if (pxm < 0)
++ if (pxm < 0 || pxm >= MAX_PXM_DOMAINS || numa_off)
+ return NUMA_NO_NODE;
+ return pxm_to_node_map[pxm];
+ }
+diff --git a/drivers/acpi/pci_mcfg.c b/drivers/acpi/pci_mcfg.c
+index 54b36b7ad47d9..e526571e0ebdb 100644
+--- a/drivers/acpi/pci_mcfg.c
++++ b/drivers/acpi/pci_mcfg.c
+@@ -142,6 +142,26 @@ static struct mcfg_fixup mcfg_quirks[] = {
+ XGENE_V2_ECAM_MCFG(4, 0),
+ XGENE_V2_ECAM_MCFG(4, 1),
+ XGENE_V2_ECAM_MCFG(4, 2),
++
++#define ALTRA_ECAM_QUIRK(rev, seg) \
++ { "Ampere", "Altra ", rev, seg, MCFG_BUS_ANY, &pci_32b_read_ops }
++
++ ALTRA_ECAM_QUIRK(1, 0),
++ ALTRA_ECAM_QUIRK(1, 1),
++ ALTRA_ECAM_QUIRK(1, 2),
++ ALTRA_ECAM_QUIRK(1, 3),
++ ALTRA_ECAM_QUIRK(1, 4),
++ ALTRA_ECAM_QUIRK(1, 5),
++ ALTRA_ECAM_QUIRK(1, 6),
++ ALTRA_ECAM_QUIRK(1, 7),
++ ALTRA_ECAM_QUIRK(1, 8),
++ ALTRA_ECAM_QUIRK(1, 9),
++ ALTRA_ECAM_QUIRK(1, 10),
++ ALTRA_ECAM_QUIRK(1, 11),
++ ALTRA_ECAM_QUIRK(1, 12),
++ ALTRA_ECAM_QUIRK(1, 13),
++ ALTRA_ECAM_QUIRK(1, 14),
++ ALTRA_ECAM_QUIRK(1, 15),
+ };
+
+ static char mcfg_oem_id[ACPI_OEM_ID_SIZE];
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 2499d7e3c710e..36b62e9c8b695 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -282,6 +282,15 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "530U4E/540U4E"),
+ },
+ },
++ /* https://bugs.launchpad.net/bugs/1894667 */
++ {
++ .callback = video_detect_force_video,
++ .ident = "HP 635 Notebook",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP 635 Notebook PC"),
++ },
++ },
+
+ /* Non win8 machines which need native backlight nevertheless */
+ {
+diff --git a/drivers/ata/sata_nv.c b/drivers/ata/sata_nv.c
+index eb9dc14e5147a..20190f66ced98 100644
+--- a/drivers/ata/sata_nv.c
++++ b/drivers/ata/sata_nv.c
+@@ -2100,7 +2100,7 @@ static int nv_swncq_sdbfis(struct ata_port *ap)
+ pp->dhfis_bits &= ~done_mask;
+ pp->dmafis_bits &= ~done_mask;
+ pp->sdbfis_bits |= done_mask;
+- ata_qc_complete_multiple(ap, ap->qc_active ^ done_mask);
++ ata_qc_complete_multiple(ap, ata_qc_get_active(ap) ^ done_mask);
+
+ if (!ap->qc_active) {
+ DPRINTK("over\n");
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index bb5806a2bd4ca..792b92439b77d 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -4260,6 +4260,7 @@ static inline bool fwnode_is_primary(struct fwnode_handle *fwnode)
+ */
+ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
+ {
++ struct device *parent = dev->parent;
+ struct fwnode_handle *fn = dev->fwnode;
+
+ if (fwnode) {
+@@ -4274,7 +4275,8 @@ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
+ } else {
+ if (fwnode_is_primary(fn)) {
+ dev->fwnode = fn->secondary;
+- fn->secondary = NULL;
++ if (!(parent && fn == parent->fwnode))
++ fn->secondary = ERR_PTR(-ENODEV);
+ } else {
+ dev->fwnode = NULL;
+ }
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 63b9714a01548..b0ec2721f55de 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -470,14 +470,12 @@ fw_get_filesystem_firmware(struct device *device, struct fw_priv *fw_priv,
+ int i, len;
+ int rc = -ENOENT;
+ char *path;
+- enum kernel_read_file_id id = READING_FIRMWARE;
+ size_t msize = INT_MAX;
+ void *buffer = NULL;
+
+ /* Already populated data member means we're loading into a buffer */
+ if (!decompress && fw_priv->data) {
+ buffer = fw_priv->data;
+- id = READING_FIRMWARE_PREALLOC_BUFFER;
+ msize = fw_priv->allocated_size;
+ }
+
+@@ -501,7 +499,8 @@ fw_get_filesystem_firmware(struct device *device, struct fw_priv *fw_priv,
+
+ /* load firmware files from the mount namespace of init */
+ rc = kernel_read_file_from_path_initns(path, &buffer,
+- &size, msize, id);
++ &size, msize,
++ READING_FIRMWARE);
+ if (rc) {
+ if (rc != -ENOENT)
+ dev_warn(device, "loading %s failed with error %d\n",
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 8143210a5c547..6f605f7820bb5 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -291,8 +291,7 @@ static int rpm_get_suppliers(struct device *dev)
+ device_links_read_lock_held()) {
+ int retval;
+
+- if (!(link->flags & DL_FLAG_PM_RUNTIME) ||
+- READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND)
++ if (!(link->flags & DL_FLAG_PM_RUNTIME))
+ continue;
+
+ retval = pm_runtime_get_sync(link->supplier);
+@@ -312,8 +311,6 @@ static void rpm_put_suppliers(struct device *dev)
+
+ list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
+ device_links_read_lock_held()) {
+- if (READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND)
+- continue;
+
+ while (refcount_dec_not_one(&link->rpm_active))
+ pm_runtime_put(link->supplier);
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index edf8b632e3d27..f46e26c9d9b3c 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -801,9 +801,9 @@ static void recv_work(struct work_struct *work)
+ if (likely(!blk_should_fake_timeout(rq->q)))
+ blk_mq_complete_request(rq);
+ }
++ nbd_config_put(nbd);
+ atomic_dec(&config->recv_threads);
+ wake_up(&config->recv_wq);
+- nbd_config_put(nbd);
+ kfree(args);
+ }
+
+diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
+index daed4a9c34367..206309ecc7e4e 100644
+--- a/drivers/block/null_blk.h
++++ b/drivers/block/null_blk.h
+@@ -44,6 +44,7 @@ struct nullb_device {
+ unsigned int nr_zones;
+ struct blk_zone *zones;
+ sector_t zone_size_sects;
++ unsigned long *zone_locks;
+
+ unsigned long size; /* device size in MB */
+ unsigned long completion_nsec; /* time in ns to complete a request */
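The zone_locks bitmap added here gives every zone a one-bit sleeping lock, taken with wait_on_bit_lock_io() and released with clear_and_wake_up_bit() in the null_blk_zoned.c hunks that follow. A rough userspace approximation with C11 atomics (which can only spin, whereas the kernel primitive sleeps):

    #include <stdatomic.h>
    #include <stdio.h>

    /* One lock bit per zone, like dev->zone_locks (up to 64 zones here). */
    static atomic_ulong zone_locks;

    static void null_lock_zone(unsigned int zno)
    {
        /* Test-and-set the zone's bit; the kernel sleeps on contention
         * via wait_on_bit_lock_io(), this sketch just spins. */
        while (atomic_fetch_or(&zone_locks, 1UL << zno) & (1UL << zno))
            ;
    }

    static void null_unlock_zone(unsigned int zno)
    {
        atomic_fetch_and(&zone_locks, ~(1UL << zno));
    }

    int main(void)
    {
        null_lock_zone(3);
        /* ...inspect or update zone 3's blk_zone entry... */
        null_unlock_zone(3);
        puts("zone 3 locked and unlocked");
        return 0;
    }

A single bit per zone keeps the lock array tiny while still serializing report, write and zone-management paths per zone instead of per device.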
+diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
+index 3d25c9ad23831..495713d6c989b 100644
+--- a/drivers/block/null_blk_zoned.c
++++ b/drivers/block/null_blk_zoned.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/vmalloc.h>
++#include <linux/bitmap.h>
+ #include "null_blk.h"
+
+ #define CREATE_TRACE_POINTS
+@@ -45,6 +46,12 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
+ if (!dev->zones)
+ return -ENOMEM;
+
++ dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
++ if (!dev->zone_locks) {
++ kvfree(dev->zones);
++ return -ENOMEM;
++ }
++
+ if (dev->zone_nr_conv >= dev->nr_zones) {
+ dev->zone_nr_conv = dev->nr_zones - 1;
+ pr_info("changed the number of conventional zones to %u",
+@@ -105,15 +112,26 @@ int null_register_zoned_dev(struct nullb *nullb)
+
+ void null_free_zoned_dev(struct nullb_device *dev)
+ {
++ bitmap_free(dev->zone_locks);
+ kvfree(dev->zones);
+ }
+
++static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
++{
++ wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
++}
++
++static inline void null_unlock_zone(struct nullb_device *dev, unsigned int zno)
++{
++ clear_and_wake_up_bit(zno, dev->zone_locks);
++}
++
+ int null_report_zones(struct gendisk *disk, sector_t sector,
+ unsigned int nr_zones, report_zones_cb cb, void *data)
+ {
+ struct nullb *nullb = disk->private_data;
+ struct nullb_device *dev = nullb->dev;
+- unsigned int first_zone, i;
++ unsigned int first_zone, i, zno;
+ struct blk_zone zone;
+ int error;
+
+@@ -124,15 +142,18 @@ int null_report_zones(struct gendisk *disk, sector_t sector,
+ nr_zones = min(nr_zones, dev->nr_zones - first_zone);
+ trace_nullb_report_zones(nullb, nr_zones);
+
+- for (i = 0; i < nr_zones; i++) {
++ zno = first_zone;
++ for (i = 0; i < nr_zones; i++, zno++) {
+ /*
+ * Stacked DM target drivers will remap the zone information by
+ * modifying the zone information passed to the report callback.
+ * So use a local copy to avoid corruption of the device zone
+ * array.
+ */
+- memcpy(&zone, &dev->zones[first_zone + i],
+- sizeof(struct blk_zone));
++ null_lock_zone(dev, zno);
++ memcpy(&zone, &dev->zones[zno], sizeof(struct blk_zone));
++ null_unlock_zone(dev, zno);
++
+ error = cb(&zone, i, data);
+ if (error)
+ return error;
+@@ -141,6 +162,10 @@ int null_report_zones(struct gendisk *disk, sector_t sector,
+ return nr_zones;
+ }
+
++/*
++ * This is called in the case of memory backing from null_process_cmd()
++ * with the target zone already locked.
++ */
+ size_t null_zone_valid_read_len(struct nullb *nullb,
+ sector_t sector, unsigned int len)
+ {
+@@ -172,10 +197,13 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
+ if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+ return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
+
++ null_lock_zone(dev, zno);
++
+ switch (zone->cond) {
+ case BLK_ZONE_COND_FULL:
+ /* Cannot write to a full zone */
+- return BLK_STS_IOERR;
++ ret = BLK_STS_IOERR;
++ break;
+ case BLK_ZONE_COND_EMPTY:
+ case BLK_ZONE_COND_IMP_OPEN:
+ case BLK_ZONE_COND_EXP_OPEN:
+@@ -193,66 +221,96 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
+ else
+ cmd->rq->__sector = sector;
+ } else if (sector != zone->wp) {
+- return BLK_STS_IOERR;
++ ret = BLK_STS_IOERR;
++ break;
+ }
+
+- if (zone->wp + nr_sectors > zone->start + zone->capacity)
+- return BLK_STS_IOERR;
++ if (zone->wp + nr_sectors > zone->start + zone->capacity) {
++ ret = BLK_STS_IOERR;
++ break;
++ }
+
+ if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
+ zone->cond = BLK_ZONE_COND_IMP_OPEN;
+
+ ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
+ if (ret != BLK_STS_OK)
+- return ret;
++ break;
+
+ zone->wp += nr_sectors;
+ if (zone->wp == zone->start + zone->capacity)
+ zone->cond = BLK_ZONE_COND_FULL;
+- return BLK_STS_OK;
++ ret = BLK_STS_OK;
++ break;
+ default:
+ /* Invalid zone condition */
+- return BLK_STS_IOERR;
++ ret = BLK_STS_IOERR;
+ }
++
++ null_unlock_zone(dev, zno);
++
++ return ret;
+ }
+
+ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
+ sector_t sector)
+ {
+ struct nullb_device *dev = cmd->nq->dev;
+- unsigned int zone_no = null_zone_no(dev, sector);
+- struct blk_zone *zone = &dev->zones[zone_no];
++ unsigned int zone_no;
++ struct blk_zone *zone;
++ blk_status_t ret = BLK_STS_OK;
+ size_t i;
+
+- switch (op) {
+- case REQ_OP_ZONE_RESET_ALL:
+- for (i = 0; i < dev->nr_zones; i++) {
+- if (zone[i].type == BLK_ZONE_TYPE_CONVENTIONAL)
+- continue;
+- zone[i].cond = BLK_ZONE_COND_EMPTY;
+- zone[i].wp = zone[i].start;
++ if (op == REQ_OP_ZONE_RESET_ALL) {
++ for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
++ null_lock_zone(dev, i);
++ zone = &dev->zones[i];
++ if (zone->cond != BLK_ZONE_COND_EMPTY) {
++ zone->cond = BLK_ZONE_COND_EMPTY;
++ zone->wp = zone->start;
++ trace_nullb_zone_op(cmd, i, zone->cond);
++ }
++ null_unlock_zone(dev, i);
+ }
+- break;
++ return BLK_STS_OK;
++ }
++
++ zone_no = null_zone_no(dev, sector);
++ zone = &dev->zones[zone_no];
++
++ null_lock_zone(dev, zone_no);
++
++ switch (op) {
+ case REQ_OP_ZONE_RESET:
+- if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+- return BLK_STS_IOERR;
++ if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
++ ret = BLK_STS_IOERR;
++ break;
++ }
+
+ zone->cond = BLK_ZONE_COND_EMPTY;
+ zone->wp = zone->start;
+ break;
+ case REQ_OP_ZONE_OPEN:
+- if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+- return BLK_STS_IOERR;
+- if (zone->cond == BLK_ZONE_COND_FULL)
+- return BLK_STS_IOERR;
++ if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
++ ret = BLK_STS_IOERR;
++ break;
++ }
++ if (zone->cond == BLK_ZONE_COND_FULL) {
++ ret = BLK_STS_IOERR;
++ break;
++ }
+
+ zone->cond = BLK_ZONE_COND_EXP_OPEN;
+ break;
+ case REQ_OP_ZONE_CLOSE:
+- if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+- return BLK_STS_IOERR;
+- if (zone->cond == BLK_ZONE_COND_FULL)
+- return BLK_STS_IOERR;
++ if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
++ ret = BLK_STS_IOERR;
++ break;
++ }
++ if (zone->cond == BLK_ZONE_COND_FULL) {
++ ret = BLK_STS_IOERR;
++ break;
++ }
+
+ if (zone->wp == zone->start)
+ zone->cond = BLK_ZONE_COND_EMPTY;
+@@ -260,35 +318,54 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
+ zone->cond = BLK_ZONE_COND_CLOSED;
+ break;
+ case REQ_OP_ZONE_FINISH:
+- if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+- return BLK_STS_IOERR;
++ if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
++ ret = BLK_STS_IOERR;
++ break;
++ }
+
+ zone->cond = BLK_ZONE_COND_FULL;
+ zone->wp = zone->start + zone->len;
++ ret = BLK_STS_OK;
+ break;
+ default:
+- return BLK_STS_NOTSUPP;
++ ret = BLK_STS_NOTSUPP;
++ break;
+ }
+
+- trace_nullb_zone_op(cmd, zone_no, zone->cond);
+- return BLK_STS_OK;
++ if (ret == BLK_STS_OK)
++ trace_nullb_zone_op(cmd, zone_no, zone->cond);
++
++ null_unlock_zone(dev, zone_no);
++
++ return ret;
+ }
+
+ blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_opf op,
+ sector_t sector, sector_t nr_sectors)
+ {
++ struct nullb_device *dev = cmd->nq->dev;
++ unsigned int zno = null_zone_no(dev, sector);
++ blk_status_t sts;
++
+ switch (op) {
+ case REQ_OP_WRITE:
+- return null_zone_write(cmd, sector, nr_sectors, false);
++ sts = null_zone_write(cmd, sector, nr_sectors, false);
++ break;
+ case REQ_OP_ZONE_APPEND:
+- return null_zone_write(cmd, sector, nr_sectors, true);
++ sts = null_zone_write(cmd, sector, nr_sectors, true);
++ break;
+ case REQ_OP_ZONE_RESET:
+ case REQ_OP_ZONE_RESET_ALL:
+ case REQ_OP_ZONE_OPEN:
+ case REQ_OP_ZONE_CLOSE:
+ case REQ_OP_ZONE_FINISH:
+- return null_zone_mgmt(cmd, op, sector);
++ sts = null_zone_mgmt(cmd, op, sector);
++ break;
+ default:
+- return null_process_cmd(cmd, op, sector, nr_sectors);
++ null_lock_zone(dev, zno);
++ sts = null_process_cmd(cmd, op, sector, nr_sectors);
++ null_unlock_zone(dev, zno);
+ }
++
++ return sts;
+ }
+diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
+index adfc9352351df..501e9dacfff9d 100644
+--- a/drivers/block/xen-blkback/blkback.c
++++ b/drivers/block/xen-blkback/blkback.c
+@@ -201,7 +201,7 @@ static inline void shrink_free_pagepool(struct xen_blkif_ring *ring, int num)
+
+ #define vaddr(page) ((unsigned long)pfn_to_kaddr(page_to_pfn(page)))
+
+-static int do_block_io_op(struct xen_blkif_ring *ring);
++static int do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi_flags);
+ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
+ struct blkif_request *req,
+ struct pending_req *pending_req);
+@@ -612,6 +612,8 @@ int xen_blkif_schedule(void *arg)
+ struct xen_vbd *vbd = &blkif->vbd;
+ unsigned long timeout;
+ int ret;
++ bool do_eoi;
++ unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
+
+ set_freezable();
+ while (!kthread_should_stop()) {
+@@ -636,16 +638,23 @@ int xen_blkif_schedule(void *arg)
+ if (timeout == 0)
+ goto purge_gnt_list;
+
++ do_eoi = ring->waiting_reqs;
++
+ ring->waiting_reqs = 0;
+ smp_mb(); /* clear flag *before* checking for work */
+
+- ret = do_block_io_op(ring);
++ ret = do_block_io_op(ring, &eoi_flags);
+ if (ret > 0)
+ ring->waiting_reqs = 1;
+ if (ret == -EACCES)
+ wait_event_interruptible(ring->shutdown_wq,
+ kthread_should_stop());
+
++ if (do_eoi && !ring->waiting_reqs) {
++ xen_irq_lateeoi(ring->irq, eoi_flags);
++ eoi_flags |= XEN_EOI_FLAG_SPURIOUS;
++ }
++
+ purge_gnt_list:
+ if (blkif->vbd.feature_gnt_persistent &&
+ time_after(jiffies, ring->next_lru)) {
+@@ -1121,7 +1130,7 @@ static void end_block_io_op(struct bio *bio)
+ * and transmute it to the block API to hand it over to the proper block disk.
+ */
+ static int
+-__do_block_io_op(struct xen_blkif_ring *ring)
++__do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi_flags)
+ {
+ union blkif_back_rings *blk_rings = &ring->blk_rings;
+ struct blkif_request req;
+@@ -1144,6 +1153,9 @@ __do_block_io_op(struct xen_blkif_ring *ring)
+ if (RING_REQUEST_CONS_OVERFLOW(&blk_rings->common, rc))
+ break;
+
++ /* We've seen a request, so clear spurious eoi flag. */
++ *eoi_flags &= ~XEN_EOI_FLAG_SPURIOUS;
++
+ if (kthread_should_stop()) {
+ more_to_do = 1;
+ break;
+@@ -1202,13 +1214,13 @@ done:
+ }
+
+ static int
+-do_block_io_op(struct xen_blkif_ring *ring)
++do_block_io_op(struct xen_blkif_ring *ring, unsigned int *eoi_flags)
+ {
+ union blkif_back_rings *blk_rings = &ring->blk_rings;
+ int more_to_do;
+
+ do {
+- more_to_do = __do_block_io_op(ring);
++ more_to_do = __do_block_io_op(ring, eoi_flags);
+ if (more_to_do)
+ break;
+
+diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
+index b9aa5d1ac10b7..5e7c36d73dc62 100644
+--- a/drivers/block/xen-blkback/xenbus.c
++++ b/drivers/block/xen-blkback/xenbus.c
+@@ -246,9 +246,8 @@ static int xen_blkif_map(struct xen_blkif_ring *ring, grant_ref_t *gref,
+ if (req_prod - rsp_prod > size)
+ goto fail;
+
+- err = bind_interdomain_evtchn_to_irqhandler(blkif->domid, evtchn,
+- xen_blkif_be_int, 0,
+- "blkif-backend", ring);
++ err = bind_interdomain_evtchn_to_irqhandler_lateeoi(blkif->domid,
++ evtchn, xen_blkif_be_int, 0, "blkif-backend", ring);
+ if (err < 0)
+ goto fail;
+ ring->irq = err;
+diff --git a/drivers/bus/fsl-mc/mc-io.c b/drivers/bus/fsl-mc/mc-io.c
+index a30b53f1d87d8..305015486b91c 100644
+--- a/drivers/bus/fsl-mc/mc-io.c
++++ b/drivers/bus/fsl-mc/mc-io.c
+@@ -129,7 +129,12 @@ error_destroy_mc_io:
+ */
+ void fsl_destroy_mc_io(struct fsl_mc_io *mc_io)
+ {
+- struct fsl_mc_device *dpmcp_dev = mc_io->dpmcp_dev;
++ struct fsl_mc_device *dpmcp_dev;
++
++ if (!mc_io)
++ return;
++
++ dpmcp_dev = mc_io->dpmcp_dev;
+
+ if (dpmcp_dev)
+ fsl_mc_io_unset_dpmcp(mc_io);
+diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
+index 7960980780832..661d704c8093d 100644
+--- a/drivers/bus/mhi/core/pm.c
++++ b/drivers/bus/mhi/core/pm.c
+@@ -686,7 +686,8 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
+ return -EIO;
+
+ /* Return busy if there are any pending resources */
+- if (atomic_read(&mhi_cntrl->dev_wake))
++ if (atomic_read(&mhi_cntrl->dev_wake) ||
++ atomic_read(&mhi_cntrl->pending_pkts))
+ return -EBUSY;
+
+ /* Take MHI out of M2 state */
+@@ -712,7 +713,8 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
+
+ write_lock_irq(&mhi_cntrl->pm_lock);
+
+- if (atomic_read(&mhi_cntrl->dev_wake)) {
++ if (atomic_read(&mhi_cntrl->dev_wake) ||
++ atomic_read(&mhi_cntrl->pending_pkts)) {
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ return -EBUSY;
+ }
+diff --git a/drivers/clk/ti/clockdomain.c b/drivers/clk/ti/clockdomain.c
+index ee56306f79d5f..700b7f44f6716 100644
+--- a/drivers/clk/ti/clockdomain.c
++++ b/drivers/clk/ti/clockdomain.c
+@@ -148,10 +148,12 @@ static void __init of_ti_clockdomain_setup(struct device_node *node)
+ if (!omap2_clk_is_hw_omap(clk_hw)) {
+ pr_warn("can't setup clkdm for basic clk %s\n",
+ __clk_get_name(clk));
++ clk_put(clk);
+ continue;
+ }
+ to_clk_hw_omap(clk_hw)->clkdm_name = clkdm_name;
+ omap2_init_clk_clkdm(clk_hw);
++ clk_put(clk);
+ }
+ }
+
+diff --git a/drivers/cpufreq/Kconfig b/drivers/cpufreq/Kconfig
+index 2c7171e0b0010..85de313ddec29 100644
+--- a/drivers/cpufreq/Kconfig
++++ b/drivers/cpufreq/Kconfig
+@@ -71,6 +71,7 @@ config CPU_FREQ_DEFAULT_GOV_USERSPACE
+
+ config CPU_FREQ_DEFAULT_GOV_ONDEMAND
+ bool "ondemand"
++ depends on !(X86_INTEL_PSTATE && SMP)
+ select CPU_FREQ_GOV_ONDEMAND
+ select CPU_FREQ_GOV_PERFORMANCE
+ help
+@@ -83,6 +84,7 @@ config CPU_FREQ_DEFAULT_GOV_ONDEMAND
+
+ config CPU_FREQ_DEFAULT_GOV_CONSERVATIVE
+ bool "conservative"
++ depends on !(X86_INTEL_PSTATE && SMP)
+ select CPU_FREQ_GOV_CONSERVATIVE
+ select CPU_FREQ_GOV_PERFORMANCE
+ help
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index e4ff681faaaaa..1e4fbb002a31d 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -691,7 +691,8 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ cpumask_copy(policy->cpus, topology_core_cpumask(cpu));
+ }
+
+- if (check_amd_hwpstate_cpu(cpu) && !acpi_pstate_strict) {
++ if (check_amd_hwpstate_cpu(cpu) && boot_cpu_data.x86 < 0x19 &&
++ !acpi_pstate_strict) {
+ cpumask_clear(policy->cpus);
+ cpumask_set_cpu(cpu, policy->cpus);
+ cpumask_copy(data->freqdomain_cpus,
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index dade36725b8f1..e97ff004ac6a9 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1903,6 +1903,18 @@ void cpufreq_resume(void)
+ }
+ }
+
++/**
++ * cpufreq_driver_test_flags - Test cpufreq driver's flags against given ones.
++ * @flags: Flags to test against the current cpufreq driver's flags.
++ *
++ * Assumes that the driver is there, so callers must ensure that this is the
++ * case.
++ */
++bool cpufreq_driver_test_flags(u16 flags)
++{
++ return !!(cpufreq_driver->flags & flags);
++}
++
+ /**
+ * cpufreq_get_current_driver - return current driver's name
+ *
+@@ -2166,7 +2178,8 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
+ * exactly same freq is called again and so we can save on few function
+ * calls.
+ */
+- if (target_freq == policy->cur)
++ if (target_freq == policy->cur &&
++ !(cpufreq_driver->flags & CPUFREQ_NEED_UPDATE_LIMITS))
+ return 0;
+
+ /* Save last value to restore later on errors */
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 9a515c460a008..ef15ec4959c5c 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -2550,14 +2550,12 @@ static int intel_cpufreq_update_pstate(struct cpudata *cpu, int target_pstate,
+ int old_pstate = cpu->pstate.current_pstate;
+
+ target_pstate = intel_pstate_prepare_request(cpu, target_pstate);
+- if (target_pstate != old_pstate) {
++ if (hwp_active) {
++ intel_cpufreq_adjust_hwp(cpu, target_pstate, fast_switch);
++ cpu->pstate.current_pstate = target_pstate;
++ } else if (target_pstate != old_pstate) {
++ intel_cpufreq_adjust_perf_ctl(cpu, target_pstate, fast_switch);
+ cpu->pstate.current_pstate = target_pstate;
+- if (hwp_active)
+- intel_cpufreq_adjust_hwp(cpu, target_pstate,
+- fast_switch);
+- else
+- intel_cpufreq_adjust_perf_ctl(cpu, target_pstate,
+- fast_switch);
+ }
+
+ intel_cpufreq_trace(cpu, fast_switch ? INTEL_PSTATE_TRACE_FAST_SWITCH :
+@@ -3014,6 +3012,7 @@ static int __init intel_pstate_init(void)
+ hwp_mode_bdw = id->driver_data;
+ intel_pstate.attr = hwp_cpufreq_attrs;
+ intel_cpufreq.attr = hwp_cpufreq_attrs;
++ intel_cpufreq.flags |= CPUFREQ_NEED_UPDATE_LIMITS;
+ if (!default_driver)
+ default_driver = &intel_pstate;
+
+diff --git a/drivers/cpufreq/sti-cpufreq.c b/drivers/cpufreq/sti-cpufreq.c
+index a5ad96d29adca..4ac6fb23792a0 100644
+--- a/drivers/cpufreq/sti-cpufreq.c
++++ b/drivers/cpufreq/sti-cpufreq.c
+@@ -141,7 +141,8 @@ static const struct reg_field sti_stih407_dvfs_regfields[DVFS_MAX_REGFIELDS] = {
+ static const struct reg_field *sti_cpufreq_match(void)
+ {
+ if (of_machine_is_compatible("st,stih407") ||
+- of_machine_is_compatible("st,stih410"))
++ of_machine_is_compatible("st,stih410") ||
++ of_machine_is_compatible("st,stih418"))
+ return sti_stih407_dvfs_regfields;
+
+ return NULL;
+@@ -258,7 +259,8 @@ static int sti_cpufreq_init(void)
+ int ret;
+
+ if ((!of_machine_is_compatible("st,stih407")) &&
+- (!of_machine_is_compatible("st,stih410")))
++ (!of_machine_is_compatible("st,stih410")) &&
++ (!of_machine_is_compatible("st,stih418")))
+ return -ENODEV;
+
+ ddata.cpu = get_cpu_device(0);
+diff --git a/drivers/cpuidle/cpuidle-tegra.c b/drivers/cpuidle/cpuidle-tegra.c
+index a12fb141875a7..e8956706a2917 100644
+--- a/drivers/cpuidle/cpuidle-tegra.c
++++ b/drivers/cpuidle/cpuidle-tegra.c
+@@ -172,7 +172,7 @@ static int tegra_cpuidle_coupled_barrier(struct cpuidle_device *dev)
+ static int tegra_cpuidle_state_enter(struct cpuidle_device *dev,
+ int index, unsigned int cpu)
+ {
+- int ret;
++ int err;
+
+ /*
+ * CC6 state is the "CPU cluster power-off" state. In order to
+@@ -183,9 +183,9 @@ static int tegra_cpuidle_state_enter(struct cpuidle_device *dev,
+ * CPU cores, GIC and L2 cache).
+ */
+ if (index == TEGRA_CC6) {
+- ret = tegra_cpuidle_coupled_barrier(dev);
+- if (ret)
+- return ret;
++ err = tegra_cpuidle_coupled_barrier(dev);
++ if (err)
++ return err;
+ }
+
+ local_fiq_disable();
+@@ -194,15 +194,15 @@ static int tegra_cpuidle_state_enter(struct cpuidle_device *dev,
+
+ switch (index) {
+ case TEGRA_C7:
+- ret = tegra_cpuidle_c7_enter();
++ err = tegra_cpuidle_c7_enter();
+ break;
+
+ case TEGRA_CC6:
+- ret = tegra_cpuidle_cc6_enter(cpu);
++ err = tegra_cpuidle_cc6_enter(cpu);
+ break;
+
+ default:
+- ret = -EINVAL;
++ err = -EINVAL;
+ break;
+ }
+
+@@ -210,7 +210,7 @@ static int tegra_cpuidle_state_enter(struct cpuidle_device *dev,
+ tegra_pm_clear_cpu_in_lp2();
+ local_fiq_enable();
+
+- return ret;
++ return err ?: index;
+ }
+
+ static int tegra_cpuidle_adjust_state_index(int index, unsigned int cpu)
+@@ -236,21 +236,27 @@ static int tegra_cpuidle_enter(struct cpuidle_device *dev,
+ int index)
+ {
+ unsigned int cpu = cpu_logical_map(dev->cpu);
+- int err;
++ int ret;
+
+ index = tegra_cpuidle_adjust_state_index(index, cpu);
+ if (dev->states_usage[index].disable)
+ return -1;
+
+ if (index == TEGRA_C1)
+- err = arm_cpuidle_simple_enter(dev, drv, index);
++ ret = arm_cpuidle_simple_enter(dev, drv, index);
+ else
+- err = tegra_cpuidle_state_enter(dev, index, cpu);
++ ret = tegra_cpuidle_state_enter(dev, index, cpu);
+
+- if (err && (err != -EINTR || index != TEGRA_CC6))
+- pr_err_once("failed to enter state %d err: %d\n", index, err);
++ if (ret < 0) {
++ if (ret != -EINTR || index != TEGRA_CC6)
++ pr_err_once("failed to enter state %d err: %d\n",
++ index, ret);
++ index = -1;
++ } else {
++ index = ret;
++ }
+
+- return err ? -1 : index;
++ return index;
+ }
+
+ static int tegra114_enter_s2idle(struct cpuidle_device *dev,
+diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c
+index 8beed91428bd6..a608efaa435fb 100644
+--- a/drivers/dma/dma-jz4780.c
++++ b/drivers/dma/dma-jz4780.c
+@@ -639,11 +639,11 @@ static enum dma_status jz4780_dma_tx_status(struct dma_chan *chan,
+ unsigned long flags;
+ unsigned long residue = 0;
+
++ spin_lock_irqsave(&jzchan->vchan.lock, flags);
++
+ status = dma_cookie_status(chan, cookie, txstate);
+ if ((status == DMA_COMPLETE) || (txstate == NULL))
+- return status;
+-
+- spin_lock_irqsave(&jzchan->vchan.lock, flags);
++ goto out_unlock_irqrestore;
+
+ vdesc = vchan_find_desc(&jzchan->vchan, cookie);
+ if (vdesc) {
+@@ -660,6 +660,7 @@ static enum dma_status jz4780_dma_tx_status(struct dma_chan *chan,
+ && jzchan->desc->status & (JZ_DMA_DCS_AR | JZ_DMA_DCS_HLT))
+ status = DMA_ERROR;
+
++out_unlock_irqrestore:
+ spin_unlock_irqrestore(&jzchan->vchan.lock, flags);
+ return status;
+ }
+diff --git a/drivers/extcon/extcon-ptn5150.c b/drivers/extcon/extcon-ptn5150.c
+index d1c997599390a..5f52527526441 100644
+--- a/drivers/extcon/extcon-ptn5150.c
++++ b/drivers/extcon/extcon-ptn5150.c
+@@ -127,7 +127,7 @@ static void ptn5150_irq_work(struct work_struct *work)
+ case PTN5150_DFP_ATTACHED:
+ extcon_set_state_sync(info->edev,
+ EXTCON_USB_HOST, false);
+- gpiod_set_value(info->vbus_gpiod, 0);
++ gpiod_set_value_cansleep(info->vbus_gpiod, 0);
+ extcon_set_state_sync(info->edev, EXTCON_USB,
+ true);
+ break;
+@@ -138,9 +138,9 @@ static void ptn5150_irq_work(struct work_struct *work)
+ PTN5150_REG_CC_VBUS_DETECTION_MASK) >>
+ PTN5150_REG_CC_VBUS_DETECTION_SHIFT);
+ if (vbus)
+- gpiod_set_value(info->vbus_gpiod, 0);
++ gpiod_set_value_cansleep(info->vbus_gpiod, 0);
+ else
+- gpiod_set_value(info->vbus_gpiod, 1);
++ gpiod_set_value_cansleep(info->vbus_gpiod, 1);
+
+ extcon_set_state_sync(info->edev,
+ EXTCON_USB_HOST, true);
+@@ -156,7 +156,7 @@ static void ptn5150_irq_work(struct work_struct *work)
+ EXTCON_USB_HOST, false);
+ extcon_set_state_sync(info->edev,
+ EXTCON_USB, false);
+- gpiod_set_value(info->vbus_gpiod, 0);
++ gpiod_set_value_cansleep(info->vbus_gpiod, 0);
+ }
+ }
+
+diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c
+index 9853bd3c4d456..017e5d8bd869a 100644
+--- a/drivers/firmware/arm_scmi/base.c
++++ b/drivers/firmware/arm_scmi/base.c
+@@ -197,6 +197,8 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
+ protocols_imp[tot_num_ret + loop] = *(list + loop);
+
+ tot_num_ret += loop_num_ret;
++
++ scmi_reset_rx_to_maxsz(handle, t);
+ } while (loop_num_ret);
+
+ scmi_xfer_put(handle, t);
+diff --git a/drivers/firmware/arm_scmi/bus.c b/drivers/firmware/arm_scmi/bus.c
+index db55c43a2cbda..1377ec76a45db 100644
+--- a/drivers/firmware/arm_scmi/bus.c
++++ b/drivers/firmware/arm_scmi/bus.c
+@@ -230,7 +230,7 @@ static void scmi_devices_unregister(void)
+ bus_for_each_dev(&scmi_bus_type, NULL, NULL, __scmi_devices_unregister);
+ }
+
+-static int __init scmi_bus_init(void)
++int __init scmi_bus_init(void)
+ {
+ int retval;
+
+@@ -240,12 +240,10 @@ static int __init scmi_bus_init(void)
+
+ return retval;
+ }
+-subsys_initcall(scmi_bus_init);
+
+-static void __exit scmi_bus_exit(void)
++void __exit scmi_bus_exit(void)
+ {
+ scmi_devices_unregister();
+ bus_unregister(&scmi_bus_type);
+ ida_destroy(&scmi_bus_id);
+ }
+-module_exit(scmi_bus_exit);
+diff --git a/drivers/firmware/arm_scmi/clock.c b/drivers/firmware/arm_scmi/clock.c
+index 75e39882746e1..fa3ad3a150c36 100644
+--- a/drivers/firmware/arm_scmi/clock.c
++++ b/drivers/firmware/arm_scmi/clock.c
+@@ -192,6 +192,8 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
+ }
+
+ tot_rate_cnt += num_returned;
++
++ scmi_reset_rx_to_maxsz(handle, t);
+ /*
+ * check for both returned and remaining to avoid infinite
+ * loop due to buggy firmware
+diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
+index c113e578cc6ce..124080955c4a0 100644
+--- a/drivers/firmware/arm_scmi/common.h
++++ b/drivers/firmware/arm_scmi/common.h
+@@ -147,6 +147,8 @@ int scmi_do_xfer_with_response(const struct scmi_handle *h,
+ struct scmi_xfer *xfer);
+ int scmi_xfer_get_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id,
+ size_t tx_size, size_t rx_size, struct scmi_xfer **p);
++void scmi_reset_rx_to_maxsz(const struct scmi_handle *handle,
++ struct scmi_xfer *xfer);
+ int scmi_handle_put(const struct scmi_handle *handle);
+ struct scmi_handle *scmi_handle_get(struct device *dev);
+ void scmi_set_handle(struct scmi_device *scmi_dev);
+@@ -156,6 +158,9 @@ void scmi_setup_protocol_implemented(const struct scmi_handle *handle,
+
+ int scmi_base_protocol_init(struct scmi_handle *h);
+
++int __init scmi_bus_init(void);
++void __exit scmi_bus_exit(void);
++
+ /* SCMI Transport */
+ /**
+ * struct scmi_chan_info - Structure representing a SCMI channel information
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index 03ec74242c141..5c2f4fab40994 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -402,6 +402,14 @@ int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer)
+ return ret;
+ }
+
++void scmi_reset_rx_to_maxsz(const struct scmi_handle *handle,
++ struct scmi_xfer *xfer)
++{
++ struct scmi_info *info = handle_to_scmi_info(handle);
++
++ xfer->rx.len = info->desc->max_msg_size;
++}
++
+ #define SCMI_MAX_RESPONSE_TIMEOUT (2 * MSEC_PER_SEC)
+
+ /**
+@@ -928,7 +936,21 @@ static struct platform_driver scmi_driver = {
+ .remove = scmi_remove,
+ };
+
+-module_platform_driver(scmi_driver);
++static int __init scmi_driver_init(void)
++{
++ scmi_bus_init();
++
++ return platform_driver_register(&scmi_driver);
++}
++module_init(scmi_driver_init);
++
++static void __exit scmi_driver_exit(void)
++{
++ scmi_bus_exit();
++
++ platform_driver_unregister(&scmi_driver);
++}
++module_exit(scmi_driver_exit);
+
+ MODULE_ALIAS("platform: arm-scmi");
+ MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>");
+diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
+index 4731daaacd19e..51c5a376fb472 100644
+--- a/drivers/firmware/arm_scmi/notify.c
++++ b/drivers/firmware/arm_scmi/notify.c
+@@ -1403,15 +1403,21 @@ static void scmi_protocols_late_init(struct work_struct *work)
+ "finalized PENDING handler - key:%X\n",
+ hndl->key);
+ ret = scmi_event_handler_enable_events(hndl);
++ if (ret) {
++ dev_dbg(ni->handle->dev,
++ "purging INVALID handler - key:%X\n",
++ hndl->key);
++ scmi_put_active_handler(ni, hndl);
++ }
+ } else {
+ ret = scmi_valid_pending_handler(ni, hndl);
+- }
+- if (ret) {
+- dev_dbg(ni->handle->dev,
+- "purging PENDING handler - key:%X\n",
+- hndl->key);
+- /* this hndl can be only a pending one */
+- scmi_put_handler_unlocked(ni, hndl);
++ if (ret) {
++ dev_dbg(ni->handle->dev,
++ "purging PENDING handler - key:%X\n",
++ hndl->key);
++ /* this hndl can be only a pending one */
++ scmi_put_handler_unlocked(ni, hndl);
++ }
+ }
+ }
+ mutex_unlock(&ni->pending_mtx);
+@@ -1468,7 +1474,7 @@ int scmi_notification_init(struct scmi_handle *handle)
+ ni->gid = gid;
+ ni->handle = handle;
+
+- ni->notify_wq = alloc_workqueue("scmi_notify",
++ ni->notify_wq = alloc_workqueue(dev_name(handle->dev),
+ WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
+ 0);
+ if (!ni->notify_wq)
+diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
+index 3e1e87012c95b..3e8b548a12b62 100644
+--- a/drivers/firmware/arm_scmi/perf.c
++++ b/drivers/firmware/arm_scmi/perf.c
+@@ -304,6 +304,8 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
+ }
+
+ tot_opp_cnt += num_returned;
++
++ scmi_reset_rx_to_maxsz(handle, t);
+ /*
+ * check for both returned and remaining to avoid infinite
+ * loop due to buggy firmware
+diff --git a/drivers/firmware/arm_scmi/reset.c b/drivers/firmware/arm_scmi/reset.c
+index 3691bafca0574..86bda46de8eb8 100644
+--- a/drivers/firmware/arm_scmi/reset.c
++++ b/drivers/firmware/arm_scmi/reset.c
+@@ -36,9 +36,7 @@ struct scmi_msg_reset_domain_reset {
+ #define EXPLICIT_RESET_ASSERT BIT(1)
+ #define ASYNCHRONOUS_RESET BIT(2)
+ __le32 reset_state;
+-#define ARCH_RESET_TYPE BIT(31)
+-#define COLD_RESET_STATE BIT(0)
+-#define ARCH_COLD_RESET (ARCH_RESET_TYPE | COLD_RESET_STATE)
++#define ARCH_COLD_RESET 0
+ };
+
+ struct scmi_msg_reset_notify {
+diff --git a/drivers/firmware/arm_scmi/sensors.c b/drivers/firmware/arm_scmi/sensors.c
+index 1af0ad362e823..4beee439b84ba 100644
+--- a/drivers/firmware/arm_scmi/sensors.c
++++ b/drivers/firmware/arm_scmi/sensors.c
+@@ -166,6 +166,8 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
+ }
+
+ desc_index += num_returned;
++
++ scmi_reset_rx_to_maxsz(handle, t);
+ /*
+ * check for both returned and remaining to avoid infinite
+ * loop due to buggy firmware
+diff --git a/drivers/firmware/arm_scmi/smc.c b/drivers/firmware/arm_scmi/smc.c
+index a1537d123e385..22f83af6853a1 100644
+--- a/drivers/firmware/arm_scmi/smc.c
++++ b/drivers/firmware/arm_scmi/smc.c
+@@ -149,6 +149,6 @@ static struct scmi_transport_ops scmi_smc_ops = {
+ const struct scmi_desc scmi_smc_desc = {
+ .ops = &scmi_smc_ops,
+ .max_rx_timeout_ms = 30,
+- .max_msg = 1,
++ .max_msg = 20,
+ .max_msg_size = 128,
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+index 8842c55d4490b..fc695126b6e75 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+@@ -46,7 +46,7 @@ const unsigned int amdgpu_ctx_num_entities[AMDGPU_HW_IP_NUM] = {
+ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
+ enum drm_sched_priority priority)
+ {
+- if (priority < 0 || priority >= DRM_SCHED_PRIORITY_MAX)
++ if (priority < 0 || priority >= DRM_SCHED_PRIORITY_COUNT)
+ return -EINVAL;
+
+ /* NORMAL and below are accessible by everyone */
+@@ -65,7 +65,7 @@ static int amdgpu_ctx_priority_permit(struct drm_file *filp,
+ static enum gfx_pipe_priority amdgpu_ctx_sched_prio_to_compute_prio(enum drm_sched_priority prio)
+ {
+ switch (prio) {
+- case DRM_SCHED_PRIORITY_HIGH_HW:
++ case DRM_SCHED_PRIORITY_HIGH:
+ case DRM_SCHED_PRIORITY_KERNEL:
+ return AMDGPU_GFX_PIPE_PRIO_HIGH;
+ default:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index d0b8d0d341af5..b4a8da8fc8fd7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3316,10 +3316,8 @@ fence_driver_init:
+ flush_delayed_work(&adev->delayed_init_work);
+
+ r = sysfs_create_files(&adev->dev->kobj, amdgpu_dev_attributes);
+- if (r) {
++ if (r)
+ dev_err(adev->dev, "Could not create amdgpu device attr\n");
+- return r;
+- }
+
+ if (IS_ENABLED(CONFIG_PERF_EVENTS))
+ r = amdgpu_pmu_init(adev);
+@@ -4376,7 +4374,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
+ retry: /* Rest of adevs pre asic reset from XGMI hive. */
+ list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
+ r = amdgpu_device_pre_asic_reset(tmp_adev,
+- NULL,
++ (tmp_adev == adev) ? job : NULL,
+ &need_full_reset);
+ /*TODO Should we stop ?*/
+ if (r) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+index 7f9e50247413d..fb3fa9b27d53b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+@@ -596,6 +596,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
+ struct ww_acquire_ctx ticket;
+ struct list_head list, duplicates;
+ uint64_t va_flags;
++ uint64_t vm_size;
+ int r = 0;
+
+ if (args->va_address < AMDGPU_VA_RESERVED_SIZE) {
+@@ -616,6 +617,15 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
+
+ args->va_address &= AMDGPU_GMC_HOLE_MASK;
+
++ vm_size = adev->vm_manager.max_pfn * AMDGPU_GPU_PAGE_SIZE;
++ vm_size -= AMDGPU_VA_RESERVED_SIZE;
++ if (args->va_address + args->map_size > vm_size) {
++ dev_dbg(&dev->pdev->dev,
++ "va_address 0x%llx is in top reserved area 0x%llx\n",
++ args->va_address + args->map_size, vm_size);
++ return -EINVAL;
++ }
++
+ if ((args->flags & ~valid_flags) && (args->flags & ~prt_flags)) {
+ dev_dbg(&dev->pdev->dev, "invalid flags combination 0x%08X\n",
+ args->flags);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index 937029ad5271a..dcfe8a3b03ffb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -251,7 +251,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
+ int i;
+
+ /* Signal all jobs not yet scheduled */
+- for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
++ for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
+ struct drm_sched_rq *rq = &sched->sched_rq[i];
+
+ if (!rq)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 06757681b2cec..f1cae42dcc364 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -206,7 +206,8 @@ static int psp_sw_fini(void *handle)
+ adev->psp.ta_fw = NULL;
+ }
+
+- if (adev->asic_type == CHIP_NAVI10)
++ if (adev->asic_type == CHIP_NAVI10 ||
++ adev->asic_type == CHIP_SIENNA_CICHLID)
+ psp_sysfs_fini(adev);
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 1bedb416eebd0..b4fb5a473df5a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -367,12 +367,19 @@ static ssize_t amdgpu_ras_debugfs_ctrl_write(struct file *f, const char __user *
+ static ssize_t amdgpu_ras_debugfs_eeprom_write(struct file *f, const char __user *buf,
+ size_t size, loff_t *pos)
+ {
+- struct amdgpu_device *adev = (struct amdgpu_device *)file_inode(f)->i_private;
++ struct amdgpu_device *adev =
++ (struct amdgpu_device *)file_inode(f)->i_private;
+ int ret;
+
+- ret = amdgpu_ras_eeprom_reset_table(&adev->psp.ras.ras->eeprom_control);
++ ret = amdgpu_ras_eeprom_reset_table(
++ &(amdgpu_ras_get_context(adev)->eeprom_control));
+
+- return ret == 1 ? size : -EIO;
++ if (ret == 1) {
++ amdgpu_ras_get_context(adev)->flags = RAS_DEFAULT_FLAGS;
++ return size;
++ } else {
++ return -EIO;
++ }
+ }
+
+ static const struct file_operations amdgpu_ras_debugfs_ctrl_ops = {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 13ea8ebc421c6..6d4fc79bf84aa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -267,7 +267,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
+ &ring->sched;
+ }
+
+- for (i = 0; i < DRM_SCHED_PRIORITY_MAX; ++i)
++ for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; ++i)
+ atomic_set(&ring->num_jobs[i], 0);
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+index da871d84b7424..7112137689db0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+@@ -243,7 +243,7 @@ struct amdgpu_ring {
+ bool has_compute_vm_bug;
+ bool no_scheduler;
+
+- atomic_t num_jobs[DRM_SCHED_PRIORITY_MAX];
++ atomic_t num_jobs[DRM_SCHED_PRIORITY_COUNT];
+ struct mutex priority_mutex;
+ /* protected by priority_mutex */
+ int priority;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+index c799691dfa848..17661ede94885 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
+@@ -36,14 +36,14 @@ enum drm_sched_priority amdgpu_to_sched_priority(int amdgpu_priority)
+ {
+ switch (amdgpu_priority) {
+ case AMDGPU_CTX_PRIORITY_VERY_HIGH:
+- return DRM_SCHED_PRIORITY_HIGH_HW;
++ return DRM_SCHED_PRIORITY_HIGH;
+ case AMDGPU_CTX_PRIORITY_HIGH:
+- return DRM_SCHED_PRIORITY_HIGH_SW;
++ return DRM_SCHED_PRIORITY_HIGH;
+ case AMDGPU_CTX_PRIORITY_NORMAL:
+ return DRM_SCHED_PRIORITY_NORMAL;
+ case AMDGPU_CTX_PRIORITY_LOW:
+ case AMDGPU_CTX_PRIORITY_VERY_LOW:
+- return DRM_SCHED_PRIORITY_LOW;
++ return DRM_SCHED_PRIORITY_MIN;
+ case AMDGPU_CTX_PRIORITY_UNSET:
+ return DRM_SCHED_PRIORITY_UNSET;
+ default:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 978bae7313980..b7fd0cdffce0e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -2101,7 +2101,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
+ ring = adev->mman.buffer_funcs_ring;
+ sched = &ring->sched;
+ r = drm_sched_entity_init(&adev->mman.entity,
+- DRM_SCHED_PRIORITY_KERNEL, &sched,
++ DRM_SCHED_PRIORITY_KERNEL, &sched,
+ 1, NULL);
+ if (r) {
+ DRM_ERROR("Failed setting up TTM BO move entity (%d)\n",
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index 495c3d7bb2b2b..f3b7287e84c43 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -68,6 +68,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
+
+ INIT_DELAYED_WORK(&adev->vcn.idle_work, amdgpu_vcn_idle_work_handler);
+ mutex_init(&adev->vcn.vcn_pg_lock);
++ mutex_init(&adev->vcn.vcn1_jpeg1_workaround);
+ atomic_set(&adev->vcn.total_submission_cnt, 0);
+ for (i = 0; i < adev->vcn.num_vcn_inst; i++)
+ atomic_set(&adev->vcn.inst[i].dpg_enc_submission_cnt, 0);
+@@ -237,6 +238,7 @@ int amdgpu_vcn_sw_fini(struct amdgpu_device *adev)
+ }
+
+ release_firmware(adev->vcn.fw);
++ mutex_destroy(&adev->vcn.vcn1_jpeg1_workaround);
+ mutex_destroy(&adev->vcn.vcn_pg_lock);
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+index 7a9b804bc988a..17691158f783e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+@@ -220,6 +220,7 @@ struct amdgpu_vcn {
+ struct amdgpu_vcn_inst inst[AMDGPU_MAX_VCN_INSTANCES];
+ struct amdgpu_vcn_reg internal;
+ struct mutex vcn_pg_lock;
++ struct mutex vcn1_jpeg1_workaround;
+ atomic_t total_submission_cnt;
+
+ unsigned harvest_config;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+index 770025a5e5003..5b6788fb540a3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+@@ -112,8 +112,8 @@ struct amdgpu_bo_list_entry;
+ #define AMDGPU_MMHUB_0 1
+ #define AMDGPU_MMHUB_1 2
+
+-/* hardcode that limit for now */
+-#define AMDGPU_VA_RESERVED_SIZE (1ULL << 20)
++/* Reserve 2MB at top/bottom of address space for kernel use */
++#define AMDGPU_VA_RESERVED_SIZE (2ULL << 20)
+
+ /* max vmids dedicated for process */
+ #define AMDGPU_VM_MAX_RESERVED_VMID 1
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index f73ce97212339..b1cbb958d5cd6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -112,6 +112,22 @@
+ #define mmCP_HYP_ME_UCODE_DATA 0x5817
+ #define mmCP_HYP_ME_UCODE_DATA_BASE_IDX 1
+
++//CC_GC_SA_UNIT_DISABLE
++#define mmCC_GC_SA_UNIT_DISABLE 0x0fe9
++#define mmCC_GC_SA_UNIT_DISABLE_BASE_IDX 0
++#define CC_GC_SA_UNIT_DISABLE__SA_DISABLE__SHIFT 0x8
++#define CC_GC_SA_UNIT_DISABLE__SA_DISABLE_MASK 0x0000FF00L
++//GC_USER_SA_UNIT_DISABLE
++#define mmGC_USER_SA_UNIT_DISABLE 0x0fea
++#define mmGC_USER_SA_UNIT_DISABLE_BASE_IDX 0
++#define GC_USER_SA_UNIT_DISABLE__SA_DISABLE__SHIFT 0x8
++#define GC_USER_SA_UNIT_DISABLE__SA_DISABLE_MASK 0x0000FF00L
++//PA_SC_ENHANCE_3
++#define mmPA_SC_ENHANCE_3 0x1085
++#define mmPA_SC_ENHANCE_3_BASE_IDX 0
++#define PA_SC_ENHANCE_3__FORCE_PBB_WORKLOAD_MODE_TO_ZERO__SHIFT 0x3
++#define PA_SC_ENHANCE_3__FORCE_PBB_WORKLOAD_MODE_TO_ZERO_MASK 0x00000008L
++
+ MODULE_FIRMWARE("amdgpu/navi10_ce.bin");
+ MODULE_FIRMWARE("amdgpu/navi10_pfp.bin");
+ MODULE_FIRMWARE("amdgpu/navi10_me.bin");
+@@ -3091,6 +3107,7 @@ static const struct soc15_reg_golden golden_settings_gc_10_3[] =
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_ADDR_MATCH_MASK, 0xffffffff, 0xffffffcf),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_CM_CTRL1, 0xff8fff0f, 0x580f1008),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2C_CTRL3, 0xf7ffffff, 0x10f80988),
++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmLDS_CONFIG, 0x00000020, 0x00000020),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_CL_ENHANCE, 0xf17fffff, 0x01200007),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_BINNER_TIMEOUT_COUNTER, 0xffffffff, 0x00000800),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE_2, 0xffffffbf, 0x00000820),
+@@ -3188,6 +3205,8 @@ static int gfx_v10_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev);
+ static void gfx_v10_0_ring_emit_ce_meta(struct amdgpu_ring *ring, bool resume);
+ static void gfx_v10_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume);
+ static void gfx_v10_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, bool start, bool secure);
++static u32 gfx_v10_3_get_disabled_sa(struct amdgpu_device *adev);
++static void gfx_v10_3_program_pbb_mode(struct amdgpu_device *adev);
+
+ static void gfx10_kiq_set_resources(struct amdgpu_ring *kiq_ring, uint64_t queue_mask)
+ {
+@@ -4518,12 +4537,17 @@ static void gfx_v10_0_setup_rb(struct amdgpu_device *adev)
+ int i, j;
+ u32 data;
+ u32 active_rbs = 0;
++ u32 bitmap;
+ u32 rb_bitmap_width_per_sh = adev->gfx.config.max_backends_per_se /
+ adev->gfx.config.max_sh_per_se;
+
+ mutex_lock(&adev->grbm_idx_mutex);
+ for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+ for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
++ bitmap = i * adev->gfx.config.max_sh_per_se + j;
++ if ((adev->asic_type == CHIP_SIENNA_CICHLID) &&
++ ((gfx_v10_3_get_disabled_sa(adev) >> bitmap) & 1))
++ continue;
+ gfx_v10_0_select_se_sh(adev, i, j, 0xffffffff);
+ data = gfx_v10_0_get_rb_active_bitmap(adev);
+ active_rbs |= data << ((i * adev->gfx.config.max_sh_per_se + j) *
+@@ -6928,6 +6952,9 @@ static int gfx_v10_0_hw_init(void *handle)
+ if (r)
+ return r;
+
++ if (adev->asic_type == CHIP_SIENNA_CICHLID)
++ gfx_v10_3_program_pbb_mode(adev);
++
+ return r;
+ }
+
+@@ -8739,6 +8766,10 @@ static int gfx_v10_0_get_cu_info(struct amdgpu_device *adev,
+ mutex_lock(&adev->grbm_idx_mutex);
+ for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+ for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
++ bitmap = i * adev->gfx.config.max_sh_per_se + j;
++ if ((adev->asic_type == CHIP_SIENNA_CICHLID) &&
++ ((gfx_v10_3_get_disabled_sa(adev) >> bitmap) & 1))
++ continue;
+ mask = 1;
+ ao_bitmap = 0;
+ counter = 0;
+@@ -8773,6 +8804,47 @@ static int gfx_v10_0_get_cu_info(struct amdgpu_device *adev,
+ return 0;
+ }
+
++static u32 gfx_v10_3_get_disabled_sa(struct amdgpu_device *adev)
++{
++ uint32_t efuse_setting, vbios_setting, disabled_sa, max_sa_mask;
++
++ efuse_setting = RREG32_SOC15(GC, 0, mmCC_GC_SA_UNIT_DISABLE);
++ efuse_setting &= CC_GC_SA_UNIT_DISABLE__SA_DISABLE_MASK;
++ efuse_setting >>= CC_GC_SA_UNIT_DISABLE__SA_DISABLE__SHIFT;
++
++ vbios_setting = RREG32_SOC15(GC, 0, mmGC_USER_SA_UNIT_DISABLE);
++ vbios_setting &= GC_USER_SA_UNIT_DISABLE__SA_DISABLE_MASK;
++ vbios_setting >>= GC_USER_SA_UNIT_DISABLE__SA_DISABLE__SHIFT;
++
++ max_sa_mask = amdgpu_gfx_create_bitmask(adev->gfx.config.max_sh_per_se *
++ adev->gfx.config.max_shader_engines);
++ disabled_sa = efuse_setting | vbios_setting;
++ disabled_sa &= max_sa_mask;
++
++ return disabled_sa;
++}
++
++static void gfx_v10_3_program_pbb_mode(struct amdgpu_device *adev)
++{
++ uint32_t max_sa_per_se, max_sa_per_se_mask, max_shader_engines;
++ uint32_t disabled_sa_mask, se_index, disabled_sa_per_se;
++
++ disabled_sa_mask = gfx_v10_3_get_disabled_sa(adev);
++
++ max_sa_per_se = adev->gfx.config.max_sh_per_se;
++ max_sa_per_se_mask = (1 << max_sa_per_se) - 1;
++ max_shader_engines = adev->gfx.config.max_shader_engines;
++
++ for (se_index = 0; max_shader_engines > se_index; se_index++) {
++ disabled_sa_per_se = disabled_sa_mask >> (se_index * max_sa_per_se);
++ disabled_sa_per_se &= max_sa_per_se_mask;
++ if (disabled_sa_per_se == max_sa_per_se_mask) {
++ WREG32_FIELD15(GC, 0, PA_SC_ENHANCE_3, FORCE_PBB_WORKLOAD_MODE_TO_ZERO, 1);
++ break;
++ }
++ }
++}
++
+ const struct amdgpu_ip_block_version gfx_v10_0_ip_block =
+ {
+ .type = AMD_IP_BLOCK_TYPE_GFX,
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c
+index bc300283b6abc..c600b61b5f45d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v1_0.c
+@@ -33,6 +33,7 @@
+
+ static void jpeg_v1_0_set_dec_ring_funcs(struct amdgpu_device *adev);
+ static void jpeg_v1_0_set_irq_funcs(struct amdgpu_device *adev);
++static void jpeg_v1_0_ring_begin_use(struct amdgpu_ring *ring);
+
+ static void jpeg_v1_0_decode_ring_patch_wreg(struct amdgpu_ring *ring, uint32_t *ptr, uint32_t reg_offset, uint32_t val)
+ {
+@@ -564,8 +565,8 @@ static const struct amdgpu_ring_funcs jpeg_v1_0_decode_ring_vm_funcs = {
+ .insert_start = jpeg_v1_0_decode_ring_insert_start,
+ .insert_end = jpeg_v1_0_decode_ring_insert_end,
+ .pad_ib = amdgpu_ring_generic_pad_ib,
+- .begin_use = vcn_v1_0_ring_begin_use,
+- .end_use = amdgpu_vcn_ring_end_use,
++ .begin_use = jpeg_v1_0_ring_begin_use,
++ .end_use = vcn_v1_0_ring_end_use,
+ .emit_wreg = jpeg_v1_0_decode_ring_emit_wreg,
+ .emit_reg_wait = jpeg_v1_0_decode_ring_emit_reg_wait,
+ .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+@@ -586,3 +587,22 @@ static void jpeg_v1_0_set_irq_funcs(struct amdgpu_device *adev)
+ {
+ adev->jpeg.inst->irq.funcs = &jpeg_v1_0_irq_funcs;
+ }
++
++static void jpeg_v1_0_ring_begin_use(struct amdgpu_ring *ring)
++{
++ struct amdgpu_device *adev = ring->adev;
++ bool set_clocks = !cancel_delayed_work_sync(&adev->vcn.idle_work);
++ int cnt = 0;
++
++ mutex_lock(&adev->vcn.vcn1_jpeg1_workaround);
++
++ if (amdgpu_fence_wait_empty(&adev->vcn.inst->ring_dec))
++ DRM_ERROR("JPEG dec: vcn dec ring may not be empty\n");
++
++ for (cnt = 0; cnt < adev->vcn.num_enc_rings; cnt++) {
++ if (amdgpu_fence_wait_empty(&adev->vcn.inst->ring_enc[cnt]))
++ DRM_ERROR("JPEG dec: vcn enc ring[%d] may not be empty\n", cnt);
++ }
++
++ vcn_v1_0_set_pg_for_begin_use(ring, set_clocks);
++}
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+index 927c330fad21c..ec4ce8746d5ef 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
+@@ -54,6 +54,7 @@ static int vcn_v1_0_pause_dpg_mode(struct amdgpu_device *adev,
+ int inst_idx, struct dpg_pause_state *new_state);
+
+ static void vcn_v1_0_idle_work_handler(struct work_struct *work);
++static void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring);
+
+ /**
+ * vcn_v1_0_early_init - set function pointers
+@@ -1804,11 +1805,24 @@ static void vcn_v1_0_idle_work_handler(struct work_struct *work)
+ }
+ }
+
+-void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring)
++static void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring)
+ {
+- struct amdgpu_device *adev = ring->adev;
++ struct amdgpu_device *adev = ring->adev;
+ bool set_clocks = !cancel_delayed_work_sync(&adev->vcn.idle_work);
+
++ mutex_lock(&adev->vcn.vcn1_jpeg1_workaround);
++
++ if (amdgpu_fence_wait_empty(&ring->adev->jpeg.inst->ring_dec))
++ DRM_ERROR("VCN dec: jpeg dec ring may not be empty\n");
++
++ vcn_v1_0_set_pg_for_begin_use(ring, set_clocks);
++
++}
++
++void vcn_v1_0_set_pg_for_begin_use(struct amdgpu_ring *ring, bool set_clocks)
++{
++ struct amdgpu_device *adev = ring->adev;
++
+ if (set_clocks) {
+ amdgpu_gfx_off_ctrl(adev, false);
+ if (adev->pm.dpm_enabled)
+@@ -1844,6 +1858,12 @@ void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring)
+ }
+ }
+
++void vcn_v1_0_ring_end_use(struct amdgpu_ring *ring)
++{
++ schedule_delayed_work(&ring->adev->vcn.idle_work, VCN_IDLE_TIMEOUT);
++ mutex_unlock(&ring->adev->vcn.vcn1_jpeg1_workaround);
++}
++
+ static const struct amd_ip_funcs vcn_v1_0_ip_funcs = {
+ .name = "vcn_v1_0",
+ .early_init = vcn_v1_0_early_init,
+@@ -1891,7 +1911,7 @@ static const struct amdgpu_ring_funcs vcn_v1_0_dec_ring_vm_funcs = {
+ .insert_end = vcn_v1_0_dec_ring_insert_end,
+ .pad_ib = amdgpu_ring_generic_pad_ib,
+ .begin_use = vcn_v1_0_ring_begin_use,
+- .end_use = amdgpu_vcn_ring_end_use,
++ .end_use = vcn_v1_0_ring_end_use,
+ .emit_wreg = vcn_v1_0_dec_ring_emit_wreg,
+ .emit_reg_wait = vcn_v1_0_dec_ring_emit_reg_wait,
+ .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+@@ -1923,7 +1943,7 @@ static const struct amdgpu_ring_funcs vcn_v1_0_enc_ring_vm_funcs = {
+ .insert_end = vcn_v1_0_enc_ring_insert_end,
+ .pad_ib = amdgpu_ring_generic_pad_ib,
+ .begin_use = vcn_v1_0_ring_begin_use,
+- .end_use = amdgpu_vcn_ring_end_use,
++ .end_use = vcn_v1_0_ring_end_use,
+ .emit_wreg = vcn_v1_0_enc_ring_emit_wreg,
+ .emit_reg_wait = vcn_v1_0_enc_ring_emit_reg_wait,
+ .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.h b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.h
+index f67d7391fc21c..1f1cc7f0ece70 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.h
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.h
+@@ -24,7 +24,8 @@
+ #ifndef __VCN_V1_0_H__
+ #define __VCN_V1_0_H__
+
+-void vcn_v1_0_ring_begin_use(struct amdgpu_ring *ring);
++void vcn_v1_0_ring_end_use(struct amdgpu_ring *ring);
++void vcn_v1_0_set_pg_for_begin_use(struct amdgpu_ring *ring, bool set_clocks);
+
+ extern const struct amdgpu_ip_block_version vcn_v1_0_ip_block;
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c
+index 72e4d61ac7522..ad05933423337 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v10.c
+@@ -58,8 +58,9 @@ static int update_qpd_v10(struct device_queue_manager *dqm,
+ /* check if sh_mem_config register already configured */
+ if (qpd->sh_mem_config == 0) {
+ qpd->sh_mem_config =
+- SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
+- SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT;
++ (SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
++ SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) |
++ (3 << SH_MEM_CONFIG__INITIAL_INST_PREFETCH__SHIFT);
+ #if 0
+ /* TODO:
+ * This shouldn't be an issue with Navi10. Verify.
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 5474f7e4c75b1..6beccd5a0941a 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4882,6 +4882,13 @@ static void amdgpu_dm_connector_destroy(struct drm_connector *connector)
+ struct amdgpu_device *adev = connector->dev->dev_private;
+ struct amdgpu_display_manager *dm = &adev->dm;
+
++ /*
++ * Call only if mst_mgr was initialized before since it's not done
++ * for all connector types.
++ */
++ if (aconnector->mst_mgr.dev)
++ drm_dp_mst_topology_mgr_destroy(&aconnector->mst_mgr);
++
+ #if defined(CONFIG_BACKLIGHT_CLASS_DEVICE) ||\
+ defined(CONFIG_BACKLIGHT_CLASS_DEVICE_MODULE)
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c
+index d031bd3d30724..807dca8f7d7aa 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c
+@@ -79,8 +79,7 @@ int dce112_set_clock(struct clk_mgr *clk_mgr_base, int requested_clk_khz)
+ memset(&dce_clk_params, 0, sizeof(dce_clk_params));
+
+ /* Make sure requested clock isn't lower than minimum threshold*/
+- if (requested_clk_khz > 0)
+- requested_clk_khz = max(requested_clk_khz,
++ requested_clk_khz = max(requested_clk_khz,
+ clk_mgr_dce->base.dentist_vco_freq_khz / 62);
+
+ dce_clk_params.target_clock_frequency = requested_clk_khz;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+index 21a3073c8929e..2f8fee05547ac 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+@@ -761,6 +761,7 @@ void rn_clk_mgr_construct(
+ {
+ struct dc_debug_options *debug = &ctx->dc->debug;
+ struct dpm_clocks clock_table = { 0 };
++ enum pp_smu_status status = 0;
+
+ clk_mgr->base.ctx = ctx;
+ clk_mgr->base.funcs = &dcn21_funcs;
+@@ -817,8 +818,10 @@ void rn_clk_mgr_construct(
+ clk_mgr->base.bw_params = &rn_bw_params;
+
+ if (pp_smu && pp_smu->rn_funcs.get_dpm_clock_table) {
+- pp_smu->rn_funcs.get_dpm_clock_table(&pp_smu->rn_funcs.pp_smu, &clock_table);
+- if (ctx->dc_bios && ctx->dc_bios->integrated_info) {
++ status = pp_smu->rn_funcs.get_dpm_clock_table(&pp_smu->rn_funcs.pp_smu, &clock_table);
++
++ if (status == PP_SMU_RESULT_OK &&
++ ctx->dc_bios && ctx->dc_bios->integrated_info) {
+ rn_clk_mgr_helper_populate_bw_params (clk_mgr->base.bw_params, &clock_table, ctx->dc_bios->integrated_info);
+ }
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 437d1a7a16fe7..b0f8bfd48d102 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -2441,7 +2441,7 @@ enum dc_status dc_link_validate_mode_timing(
+ /* A hack to avoid failing any modes for EDID override feature on
+ * topology change such as lower quality cable for DP or different dongle
+ */
+- if (link->remote_sinks[0])
++ if (link->remote_sinks[0] && link->remote_sinks[0]->sink_signal == SIGNAL_TYPE_VIRTUAL)
+ return DC_OK;
+
+ /* Passive Dongle */
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h
+index 99c68ca9c7e00..967d04d75b989 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.h
+@@ -54,7 +54,7 @@
+ SR(BL_PWM_CNTL2), \
+ SR(BL_PWM_PERIOD_CNTL), \
+ SR(BL_PWM_GRP1_REG_LOCK), \
+- SR(BIOS_SCRATCH_2)
++ NBIO_SR(BIOS_SCRATCH_2)
+
+ #define DCE_PANEL_CNTL_SF(reg_name, field_name, post_fix)\
+ .field_name = reg_name ## __ ## field_name ## post_fix
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c
+index 842abb4c475bc..62651d0041fd9 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.c
+@@ -896,10 +896,10 @@ void enc1_stream_encoder_dp_blank(
+ */
+ REG_UPDATE(DP_VID_STREAM_CNTL, DP_VID_STREAM_DIS_DEFER, 2);
+ /* Larger delay to wait until VBLANK - use max retry of
+- * 10us*5000=50ms. This covers 41.7ms of minimum 24 Hz mode +
++ * 10us*10200=102ms. This covers 100.0ms of minimum 10 Hz mode +
+ * a little more because we may not trust delay accuracy.
+ */
+- max_retries = DP_BLANK_MAX_RETRY * 250;
++ max_retries = DP_BLANK_MAX_RETRY * 501;
+
+ /* disable DP stream */
+ REG_UPDATE(DP_VID_STREAM_CNTL, DP_VID_STREAM_ENABLE, 0);
+diff --git a/drivers/gpu/drm/amd/display/dc/gpio/gpio_base.c b/drivers/gpu/drm/amd/display/dc/gpio/gpio_base.c
+index f67c18375bfdb..dac427b68fd7b 100644
+--- a/drivers/gpu/drm/amd/display/dc/gpio/gpio_base.c
++++ b/drivers/gpu/drm/amd/display/dc/gpio/gpio_base.c
+@@ -63,13 +63,13 @@ enum gpio_result dal_gpio_open_ex(
+ enum gpio_mode mode)
+ {
+ if (gpio->pin) {
+- ASSERT_CRITICAL(false);
++ BREAK_TO_DEBUGGER();
+ return GPIO_RESULT_ALREADY_OPENED;
+ }
+
+ // No action if allocation failed during gpio construct
+ if (!gpio->hw_container.ddc) {
+- ASSERT_CRITICAL(false);
++ BREAK_TO_DEBUGGER();
+ return GPIO_RESULT_NON_SPECIFIC_ERROR;
+ }
+ gpio->mode = mode;
+diff --git a/drivers/gpu/drm/amd/display/dc/os_types.h b/drivers/gpu/drm/amd/display/dc/os_types.h
+index c3bbfe397e8df..aee98b1d7ebf3 100644
+--- a/drivers/gpu/drm/amd/display/dc/os_types.h
++++ b/drivers/gpu/drm/amd/display/dc/os_types.h
+@@ -90,7 +90,7 @@
+ * general debug capabilities
+ *
+ */
+-#if defined(CONFIG_HAVE_KGDB) || defined(CONFIG_KGDB)
++#if defined(CONFIG_DEBUG_KERNEL_DC) && (defined(CONFIG_HAVE_KGDB) || defined(CONFIG_KGDB))
+ #define ASSERT_CRITICAL(expr) do { \
+ if (WARN_ON(!(expr))) { \
+ kgdb_breakpoint(); \
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index 4a3b64aa21ceb..fc63d9e32e1f8 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -2873,7 +2873,7 @@ static int smu7_vblank_too_short(struct pp_hwmgr *hwmgr,
+ if (hwmgr->is_kicker)
+ switch_limit_us = data->is_memory_gddr5 ? 450 : 150;
+ else
+- switch_limit_us = data->is_memory_gddr5 ? 190 : 150;
++ switch_limit_us = data->is_memory_gddr5 ? 200 : 150;
+ break;
+ case CHIP_VEGAM:
+ switch_limit_us = 30;
+diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu_types.h b/drivers/gpu/drm/amd/powerplay/inc/smu_types.h
+index 7b585e205a5a0..3b868f2adc12f 100644
+--- a/drivers/gpu/drm/amd/powerplay/inc/smu_types.h
++++ b/drivers/gpu/drm/amd/powerplay/inc/smu_types.h
+@@ -217,6 +217,7 @@ enum smu_clk_type {
+ __SMU_DUMMY_MAP(DPM_MP0CLK), \
+ __SMU_DUMMY_MAP(DPM_LINK), \
+ __SMU_DUMMY_MAP(DPM_DCEFCLK), \
++ __SMU_DUMMY_MAP(DPM_XGMI), \
+ __SMU_DUMMY_MAP(DS_GFXCLK), \
+ __SMU_DUMMY_MAP(DS_SOCCLK), \
+ __SMU_DUMMY_MAP(DS_LCLK), \
+diff --git a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+index b1547a83e7217..e0992cd7914ec 100644
+--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+@@ -2463,37 +2463,11 @@ static const struct i2c_algorithm navi10_i2c_algo = {
+ .functionality = navi10_i2c_func,
+ };
+
+-static int navi10_i2c_control_init(struct smu_context *smu, struct i2c_adapter *control)
+-{
+- struct amdgpu_device *adev = to_amdgpu_device(control);
+- int res;
+-
+- control->owner = THIS_MODULE;
+- control->class = I2C_CLASS_SPD;
+- control->dev.parent = &adev->pdev->dev;
+- control->algo = &navi10_i2c_algo;
+- snprintf(control->name, sizeof(control->name), "AMDGPU SMU");
+-
+- res = i2c_add_adapter(control);
+- if (res)
+- DRM_ERROR("Failed to register hw i2c, err: %d\n", res);
+-
+- return res;
+-}
+-
+-static void navi10_i2c_control_fini(struct smu_context *smu, struct i2c_adapter *control)
+-{
+- i2c_del_adapter(control);
+-}
+-
+-
+ static const struct pptable_funcs navi10_ppt_funcs = {
+ .get_allowed_feature_mask = navi10_get_allowed_feature_mask,
+ .set_default_dpm_table = navi10_set_default_dpm_table,
+ .dpm_set_vcn_enable = navi10_dpm_set_vcn_enable,
+ .dpm_set_jpeg_enable = navi10_dpm_set_jpeg_enable,
+- .i2c_init = navi10_i2c_control_init,
+- .i2c_fini = navi10_i2c_control_fini,
+ .print_clk_levels = navi10_print_clk_levels,
+ .force_clk_levels = navi10_force_clk_levels,
+ .populate_umd_state_clk = navi10_populate_umd_state_clk,
+diff --git a/drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c
+index ace682fde22fb..8f41496630a52 100644
+--- a/drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/sienna_cichlid_ppt.c
+@@ -150,14 +150,17 @@ static struct cmn2asic_mapping sienna_cichlid_feature_mask_map[SMU_FEATURE_COUNT
+ FEA_MAP(DPM_GFXCLK),
+ FEA_MAP(DPM_GFX_GPO),
+ FEA_MAP(DPM_UCLK),
++ FEA_MAP(DPM_FCLK),
+ FEA_MAP(DPM_SOCCLK),
+ FEA_MAP(DPM_MP0CLK),
+ FEA_MAP(DPM_LINK),
+ FEA_MAP(DPM_DCEFCLK),
++ FEA_MAP(DPM_XGMI),
+ FEA_MAP(MEM_VDDCI_SCALING),
+ FEA_MAP(MEM_MVDD_SCALING),
+ FEA_MAP(DS_GFXCLK),
+ FEA_MAP(DS_SOCCLK),
++ FEA_MAP(DS_FCLK),
+ FEA_MAP(DS_LCLK),
+ FEA_MAP(DS_DCEFCLK),
+ FEA_MAP(DS_UCLK),
+@@ -447,6 +450,9 @@ static int sienna_cichlid_get_smu_metrics_data(struct smu_context *smu,
+ case METRICS_CURR_DCEFCLK:
+ *value = metrics->CurrClock[PPCLK_DCEFCLK];
+ break;
++ case METRICS_CURR_FCLK:
++ *value = metrics->CurrClock[PPCLK_FCLK];
++ break;
+ case METRICS_AVERAGE_GFXCLK:
+ if (metrics->AverageGfxActivity <= SMU_11_0_7_GFX_BUSY_THRESHOLD)
+ *value = metrics->AverageGfxclkFrequencyPostDs;
+diff --git a/drivers/gpu/drm/ast/ast_drv.c b/drivers/gpu/drm/ast/ast_drv.c
+index 0b58f7aee6b01..9d04f2b5225cf 100644
+--- a/drivers/gpu/drm/ast/ast_drv.c
++++ b/drivers/gpu/drm/ast/ast_drv.c
+@@ -43,9 +43,33 @@ int ast_modeset = -1;
+ MODULE_PARM_DESC(modeset, "Disable/Enable modesetting");
+ module_param_named(modeset, ast_modeset, int, 0400);
+
+-#define PCI_VENDOR_ASPEED 0x1a03
++/*
++ * DRM driver
++ */
++
++DEFINE_DRM_GEM_FOPS(ast_fops);
++
++static struct drm_driver ast_driver = {
++ .driver_features = DRIVER_ATOMIC |
++ DRIVER_GEM |
++ DRIVER_MODESET,
++
++ .fops = &ast_fops,
++ .name = DRIVER_NAME,
++ .desc = DRIVER_DESC,
++ .date = DRIVER_DATE,
++ .major = DRIVER_MAJOR,
++ .minor = DRIVER_MINOR,
++ .patchlevel = DRIVER_PATCHLEVEL,
+
+-static struct drm_driver driver;
++ DRM_GEM_VRAM_DRIVER
++};
++
++/*
++ * PCI driver
++ */
++
++#define PCI_VENDOR_ASPEED 0x1a03
+
+ #define AST_VGA_DEVICE(id, info) { \
+ .class = PCI_BASE_CLASS_DISPLAY << 16, \
+@@ -56,13 +80,13 @@ static struct drm_driver driver;
+ .subdevice = PCI_ANY_ID, \
+ .driver_data = (unsigned long) info }
+
+-static const struct pci_device_id pciidlist[] = {
++static const struct pci_device_id ast_pciidlist[] = {
+ AST_VGA_DEVICE(PCI_CHIP_AST2000, NULL),
+ AST_VGA_DEVICE(PCI_CHIP_AST2100, NULL),
+ {0, 0, 0},
+ };
+
+-MODULE_DEVICE_TABLE(pci, pciidlist);
++MODULE_DEVICE_TABLE(pci, ast_pciidlist);
+
+ static void ast_kick_out_firmware_fb(struct pci_dev *pdev)
+ {
+@@ -94,7 +118,7 @@ static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ if (ret)
+ return ret;
+
+- dev = drm_dev_alloc(&driver, &pdev->dev);
++ dev = drm_dev_alloc(&ast_driver, &pdev->dev);
+ if (IS_ERR(dev))
+ return PTR_ERR(dev);
+
+@@ -118,11 +142,9 @@ err_ast_driver_unload:
+ err_drm_dev_put:
+ drm_dev_put(dev);
+ return ret;
+-
+ }
+
+-static void
+-ast_pci_remove(struct pci_dev *pdev)
++static void ast_pci_remove(struct pci_dev *pdev)
+ {
+ struct drm_device *dev = pci_get_drvdata(pdev);
+
+@@ -217,30 +239,12 @@ static const struct dev_pm_ops ast_pm_ops = {
+
+ static struct pci_driver ast_pci_driver = {
+ .name = DRIVER_NAME,
+- .id_table = pciidlist,
++ .id_table = ast_pciidlist,
+ .probe = ast_pci_probe,
+ .remove = ast_pci_remove,
+ .driver.pm = &ast_pm_ops,
+ };
+
+-DEFINE_DRM_GEM_FOPS(ast_fops);
+-
+-static struct drm_driver driver = {
+- .driver_features = DRIVER_ATOMIC |
+- DRIVER_GEM |
+- DRIVER_MODESET,
+-
+- .fops = &ast_fops,
+- .name = DRIVER_NAME,
+- .desc = DRIVER_DESC,
+- .date = DRIVER_DATE,
+- .major = DRIVER_MAJOR,
+- .minor = DRIVER_MINOR,
+- .patchlevel = DRIVER_PATCHLEVEL,
+-
+- DRM_GEM_VRAM_DRIVER
+-};
+-
+ static int __init ast_init(void)
+ {
+ if (vgacon_text_force() && ast_modeset == -1)
+@@ -261,4 +265,3 @@ module_exit(ast_exit);
+ MODULE_AUTHOR(DRIVER_AUTHOR);
+ MODULE_DESCRIPTION(DRIVER_DESC);
+ MODULE_LICENSE("GPL and additional rights");
+-
+diff --git a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+index 6200f12a37e69..ab8174831cf40 100644
+--- a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
++++ b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+@@ -302,8 +302,12 @@ static int stdp4028_ge_b850v3_fw_probe(struct i2c_client *stdp4028_i2c,
+ const struct i2c_device_id *id)
+ {
+ struct device *dev = &stdp4028_i2c->dev;
++ int ret;
++
++ ret = ge_b850v3_lvds_init(dev);
+
+- ge_b850v3_lvds_init(dev);
++ if (ret)
++ return ret;
+
+ ge_b850v3_lvds_ptr->stdp4028_i2c = stdp4028_i2c;
+ i2c_set_clientdata(stdp4028_i2c, ge_b850v3_lvds_ptr);
+@@ -361,8 +365,12 @@ static int stdp2690_ge_b850v3_fw_probe(struct i2c_client *stdp2690_i2c,
+ const struct i2c_device_id *id)
+ {
+ struct device *dev = &stdp2690_i2c->dev;
++ int ret;
++
++ ret = ge_b850v3_lvds_init(dev);
+
+- ge_b850v3_lvds_init(dev);
++ if (ret)
++ return ret;
+
+ ge_b850v3_lvds_ptr->stdp2690_i2c = stdp2690_i2c;
+ i2c_set_clientdata(stdp2690_i2c, ge_b850v3_lvds_ptr);
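Both probe paths previously discarded the return value of ge_b850v3_lvds_init(), so an allocation failure inside it was followed by a NULL dereference a few lines later. The shape of the fix, sketched with a hypothetical helper:

static int example_probe(struct i2c_client *client)
{
	int ret;

	ret = example_init(&client->dev);	/* may fail with -ENOMEM */
	if (ret)
		return ret;	/* stop: shared state was never set up */

	/* only reached when init succeeded */
	return 0;
}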
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
+index d580b2aa4ce98..979acaa90d002 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
+@@ -365,7 +365,6 @@ static void dw_mipi_message_config(struct dw_mipi_dsi *dsi,
+ if (lpm)
+ val |= CMD_MODE_ALL_LP;
+
+- dsi_write(dsi, DSI_LPCLK_CTRL, lpm ? 0 : PHY_TXREQUESTCLKHS);
+ dsi_write(dsi, DSI_CMD_MODE_CFG, val);
+ }
+
+@@ -541,16 +540,22 @@ static void dw_mipi_dsi_video_mode_config(struct dw_mipi_dsi *dsi)
+ static void dw_mipi_dsi_set_mode(struct dw_mipi_dsi *dsi,
+ unsigned long mode_flags)
+ {
++ u32 val;
++
+ dsi_write(dsi, DSI_PWR_UP, RESET);
+
+ if (mode_flags & MIPI_DSI_MODE_VIDEO) {
+ dsi_write(dsi, DSI_MODE_CFG, ENABLE_VIDEO_MODE);
+ dw_mipi_dsi_video_mode_config(dsi);
+- dsi_write(dsi, DSI_LPCLK_CTRL, PHY_TXREQUESTCLKHS);
+ } else {
+ dsi_write(dsi, DSI_MODE_CFG, ENABLE_CMD_MODE);
+ }
+
++ val = PHY_TXREQUESTCLKHS;
++ if (dsi->mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS)
++ val |= AUTO_CLKLANE_CTRL;
++ dsi_write(dsi, DSI_LPCLK_CTRL, val);
++
+ dsi_write(dsi, DSI_PWR_UP, POWERUP);
+ }
+
+diff --git a/drivers/gpu/drm/drm_bridge_connector.c b/drivers/gpu/drm/drm_bridge_connector.c
+index c6994fe673f31..a58cbde59c34a 100644
+--- a/drivers/gpu/drm/drm_bridge_connector.c
++++ b/drivers/gpu/drm/drm_bridge_connector.c
+@@ -187,6 +187,7 @@ drm_bridge_connector_detect(struct drm_connector *connector, bool force)
+ case DRM_MODE_CONNECTOR_DPI:
+ case DRM_MODE_CONNECTOR_LVDS:
+ case DRM_MODE_CONNECTOR_DSI:
++ case DRM_MODE_CONNECTOR_eDP:
+ status = connector_status_connected;
+ break;
+ default:
+diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
+index 19d73868490e6..69c2c079d8036 100644
+--- a/drivers/gpu/drm/drm_gem.c
++++ b/drivers/gpu/drm/drm_gem.c
+@@ -1085,6 +1085,8 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
+ */
+ drm_gem_object_get(obj);
+
++ vma->vm_private_data = obj;
++
+ if (obj->funcs && obj->funcs->mmap) {
+ ret = obj->funcs->mmap(obj, vma);
+ if (ret) {
+@@ -1107,8 +1109,6 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
+ vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
+ }
+
+- vma->vm_private_data = obj;
+-
+ return 0;
+ }
+ EXPORT_SYMBOL(drm_gem_mmap_obj);
+diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
+index 4b7cfbac4daae..22a5d58a7eaa4 100644
+--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
++++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
+@@ -594,8 +594,13 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+ /* Remove the fake offset */
+ vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
+
+- if (obj->import_attach)
++ if (obj->import_attach) {
++ /* Drop the reference drm_gem_mmap_obj() acquired. */
++ drm_gem_object_put(obj);
++ vma->vm_private_data = NULL;
++
+ return dma_buf_mmap(obj->dma_buf, vma, 0);
++ }
+
+ shmem = to_drm_gem_shmem_obj(obj);
+
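These two hunks act as a pair: drm_gem_mmap_obj() now records the object in vma->vm_private_data before invoking the object's mmap hook, so a hook that delegates the whole vma to dma-buf has to undo that bookkeeping and return the reference, because dma_buf_mmap() does its own lifetime management. A sketch of that rule inside such a hook:

static int my_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
{
	if (obj->import_attach) {
		/* dma-buf takes over: drop the caller's ref and marker */
		drm_gem_object_put(obj);
		vma->vm_private_data = NULL;
		return dma_buf_mmap(obj->dma_buf, vma, 0);
	}

	/* ... native mapping path ... */
	return 0;
}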
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+index 03be314271811..967a5cdc120e3 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+@@ -395,8 +395,8 @@ static void g2d_userptr_put_dma_addr(struct g2d_data *g2d,
+ return;
+
+ out:
+- dma_unmap_sg(to_dma_dev(g2d->drm_dev), g2d_userptr->sgt->sgl,
+- g2d_userptr->sgt->nents, DMA_BIDIRECTIONAL);
++ dma_unmap_sgtable(to_dma_dev(g2d->drm_dev), g2d_userptr->sgt,
++ DMA_BIDIRECTIONAL, 0);
+
+ pages = frame_vector_pages(g2d_userptr->vec);
+ if (!IS_ERR(pages)) {
+@@ -511,10 +511,10 @@ static dma_addr_t *g2d_userptr_get_dma_addr(struct g2d_data *g2d,
+
+ g2d_userptr->sgt = sgt;
+
+- if (!dma_map_sg(to_dma_dev(g2d->drm_dev), sgt->sgl, sgt->nents,
+- DMA_BIDIRECTIONAL)) {
++ ret = dma_map_sgtable(to_dma_dev(g2d->drm_dev), sgt,
++ DMA_BIDIRECTIONAL, 0);
++ if (ret) {
+ DRM_DEV_ERROR(g2d->dev, "failed to map sgt with dma region.\n");
+- ret = -ENOMEM;
+ goto err_sg_free_table;
+ }
+
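This hunk, like the lima and panfrost ones below, moves from dma_map_sg(), which signals failure as a zero entry count, to dma_map_sgtable(), which returns 0 or a -errno and fills in the sgtable's DMA fields itself. A minimal sketch of the new calling convention, assuming a populated struct sg_table:

static int map_buffer(struct device *dev, struct sg_table *sgt)
{
	int ret;

	ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	if (ret)
		return ret;	/* already a proper -errno, no guessing */

	/* ... device works on the mapped buffer ... */

	dma_unmap_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	return 0;
}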
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index e4f7f6518945b..37e6f2abab004 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -33,6 +33,8 @@
+ #include <uapi/drm/i915_drm.h>
+ #include <uapi/drm/drm_fourcc.h>
+
++#include <asm/hypervisor.h>
++
+ #include <linux/io-mapping.h>
+ #include <linux/i2c.h>
+ #include <linux/i2c-algo-bit.h>
+@@ -1716,7 +1718,9 @@ static inline bool intel_vtd_active(void)
+ if (intel_iommu_gfx_mapped)
+ return true;
+ #endif
+- return false;
++
++ /* Running as a guest, we assume the host is enforcing VT-d */
++ return !hypervisor_is_type(X86_HYPER_NATIVE);
+ }
+
+ static inline bool intel_scanout_needs_vtd_wa(struct drm_i915_private *dev_priv)
+diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
+index 155f2b4b4030a..11223fe348dfe 100644
+--- a/drivers/gpu/drm/lima/lima_gem.c
++++ b/drivers/gpu/drm/lima/lima_gem.c
+@@ -69,8 +69,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
+ return ret;
+
+ if (bo->base.sgt) {
+- dma_unmap_sg(dev, bo->base.sgt->sgl,
+- bo->base.sgt->nents, DMA_BIDIRECTIONAL);
++ dma_unmap_sgtable(dev, bo->base.sgt, DMA_BIDIRECTIONAL, 0);
+ sg_free_table(bo->base.sgt);
+ } else {
+ bo->base.sgt = kmalloc(sizeof(*bo->base.sgt), GFP_KERNEL);
+@@ -80,7 +79,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
+ }
+ }
+
+- dma_map_sg(dev, sgt.sgl, sgt.nents, DMA_BIDIRECTIONAL);
++ ret = dma_map_sgtable(dev, &sgt, DMA_BIDIRECTIONAL, 0);
++ if (ret) {
++ sg_free_table(&sgt);
++ kfree(bo->base.sgt);
++ bo->base.sgt = NULL;
++ return ret;
++ }
+
+ *bo->base.sgt = sgt;
+
+diff --git a/drivers/gpu/drm/lima/lima_vm.c b/drivers/gpu/drm/lima/lima_vm.c
+index 5b92fb82674a9..2b2739adc7f53 100644
+--- a/drivers/gpu/drm/lima/lima_vm.c
++++ b/drivers/gpu/drm/lima/lima_vm.c
+@@ -124,7 +124,7 @@ int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo, bool create)
+ if (err)
+ goto err_out1;
+
+- for_each_sg_dma_page(bo->base.sgt->sgl, &sg_iter, bo->base.sgt->nents, 0) {
++ for_each_sgtable_dma_page(bo->base.sgt, &sg_iter, 0) {
+ err = lima_vm_map_page(vm, sg_page_iter_dma_address(&sg_iter),
+ bo_va->node.start + offset);
+ if (err)
+@@ -298,8 +298,7 @@ int lima_vm_map_bo(struct lima_vm *vm, struct lima_bo *bo, int pageoff)
+ mutex_lock(&vm->lock);
+
+ base = bo_va->node.start + (pageoff << PAGE_SHIFT);
+- for_each_sg_dma_page(bo->base.sgt->sgl, &sg_iter,
+- bo->base.sgt->nents, pageoff) {
++ for_each_sgtable_dma_page(bo->base.sgt, &sg_iter, pageoff) {
+ err = lima_vm_map_page(vm, sg_page_iter_dma_address(&sg_iter),
+ base + offset);
+ if (err)
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
+index 33355dd302f11..1a6cea0e0bd74 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
+@@ -41,8 +41,8 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
+
+ for (i = 0; i < n_sgt; i++) {
+ if (bo->sgts[i].sgl) {
+- dma_unmap_sg(pfdev->dev, bo->sgts[i].sgl,
+- bo->sgts[i].nents, DMA_BIDIRECTIONAL);
++ dma_unmap_sgtable(pfdev->dev, &bo->sgts[i],
++ DMA_BIDIRECTIONAL, 0);
+ sg_free_table(&bo->sgts[i]);
+ }
+ }
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index e8f7b11352d27..776448c527ea9 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -253,7 +253,7 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
+ struct io_pgtable_ops *ops = mmu->pgtbl_ops;
+ u64 start_iova = iova;
+
+- for_each_sg(sgt->sgl, sgl, sgt->nents, count) {
++ for_each_sgtable_dma_sg(sgt, sgl, count) {
+ unsigned long paddr = sg_dma_address(sgl);
+ size_t len = sg_dma_len(sgl);
+
+@@ -517,10 +517,9 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
+ if (ret)
+ goto err_pages;
+
+- if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL)) {
+- ret = -EINVAL;
++ ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
++ if (ret)
+ goto err_map;
+- }
+
+ mmu_map_sg(pfdev, bomapping->mmu, addr,
+ IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index 96f763d888af5..9a0d77a680180 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -625,7 +625,7 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
+ return NULL;
+
+ /* Kernel run queue has higher priority than normal run queue*/
+- for (i = DRM_SCHED_PRIORITY_MAX - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
++ for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
+ entity = drm_sched_rq_select_entity(&sched->sched_rq[i]);
+ if (entity)
+ break;
+@@ -852,7 +852,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
+ sched->name = name;
+ sched->timeout = timeout;
+ sched->hang_limit = hang_limit;
+- for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_MAX; i++)
++ for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++)
+ drm_sched_rq_init(sched, &sched->sched_rq[i]);
+
+ init_waitqueue_head(&sched->wake_up_worker);
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index cc6a4e7551e31..760a8c102af3d 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -694,7 +694,7 @@ bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
+ /* Don't evict this BO if it's outside of the
+ * requested placement range
+ */
+- if (place->fpfn >= (bo->mem.start + bo->mem.size) ||
++ if (place->fpfn >= (bo->mem.start + bo->mem.num_pages) ||
+ (place->lpfn && place->lpfn <= bo->mem.start))
+ return false;
+
+diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
+index ac85e17428f88..09c012d54d58f 100644
+--- a/drivers/gpu/drm/vkms/vkms_crtc.c
++++ b/drivers/gpu/drm/vkms/vkms_crtc.c
+@@ -86,6 +86,11 @@ static bool vkms_get_vblank_timestamp(struct drm_crtc *crtc,
+ struct vkms_output *output = &vkmsdev->output;
+ struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
+
++ if (!READ_ONCE(vblank->enabled)) {
++ *vblank_time = ktime_get();
++ return true;
++ }
++
+ *vblank_time = READ_ONCE(output->vblank_hrtimer.node.expires);
+
+ if (WARN_ON(*vblank_time == vblank->time))
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 83dfec327c422..1bd0eb71559ca 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -2773,7 +2773,9 @@ static int wacom_wac_collection(struct hid_device *hdev, struct hid_report *repo
+ if (report->type != HID_INPUT_REPORT)
+ return -1;
+
+- if (WACOM_PEN_FIELD(field) && wacom->wacom_wac.pen_input)
++ if (WACOM_PAD_FIELD(field))
++ return 0;
++ else if (WACOM_PEN_FIELD(field) && wacom->wacom_wac.pen_input)
+ wacom_wac_pen_report(hdev, report);
+ else if (WACOM_FINGER_FIELD(field) && wacom->wacom_wac.touch_input)
+ wacom_wac_finger_report(hdev, report);
+diff --git a/drivers/hwmon/pmbus/max34440.c b/drivers/hwmon/pmbus/max34440.c
+index de04dff28945b..34f42589d90dc 100644
+--- a/drivers/hwmon/pmbus/max34440.c
++++ b/drivers/hwmon/pmbus/max34440.c
+@@ -31,6 +31,13 @@ enum chips { max34440, max34441, max34446, max34451, max34460, max34461 };
+ #define MAX34440_STATUS_OT_FAULT BIT(5)
+ #define MAX34440_STATUS_OT_WARN BIT(6)
+
++/*
++ * The whole max344* family has IOUT_OC_WARN_LIMIT and IOUT_OC_FAULT_LIMIT
++ * swapped from the standard PMBus spec addresses.
++ */
++#define MAX34440_IOUT_OC_WARN_LIMIT 0x46
++#define MAX34440_IOUT_OC_FAULT_LIMIT 0x4A
++
+ #define MAX34451_MFR_CHANNEL_CONFIG 0xe4
+ #define MAX34451_MFR_CHANNEL_CONFIG_SEL_MASK 0x3f
+
+@@ -49,6 +56,14 @@ static int max34440_read_word_data(struct i2c_client *client, int page,
+ const struct max34440_data *data = to_max34440_data(info);
+
+ switch (reg) {
++ case PMBUS_IOUT_OC_FAULT_LIMIT:
++ ret = pmbus_read_word_data(client, page, phase,
++ MAX34440_IOUT_OC_FAULT_LIMIT);
++ break;
++ case PMBUS_IOUT_OC_WARN_LIMIT:
++ ret = pmbus_read_word_data(client, page, phase,
++ MAX34440_IOUT_OC_WARN_LIMIT);
++ break;
+ case PMBUS_VIRT_READ_VOUT_MIN:
+ ret = pmbus_read_word_data(client, page, phase,
+ MAX34440_MFR_VOUT_MIN);
+@@ -115,6 +130,14 @@ static int max34440_write_word_data(struct i2c_client *client, int page,
+ int ret;
+
+ switch (reg) {
++ case PMBUS_IOUT_OC_FAULT_LIMIT:
++ ret = pmbus_write_word_data(client, page, MAX34440_IOUT_OC_FAULT_LIMIT,
++ word);
++ break;
++ case PMBUS_IOUT_OC_WARN_LIMIT:
++ ret = pmbus_write_word_data(client, page, MAX34440_IOUT_OC_WARN_LIMIT,
++ word);
++ break;
+ case PMBUS_VIRT_RESET_POUT_HISTORY:
+ ret = pmbus_write_word_data(client, page,
+ MAX34446_MFR_POUT_PEAK, 0);
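The MAX344xx parts place the two output-current limit registers at vendor-specific addresses, so the driver translates the standard PMBus register numbers in its word read/write hooks. The translation, condensed (constants taken from the hunk above):

static int max34440_remapped_read(struct i2c_client *client, int page,
				  int phase, int reg)
{
	switch (reg) {
	case PMBUS_IOUT_OC_FAULT_LIMIT:
		reg = MAX34440_IOUT_OC_FAULT_LIMIT;	/* 0x4A */
		break;
	case PMBUS_IOUT_OC_WARN_LIMIT:
		reg = MAX34440_IOUT_OC_WARN_LIMIT;	/* 0x46 */
		break;
	}
	return pmbus_read_word_data(client, page, phase, reg);
}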
+diff --git a/drivers/hwtracing/coresight/coresight-cti-sysfs.c b/drivers/hwtracing/coresight/coresight-cti-sysfs.c
+index 392757f3a019e..7ff7e7780bbfb 100644
+--- a/drivers/hwtracing/coresight/coresight-cti-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-cti-sysfs.c
+@@ -1065,6 +1065,13 @@ static int cti_create_con_sysfs_attr(struct device *dev,
+ }
+ eattr->var = con;
+ con->con_attrs[attr_idx] = &eattr->attr.attr;
++ /*
++ * Initialize the dynamically allocated attribute
++ * to avoid LOCKDEP splat. See include/linux/sysfs.h
++ * for more details.
++ */
++ sysfs_attr_init(con->con_attrs[attr_idx]);
++
+ return 0;
+ }
+
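Statically declared sysfs attributes receive their lockdep class key at compile time; one built with kzalloc() has no key until sysfs_attr_init() assigns it, which is the splat this hunk silences. The rule in isolation (example_show is a hypothetical callback):

static struct device_attribute *make_attr(void)
{
	struct device_attribute *attr;

	attr = kzalloc(sizeof(*attr), GFP_KERNEL);
	if (!attr)
		return NULL;

	sysfs_attr_init(&attr->attr);	/* must precede registration */
	attr->attr.name = "example";
	attr->attr.mode = 0444;
	attr->show = example_show;
	return attr;
}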
+diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
+index f2dc625ea5856..5fe773c4d6cc5 100644
+--- a/drivers/hwtracing/coresight/coresight-priv.h
++++ b/drivers/hwtracing/coresight/coresight-priv.h
+@@ -148,7 +148,8 @@ static inline void coresight_write_reg_pair(void __iomem *addr, u64 val,
+ void coresight_disable_path(struct list_head *path);
+ int coresight_enable_path(struct list_head *path, u32 mode, void *sink_data);
+ struct coresight_device *coresight_get_sink(struct list_head *path);
+-struct coresight_device *coresight_get_enabled_sink(bool reset);
++struct coresight_device *
++coresight_get_enabled_sink(struct coresight_device *source);
+ struct coresight_device *coresight_get_sink_by_id(u32 id);
+ struct coresight_device *
+ coresight_find_default_sink(struct coresight_device *csdev);
+diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
+index cdcb1917216fd..fd46216669449 100644
+--- a/drivers/hwtracing/coresight/coresight.c
++++ b/drivers/hwtracing/coresight/coresight.c
+@@ -540,50 +540,46 @@ struct coresight_device *coresight_get_sink(struct list_head *path)
+ return csdev;
+ }
+
+-static int coresight_enabled_sink(struct device *dev, const void *data)
++static struct coresight_device *
++coresight_find_enabled_sink(struct coresight_device *csdev)
+ {
+- const bool *reset = data;
+- struct coresight_device *csdev = to_coresight_device(dev);
++ int i;
++ struct coresight_device *sink = NULL;
+
+ if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
+ csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) &&
+- csdev->activated) {
+- /*
+- * Now that we have a handle on the sink for this session,
+- * disable the sysFS "enable_sink" flag so that possible
+- * concurrent perf session that wish to use another sink don't
+- * trip on it. Doing so has no ramification for the current
+- * session.
+- */
+- if (*reset)
+- csdev->activated = false;
++ csdev->activated)
++ return csdev;
+
+- return 1;
++ /*
++ * Recursively explore each port found on this element.
++ */
++ for (i = 0; i < csdev->pdata->nr_outport; i++) {
++ struct coresight_device *child_dev;
++
++ child_dev = csdev->pdata->conns[i].child_dev;
++ if (child_dev)
++ sink = coresight_find_enabled_sink(child_dev);
++ if (sink)
++ return sink;
+ }
+
+- return 0;
++ return NULL;
+ }
+
+ /**
+- * coresight_get_enabled_sink - returns the first enabled sink found on the bus
+- * @deactivate: Whether the 'enable_sink' flag should be reset
+- *
+- * When operated from perf the deactivate parameter should be set to 'true'.
+- * That way the "enabled_sink" flag of the sink that was selected can be reset,
+- * allowing for other concurrent perf sessions to choose a different sink.
++ * coresight_get_enabled_sink - returns the first enabled sink using
++ * connection-based search starting from the source reference
+ *
+- * When operated from sysFS users have full control and as such the deactivate
+- * parameter should be set to 'false', hence mandating users to explicitly
+- * clear the flag.
++ * @source: Coresight source device reference
+ */
+-struct coresight_device *coresight_get_enabled_sink(bool deactivate)
++struct coresight_device *
++coresight_get_enabled_sink(struct coresight_device *source)
+ {
+- struct device *dev = NULL;
+-
+- dev = bus_find_device(&coresight_bustype, NULL, &deactivate,
+- coresight_enabled_sink);
++ if (!source)
++ return NULL;
+
+- return dev ? to_coresight_device(dev) : NULL;
++ return coresight_find_enabled_sink(source);
+ }
+
+ static int coresight_sink_by_id(struct device *dev, const void *data)
+@@ -988,11 +984,7 @@ int coresight_enable(struct coresight_device *csdev)
+ goto out;
+ }
+
+- /*
+- * Search for a valid sink for this session but don't reset the
+- * "enable_sink" flag in sysFS. Users get to do that explicitly.
+- */
+- sink = coresight_get_enabled_sink(false);
++ sink = coresight_get_enabled_sink(csdev);
+ if (!sink) {
+ ret = -EINVAL;
+ goto out;
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 0ab5381aa0127..7e7257c6f83fa 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -1171,14 +1171,6 @@ static int i2c_imx_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- /* Request IRQ */
+- ret = devm_request_irq(&pdev->dev, irq, i2c_imx_isr, IRQF_SHARED,
+- pdev->name, i2c_imx);
+- if (ret) {
+- dev_err(&pdev->dev, "can't claim irq %d\n", irq);
+- goto clk_disable;
+- }
+-
+ /* Init queue */
+ init_waitqueue_head(&i2c_imx->queue);
+
+@@ -1197,6 +1189,14 @@ static int i2c_imx_probe(struct platform_device *pdev)
+ if (ret < 0)
+ goto rpm_disable;
+
++ /* Request IRQ */
++ ret = request_threaded_irq(irq, i2c_imx_isr, NULL, IRQF_SHARED,
++ pdev->name, i2c_imx);
++ if (ret) {
++ dev_err(&pdev->dev, "can't claim irq %d\n", irq);
++ goto rpm_disable;
++ }
++
+ /* Set up clock divider */
+ i2c_imx->bitrate = I2C_MAX_STANDARD_MODE_FREQ;
+ ret = of_property_read_u32(pdev->dev.of_node,
+@@ -1239,13 +1239,12 @@ static int i2c_imx_probe(struct platform_device *pdev)
+
+ clk_notifier_unregister:
+ clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb);
++ free_irq(irq, i2c_imx);
+ rpm_disable:
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ pm_runtime_set_suspended(&pdev->dev);
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
+-
+-clk_disable:
+ clk_disable_unprepare(i2c_imx->clk);
+ return ret;
+ }
+@@ -1253,7 +1252,7 @@ clk_disable:
+ static int i2c_imx_remove(struct platform_device *pdev)
+ {
+ struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev);
+- int ret;
++ int irq, ret;
+
+ ret = pm_runtime_get_sync(&pdev->dev);
+ if (ret < 0)
+@@ -1273,6 +1272,9 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2SR);
+
+ clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb);
++ irq = platform_get_irq(pdev, 0);
++ if (irq >= 0)
++ free_irq(irq, i2c_imx);
+ clk_disable_unprepare(i2c_imx->clk);
+
+ pm_runtime_put_noidle(&pdev->dev);
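The reordering applies a general probe rule: request the interrupt only once everything its handler may touch (the wait queue, runtime PM state) exists, and unwind in reverse order on failure. The shape of it, sketched with hypothetical names and assuming irq and priv were obtained earlier in probe:

static int example_probe(struct platform_device *pdev)
{
	int ret;

	init_waitqueue_head(&priv->queue);	/* state the ISR uses */
	pm_runtime_enable(&pdev->dev);

	ret = request_threaded_irq(irq, example_isr, NULL, IRQF_SHARED,
				   pdev->name, priv);
	if (ret)
		goto err_rpm;	/* IRQ never armed, nothing to free */

	return 0;

err_rpm:
	pm_runtime_disable(&pdev->dev);
	return ret;
}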
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index 9a810e4a79460..d09b807e1c3a1 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -1212,14 +1212,13 @@ static bool __init intel_idle_acpi_cst_extract(void)
+ if (!intel_idle_cst_usable())
+ continue;
+
+- if (!acpi_processor_claim_cst_control()) {
+- acpi_state_table.count = 0;
+- return false;
+- }
++ if (!acpi_processor_claim_cst_control())
++ break;
+
+ return true;
+ }
+
++ acpi_state_table.count = 0;
+ pr_debug("ACPI _CST not found or not usable\n");
+ return false;
+ }
+@@ -1236,7 +1235,7 @@ static void __init intel_idle_init_cstates_acpi(struct cpuidle_driver *drv)
+ struct acpi_processor_cx *cx;
+ struct cpuidle_state *state;
+
+- if (intel_idle_max_cstate_reached(cstate))
++ if (intel_idle_max_cstate_reached(cstate - 1))
+ break;
+
+ cx = &acpi_state_table.states[cstate];
+diff --git a/drivers/iio/adc/ad7292.c b/drivers/iio/adc/ad7292.c
+index 2eafbe7ac7c7b..ab204e9199e99 100644
+--- a/drivers/iio/adc/ad7292.c
++++ b/drivers/iio/adc/ad7292.c
+@@ -310,8 +310,10 @@ static int ad7292_probe(struct spi_device *spi)
+
+ for_each_available_child_of_node(spi->dev.of_node, child) {
+ diff_channels = of_property_read_bool(child, "diff-channels");
+- if (diff_channels)
++ if (diff_channels) {
++ of_node_put(child);
+ break;
++ }
+ }
+
+ if (diff_channels) {
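for_each_available_child_of_node() takes a reference on each child it visits and drops it when advancing; leaving the loop with break skips that drop, so the reference must be released by hand. The pattern on its own:

static bool any_diff_channel(const struct device_node *np)
{
	struct device_node *child;

	for_each_available_child_of_node(np, child) {
		if (of_property_read_bool(child, "diff-channels")) {
			of_node_put(child);	/* balance the iterator */
			return true;
		}
	}

	return false;
}

The rcar-gyroadc and ltc2983 hunks further down apply the same rule to their error paths through a shared put_child goto label.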
+diff --git a/drivers/iio/adc/at91-sama5d2_adc.c b/drivers/iio/adc/at91-sama5d2_adc.c
+index de9583d6cddd7..f94641193b980 100644
+--- a/drivers/iio/adc/at91-sama5d2_adc.c
++++ b/drivers/iio/adc/at91-sama5d2_adc.c
+@@ -884,7 +884,7 @@ static bool at91_adc_current_chan_is_touch(struct iio_dev *indio_dev)
+ AT91_SAMA5D2_MAX_CHAN_IDX + 1);
+ }
+
+-static int at91_adc_buffer_preenable(struct iio_dev *indio_dev)
++static int at91_adc_buffer_prepare(struct iio_dev *indio_dev)
+ {
+ int ret;
+ u8 bit;
+@@ -901,7 +901,7 @@ static int at91_adc_buffer_preenable(struct iio_dev *indio_dev)
+ /* we continue with the triggered buffer */
+ ret = at91_adc_dma_start(indio_dev);
+ if (ret) {
+- dev_err(&indio_dev->dev, "buffer postenable failed\n");
++ dev_err(&indio_dev->dev, "buffer prepare failed\n");
+ return ret;
+ }
+
+@@ -989,7 +989,6 @@ static int at91_adc_buffer_postdisable(struct iio_dev *indio_dev)
+ }
+
+ static const struct iio_buffer_setup_ops at91_buffer_setup_ops = {
+- .preenable = &at91_adc_buffer_preenable,
+ .postdisable = &at91_adc_buffer_postdisable,
+ };
+
+@@ -1563,6 +1562,7 @@ static void at91_adc_dma_disable(struct platform_device *pdev)
+ static int at91_adc_set_watermark(struct iio_dev *indio_dev, unsigned int val)
+ {
+ struct at91_adc_state *st = iio_priv(indio_dev);
++ int ret;
+
+ if (val > AT91_HWFIFO_MAX_SIZE)
+ return -EINVAL;
+@@ -1586,7 +1586,15 @@ static int at91_adc_set_watermark(struct iio_dev *indio_dev, unsigned int val)
+ else if (val > 1)
+ at91_adc_dma_init(to_platform_device(&indio_dev->dev));
+
+- return 0;
++ /*
++ * We can start the DMA only after setting the watermark and
++ * having the DMA initialization completed
++ */
++ ret = at91_adc_buffer_prepare(indio_dev);
++ if (ret)
++ at91_adc_dma_disable(to_platform_device(&indio_dev->dev));
++
++ return ret;
+ }
+
+ static int at91_adc_update_scan_mode(struct iio_dev *indio_dev,
+diff --git a/drivers/iio/adc/rcar-gyroadc.c b/drivers/iio/adc/rcar-gyroadc.c
+index d2c1419e72a01..34fa189e9b5e5 100644
+--- a/drivers/iio/adc/rcar-gyroadc.c
++++ b/drivers/iio/adc/rcar-gyroadc.c
+@@ -357,7 +357,7 @@ static int rcar_gyroadc_parse_subdevs(struct iio_dev *indio_dev)
+ num_channels = ARRAY_SIZE(rcar_gyroadc_iio_channels_3);
+ break;
+ default:
+- return -EINVAL;
++ goto err_e_inval;
+ }
+
+ /*
+@@ -374,7 +374,7 @@ static int rcar_gyroadc_parse_subdevs(struct iio_dev *indio_dev)
+ dev_err(dev,
+ "Failed to get child reg property of ADC \"%pOFn\".\n",
+ child);
+- return ret;
++ goto err_of_node_put;
+ }
+
+ /* Channel number is too high. */
+@@ -382,7 +382,7 @@ static int rcar_gyroadc_parse_subdevs(struct iio_dev *indio_dev)
+ dev_err(dev,
+ "Only %i channels supported with %pOFn, but reg = <%i>.\n",
+ num_channels, child, reg);
+- return -EINVAL;
++ goto err_e_inval;
+ }
+ }
+
+@@ -391,7 +391,7 @@ static int rcar_gyroadc_parse_subdevs(struct iio_dev *indio_dev)
+ dev_err(dev,
+ "Channel %i uses different ADC mode than the rest.\n",
+ reg);
+- return -EINVAL;
++ goto err_e_inval;
+ }
+
+ /* Channel is valid, grab the regulator. */
+@@ -401,7 +401,8 @@ static int rcar_gyroadc_parse_subdevs(struct iio_dev *indio_dev)
+ if (IS_ERR(vref)) {
+ dev_dbg(dev, "Channel %i 'vref' supply not connected.\n",
+ reg);
+- return PTR_ERR(vref);
++ ret = PTR_ERR(vref);
++ goto err_of_node_put;
+ }
+
+ priv->vref[reg] = vref;
+@@ -425,8 +426,10 @@ static int rcar_gyroadc_parse_subdevs(struct iio_dev *indio_dev)
+ * attached to the GyroADC at a time, so if we found it,
+ * we can stop parsing here.
+ */
+- if (childmode == RCAR_GYROADC_MODE_SELECT_1_MB88101A)
++ if (childmode == RCAR_GYROADC_MODE_SELECT_1_MB88101A) {
++ of_node_put(child);
+ break;
++ }
+ }
+
+ if (first) {
+@@ -435,6 +438,12 @@ static int rcar_gyroadc_parse_subdevs(struct iio_dev *indio_dev)
+ }
+
+ return 0;
++
++err_e_inval:
++ ret = -EINVAL;
++err_of_node_put:
++ of_node_put(child);
++ return ret;
+ }
+
+ static void rcar_gyroadc_deinit_supplies(struct iio_dev *indio_dev)
+diff --git a/drivers/iio/adc/ti-adc0832.c b/drivers/iio/adc/ti-adc0832.c
+index c7a085dce1f47..0261b3cfc92b6 100644
+--- a/drivers/iio/adc/ti-adc0832.c
++++ b/drivers/iio/adc/ti-adc0832.c
+@@ -29,6 +29,12 @@ struct adc0832 {
+ struct regulator *reg;
+ struct mutex lock;
+ u8 mux_bits;
++ /*
++ * Max size needed: 16x 1 byte ADC data + 8 bytes timestamp.
++ * May be shorter if not all channels are enabled, subject
++ * to the timestamp remaining 8-byte aligned.
++ */
++ u8 data[24] __aligned(8);
+
+ u8 tx_buf[2] ____cacheline_aligned;
+ u8 rx_buf[2];
+@@ -200,7 +206,6 @@ static irqreturn_t adc0832_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct adc0832 *adc = iio_priv(indio_dev);
+- u8 data[24] = { }; /* 16x 1 byte ADC data + 8 bytes timestamp */
+ int scan_index;
+ int i = 0;
+
+@@ -218,10 +223,10 @@ static irqreturn_t adc0832_trigger_handler(int irq, void *p)
+ goto out;
+ }
+
+- data[i] = ret;
++ adc->data[i] = ret;
+ i++;
+ }
+- iio_push_to_buffers_with_timestamp(indio_dev, data,
++ iio_push_to_buffers_with_timestamp(indio_dev, adc->data,
+ iio_get_time_ns(indio_dev));
+ out:
+ mutex_unlock(&adc->lock);
+diff --git a/drivers/iio/adc/ti-adc12138.c b/drivers/iio/adc/ti-adc12138.c
+index e485719cd2c4c..fcd5d39dd03ea 100644
+--- a/drivers/iio/adc/ti-adc12138.c
++++ b/drivers/iio/adc/ti-adc12138.c
+@@ -47,6 +47,12 @@ struct adc12138 {
+ struct completion complete;
+ /* The number of cclk periods for the S/H's acquisition time */
+ unsigned int acquisition_time;
++ /*
++ * Maximum size needed: 16x 2 bytes ADC data + 8 bytes timestamp.
++ * Less may be needed if not all channels are enabled, as long as
++ * the 8-byte alignment of the timestamp is maintained.
++ */
++ __be16 data[20] __aligned(8);
+
+ u8 tx_buf[2] ____cacheline_aligned;
+ u8 rx_buf[2];
+@@ -329,7 +335,6 @@ static irqreturn_t adc12138_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct adc12138 *adc = iio_priv(indio_dev);
+- __be16 data[20] = { }; /* 16x 2 bytes ADC data + 8 bytes timestamp */
+ __be16 trash;
+ int ret;
+ int scan_index;
+@@ -345,7 +350,7 @@ static irqreturn_t adc12138_trigger_handler(int irq, void *p)
+ reinit_completion(&adc->complete);
+
+ ret = adc12138_start_and_read_conv(adc, scan_chan,
+- i ? &data[i - 1] : &trash);
++ i ? &adc->data[i - 1] : &trash);
+ if (ret) {
+ dev_warn(&adc->spi->dev,
+ "failed to start conversion\n");
+@@ -362,7 +367,7 @@ static irqreturn_t adc12138_trigger_handler(int irq, void *p)
+ }
+
+ if (i) {
+- ret = adc12138_read_conv_data(adc, &data[i - 1]);
++ ret = adc12138_read_conv_data(adc, &adc->data[i - 1]);
+ if (ret) {
+ dev_warn(&adc->spi->dev,
+ "failed to get conversion data\n");
+@@ -370,7 +375,7 @@ static irqreturn_t adc12138_trigger_handler(int irq, void *p)
+ }
+ }
+
+- iio_push_to_buffers_with_timestamp(indio_dev, data,
++ iio_push_to_buffers_with_timestamp(indio_dev, adc->data,
+ iio_get_time_ns(indio_dev));
+ out:
+ mutex_unlock(&adc->lock);
+diff --git a/drivers/iio/gyro/itg3200_buffer.c b/drivers/iio/gyro/itg3200_buffer.c
+index d3fbe9d86467c..1c3c1bd53374a 100644
+--- a/drivers/iio/gyro/itg3200_buffer.c
++++ b/drivers/iio/gyro/itg3200_buffer.c
+@@ -46,13 +46,20 @@ static irqreturn_t itg3200_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct itg3200 *st = iio_priv(indio_dev);
+- __be16 buf[ITG3200_SCAN_ELEMENTS + sizeof(s64)/sizeof(u16)];
+-
+- int ret = itg3200_read_all_channels(st->i2c, buf);
++ /*
++ * Ensure correct alignment and padding including for the
++ * timestamp that may be inserted.
++ */
++ struct {
++ __be16 buf[ITG3200_SCAN_ELEMENTS];
++ s64 ts __aligned(8);
++ } scan;
++
++ int ret = itg3200_read_all_channels(st->i2c, scan.buf);
+ if (ret < 0)
+ goto error_ret;
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buf, pf->timestamp);
++ iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);
+
+ iio_trigger_notify_done(indio_dev->trig);
+
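The adc0832, adc12138, and itg3200 hunks all fix the same defect: iio_push_to_buffers_with_timestamp() writes an s64 timestamp at the end of the buffer it is handed, so that slot must be naturally (8-byte) aligned, and a bare u8 or __be16 array on the stack gives no such guarantee. The recurring fix shape, sketched for a three-channel device:

struct sensor_scan {
	__be16 channels[3];	/* sized for the enabled channels */
	s64 ts __aligned(8);	/* slot the IIO core fills in */
};

static void push_sample(struct iio_dev *indio_dev, struct sensor_scan *scan)
{
	iio_push_to_buffers_with_timestamp(indio_dev, scan,
					   iio_get_time_ns(indio_dev));
}

Moving the buffer into the driver state (as adc0832 and adc12138 do) additionally makes it DMA-safe for bus transfers, which a stack buffer is not.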
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+index cd38b3fccc7b2..eb522b38acf3f 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+@@ -122,6 +122,13 @@ struct inv_mpu6050_chip_config {
+ u8 user_ctrl;
+ };
+
++/*
++ * Maximum of 6 + 6 + 2 + 7 (for MPU9x50) = 21, rounded up to 24, plus 8.
++ * May be less if fewer channels are enabled, as long as the timestamp
++ * remains 8-byte aligned.
++ */
++#define INV_MPU6050_OUTPUT_DATA_SIZE 32
++
+ /**
+ * struct inv_mpu6050_hw - Other important hardware information.
+ * @whoami: Self identification byte from WHO_AM_I register
+@@ -165,6 +172,7 @@ struct inv_mpu6050_hw {
+ * @magn_raw_to_gauss: coefficient to convert mag raw value to Gauss.
+ * @magn_orient: magnetometer sensor chip orientation if available.
+ * @suspended_sensors: sensors mask of sensors turned off for suspend
++ * @data: dma safe buffer used for bulk reads.
+ */
+ struct inv_mpu6050_state {
+ struct mutex lock;
+@@ -190,6 +198,7 @@ struct inv_mpu6050_state {
+ s32 magn_raw_to_gauss[3];
+ struct iio_mount_matrix magn_orient;
+ unsigned int suspended_sensors;
++ u8 data[INV_MPU6050_OUTPUT_DATA_SIZE] ____cacheline_aligned;
+ };
+
+ /*register and associated bit definition*/
+@@ -334,9 +343,6 @@ struct inv_mpu6050_state {
+ #define INV_ICM20608_TEMP_OFFSET 8170
+ #define INV_ICM20608_TEMP_SCALE 3059976
+
+-/* 6 + 6 + 2 + 7 (for MPU9x50) = 21 round up to 24 and plus 8 */
+-#define INV_MPU6050_OUTPUT_DATA_SIZE 32
+-
+ #define INV_MPU6050_REG_INT_PIN_CFG 0x37
+ #define INV_MPU6050_ACTIVE_HIGH 0x00
+ #define INV_MPU6050_ACTIVE_LOW 0x80
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+index b533fa2dad0ab..d8e6b88ddffcb 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
+@@ -13,7 +13,6 @@
+ #include <linux/interrupt.h>
+ #include <linux/poll.h>
+ #include <linux/math64.h>
+-#include <asm/unaligned.h>
+ #include "inv_mpu_iio.h"
+
+ /**
+@@ -121,7 +120,6 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p)
+ struct inv_mpu6050_state *st = iio_priv(indio_dev);
+ size_t bytes_per_datum;
+ int result;
+- u8 data[INV_MPU6050_OUTPUT_DATA_SIZE];
+ u16 fifo_count;
+ s64 timestamp;
+ int int_status;
+@@ -160,11 +158,11 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p)
+ * read fifo_count register to know how many bytes are inside the FIFO
+ * right now
+ */
+- result = regmap_bulk_read(st->map, st->reg->fifo_count_h, data,
+- INV_MPU6050_FIFO_COUNT_BYTE);
++ result = regmap_bulk_read(st->map, st->reg->fifo_count_h,
++ st->data, INV_MPU6050_FIFO_COUNT_BYTE);
+ if (result)
+ goto end_session;
+- fifo_count = get_unaligned_be16(&data[0]);
++ fifo_count = be16_to_cpup((__be16 *)&st->data[0]);
+
+ /*
+ * Handle fifo overflow by resetting fifo.
+@@ -182,7 +180,7 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p)
+ inv_mpu6050_update_period(st, pf->timestamp, nb);
+ for (i = 0; i < nb; ++i) {
+ result = regmap_bulk_read(st->map, st->reg->fifo_r_w,
+- data, bytes_per_datum);
++ st->data, bytes_per_datum);
+ if (result)
+ goto flush_fifo;
+ /* skip first samples if needed */
+@@ -191,7 +189,7 @@ irqreturn_t inv_mpu6050_read_fifo(int irq, void *p)
+ continue;
+ }
+ timestamp = inv_mpu6050_get_timestamp(st);
+- iio_push_to_buffers_with_timestamp(indio_dev, data, timestamp);
++ iio_push_to_buffers_with_timestamp(indio_dev, st->data, timestamp);
+ }
+
+ end_session:
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+index d80ba2e688ed0..9275346a9cc1e 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+@@ -383,6 +383,7 @@ struct st_lsm6dsx_sensor {
+ * @iio_devs: Pointers to acc/gyro iio_dev instances.
+ * @settings: Pointer to the specific sensor settings in use.
+ * @orientation: sensor chip orientation relative to main hardware.
++ * @scan: Temporary buffers used to align data before iio_push_to_buffers()
+ */
+ struct st_lsm6dsx_hw {
+ struct device *dev;
+@@ -411,6 +412,11 @@ struct st_lsm6dsx_hw {
+ const struct st_lsm6dsx_settings *settings;
+
+ struct iio_mount_matrix orientation;
++ /* Ensure natural alignment of buffer elements */
++ struct {
++ __le16 channels[3];
++ s64 ts __aligned(8);
++ } scan[3];
+ };
+
+ static __maybe_unused const struct iio_event_spec st_lsm6dsx_event = {
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index 7de10bd636ea0..12ed0a2e55e46 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -353,9 +353,6 @@ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw)
+ int err, sip, acc_sip, gyro_sip, ts_sip, ext_sip, read_len, offset;
+ u16 fifo_len, pattern_len = hw->sip * ST_LSM6DSX_SAMPLE_SIZE;
+ u16 fifo_diff_mask = hw->settings->fifo_ops.fifo_diff.mask;
+- u8 gyro_buff[ST_LSM6DSX_IIO_BUFF_SIZE];
+- u8 acc_buff[ST_LSM6DSX_IIO_BUFF_SIZE];
+- u8 ext_buff[ST_LSM6DSX_IIO_BUFF_SIZE];
+ bool reset_ts = false;
+ __le16 fifo_status;
+ s64 ts = 0;
+@@ -416,19 +413,22 @@ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw)
+
+ while (acc_sip > 0 || gyro_sip > 0 || ext_sip > 0) {
+ if (gyro_sip > 0 && !(sip % gyro_sensor->decimator)) {
+- memcpy(gyro_buff, &hw->buff[offset],
+- ST_LSM6DSX_SAMPLE_SIZE);
+- offset += ST_LSM6DSX_SAMPLE_SIZE;
++ memcpy(hw->scan[ST_LSM6DSX_ID_GYRO].channels,
++ &hw->buff[offset],
++ sizeof(hw->scan[ST_LSM6DSX_ID_GYRO].channels));
++ offset += sizeof(hw->scan[ST_LSM6DSX_ID_GYRO].channels);
+ }
+ if (acc_sip > 0 && !(sip % acc_sensor->decimator)) {
+- memcpy(acc_buff, &hw->buff[offset],
+- ST_LSM6DSX_SAMPLE_SIZE);
+- offset += ST_LSM6DSX_SAMPLE_SIZE;
++ memcpy(hw->scan[ST_LSM6DSX_ID_ACC].channels,
++ &hw->buff[offset],
++ sizeof(hw->scan[ST_LSM6DSX_ID_ACC].channels));
++ offset += sizeof(hw->scan[ST_LSM6DSX_ID_ACC].channels);
+ }
+ if (ext_sip > 0 && !(sip % ext_sensor->decimator)) {
+- memcpy(ext_buff, &hw->buff[offset],
+- ST_LSM6DSX_SAMPLE_SIZE);
+- offset += ST_LSM6DSX_SAMPLE_SIZE;
++ memcpy(hw->scan[ST_LSM6DSX_ID_EXT0].channels,
++ &hw->buff[offset],
++ sizeof(hw->scan[ST_LSM6DSX_ID_EXT0].channels));
++ offset += sizeof(hw->scan[ST_LSM6DSX_ID_EXT0].channels);
+ }
+
+ if (ts_sip-- > 0) {
+@@ -458,19 +458,22 @@ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw)
+ if (gyro_sip > 0 && !(sip % gyro_sensor->decimator)) {
+ iio_push_to_buffers_with_timestamp(
+ hw->iio_devs[ST_LSM6DSX_ID_GYRO],
+- gyro_buff, gyro_sensor->ts_ref + ts);
++ &hw->scan[ST_LSM6DSX_ID_GYRO],
++ gyro_sensor->ts_ref + ts);
+ gyro_sip--;
+ }
+ if (acc_sip > 0 && !(sip % acc_sensor->decimator)) {
+ iio_push_to_buffers_with_timestamp(
+ hw->iio_devs[ST_LSM6DSX_ID_ACC],
+- acc_buff, acc_sensor->ts_ref + ts);
++ &hw->scan[ST_LSM6DSX_ID_ACC],
++ acc_sensor->ts_ref + ts);
+ acc_sip--;
+ }
+ if (ext_sip > 0 && !(sip % ext_sensor->decimator)) {
+ iio_push_to_buffers_with_timestamp(
+ hw->iio_devs[ST_LSM6DSX_ID_EXT0],
+- ext_buff, ext_sensor->ts_ref + ts);
++ &hw->scan[ST_LSM6DSX_ID_EXT0],
++ ext_sensor->ts_ref + ts);
+ ext_sip--;
+ }
+ sip++;
+@@ -555,7 +558,14 @@ int st_lsm6dsx_read_tagged_fifo(struct st_lsm6dsx_hw *hw)
+ {
+ u16 pattern_len = hw->sip * ST_LSM6DSX_TAGGED_SAMPLE_SIZE;
+ u16 fifo_len, fifo_diff_mask;
+- u8 iio_buff[ST_LSM6DSX_IIO_BUFF_SIZE], tag;
++ /*
++ * Alignment needed as this can ultimately be passed to a
++ * call to iio_push_to_buffers_with_timestamp() which
++ * must be passed a buffer that is aligned to 8 bytes so
++ * as to allow insertion of a naturally aligned timestamp.
++ */
++ u8 iio_buff[ST_LSM6DSX_IIO_BUFF_SIZE] __aligned(8);
++ u8 tag;
+ bool reset_ts = false;
+ int i, err, read_len;
+ __le16 fifo_status;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
+index ed83471dc7ddf..8c8d8870ca075 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
+@@ -313,6 +313,8 @@ st_lsm6dsx_shub_read(struct st_lsm6dsx_sensor *sensor, u8 addr,
+
+ err = st_lsm6dsx_shub_read_output(hw, data,
+ len & ST_LS6DSX_READ_OP_MASK);
++ if (err < 0)
++ return err;
+
+ st_lsm6dsx_shub_master_enable(sensor, false);
+
+diff --git a/drivers/iio/light/si1145.c b/drivers/iio/light/si1145.c
+index 8f5f857c2e7d9..b304801c79163 100644
+--- a/drivers/iio/light/si1145.c
++++ b/drivers/iio/light/si1145.c
+@@ -168,6 +168,7 @@ struct si1145_part_info {
+ * @part_info: Part information
+ * @trig: Pointer to iio trigger
+ * @meas_rate: Value of MEAS_RATE register. Only set in HW in auto mode
++ * @buffer: Used to pack data read from sensor.
+ */
+ struct si1145_data {
+ struct i2c_client *client;
+@@ -179,6 +180,14 @@ struct si1145_data {
+ bool autonomous;
+ struct iio_trigger *trig;
+ int meas_rate;
++ /*
++ * Ensure timestamp will be naturally aligned if present.
++ * Maximum buffer size (may be only partly used if not all
++ * channels are enabled):
++ * 6*2 bytes channels data + 4 bytes alignment +
++ * 8 bytes timestamp
++ */
++ u8 buffer[24] __aligned(8);
+ };
+
+ /*
+@@ -440,12 +449,6 @@ static irqreturn_t si1145_trigger_handler(int irq, void *private)
+ struct iio_poll_func *pf = private;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct si1145_data *data = iio_priv(indio_dev);
+- /*
+- * Maximum buffer size:
+- * 6*2 bytes channels data + 4 bytes alignment +
+- * 8 bytes timestamp
+- */
+- u8 buffer[24];
+ int i, j = 0;
+ int ret;
+ u8 irq_status = 0;
+@@ -478,7 +481,7 @@ static irqreturn_t si1145_trigger_handler(int irq, void *private)
+
+ ret = i2c_smbus_read_i2c_block_data_or_emulated(
+ data->client, indio_dev->channels[i].address,
+- sizeof(u16) * run, &buffer[j]);
++ sizeof(u16) * run, &data->buffer[j]);
+ if (ret < 0)
+ goto done;
+ j += run * sizeof(u16);
+@@ -493,7 +496,7 @@ static irqreturn_t si1145_trigger_handler(int irq, void *private)
+ goto done;
+ }
+
+- iio_push_to_buffers_with_timestamp(indio_dev, buffer,
++ iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
+ iio_get_time_ns(indio_dev));
+
+ done:
+diff --git a/drivers/iio/temperature/ltc2983.c b/drivers/iio/temperature/ltc2983.c
+index 55ff28a0f1c74..3b5ba26d7d867 100644
+--- a/drivers/iio/temperature/ltc2983.c
++++ b/drivers/iio/temperature/ltc2983.c
+@@ -1285,18 +1285,20 @@ static int ltc2983_parse_dt(struct ltc2983_data *st)
+ ret = of_property_read_u32(child, "reg", &sensor.chan);
+ if (ret) {
+ dev_err(dev, "reg property must given for child nodes\n");
+- return ret;
++ goto put_child;
+ }
+
+ /* check if we have a valid channel */
+ if (sensor.chan < LTC2983_MIN_CHANNELS_NR ||
+ sensor.chan > LTC2983_MAX_CHANNELS_NR) {
++ ret = -EINVAL;
+ dev_err(dev,
+ "chan:%d must be from 1 to 20\n", sensor.chan);
+- return -EINVAL;
++ goto put_child;
+ } else if (channel_avail_mask & BIT(sensor.chan)) {
++ ret = -EINVAL;
+ dev_err(dev, "chan:%d already in use\n", sensor.chan);
+- return -EINVAL;
++ goto put_child;
+ }
+
+ ret = of_property_read_u32(child, "adi,sensor-type",
+@@ -1304,7 +1306,7 @@ static int ltc2983_parse_dt(struct ltc2983_data *st)
+ if (ret) {
+ dev_err(dev,
+ "adi,sensor-type property must given for child nodes\n");
+- return ret;
++ goto put_child;
+ }
+
+ dev_dbg(dev, "Create new sensor, type %u, chann %u",
+@@ -1334,13 +1336,15 @@ static int ltc2983_parse_dt(struct ltc2983_data *st)
+ st->sensors[chan] = ltc2983_adc_new(child, st, &sensor);
+ } else {
+ dev_err(dev, "Unknown sensor type %d\n", sensor.type);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_child;
+ }
+
+ if (IS_ERR(st->sensors[chan])) {
+ dev_err(dev, "Failed to create sensor %ld",
+ PTR_ERR(st->sensors[chan]));
+- return PTR_ERR(st->sensors[chan]);
++ ret = PTR_ERR(st->sensors[chan]);
++ goto put_child;
+ }
+ /* set generic sensor parameters */
+ st->sensors[chan]->chan = sensor.chan;
+@@ -1351,6 +1355,9 @@ static int ltc2983_parse_dt(struct ltc2983_data *st)
+ }
+
+ return 0;
++put_child:
++ of_node_put(child);
++ return ret;
+ }
+
+ static int ltc2983_setup(struct ltc2983_data *st, bool assign_iio)
+diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
+index 6d3ed7c6e19eb..3962da54ffbf4 100644
+--- a/drivers/infiniband/core/rdma_core.c
++++ b/drivers/infiniband/core/rdma_core.c
+@@ -130,17 +130,6 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj,
+ lockdep_assert_held(&ufile->hw_destroy_rwsem);
+ assert_uverbs_usecnt(uobj, UVERBS_LOOKUP_WRITE);
+
+- if (reason == RDMA_REMOVE_ABORT_HWOBJ) {
+- reason = RDMA_REMOVE_ABORT;
+- ret = uobj->uapi_object->type_class->destroy_hw(uobj, reason,
+- attrs);
+- /*
+- * Drivers are not permitted to ignore RDMA_REMOVE_ABORT, see
+- * ib_is_destroy_retryable, cleanup_retryable == false here.
+- */
+- WARN_ON(ret);
+- }
+-
+ if (reason == RDMA_REMOVE_ABORT) {
+ WARN_ON(!list_empty(&uobj->list));
+ WARN_ON(!uobj->context);
+@@ -674,11 +663,22 @@ void rdma_alloc_abort_uobject(struct ib_uobject *uobj,
+ bool hw_obj_valid)
+ {
+ struct ib_uverbs_file *ufile = uobj->ufile;
++ int ret;
++
++ if (hw_obj_valid) {
++ ret = uobj->uapi_object->type_class->destroy_hw(
++ uobj, RDMA_REMOVE_ABORT, attrs);
++ /*
++ * If the driver couldn't destroy the object then go ahead and
++ * commit it. Leaking objects that can't be destroyed is only
++ * done during FD close after the driver has a few more tries to
++ * destroy it.
++ */
++ if (WARN_ON(ret))
++ return rdma_alloc_commit_uobject(uobj, attrs);
++ }
+
+- uverbs_destroy_uobject(uobj,
+- hw_obj_valid ? RDMA_REMOVE_ABORT_HWOBJ :
+- RDMA_REMOVE_ABORT,
+- attrs);
++ uverbs_destroy_uobject(uobj, RDMA_REMOVE_ABORT, attrs);
+
+ /* Matches the down_read in rdma_alloc_begin_uobject */
+ up_read(&ufile->hw_destroy_rwsem);
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index b805cc8124657..2a7b5ffb2a2ef 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -3318,7 +3318,8 @@ static int mlx5_add_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num)
+ int err;
+
+ dev->port[port_num].roce.nb.notifier_call = mlx5_netdev_event;
+- err = register_netdevice_notifier(&dev->port[port_num].roce.nb);
++ err = register_netdevice_notifier_net(mlx5_core_net(dev->mdev),
++ &dev->port[port_num].roce.nb);
+ if (err) {
+ dev->port[port_num].roce.nb.notifier_call = NULL;
+ return err;
+@@ -3330,7 +3331,8 @@ static int mlx5_add_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num)
+ static void mlx5_remove_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num)
+ {
+ if (dev->port[port_num].roce.nb.notifier_call) {
+- unregister_netdevice_notifier(&dev->port[port_num].roce.nb);
++ unregister_netdevice_notifier_net(mlx5_core_net(dev->mdev),
++ &dev->port[port_num].roce.nb);
+ dev->port[port_num].roce.nb.notifier_call = NULL;
+ }
+ }
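register_netdevice_notifier_net() scopes the callback to a single network namespace, so an mlx5 port bound to a namespaced netdev stops receiving (and reacting to) events from unrelated namespaces. The paired calls in miniature, assuming nb and netdev were set up beforehand:

static int example_start(struct net_device *netdev,
			 struct notifier_block *nb)
{
	/* register in the namespace the device actually lives in */
	return register_netdevice_notifier_net(dev_net(netdev), nb);
}

static void example_stop(struct net_device *netdev,
			 struct notifier_block *nb)
{
	/* must name the same namespace on teardown */
	unregister_netdevice_notifier_net(dev_net(netdev), nb);
}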
+diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+index c7169d2c69e5b..c4bc58736e489 100644
+--- a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
++++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+@@ -727,6 +727,7 @@ int qedr_iw_destroy_listen(struct iw_cm_id *cm_id)
+ listener->qed_handle);
+
+ cm_id->rem_ref(cm_id);
++ kfree(listener);
+ return rc;
+ }
+
+diff --git a/drivers/input/serio/hil_mlc.c b/drivers/input/serio/hil_mlc.c
+index 65f4e9d62a67d..d36e89d6fc546 100644
+--- a/drivers/input/serio/hil_mlc.c
++++ b/drivers/input/serio/hil_mlc.c
+@@ -74,7 +74,7 @@ EXPORT_SYMBOL(hil_mlc_unregister);
+ static LIST_HEAD(hil_mlcs);
+ static DEFINE_RWLOCK(hil_mlcs_lock);
+ static struct timer_list hil_mlcs_kicker;
+-static int hil_mlcs_probe;
++static int hil_mlcs_probe, hil_mlc_stop;
+
+ static void hil_mlcs_process(unsigned long unused);
+ static DECLARE_TASKLET_DISABLED_OLD(hil_mlcs_tasklet, hil_mlcs_process);
+@@ -702,9 +702,13 @@ static int hilse_donode(hil_mlc *mlc)
+ if (!mlc->ostarted) {
+ mlc->ostarted = 1;
+ mlc->opacket = pack;
+- mlc->out(mlc);
++ rc = mlc->out(mlc);
+ nextidx = HILSEN_DOZE;
+ write_unlock_irqrestore(&mlc->lock, flags);
++ if (rc) {
++ hil_mlc_stop = 1;
++ return 1;
++ }
+ break;
+ }
+ mlc->ostarted = 0;
+@@ -715,8 +719,13 @@ static int hilse_donode(hil_mlc *mlc)
+
+ case HILSE_CTS:
+ write_lock_irqsave(&mlc->lock, flags);
+- nextidx = mlc->cts(mlc) ? node->bad : node->good;
++ rc = mlc->cts(mlc);
++ nextidx = rc ? node->bad : node->good;
+ write_unlock_irqrestore(&mlc->lock, flags);
++ if (rc) {
++ hil_mlc_stop = 1;
++ return 1;
++ }
+ break;
+
+ default:
+@@ -780,6 +789,12 @@ static void hil_mlcs_process(unsigned long unused)
+
+ static void hil_mlcs_timer(struct timer_list *unused)
+ {
++ if (hil_mlc_stop) {
++ /* could not send packet - stop immediately. */
++ pr_warn(PREFIX "HIL seems stuck - Disabling HIL MLC.\n");
++ return;
++ }
++
+ hil_mlcs_probe = 1;
+ tasklet_schedule(&hil_mlcs_tasklet);
+ /* Re-insert the periodic task. */
+diff --git a/drivers/input/serio/hp_sdc_mlc.c b/drivers/input/serio/hp_sdc_mlc.c
+index 232d30c825bd1..3e85e90393746 100644
+--- a/drivers/input/serio/hp_sdc_mlc.c
++++ b/drivers/input/serio/hp_sdc_mlc.c
+@@ -210,7 +210,7 @@ static int hp_sdc_mlc_cts(hil_mlc *mlc)
+ priv->tseq[2] = 1;
+ priv->tseq[3] = 0;
+ priv->tseq[4] = 0;
+- __hp_sdc_enqueue_transaction(&priv->trans);
++ return __hp_sdc_enqueue_transaction(&priv->trans);
+ busy:
+ return 1;
+ done:
+@@ -219,7 +219,7 @@ static int hp_sdc_mlc_cts(hil_mlc *mlc)
+ return 0;
+ }
+
+-static void hp_sdc_mlc_out(hil_mlc *mlc)
++static int hp_sdc_mlc_out(hil_mlc *mlc)
+ {
+ struct hp_sdc_mlc_priv_s *priv;
+
+@@ -234,7 +234,7 @@ static void hp_sdc_mlc_out(hil_mlc *mlc)
+ do_data:
+ if (priv->emtestmode) {
+ up(&mlc->osem);
+- return;
++ return 0;
+ }
+ /* Shouldn't be sending commands when loop may be busy */
+ BUG_ON(down_trylock(&mlc->csem));
+@@ -296,7 +296,7 @@ static void hp_sdc_mlc_out(hil_mlc *mlc)
+ BUG_ON(down_trylock(&mlc->csem));
+ }
+ enqueue:
+- hp_sdc_enqueue_transaction(&priv->trans);
++ return hp_sdc_enqueue_transaction(&priv->trans);
+ }
+
+ static int __init hp_sdc_mlc_init(void)
+diff --git a/drivers/interconnect/qcom/sdm845.c b/drivers/interconnect/qcom/sdm845.c
+index f6c7b969520d0..86f08c0f4c41b 100644
+--- a/drivers/interconnect/qcom/sdm845.c
++++ b/drivers/interconnect/qcom/sdm845.c
+@@ -151,7 +151,7 @@ DEFINE_QBCM(bcm_mc0, "MC0", true, &ebi);
+ DEFINE_QBCM(bcm_sh0, "SH0", true, &qns_llcc);
+ DEFINE_QBCM(bcm_mm0, "MM0", false, &qns_mem_noc_hf);
+ DEFINE_QBCM(bcm_sh1, "SH1", false, &qns_apps_io);
+-DEFINE_QBCM(bcm_mm1, "MM1", false, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qxm_camnoc_hf0, &qxm_camnoc_hf1, &qxm_mdp0, &qxm_mdp1);
++DEFINE_QBCM(bcm_mm1, "MM1", true, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qxm_camnoc_hf0, &qxm_camnoc_hf1, &qxm_mdp0, &qxm_mdp1);
+ DEFINE_QBCM(bcm_sh2, "SH2", false, &qns_memnoc_snoc);
+ DEFINE_QBCM(bcm_mm2, "MM2", false, &qns2_mem_noc);
+ DEFINE_QBCM(bcm_sh3, "SH3", false, &acm_tcu);
+diff --git a/drivers/irqchip/irq-loongson-htvec.c b/drivers/irqchip/irq-loongson-htvec.c
+index 13e6016fe4646..6392aafb9a631 100644
+--- a/drivers/irqchip/irq-loongson-htvec.c
++++ b/drivers/irqchip/irq-loongson-htvec.c
+@@ -151,7 +151,7 @@ static void htvec_reset(struct htvec *priv)
+ /* Clear IRQ cause registers, mask all interrupts */
+ for (idx = 0; idx < priv->num_parents; idx++) {
+ writel_relaxed(0x0, priv->base + HTVEC_EN_OFF + 4 * idx);
+- writel_relaxed(0xFFFFFFFF, priv->base);
++ writel_relaxed(0xFFFFFFFF, priv->base + 4 * idx);
+ }
+ }
+
+@@ -172,7 +172,7 @@ static int htvec_of_init(struct device_node *node,
+ goto free_priv;
+ }
+
+- /* Interrupt may come from any of the 4 interrupt line */
++ /* Interrupt may come from any of the 8 interrupt lines */
+ for (i = 0; i < HTVEC_MAX_PARENT_IRQ; i++) {
+ parent_irq[i] = irq_of_parse_and_map(node, i);
+ if (parent_irq[i] <= 0)
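The reset bug fixed above is a constant-base write inside a per-line loop: without the 4 * idx offset, the cause register of line 0 was cleared eight times while lines 1-7 kept stale state. Reduced to its essentials (EN_OFF stands in for HTVEC_EN_OFF):

/* one enable and one cause register per parent line, 4 bytes apart */
for (idx = 0; idx < num_parents; idx++) {
	writel_relaxed(0x0, base + EN_OFF + 4 * idx);	/* mask line */
	writel_relaxed(0xFFFFFFFF, base + 4 * idx);	/* clear cause */
}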
+diff --git a/drivers/leds/leds-bcm6328.c b/drivers/leds/leds-bcm6328.c
+index bad7efb751120..df385c1d8a22b 100644
+--- a/drivers/leds/leds-bcm6328.c
++++ b/drivers/leds/leds-bcm6328.c
+@@ -383,7 +383,7 @@ static int bcm6328_led(struct device *dev, struct device_node *nc, u32 reg,
+ led->cdev.brightness_set = bcm6328_led_set;
+ led->cdev.blink_set = bcm6328_blink_set;
+
+- rc = led_classdev_register(dev, &led->cdev);
++ rc = devm_led_classdev_register(dev, &led->cdev);
+ if (rc < 0)
+ return rc;
+
+diff --git a/drivers/leds/leds-bcm6358.c b/drivers/leds/leds-bcm6358.c
+index 94fefd456ba07..80145f9d7c146 100644
+--- a/drivers/leds/leds-bcm6358.c
++++ b/drivers/leds/leds-bcm6358.c
+@@ -137,7 +137,7 @@ static int bcm6358_led(struct device *dev, struct device_node *nc, u32 reg,
+
+ led->cdev.brightness_set = bcm6358_led_set;
+
+- rc = led_classdev_register(dev, &led->cdev);
++ rc = devm_led_classdev_register(dev, &led->cdev);
+ if (rc < 0)
+ return rc;
+
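Both LED hunks switch to the devm_ variant, which ties unregistration to the lifetime of the underlying struct device, so error paths after the call and the driver's remove path need no matching led_classdev_unregister(). The call on its own:

	rc = devm_led_classdev_register(dev, &led->cdev);
	if (rc < 0)
		return rc;	/* nothing to unwind: devm tears down */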
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index c61ab86a28b52..d910833feeb4d 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -1367,7 +1367,7 @@ __acquires(bitmap->lock)
+ if (bitmap->bp[page].hijacked ||
+ bitmap->bp[page].map == NULL)
+ csize = ((sector_t)1) << (bitmap->chunkshift +
+- PAGE_COUNTER_SHIFT - 1);
++ PAGE_COUNTER_SHIFT);
+ else
+ csize = ((sector_t)1) << bitmap->chunkshift;
+ *blocks = csize - (offset & (csize - 1));
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 6072782070230..cd3c249d8609c 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -9545,7 +9545,7 @@ static int __init md_init(void)
+ goto err_misc_wq;
+
+ md_rdev_misc_wq = alloc_workqueue("md_rdev_misc", 0, 0);
+- if (!md_misc_wq)
++ if (!md_rdev_misc_wq)
+ goto err_rdev_misc_wq;
+
+ if ((ret = register_blkdev(MD_MAJOR, "md")) < 0)
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 225380efd1e24..4839f41f0ada7 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -2429,8 +2429,6 @@ static int resize_stripes(struct r5conf *conf, int newsize)
+ } else
+ err = -ENOMEM;
+
+- mutex_unlock(&conf->cache_size_mutex);
+-
+ conf->slab_cache = sc;
+ conf->active_name = 1-conf->active_name;
+
+@@ -2453,6 +2451,8 @@ static int resize_stripes(struct r5conf *conf, int newsize)
+
+ if (!err)
+ conf->pool_size = newsize;
++ mutex_unlock(&conf->cache_size_mutex);
++
+ return err;
+ }
+
+diff --git a/drivers/media/i2c/imx274.c b/drivers/media/i2c/imx274.c
+index 6011cec5e351d..e6aa9f32b6a83 100644
+--- a/drivers/media/i2c/imx274.c
++++ b/drivers/media/i2c/imx274.c
+@@ -1235,6 +1235,8 @@ static int imx274_s_frame_interval(struct v4l2_subdev *sd,
+ ret = imx274_set_frame_interval(imx274, fi->interval);
+
+ if (!ret) {
++ fi->interval = imx274->frame_interval;
++
+ /*
+ * exposure time range is decided by frame interval
+ * need to update it after frame interval changes
+@@ -1730,9 +1732,9 @@ static int imx274_set_frame_interval(struct stimx274 *priv,
+ __func__, frame_interval.numerator,
+ frame_interval.denominator);
+
+- if (frame_interval.numerator == 0) {
+- err = -EINVAL;
+- goto fail;
++ if (frame_interval.numerator == 0 || frame_interval.denominator == 0) {
++ frame_interval.denominator = IMX274_DEF_FRAME_RATE;
++ frame_interval.numerator = 1;
+ }
+
+ req_frame_rate = (u32)(frame_interval.denominator
+diff --git a/drivers/media/pci/tw5864/tw5864-video.c b/drivers/media/pci/tw5864/tw5864-video.c
+index ec1e06da7e4fb..a65114e7ca346 100644
+--- a/drivers/media/pci/tw5864/tw5864-video.c
++++ b/drivers/media/pci/tw5864/tw5864-video.c
+@@ -767,6 +767,9 @@ static int tw5864_enum_frameintervals(struct file *file, void *priv,
+ fintv->type = V4L2_FRMIVAL_TYPE_STEPWISE;
+
+ ret = tw5864_frameinterval_get(input, &frameinterval);
++ if (ret)
++ return ret;
++
+ fintv->stepwise.step = frameinterval;
+ fintv->stepwise.min = frameinterval;
+ fintv->stepwise.max = frameinterval;
+@@ -785,6 +788,9 @@ static int tw5864_g_parm(struct file *file, void *priv,
+ cp->capability = V4L2_CAP_TIMEPERFRAME;
+
+ ret = tw5864_frameinterval_get(input, &cp->timeperframe);
++ if (ret)
++ return ret;
++
+ cp->timeperframe.numerator *= input->frame_interval;
+ cp->capturemode = 0;
+ cp->readbuffers = 2;
+diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+index 61fed1e35a005..b1ca4e3adae32 100644
+--- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+@@ -571,6 +571,13 @@ static int mtk_jpeg_queue_setup(struct vb2_queue *q,
+ if (!q_data)
+ return -EINVAL;
+
++ if (*num_planes) {
++ for (i = 0; i < *num_planes; i++)
++ if (sizes[i] < q_data->sizeimage[i])
++ return -EINVAL;
++ return 0;
++ }
++
+ *num_planes = q_data->fmt->colplanes;
+ for (i = 0; i < q_data->fmt->colplanes; i++) {
+ sizes[i] = q_data->sizeimage[i];
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index a30a8a731eda8..36abe47997b01 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1848,30 +1848,35 @@ int uvc_xu_ctrl_query(struct uvc_video_chain *chain,
+ {
+ struct uvc_entity *entity;
+ struct uvc_control *ctrl;
+- unsigned int i, found = 0;
++ unsigned int i;
++ bool found;
+ u32 reqflags;
+ u16 size;
+ u8 *data = NULL;
+ int ret;
+
+ /* Find the extension unit. */
++ found = false;
+ list_for_each_entry(entity, &chain->entities, chain) {
+ if (UVC_ENTITY_TYPE(entity) == UVC_VC_EXTENSION_UNIT &&
+- entity->id == xqry->unit)
++ entity->id == xqry->unit) {
++ found = true;
+ break;
++ }
+ }
+
+- if (entity->id != xqry->unit) {
++ if (!found) {
+ uvc_trace(UVC_TRACE_CONTROL, "Extension unit %u not found.\n",
+ xqry->unit);
+ return -ENOENT;
+ }
+
+ /* Find the control and perform delayed initialization if needed. */
++ found = false;
+ for (i = 0; i < entity->ncontrols; ++i) {
+ ctrl = &entity->controls[i];
+ if (ctrl->index == xqry->selector - 1) {
+- found = 1;
++ found = true;
+ break;
+ }
+ }
+@@ -2028,13 +2033,6 @@ static int uvc_ctrl_add_info(struct uvc_device *dev, struct uvc_control *ctrl,
+ goto done;
+ }
+
+- /*
+- * Retrieve control flags from the device. Ignore errors and work with
+- * default flag values from the uvc_ctrl array when the device doesn't
+- * properly implement GET_INFO on standard controls.
+- */
+- uvc_ctrl_get_flags(dev, ctrl, &ctrl->info);
+-
+ ctrl->initialized = 1;
+
+ uvc_trace(UVC_TRACE_CONTROL, "Added control %pUl/%u to device %s "
+@@ -2257,6 +2255,13 @@ static void uvc_ctrl_init_ctrl(struct uvc_device *dev, struct uvc_control *ctrl)
+ if (uvc_entity_match_guid(ctrl->entity, info->entity) &&
+ ctrl->index == info->index) {
+ uvc_ctrl_add_info(dev, ctrl, info);
++ /*
++ * Retrieve control flags from the device. Ignore errors
++ * and work with default flag values from the uvc_ctrl
++ * array when the device doesn't properly implement
++ * GET_INFO on standard controls.
++ */
++ uvc_ctrl_get_flags(dev, ctrl, &ctrl->info);
+ break;
+ }
+ }
+diff --git a/drivers/memory/brcmstb_dpfe.c b/drivers/memory/brcmstb_dpfe.c
+index ddff687c79eaa..dcf50bb8dd690 100644
+--- a/drivers/memory/brcmstb_dpfe.c
++++ b/drivers/memory/brcmstb_dpfe.c
+@@ -656,8 +656,10 @@ static int brcmstb_dpfe_download_firmware(struct brcmstb_dpfe_priv *priv)
+ return (ret == -ENOENT) ? -EPROBE_DEFER : ret;
+
+ ret = __verify_firmware(&init, fw);
+- if (ret)
+- return -EFAULT;
++ if (ret) {
++ ret = -EFAULT;
++ goto release_fw;
++ }
+
+ __disable_dcpu(priv);
+
+@@ -676,18 +678,20 @@ static int brcmstb_dpfe_download_firmware(struct brcmstb_dpfe_priv *priv)
+
+ ret = __write_firmware(priv->dmem, dmem, dmem_size, is_big_endian);
+ if (ret)
+- return ret;
++ goto release_fw;
+ ret = __write_firmware(priv->imem, imem, imem_size, is_big_endian);
+ if (ret)
+- return ret;
++ goto release_fw;
+
+ ret = __verify_fw_checksum(&init, priv, header, init.chksum);
+ if (ret)
+- return ret;
++ goto release_fw;
+
+ __enable_dcpu(priv);
+
+- return 0;
++release_fw:
++ release_firmware(fw);
++ return ret;
+ }
+
+ static ssize_t generic_show(unsigned int command, u32 response[],
+diff --git a/drivers/memory/emif.c b/drivers/memory/emif.c
+index bb6a71d267988..5c4d8319c9cfb 100644
+--- a/drivers/memory/emif.c
++++ b/drivers/memory/emif.c
+@@ -163,35 +163,12 @@ static const struct file_operations emif_mr4_fops = {
+
+ static int __init_or_module emif_debugfs_init(struct emif_data *emif)
+ {
+- struct dentry *dentry;
+- int ret;
+-
+- dentry = debugfs_create_dir(dev_name(emif->dev), NULL);
+- if (!dentry) {
+- ret = -ENOMEM;
+- goto err0;
+- }
+- emif->debugfs_root = dentry;
+-
+- dentry = debugfs_create_file("regcache_dump", S_IRUGO,
+- emif->debugfs_root, emif, &emif_regdump_fops);
+- if (!dentry) {
+- ret = -ENOMEM;
+- goto err1;
+- }
+-
+- dentry = debugfs_create_file("mr4", S_IRUGO,
+- emif->debugfs_root, emif, &emif_mr4_fops);
+- if (!dentry) {
+- ret = -ENOMEM;
+- goto err1;
+- }
+-
++ emif->debugfs_root = debugfs_create_dir(dev_name(emif->dev), NULL);
++ debugfs_create_file("regcache_dump", S_IRUGO, emif->debugfs_root, emif,
++ &emif_regdump_fops);
++ debugfs_create_file("mr4", S_IRUGO, emif->debugfs_root, emif,
++ &emif_mr4_fops);
+ return 0;
+-err1:
+- debugfs_remove_recursive(emif->debugfs_root);
+-err0:
+- return ret;
+ }
+
+ static void __exit emif_debugfs_exit(struct emif_data *emif)
+diff --git a/drivers/memory/tegra/tegra124.c b/drivers/memory/tegra/tegra124.c
+index 493b5dc3a4b38..0cede24479bfa 100644
+--- a/drivers/memory/tegra/tegra124.c
++++ b/drivers/memory/tegra/tegra124.c
+@@ -957,7 +957,6 @@ static const struct tegra_smmu_swgroup tegra124_swgroups[] = {
+ static const unsigned int tegra124_group_drm[] = {
+ TEGRA_SWGROUP_DC,
+ TEGRA_SWGROUP_DCB,
+- TEGRA_SWGROUP_GPU,
+ TEGRA_SWGROUP_VIC,
+ };
+
+diff --git a/drivers/message/fusion/mptscsih.c b/drivers/message/fusion/mptscsih.c
+index 8543f0324d5a8..0d1b2b0eb8439 100644
+--- a/drivers/message/fusion/mptscsih.c
++++ b/drivers/message/fusion/mptscsih.c
+@@ -1176,8 +1176,10 @@ mptscsih_remove(struct pci_dev *pdev)
+ MPT_SCSI_HOST *hd;
+ int sz1;
+
+- if((hd = shost_priv(host)) == NULL)
+- return;
++ if (host == NULL)
++ hd = NULL;
++ else
++ hd = shost_priv(host);
+
+ mptscsih_shutdown(pdev);
+
+@@ -1193,14 +1195,15 @@ mptscsih_remove(struct pci_dev *pdev)
+ "Free'd ScsiLookup (%d) memory\n",
+ ioc->name, sz1));
+
+- kfree(hd->info_kbuf);
++ if (hd)
++ kfree(hd->info_kbuf);
+
+ /* NULL the Scsi_Host pointer
+ */
+ ioc->sh = NULL;
+
+- scsi_host_put(host);
+-
++ if (host)
++ scsi_host_put(host);
+ mpt_detach(pdev);
+
+ }
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 7939c55daceb2..9d68677493163 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -518,7 +518,7 @@ fastrpc_map_dma_buf(struct dma_buf_attachment *attachment,
+
+ table = &a->sgt;
+
+- if (!dma_map_sg(attachment->dev, table->sgl, table->nents, dir))
++ if (!dma_map_sgtable(attachment->dev, table, dir, 0))
+ return ERR_PTR(-ENOMEM);
+
+ return table;
+@@ -528,7 +528,7 @@ static void fastrpc_unmap_dma_buf(struct dma_buf_attachment *attach,
+ struct sg_table *table,
+ enum dma_data_direction dir)
+ {
+- dma_unmap_sg(attach->dev, table->sgl, table->nents, dir);
++ dma_unmap_sgtable(attach->dev, table, dir, 0);
+ }
+
+ static void fastrpc_release(struct dma_buf *dmabuf)
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi_security.c b/drivers/misc/habanalabs/gaudi/gaudi_security.c
+index 8d5d6ddee6eda..615b547ad2b7d 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi_security.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi_security.c
+@@ -831,8 +831,7 @@ static void gaudi_init_mme_protection_bits(struct hl_device *hdev)
+ PROT_BITS_OFFS;
+ word_offset = ((mmMME0_QM_ARB_MST_CHOISE_PUSH_OFST_23 &
+ PROT_BITS_OFFS) >> 7) << 2;
+- mask = 1 << ((mmMME0_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmMME0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmMME0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmMME0_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmMME0_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmMME0_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -1311,8 +1310,7 @@ static void gaudi_init_mme_protection_bits(struct hl_device *hdev)
+ PROT_BITS_OFFS;
+ word_offset = ((mmMME2_QM_ARB_MST_CHOISE_PUSH_OFST_23 &
+ PROT_BITS_OFFS) >> 7) << 2;
+- mask = 1 << ((mmMME2_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmMME2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmMME2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmMME2_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmMME2_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmMME2_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -1790,8 +1788,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev)
+ word_offset =
+ ((mmDMA0_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7)
+ << 2;
+- mask = 1 << ((mmDMA0_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmDMA0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmDMA0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA0_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA0_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA0_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -2186,8 +2183,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev)
+ word_offset =
+ ((mmDMA1_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7)
+ << 2;
+- mask = 1 << ((mmDMA1_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmDMA1_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmDMA1_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA1_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA1_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA1_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -2582,8 +2578,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev)
+ word_offset =
+ ((mmDMA2_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7)
+ << 2;
+- mask = 1 << ((mmDMA2_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmDMA2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmDMA2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA2_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA2_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA2_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -2978,8 +2973,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev)
+ word_offset =
+ ((mmDMA3_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7)
+ << 2;
+- mask = 1 << ((mmDMA3_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmDMA3_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmDMA3_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA3_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA3_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA3_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -3374,8 +3368,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev)
+ word_offset =
+ ((mmDMA4_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7)
+ << 2;
+- mask = 1 << ((mmDMA4_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmDMA4_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmDMA4_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA4_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA4_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA4_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -3770,8 +3763,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev)
+ word_offset =
+ ((mmDMA5_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7)
+ << 2;
+- mask = 1 << ((mmDMA5_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmDMA5_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmDMA5_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA5_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA5_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA5_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -4166,8 +4158,8 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev)
+ word_offset =
+ ((mmDMA6_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7)
+ << 2;
+- mask = 1 << ((mmDMA6_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmDMA6_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++
++ mask = 1 << ((mmDMA6_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA6_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA6_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA6_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -4562,8 +4554,7 @@ static void gaudi_init_dma_protection_bits(struct hl_device *hdev)
+ word_offset =
+ ((mmDMA7_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS) >> 7)
+ << 2;
+- mask = 1 << ((mmDMA7_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmDMA7_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmDMA7_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA7_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA7_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmDMA7_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -5491,8 +5482,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
+
+ word_offset = ((mmTPC0_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS)
+ >> 7) << 2;
+- mask = 1 << ((mmTPC0_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmTPC0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmTPC0_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC0_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC0_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC0_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -5947,8 +5937,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
+
+ word_offset = ((mmTPC1_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS)
+ >> 7) << 2;
+- mask = 1 << ((mmTPC1_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmTPC1_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmTPC1_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC1_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC1_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC1_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -6402,8 +6391,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
+ PROT_BITS_OFFS;
+ word_offset = ((mmTPC2_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS)
+ >> 7) << 2;
+- mask = 1 << ((mmTPC2_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmTPC2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmTPC2_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC2_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC2_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC2_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -6857,8 +6845,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
+ PROT_BITS_OFFS;
+ word_offset = ((mmTPC3_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS)
+ >> 7) << 2;
+- mask = 1 << ((mmTPC3_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmTPC3_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmTPC3_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC3_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC3_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC3_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -7312,8 +7299,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
+ PROT_BITS_OFFS;
+ word_offset = ((mmTPC4_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS)
+ >> 7) << 2;
+- mask = 1 << ((mmTPC4_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmTPC4_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmTPC4_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC4_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC4_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC4_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -7767,8 +7753,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
+ PROT_BITS_OFFS;
+ word_offset = ((mmTPC5_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS)
+ >> 7) << 2;
+- mask = 1 << ((mmTPC5_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmTPC5_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmTPC5_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC5_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC5_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC5_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -8223,8 +8208,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
+
+ word_offset = ((mmTPC6_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS)
+ >> 7) << 2;
+- mask = 1 << ((mmTPC6_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmTPC6_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmTPC6_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC6_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC6_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC6_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+@@ -8681,8 +8665,7 @@ static void gaudi_init_tpc_protection_bits(struct hl_device *hdev)
+ PROT_BITS_OFFS;
+ word_offset = ((mmTPC7_QM_ARB_MST_CHOISE_PUSH_OFST_23 & PROT_BITS_OFFS)
+ >> 7) << 2;
+- mask = 1 << ((mmTPC7_QM_ARB_MST_QUIET_PER & 0x7F) >> 2);
+- mask |= 1 << ((mmTPC7_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
++ mask = 1 << ((mmTPC7_QM_ARB_SLV_CHOISE_WDT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC7_QM_ARB_MSG_MAX_INFLIGHT & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC7_QM_ARB_MSG_AWUSER_31_11 & 0x7F) >> 2);
+ mask |= 1 << ((mmTPC7_QM_ARB_MSG_AWUSER_SEC_PROP & 0x7F) >> 2);
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index 284cba11e2795..d335a34ad05b3 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -662,6 +662,43 @@ static int sdhci_acpi_emmc_amd_probe_slot(struct platform_device *pdev,
+ (host->mmc->caps & MMC_CAP_1_8V_DDR))
+ host->mmc->caps2 = MMC_CAP2_HS400_1_8V;
+
++ /*
++ * There are two types of presets out in the wild:
++ * 1) Default/broken presets.
++ * These presets have two sets of problems:
++ * a) The clock divisor for SDR12, SDR25, and SDR50 is too small.
++ * This results in clock frequencies that are 2x higher than
++ * acceptable, i.e., SDR12 = 25 MHz, SDR25 = 50 MHz, SDR50 =
++ * 100 MHz.
++ * b) The HS200 and HS400 driver strengths don't match.
++ * By default, the SDR104 preset register has a driver strength of
++ * A, but the (internal) HS400 preset register has a driver
++ * strength of B. As part of initializing HS400, HS200 tuning
++ * needs to be performed. Having different driver strengths
++ * between tuning and operation is wrong. It results in different
++ * rise/fall times that lead to incorrect sampling.
++ * 2) Firmware with properly initialized presets.
++ * These presets have proper clock divisors, i.e., SDR12 => 12 MHz,
++ * SDR25 => 25 MHz, SDR50 => 50 MHz. Additionally the HS200 and
++ * HS400 preset driver strengths match.
++ *
++ * Enabling presets for HS400 doesn't work for the following reasons:
++ * 1) sdhci_set_ios has a hard coded list of timings that are used
++ * to determine if presets should be enabled.
++ * 2) sdhci_get_preset_value is using a non-standard register to
++ * read out HS400 presets. The AMD controller doesn't support this
++ * non-standard register. In fact, it doesn't expose the HS400
++ * preset register anywhere in the SDHCI memory map. This results
++ * in reading a garbage value and using the wrong presets.
++ *
++ * Since HS400 and HS200 presets must be identical, we could
++ * instead use the SDR104 preset register.
++ *
++ * If the above issues are resolved, we could remove this quirk for
++ * firmware that has valid presets (i.e., SDR12 <= 12 MHz).
++ */
++ host->quirks2 |= SDHCI_QUIRK2_PRESET_VALUE_BROKEN;
++
+ host->mmc_host_ops.select_drive_strength = amd_select_drive_strength;
+ host->mmc_host_ops.set_ios = amd_set_ios;
+ host->mmc_host_ops.execute_tuning = amd_sdhci_execute_tuning;
+diff --git a/drivers/mmc/host/sdhci-esdhc.h b/drivers/mmc/host/sdhci-esdhc.h
+index a30796e79b1cb..6de02f09c3222 100644
+--- a/drivers/mmc/host/sdhci-esdhc.h
++++ b/drivers/mmc/host/sdhci-esdhc.h
+@@ -5,6 +5,7 @@
+ * Copyright (c) 2007 Freescale Semiconductor, Inc.
+ * Copyright (c) 2009 MontaVista Software, Inc.
+ * Copyright (c) 2010 Pengutronix e.K.
++ * Copyright 2020 NXP
+ * Author: Wolfram Sang <kernel@pengutronix.de>
+ */
+
+@@ -88,6 +89,7 @@
+ /* DLL Config 0 Register */
+ #define ESDHC_DLLCFG0 0x160
+ #define ESDHC_DLL_ENABLE 0x80000000
++#define ESDHC_DLL_RESET 0x40000000
+ #define ESDHC_DLL_FREQ_SEL 0x08000000
+
+ /* DLL Config 1 Register */
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 45881b3099567..156e75302df56 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -4,6 +4,7 @@
+ *
+ * Copyright (c) 2007, 2010, 2012 Freescale Semiconductor, Inc.
+ * Copyright (c) 2009 MontaVista Software, Inc.
++ * Copyright 2020 NXP
+ *
+ * Authors: Xiaobo Xie <X.Xie@freescale.com>
+ * Anton Vorontsov <avorontsov@ru.mvista.com>
+@@ -19,6 +20,7 @@
+ #include <linux/clk.h>
+ #include <linux/ktime.h>
+ #include <linux/dma-mapping.h>
++#include <linux/iopoll.h>
+ #include <linux/mmc/host.h>
+ #include <linux/mmc/mmc.h>
+ #include "sdhci-pltfm.h"
+@@ -743,6 +745,21 @@ static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock)
+ if (host->mmc->actual_clock == MMC_HS200_MAX_DTR)
+ temp |= ESDHC_DLL_FREQ_SEL;
+ sdhci_writel(host, temp, ESDHC_DLLCFG0);
++
++ temp |= ESDHC_DLL_RESET;
++ sdhci_writel(host, temp, ESDHC_DLLCFG0);
++ udelay(1);
++ temp &= ~ESDHC_DLL_RESET;
++ sdhci_writel(host, temp, ESDHC_DLLCFG0);
++
++ /* Wait max 20 ms */
++ if (read_poll_timeout(sdhci_readl, temp,
++ temp & ESDHC_DLL_STS_SLV_LOCK,
++ 10, 20000, false,
++ host, ESDHC_DLLSTAT0))
++ pr_err("%s: timeout for delay chain lock.\n",
++ mmc_hostname(host->mmc));
++
+ temp = sdhci_readl(host, ESDHC_TBCTL);
+ sdhci_writel(host, temp | ESDHC_HS400_WNDW_ADJUST, ESDHC_TBCTL);
+
+@@ -1052,6 +1069,17 @@ static int esdhc_execute_tuning(struct mmc_host *mmc, u32 opcode)
+
+ esdhc_tuning_block_enable(host, true);
+
++ /*
++ * The eSDHC controller takes the data timeout value into account
++ * during tuning. If the SD card is too slow sending the response, the
++ * timer will expire and a "Buffer Read Ready" interrupt without data
++ * is triggered. This leads to tuning errors.
++ *
++ * Just set the timeout to the maximum value because the core will
++ * already take care of it in sdhci_send_tuning().
++ */
++ sdhci_writeb(host, 0xe, SDHCI_TIMEOUT_CONTROL);
++
+ hs400_tuning = host->flags & SDHCI_HS400_TUNING;
+
+ do {
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 914f5184295ff..23da7f7fe093a 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -24,6 +24,8 @@
+ #include <linux/iopoll.h>
+ #include <linux/gpio.h>
+ #include <linux/pm_runtime.h>
++#include <linux/pm_qos.h>
++#include <linux/debugfs.h>
+ #include <linux/mmc/slot-gpio.h>
+ #include <linux/mmc/sdhci-pci-data.h>
+ #include <linux/acpi.h>
+@@ -516,6 +518,8 @@ struct intel_host {
+ bool rpm_retune_ok;
+ u32 glk_rx_ctrl1;
+ u32 glk_tun_val;
++ u32 active_ltr;
++ u32 idle_ltr;
+ };
+
+ static const guid_t intel_dsm_guid =
+@@ -760,6 +764,108 @@ static int intel_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ return 0;
+ }
+
++#define INTEL_ACTIVELTR 0x804
++#define INTEL_IDLELTR 0x808
++
++#define INTEL_LTR_REQ BIT(15)
++#define INTEL_LTR_SCALE_MASK GENMASK(11, 10)
++#define INTEL_LTR_SCALE_1US (2 << 10)
++#define INTEL_LTR_SCALE_32US (3 << 10)
++#define INTEL_LTR_VALUE_MASK GENMASK(9, 0)
++
++static void intel_cache_ltr(struct sdhci_pci_slot *slot)
++{
++ struct intel_host *intel_host = sdhci_pci_priv(slot);
++ struct sdhci_host *host = slot->host;
++
++ intel_host->active_ltr = readl(host->ioaddr + INTEL_ACTIVELTR);
++ intel_host->idle_ltr = readl(host->ioaddr + INTEL_IDLELTR);
++}
++
++static void intel_ltr_set(struct device *dev, s32 val)
++{
++ struct sdhci_pci_chip *chip = dev_get_drvdata(dev);
++ struct sdhci_pci_slot *slot = chip->slots[0];
++ struct intel_host *intel_host = sdhci_pci_priv(slot);
++ struct sdhci_host *host = slot->host;
++ u32 ltr;
++
++ pm_runtime_get_sync(dev);
++
++ /*
++ * Program latency tolerance (LTR) according to what has been asked
++ * by the PM QoS layer, or disable it in case we were passed a
++ * negative value or PM_QOS_LATENCY_ANY.
++ */
++ ltr = readl(host->ioaddr + INTEL_ACTIVELTR);
++
++ if (val == PM_QOS_LATENCY_ANY || val < 0) {
++ ltr &= ~INTEL_LTR_REQ;
++ } else {
++ ltr |= INTEL_LTR_REQ;
++ ltr &= ~INTEL_LTR_SCALE_MASK;
++ ltr &= ~INTEL_LTR_VALUE_MASK;
++
++ if (val > INTEL_LTR_VALUE_MASK) {
++ val >>= 5;
++ if (val > INTEL_LTR_VALUE_MASK)
++ val = INTEL_LTR_VALUE_MASK;
++ ltr |= INTEL_LTR_SCALE_32US | val;
++ } else {
++ ltr |= INTEL_LTR_SCALE_1US | val;
++ }
++ }
++
++ if (ltr == intel_host->active_ltr)
++ goto out;
++
++ writel(ltr, host->ioaddr + INTEL_ACTIVELTR);
++ writel(ltr, host->ioaddr + INTEL_IDLELTR);
++
++ /* Cache the values into lpss structure */
++ intel_cache_ltr(slot);
++out:
++ pm_runtime_put_autosuspend(dev);
++}
++
++static bool intel_use_ltr(struct sdhci_pci_chip *chip)
++{
++ switch (chip->pdev->device) {
++ case PCI_DEVICE_ID_INTEL_BYT_EMMC:
++ case PCI_DEVICE_ID_INTEL_BYT_EMMC2:
++ case PCI_DEVICE_ID_INTEL_BYT_SDIO:
++ case PCI_DEVICE_ID_INTEL_BYT_SD:
++ case PCI_DEVICE_ID_INTEL_BSW_EMMC:
++ case PCI_DEVICE_ID_INTEL_BSW_SDIO:
++ case PCI_DEVICE_ID_INTEL_BSW_SD:
++ return false;
++ default:
++ return true;
++ }
++}
++
++static void intel_ltr_expose(struct sdhci_pci_chip *chip)
++{
++ struct device *dev = &chip->pdev->dev;
++
++ if (!intel_use_ltr(chip))
++ return;
++
++ dev->power.set_latency_tolerance = intel_ltr_set;
++ dev_pm_qos_expose_latency_tolerance(dev);
++}
++
++static void intel_ltr_hide(struct sdhci_pci_chip *chip)
++{
++ struct device *dev = &chip->pdev->dev;
++
++ if (!intel_use_ltr(chip))
++ return;
++
++ dev_pm_qos_hide_latency_tolerance(dev);
++ dev->power.set_latency_tolerance = NULL;
++}
++
+ static void byt_probe_slot(struct sdhci_pci_slot *slot)
+ {
+ struct mmc_host_ops *ops = &slot->host->mmc_host_ops;
+@@ -774,6 +880,43 @@ static void byt_probe_slot(struct sdhci_pci_slot *slot)
+ ops->start_signal_voltage_switch = intel_start_signal_voltage_switch;
+
+ device_property_read_u32(dev, "max-frequency", &mmc->f_max);
++
++ if (!mmc->slotno) {
++ slot->chip->slots[mmc->slotno] = slot;
++ intel_ltr_expose(slot->chip);
++ }
++}
++
++static void byt_add_debugfs(struct sdhci_pci_slot *slot)
++{
++ struct intel_host *intel_host = sdhci_pci_priv(slot);
++ struct mmc_host *mmc = slot->host->mmc;
++ struct dentry *dir = mmc->debugfs_root;
++
++ if (!intel_use_ltr(slot->chip))
++ return;
++
++ debugfs_create_x32("active_ltr", 0444, dir, &intel_host->active_ltr);
++ debugfs_create_x32("idle_ltr", 0444, dir, &intel_host->idle_ltr);
++
++ intel_cache_ltr(slot);
++}
++
++static int byt_add_host(struct sdhci_pci_slot *slot)
++{
++ int ret = sdhci_add_host(slot->host);
++
++ if (!ret)
++ byt_add_debugfs(slot);
++ return ret;
++}
++
++static void byt_remove_slot(struct sdhci_pci_slot *slot, int dead)
++{
++ struct mmc_host *mmc = slot->host->mmc;
++
++ if (!mmc->slotno)
++ intel_ltr_hide(slot->chip);
+ }
+
+ static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
+@@ -855,6 +998,8 @@ static int glk_emmc_add_host(struct sdhci_pci_slot *slot)
+ if (ret)
+ goto cleanup;
+
++ byt_add_debugfs(slot);
++
+ return 0;
+
+ cleanup:
+@@ -1032,6 +1177,8 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_emmc = {
+ #endif
+ .allow_runtime_pm = true,
+ .probe_slot = byt_emmc_probe_slot,
++ .add_host = byt_add_host,
++ .remove_slot = byt_remove_slot,
+ .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
+ SDHCI_QUIRK_NO_LED,
+ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+@@ -1045,6 +1192,7 @@ static const struct sdhci_pci_fixes sdhci_intel_glk_emmc = {
+ .allow_runtime_pm = true,
+ .probe_slot = glk_emmc_probe_slot,
+ .add_host = glk_emmc_add_host,
++ .remove_slot = byt_remove_slot,
+ #ifdef CONFIG_PM_SLEEP
+ .suspend = sdhci_cqhci_suspend,
+ .resume = sdhci_cqhci_resume,
+@@ -1075,6 +1223,8 @@ static const struct sdhci_pci_fixes sdhci_ni_byt_sdio = {
+ SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+ .allow_runtime_pm = true,
+ .probe_slot = ni_byt_sdio_probe_slot,
++ .add_host = byt_add_host,
++ .remove_slot = byt_remove_slot,
+ .ops = &sdhci_intel_byt_ops,
+ .priv_size = sizeof(struct intel_host),
+ };
+@@ -1092,6 +1242,8 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = {
+ SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+ .allow_runtime_pm = true,
+ .probe_slot = byt_sdio_probe_slot,
++ .add_host = byt_add_host,
++ .remove_slot = byt_remove_slot,
+ .ops = &sdhci_intel_byt_ops,
+ .priv_size = sizeof(struct intel_host),
+ };
+@@ -1111,6 +1263,8 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sd = {
+ .allow_runtime_pm = true,
+ .own_cd_for_runtime_pm = true,
+ .probe_slot = byt_sd_probe_slot,
++ .add_host = byt_add_host,
++ .remove_slot = byt_remove_slot,
+ .ops = &sdhci_intel_byt_ops,
+ .priv_size = sizeof(struct intel_host),
+ };
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 592a55a34b58e..3561ae8a481a0 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1384,9 +1384,11 @@ static inline void sdhci_auto_cmd_select(struct sdhci_host *host,
+ /*
+ * In case of Version 4.10 or later, use of 'Auto CMD Auto
+ * Select' is recommended rather than use of 'Auto CMD12
+- * Enable' or 'Auto CMD23 Enable'.
++ * Enable' or 'Auto CMD23 Enable'. We require Version 4 Mode
++ * here because some controllers (e.g sdhci-of-dwmshc) expect it.
+ */
+- if (host->version >= SDHCI_SPEC_410 && (use_cmd12 || use_cmd23)) {
++ if (host->version >= SDHCI_SPEC_410 && host->v4_mode &&
++ (use_cmd12 || use_cmd23)) {
+ *mode |= SDHCI_TRNS_AUTO_SEL;
+
+ ctrl2 = sdhci_readw(host, SDHCI_HOST_CONTROL2);
+diff --git a/drivers/mmc/host/via-sdmmc.c b/drivers/mmc/host/via-sdmmc.c
+index 49dab9f42b6d6..9b755ea0fa03c 100644
+--- a/drivers/mmc/host/via-sdmmc.c
++++ b/drivers/mmc/host/via-sdmmc.c
+@@ -1257,11 +1257,14 @@ static void __maybe_unused via_init_sdc_pm(struct via_crdr_mmc_host *host)
+ static int __maybe_unused via_sd_suspend(struct device *dev)
+ {
+ struct via_crdr_mmc_host *host;
++ unsigned long flags;
+
+ host = dev_get_drvdata(dev);
+
++ spin_lock_irqsave(&host->lock, flags);
+ via_save_pcictrlreg(host);
+ via_save_sdcreg(host);
++ spin_unlock_irqrestore(&host->lock, flags);
+
+ device_wakeup_enable(dev);
+
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 42cac572f82dc..7847de75a74ca 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -1639,6 +1639,19 @@ int ubi_thread(void *u)
+ !ubi->thread_enabled || ubi_dbg_is_bgt_disabled(ubi)) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ spin_unlock(&ubi->wl_lock);
++
++ /*
++ * Check kthread_should_stop() after we set the task
++ * state to guarantee that we either see the stop bit
++ * and exit, or that the task state is reset to runnable so
++ * the thread is not scheduled out indefinitely and detects
++ * the stop bit at kthread_should_stop().
++ */
++ if (kthread_should_stop()) {
++ set_current_state(TASK_RUNNING);
++ break;
++ }
++
+ schedule();
+ continue;
+ }
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index 2ac7a667bde35..bc21a82cf3a76 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -1722,8 +1722,6 @@ static int __maybe_unused flexcan_suspend(struct device *device)
+ err = flexcan_chip_disable(priv);
+ if (err)
+ return err;
+-
+- err = pm_runtime_force_suspend(device);
+ }
+ netif_stop_queue(dev);
+ netif_device_detach(dev);
+@@ -1749,10 +1747,6 @@ static int __maybe_unused flexcan_resume(struct device *device)
+ if (err)
+ return err;
+ } else {
+- err = pm_runtime_force_resume(device);
+- if (err)
+- return err;
+-
+ err = flexcan_chip_enable(priv);
+ }
+ }
+@@ -1783,8 +1777,16 @@ static int __maybe_unused flexcan_noirq_suspend(struct device *device)
+ struct net_device *dev = dev_get_drvdata(device);
+ struct flexcan_priv *priv = netdev_priv(dev);
+
+- if (netif_running(dev) && device_may_wakeup(device))
+- flexcan_enable_wakeup_irq(priv, true);
++ if (netif_running(dev)) {
++ int err;
++
++ if (device_may_wakeup(device))
++ flexcan_enable_wakeup_irq(priv, true);
++
++ err = pm_runtime_force_suspend(device);
++ if (err)
++ return err;
++ }
+
+ return 0;
+ }
+@@ -1794,8 +1796,16 @@ static int __maybe_unused flexcan_noirq_resume(struct device *device)
+ struct net_device *dev = dev_get_drvdata(device);
+ struct flexcan_priv *priv = netdev_priv(dev);
+
+- if (netif_running(dev) && device_may_wakeup(device))
+- flexcan_enable_wakeup_irq(priv, false);
++ if (netif_running(dev)) {
++ int err;
++
++ err = pm_runtime_force_resume(device);
++ if (err)
++ return err;
++
++ if (device_may_wakeup(device))
++ flexcan_enable_wakeup_irq(priv, false);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 7b5d521924872..b8d534b719d4f 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -8735,6 +8735,11 @@ static void bnxt_report_link(struct bnxt *bp)
+ u16 fec;
+
+ netif_carrier_on(bp->dev);
++ speed = bnxt_fw_to_ethtool_speed(bp->link_info.link_speed);
++ if (speed == SPEED_UNKNOWN) {
++ netdev_info(bp->dev, "NIC Link is Up, speed unknown\n");
++ return;
++ }
+ if (bp->link_info.duplex == BNXT_LINK_DUPLEX_FULL)
+ duplex = "full";
+ else
+@@ -8747,7 +8752,6 @@ static void bnxt_report_link(struct bnxt *bp)
+ flow_ctrl = "ON - receive";
+ else
+ flow_ctrl = "none";
+- speed = bnxt_fw_to_ethtool_speed(bp->link_info.link_speed);
+ netdev_info(bp->dev, "NIC Link is Up, %u Mbps %s duplex, Flow control: %s\n",
+ speed, duplex, flow_ctrl);
+ if (bp->flags & BNXT_FLAG_EEE_CAP)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+index 3803af9231c68..c0ff5f70aa431 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+@@ -77,6 +77,8 @@ enum npc_kpu_ld_ltype {
+ NPC_LT_LD_ICMP,
+ NPC_LT_LD_SCTP,
+ NPC_LT_LD_ICMP6,
++ NPC_LT_LD_CUSTOM0,
++ NPC_LT_LD_CUSTOM1,
+ NPC_LT_LD_IGMP = 8,
+ NPC_LT_LD_ESP,
+ NPC_LT_LD_AH,
+@@ -85,8 +87,6 @@ enum npc_kpu_ld_ltype {
+ NPC_LT_LD_NSH,
+ NPC_LT_LD_TU_MPLS_IN_NSH,
+ NPC_LT_LD_TU_MPLS_IN_IP,
+- NPC_LT_LD_CUSTOM0 = 0xE,
+- NPC_LT_LD_CUSTOM1 = 0xF,
+ };
+
+ enum npc_kpu_le_ltype {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
+index d046db7bb047d..3a9fa629503f0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
+@@ -90,9 +90,4 @@ int mlx5_create_encryption_key(struct mlx5_core_dev *mdev,
+ u32 key_type, u32 *p_key_id);
+ void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id);
+
+-static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev)
+-{
+- return devlink_net(priv_to_devlink(dev));
+-}
+-
+ #endif
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index f6aa80fe343f5..05e90ef15871c 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -607,6 +607,9 @@ static void mlxsw_emad_transmit_retry(struct mlxsw_core *mlxsw_core,
+ err = mlxsw_emad_transmit(trans->core, trans);
+ if (err == 0)
+ return;
++
++ if (!atomic_dec_and_test(&trans->active))
++ return;
+ } else {
+ err = -EIO;
+ }
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 26988ad7ec979..8867d4ac871c1 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -1512,7 +1512,6 @@ static void ionic_txrx_deinit(struct ionic_lif *lif)
+ if (lif->rxqcqs) {
+ for (i = 0; i < lif->nxqs; i++) {
+ ionic_lif_qcq_deinit(lif, lif->rxqcqs[i].qcq);
+- ionic_rx_flush(&lif->rxqcqs[i].qcq->cq);
+ ionic_rx_empty(&lif->rxqcqs[i].qcq->q);
+ }
+ }
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+index def65fee27b5a..39e85870c15e9 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+@@ -253,19 +253,6 @@ static bool ionic_rx_service(struct ionic_cq *cq, struct ionic_cq_info *cq_info)
+ return true;
+ }
+
+-void ionic_rx_flush(struct ionic_cq *cq)
+-{
+- struct ionic_dev *idev = &cq->lif->ionic->idev;
+- u32 work_done;
+-
+- work_done = ionic_cq_service(cq, cq->num_descs,
+- ionic_rx_service, NULL, NULL);
+-
+- if (work_done)
+- ionic_intr_credits(idev->intr_ctrl, cq->bound_intr->index,
+- work_done, IONIC_INTR_CRED_RESET_COALESCE);
+-}
+-
+ static struct page *ionic_rx_page_alloc(struct ionic_queue *q,
+ dma_addr_t *dma_addr)
+ {
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.h b/drivers/net/ethernet/pensando/ionic/ionic_txrx.h
+index a5883be0413f6..7667b72232b8a 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.h
++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.h
+@@ -4,7 +4,6 @@
+ #ifndef _IONIC_TXRX_H_
+ #define _IONIC_TXRX_H_
+
+-void ionic_rx_flush(struct ionic_cq *cq);
+ void ionic_tx_flush(struct ionic_cq *cq);
+
+ void ionic_rx_fill(struct ionic_queue *q);
+diff --git a/drivers/net/wan/hdlc_fr.c b/drivers/net/wan/hdlc_fr.c
+index d6cfd51613ed8..3a44dad87602d 100644
+--- a/drivers/net/wan/hdlc_fr.c
++++ b/drivers/net/wan/hdlc_fr.c
+@@ -273,63 +273,69 @@ static inline struct net_device **get_dev_p(struct pvc_device *pvc,
+
+ static int fr_hard_header(struct sk_buff **skb_p, u16 dlci)
+ {
+- u16 head_len;
+ struct sk_buff *skb = *skb_p;
+
+- switch (skb->protocol) {
+- case cpu_to_be16(NLPID_CCITT_ANSI_LMI):
+- head_len = 4;
+- skb_push(skb, head_len);
+- skb->data[3] = NLPID_CCITT_ANSI_LMI;
+- break;
+-
+- case cpu_to_be16(NLPID_CISCO_LMI):
+- head_len = 4;
+- skb_push(skb, head_len);
+- skb->data[3] = NLPID_CISCO_LMI;
+- break;
+-
+- case cpu_to_be16(ETH_P_IP):
+- head_len = 4;
+- skb_push(skb, head_len);
+- skb->data[3] = NLPID_IP;
+- break;
+-
+- case cpu_to_be16(ETH_P_IPV6):
+- head_len = 4;
+- skb_push(skb, head_len);
+- skb->data[3] = NLPID_IPV6;
+- break;
+-
+- case cpu_to_be16(ETH_P_802_3):
+- head_len = 10;
+- if (skb_headroom(skb) < head_len) {
+- struct sk_buff *skb2 = skb_realloc_headroom(skb,
+- head_len);
++ if (!skb->dev) { /* Control packets */
++ switch (dlci) {
++ case LMI_CCITT_ANSI_DLCI:
++ skb_push(skb, 4);
++ skb->data[3] = NLPID_CCITT_ANSI_LMI;
++ break;
++
++ case LMI_CISCO_DLCI:
++ skb_push(skb, 4);
++ skb->data[3] = NLPID_CISCO_LMI;
++ break;
++
++ default:
++ return -EINVAL;
++ }
++
++ } else if (skb->dev->type == ARPHRD_DLCI) {
++ switch (skb->protocol) {
++ case htons(ETH_P_IP):
++ skb_push(skb, 4);
++ skb->data[3] = NLPID_IP;
++ break;
++
++ case htons(ETH_P_IPV6):
++ skb_push(skb, 4);
++ skb->data[3] = NLPID_IPV6;
++ break;
++
++ default:
++ skb_push(skb, 10);
++ skb->data[3] = FR_PAD;
++ skb->data[4] = NLPID_SNAP;
++ /* OUI 00-00-00 indicates an Ethertype follows */
++ skb->data[5] = 0x00;
++ skb->data[6] = 0x00;
++ skb->data[7] = 0x00;
++ /* This should be an Ethertype: */
++ *(__be16 *)(skb->data + 8) = skb->protocol;
++ }
++
++ } else if (skb->dev->type == ARPHRD_ETHER) {
++ if (skb_headroom(skb) < 10) {
++ struct sk_buff *skb2 = skb_realloc_headroom(skb, 10);
+ if (!skb2)
+ return -ENOBUFS;
+ dev_kfree_skb(skb);
+ skb = *skb_p = skb2;
+ }
+- skb_push(skb, head_len);
++ skb_push(skb, 10);
+ skb->data[3] = FR_PAD;
+ skb->data[4] = NLPID_SNAP;
+- skb->data[5] = FR_PAD;
++ /* OUI 00-80-C2 stands for the 802.1 organization */
++ skb->data[5] = 0x00;
+ skb->data[6] = 0x80;
+ skb->data[7] = 0xC2;
++ /* PID 00-07 stands for Ethernet frames without FCS */
+ skb->data[8] = 0x00;
+- skb->data[9] = 0x07; /* bridged Ethernet frame w/out FCS */
+- break;
++ skb->data[9] = 0x07;
+
+- default:
+- head_len = 10;
+- skb_push(skb, head_len);
+- skb->data[3] = FR_PAD;
+- skb->data[4] = NLPID_SNAP;
+- skb->data[5] = FR_PAD;
+- skb->data[6] = FR_PAD;
+- skb->data[7] = FR_PAD;
+- *(__be16*)(skb->data + 8) = skb->protocol;
++ } else {
++ return -EINVAL;
+ }
+
+ dlci_to_q922(skb->data, dlci);
+@@ -425,8 +431,8 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
+ skb_put(skb, pad);
+ memset(skb->data + len, 0, pad);
+ }
+- skb->protocol = cpu_to_be16(ETH_P_802_3);
+ }
++ skb->dev = dev;
+ if (!fr_hard_header(&skb, pvc->dlci)) {
+ dev->stats.tx_bytes += skb->len;
+ dev->stats.tx_packets++;
+@@ -494,10 +500,8 @@ static void fr_lmi_send(struct net_device *dev, int fullrep)
+ memset(skb->data, 0, len);
+ skb_reserve(skb, 4);
+ if (lmi == LMI_CISCO) {
+- skb->protocol = cpu_to_be16(NLPID_CISCO_LMI);
+ fr_hard_header(&skb, LMI_CISCO_DLCI);
+ } else {
+- skb->protocol = cpu_to_be16(NLPID_CCITT_ANSI_LMI);
+ fr_hard_header(&skb, LMI_CCITT_ANSI_DLCI);
+ }
+ data = skb_tail_pointer(skb);
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index 215ade6faf328..a00498338b1cc 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -949,6 +949,7 @@ static void ath10k_htt_rx_h_rates(struct ath10k *ar,
+ u8 preamble = 0;
+ u8 group_id;
+ u32 info1, info2, info3;
++ u32 stbc, nsts_su;
+
+ info1 = __le32_to_cpu(rxd->ppdu_start.info1);
+ info2 = __le32_to_cpu(rxd->ppdu_start.info2);
+@@ -993,11 +994,16 @@ static void ath10k_htt_rx_h_rates(struct ath10k *ar,
+ */
+ bw = info2 & 3;
+ sgi = info3 & 1;
++ stbc = (info2 >> 3) & 1;
+ group_id = (info2 >> 4) & 0x3F;
+
+ if (GROUP_ID_IS_SU_MIMO(group_id)) {
+ mcs = (info3 >> 4) & 0x0F;
+- nss = ((info2 >> 10) & 0x07) + 1;
++ nsts_su = ((info2 >> 10) & 0x07);
++ if (stbc)
++ nss = (nsts_su >> 2) + 1;
++ else
++ nss = (nsts_su + 1);
+ } else {
+ /* Hardware doesn't decode VHT-SIG-B into Rx descriptor
+ * so it's impossible to decode MCS. Also since
+@@ -3583,12 +3589,14 @@ ath10k_update_per_peer_tx_stats(struct ath10k *ar,
+ }
+
+ if (ar->htt.disable_tx_comp) {
+- arsta->tx_retries += peer_stats->retry_pkts;
+ arsta->tx_failed += peer_stats->failed_pkts;
+- ath10k_dbg(ar, ATH10K_DBG_HTT, "htt tx retries %d tx failed %d\n",
+- arsta->tx_retries, arsta->tx_failed);
++ ath10k_dbg(ar, ATH10K_DBG_HTT, "tx failed %d\n",
++ arsta->tx_failed);
+ }
+
++ arsta->tx_retries += peer_stats->retry_pkts;
++ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt tx retries %d", arsta->tx_retries);
++
+ if (ath10k_debug_is_extd_tx_stats_enabled(ar))
+ ath10k_accumulate_per_peer_tx_stats(ar, arsta, peer_stats,
+ rate_idx);
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 2177e9d92bdff..03c7edf05a1d1 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -8542,12 +8542,13 @@ static void ath10k_sta_statistics(struct ieee80211_hw *hw,
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
+
+ if (ar->htt.disable_tx_comp) {
+- sinfo->tx_retries = arsta->tx_retries;
+- sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_RETRIES);
+ sinfo->tx_failed = arsta->tx_failed;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_FAILED);
+ }
+
++ sinfo->tx_retries = arsta->tx_retries;
++ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_RETRIES);
++
+ ath10k_mac_sta_get_peer_stats_info(ar, sta, sinfo);
+ }
+
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index 63f882c690bff..0841e69b10b1a 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -557,6 +557,10 @@ static int ath10k_sdio_mbox_rx_alloc(struct ath10k *ar,
+ le16_to_cpu(htc_hdr->len),
+ ATH10K_HTC_MBOX_MAX_PAYLOAD_LENGTH);
+ ret = -ENOMEM;
++
++ queue_work(ar->workqueue, &ar->restart_work);
++ ath10k_warn(ar, "exceeds length, start recovery\n");
++
+ goto err;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 791d971784ce0..055c3bb61e4c5 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -1421,7 +1421,7 @@ struct htt_ppdu_stats_info *ath11k_dp_htt_get_ppdu_desc(struct ath11k *ar,
+ }
+ spin_unlock_bh(&ar->data_lock);
+
+- ppdu_info = kzalloc(sizeof(*ppdu_info), GFP_KERNEL);
++ ppdu_info = kzalloc(sizeof(*ppdu_info), GFP_ATOMIC);
+ if (!ppdu_info)
+ return NULL;
+
+diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c
+index 1af76775b1a87..99cff8fb39773 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_tx.c
+@@ -514,6 +514,8 @@ void ath11k_dp_tx_completion_handler(struct ath11k_base *ab, int ring_id)
+ u32 msdu_id;
+ u8 mac_id;
+
++ spin_lock_bh(&status_ring->lock);
++
+ ath11k_hal_srng_access_begin(ab, status_ring);
+
+ while ((ATH11K_TX_COMPL_NEXT(tx_ring->tx_status_head) !=
+@@ -533,6 +535,8 @@ void ath11k_dp_tx_completion_handler(struct ath11k_base *ab, int ring_id)
+
+ ath11k_hal_srng_access_end(ab, status_ring);
+
++ spin_unlock_bh(&status_ring->lock);
++
+ while (ATH11K_TX_COMPL_NEXT(tx_ring->tx_status_tail) != tx_ring->tx_status_head) {
+ struct hal_wbm_release_ring *tx_status;
+ u32 desc_id;
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index 7c9dc91cc48a9..c79a7c7eb56ee 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -206,7 +206,7 @@ int ath11k_regd_update(struct ath11k *ar, bool init)
+ ab = ar->ab;
+ pdev_id = ar->pdev_idx;
+
+- spin_lock(&ab->base_lock);
++ spin_lock_bh(&ab->base_lock);
+
+ if (init) {
+ /* Apply the regd received during init through
+@@ -227,7 +227,7 @@ int ath11k_regd_update(struct ath11k *ar, bool init)
+
+ if (!regd) {
+ ret = -EINVAL;
+- spin_unlock(&ab->base_lock);
++ spin_unlock_bh(&ab->base_lock);
+ goto err;
+ }
+
+@@ -238,7 +238,7 @@ int ath11k_regd_update(struct ath11k *ar, bool init)
+ if (regd_copy)
+ ath11k_copy_regd(regd, regd_copy);
+
+- spin_unlock(&ab->base_lock);
++ spin_unlock_bh(&ab->base_lock);
+
+ if (!regd_copy) {
+ ret = -ENOMEM;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
+index a5cced2c89ac6..921b94c4f5f9a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
+@@ -304,10 +304,12 @@ void brcmf_fweh_detach(struct brcmf_pub *drvr)
+ {
+ struct brcmf_fweh_info *fweh = &drvr->fweh;
+
+- /* cancel the worker */
+- cancel_work_sync(&fweh->event_work);
+- WARN_ON(!list_empty(&fweh->event_q));
+- memset(fweh->evt_handler, 0, sizeof(fweh->evt_handler));
++ /* cancel the worker if initialized */
++ if (fweh->event_work.func) {
++ cancel_work_sync(&fweh->event_work);
++ WARN_ON(!list_empty(&fweh->event_q));
++ memset(fweh->evt_handler, 0, sizeof(fweh->evt_handler));
++ }
+ }
+
+ /**
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 3c07d1bbe1c6e..ac3ee93a23780 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -4278,6 +4278,7 @@ static void brcmf_sdio_firmware_callback(struct device *dev, int err,
+ brcmf_sdiod_writeb(sdiod, SBSDIO_FUNC1_MESBUSYCTRL,
+ CY_43012_MESBUSYCTRL, &err);
+ break;
++ case SDIO_DEVICE_ID_BROADCOM_4329:
+ case SDIO_DEVICE_ID_BROADCOM_4339:
+ brcmf_dbg(INFO, "set F2 watermark to 0x%x*4 bytes for 4339\n",
+ CY_4339_F2_WATERMARK);
+diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
+index ae477f7756af1..8ee24e351bdc2 100644
+--- a/drivers/net/xen-netback/common.h
++++ b/drivers/net/xen-netback/common.h
+@@ -140,6 +140,20 @@ struct xenvif_queue { /* Per-queue data for xenvif */
+ char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+ struct xenvif *vif; /* Parent VIF */
+
++ /*
++ * TX/RX common EOI handling.
++ * When feature-split-event-channels = 0, the interrupt handler sets
++ * NETBK_COMMON_EOI, otherwise NETBK_RX_EOI and NETBK_TX_EOI are set
++ * by the RX and TX interrupt handlers.
++ * RX and TX handler threads will issue an EOI when either
++ * NETBK_COMMON_EOI or their specific bits (NETBK_RX_EOI or
++ * NETBK_TX_EOI) are set and they will reset those bits.
++ */
++ atomic_t eoi_pending;
++#define NETBK_RX_EOI 0x01
++#define NETBK_TX_EOI 0x02
++#define NETBK_COMMON_EOI 0x04
++
+ /* Use NAPI for guest TX */
+ struct napi_struct napi;
+ /* When feature-split-event-channels = 0, tx_irq = rx_irq. */
+@@ -378,6 +392,7 @@ int xenvif_dealloc_kthread(void *data);
+
+ irqreturn_t xenvif_ctrl_irq_fn(int irq, void *data);
+
++bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread);
+ void xenvif_rx_action(struct xenvif_queue *queue);
+ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
+
+diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
+index 8af4972856915..acb786d8b1d8f 100644
+--- a/drivers/net/xen-netback/interface.c
++++ b/drivers/net/xen-netback/interface.c
+@@ -77,12 +77,28 @@ int xenvif_schedulable(struct xenvif *vif)
+ !vif->disabled;
+ }
+
++static bool xenvif_handle_tx_interrupt(struct xenvif_queue *queue)
++{
++ bool rc;
++
++ rc = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
++ if (rc)
++ napi_schedule(&queue->napi);
++ return rc;
++}
++
+ static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
+ {
+ struct xenvif_queue *queue = dev_id;
++ int old;
+
+- if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+- napi_schedule(&queue->napi);
++ old = atomic_fetch_or(NETBK_TX_EOI, &queue->eoi_pending);
++ WARN(old & NETBK_TX_EOI, "Interrupt while EOI pending\n");
++
++ if (!xenvif_handle_tx_interrupt(queue)) {
++ atomic_andnot(NETBK_TX_EOI, &queue->eoi_pending);
++ xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
++ }
+
+ return IRQ_HANDLED;
+ }
+@@ -116,19 +132,46 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
+ return work_done;
+ }
+
++static bool xenvif_handle_rx_interrupt(struct xenvif_queue *queue)
++{
++ bool rc;
++
++ rc = xenvif_have_rx_work(queue, false);
++ if (rc)
++ xenvif_kick_thread(queue);
++ return rc;
++}
++
+ static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
+ {
+ struct xenvif_queue *queue = dev_id;
++ int old;
+
+- xenvif_kick_thread(queue);
++ old = atomic_fetch_or(NETBK_RX_EOI, &queue->eoi_pending);
++ WARN(old & NETBK_RX_EOI, "Interrupt while EOI pending\n");
++
++ if (!xenvif_handle_rx_interrupt(queue)) {
++ atomic_andnot(NETBK_RX_EOI, &queue->eoi_pending);
++ xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
++ }
+
+ return IRQ_HANDLED;
+ }
+
+ irqreturn_t xenvif_interrupt(int irq, void *dev_id)
+ {
+- xenvif_tx_interrupt(irq, dev_id);
+- xenvif_rx_interrupt(irq, dev_id);
++ struct xenvif_queue *queue = dev_id;
++ int old;
++
++ old = atomic_fetch_or(NETBK_COMMON_EOI, &queue->eoi_pending);
++ WARN(old, "Interrupt while EOI pending\n");
++
++ /* Use a bitwise OR, as we need to call both functions. */
++ if ((!xenvif_handle_tx_interrupt(queue) |
++ !xenvif_handle_rx_interrupt(queue))) {
++ atomic_andnot(NETBK_COMMON_EOI, &queue->eoi_pending);
++ xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
++ }
+
+ return IRQ_HANDLED;
+ }
+@@ -605,7 +648,7 @@ int xenvif_connect_ctrl(struct xenvif *vif, grant_ref_t ring_ref,
+ if (req_prod - rsp_prod > RING_SIZE(&vif->ctrl))
+ goto err_unmap;
+
+- err = bind_interdomain_evtchn_to_irq(vif->domid, evtchn);
++ err = bind_interdomain_evtchn_to_irq_lateeoi(vif->domid, evtchn);
+ if (err < 0)
+ goto err_unmap;
+
+@@ -709,7 +752,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
+
+ if (tx_evtchn == rx_evtchn) {
+ /* feature-split-event-channels == 0 */
+- err = bind_interdomain_evtchn_to_irqhandler(
++ err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
+ queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+ queue->name, queue);
+ if (err < 0)
+@@ -720,7 +763,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
+ /* feature-split-event-channels == 1 */
+ snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+ "%s-tx", queue->name);
+- err = bind_interdomain_evtchn_to_irqhandler(
++ err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
+ queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+ queue->tx_irq_name, queue);
+ if (err < 0)
+@@ -730,7 +773,7 @@ int xenvif_connect_data(struct xenvif_queue *queue,
+
+ snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+ "%s-rx", queue->name);
+- err = bind_interdomain_evtchn_to_irqhandler(
++ err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
+ queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+ queue->rx_irq_name, queue);
+ if (err < 0)
+diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
+index 6dfca72656449..bc3421d145768 100644
+--- a/drivers/net/xen-netback/netback.c
++++ b/drivers/net/xen-netback/netback.c
+@@ -169,6 +169,10 @@ void xenvif_napi_schedule_or_enable_events(struct xenvif_queue *queue)
+
+ if (more_to_do)
+ napi_schedule(&queue->napi);
++ else if (atomic_fetch_andnot(NETBK_TX_EOI | NETBK_COMMON_EOI,
++ &queue->eoi_pending) &
++ (NETBK_TX_EOI | NETBK_COMMON_EOI))
++ xen_irq_lateeoi(queue->tx_irq, 0);
+ }
+
+ static void tx_add_credit(struct xenvif_queue *queue)
+@@ -1643,9 +1647,14 @@ static bool xenvif_ctrl_work_todo(struct xenvif *vif)
+ irqreturn_t xenvif_ctrl_irq_fn(int irq, void *data)
+ {
+ struct xenvif *vif = data;
++ unsigned int eoi_flag = XEN_EOI_FLAG_SPURIOUS;
+
+- while (xenvif_ctrl_work_todo(vif))
++ while (xenvif_ctrl_work_todo(vif)) {
+ xenvif_ctrl_action(vif);
++ eoi_flag = 0;
++ }
++
++ xen_irq_lateeoi(irq, eoi_flag);
+
+ return IRQ_HANDLED;
+ }
+diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
+index ac034f69a170b..b8febe1d1bfd3 100644
+--- a/drivers/net/xen-netback/rx.c
++++ b/drivers/net/xen-netback/rx.c
+@@ -503,13 +503,13 @@ static bool xenvif_rx_queue_ready(struct xenvif_queue *queue)
+ return queue->stalled && prod - cons >= 1;
+ }
+
+-static bool xenvif_have_rx_work(struct xenvif_queue *queue)
++bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)
+ {
+ return xenvif_rx_ring_slots_available(queue) ||
+ (queue->vif->stall_timeout &&
+ (xenvif_rx_queue_stalled(queue) ||
+ xenvif_rx_queue_ready(queue))) ||
+- kthread_should_stop() ||
++ (test_kthread && kthread_should_stop()) ||
+ queue->vif->disabled;
+ }
+
+@@ -540,15 +540,20 @@ static void xenvif_wait_for_rx_work(struct xenvif_queue *queue)
+ {
+ DEFINE_WAIT(wait);
+
+- if (xenvif_have_rx_work(queue))
++ if (xenvif_have_rx_work(queue, true))
+ return;
+
+ for (;;) {
+ long ret;
+
+ prepare_to_wait(&queue->wq, &wait, TASK_INTERRUPTIBLE);
+- if (xenvif_have_rx_work(queue))
++ if (xenvif_have_rx_work(queue, true))
+ break;
++ if (atomic_fetch_andnot(NETBK_RX_EOI | NETBK_COMMON_EOI,
++ &queue->eoi_pending) &
++ (NETBK_RX_EOI | NETBK_COMMON_EOI))
++ xen_irq_lateeoi(queue->rx_irq, 0);
++
+ ret = schedule_timeout(xenvif_rx_queue_timeout(queue));
+ if (!ret)
+ break;
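The three xen-netback hunks above share one pattern: each interrupt handler sets a per-queue atomic flag before doing any work, EOIs immediately with XEN_EOI_FLAG_SPURIOUS if no work was found, and otherwise defers the EOI until the worker drains the queue, which throttles event storms from a misbehaving frontend. Below is a minimal standalone C model of that accounting; it uses C11 atomics in place of the kernel's atomic_t, and the flag value, helpers, and printouts are illustrative stand-ins, not the kernel API.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NETBK_RX_EOI 0x01                /* illustrative flag value */

static atomic_int eoi_pending;

static void mock_xen_irq_lateeoi(const char *why)
{
        printf("EOI sent (%s)\n", why);
}

/* stand-in for xenvif_handle_rx_interrupt(): true if work was found */
static bool handle_rx(bool have_work)
{
        return have_work;
}

static void rx_interrupt(bool have_work)
{
        int old = atomic_fetch_or(&eoi_pending, NETBK_RX_EOI);

        if (old & NETBK_RX_EOI)
                printf("WARN: interrupt while EOI pending\n");

        if (!handle_rx(have_work)) {
                /* no work: clear the flag and EOI at once, marking the
                 * event as possibly spurious */
                atomic_fetch_and(&eoi_pending, ~NETBK_RX_EOI);
                mock_xen_irq_lateeoi("spurious");
        }
        /* otherwise the worker clears the flag and EOIs once the queue
         * drains, as xenvif_wait_for_rx_work() does in the hunk above */
}

int main(void)
{
        rx_interrupt(false);             /* spurious event: immediate EOI */
        rx_interrupt(true);              /* real work: EOI deferred */
        return 0;
}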
+diff --git a/drivers/nfc/s3fwrn5/Kconfig b/drivers/nfc/s3fwrn5/Kconfig
+index af9d18690afeb..3f8b6da582803 100644
+--- a/drivers/nfc/s3fwrn5/Kconfig
++++ b/drivers/nfc/s3fwrn5/Kconfig
+@@ -2,6 +2,7 @@
+ config NFC_S3FWRN5
+ tristate
+ select CRYPTO
++ select CRYPTO_HASH
+ help
+ Core driver for Samsung S3FWRN5 NFC chip. Contains core utilities
+ of chip. It's intended to be used by PHYs to avoid duplicating lots
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 9e378d0a0c01c..116902b1b2c34 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1926,7 +1926,6 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
+ complete(&queue->cm_done);
+ return 0;
+ case RDMA_CM_EVENT_REJECTED:
+- nvme_rdma_destroy_queue_ib(queue);
+ cm_error = nvme_rdma_conn_rejected(queue, ev);
+ break;
+ case RDMA_CM_EVENT_ROUTE_ERROR:
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 3aac77a295ba1..82336bbaf8dca 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -302,6 +302,9 @@ static void qcom_pcie_deinit_2_1_0(struct qcom_pcie *pcie)
+ reset_control_assert(res->por_reset);
+ reset_control_assert(res->ext_reset);
+ reset_control_assert(res->phy_reset);
++
++ writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL);
++
+ regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
+ }
+
+@@ -314,6 +317,16 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
+ u32 val;
+ int ret;
+
++	/* reset the PCIe interface as U-Boot can leave it in an undefined state */
++ reset_control_assert(res->pci_reset);
++ reset_control_assert(res->axi_reset);
++ reset_control_assert(res->ahb_reset);
++ reset_control_assert(res->por_reset);
++ reset_control_assert(res->ext_reset);
++ reset_control_assert(res->phy_reset);
++
++ writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL);
++
+ ret = regulator_bulk_enable(ARRAY_SIZE(res->supplies), res->supplies);
+ if (ret < 0) {
+ dev_err(dev, "cannot enable regulators\n");
+diff --git a/drivers/pci/ecam.c b/drivers/pci/ecam.c
+index 8f065a42fc1a2..b54d32a316693 100644
+--- a/drivers/pci/ecam.c
++++ b/drivers/pci/ecam.c
+@@ -168,4 +168,14 @@ const struct pci_ecam_ops pci_32b_ops = {
+ .write = pci_generic_config_write32,
+ }
+ };
++
++/* ECAM ops for 32-bit read only (non-compliant) */
++const struct pci_ecam_ops pci_32b_read_ops = {
++ .bus_shift = 20,
++ .pci_ops = {
++ .map_bus = pci_ecam_map_bus,
++ .read = pci_generic_config_read32,
++ .write = pci_generic_config_write,
++ }
++};
+ #endif
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index d5869a03f7483..d9aa551f84236 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -944,6 +944,16 @@ static bool acpi_pci_bridge_d3(struct pci_dev *dev)
+ if (!dev->is_hotplug_bridge)
+ return false;
+
++ /* Assume D3 support if the bridge is power-manageable by ACPI. */
++ adev = ACPI_COMPANION(&dev->dev);
++ if (!adev && !pci_dev_is_added(dev)) {
++ adev = acpi_pci_find_companion(&dev->dev);
++ ACPI_COMPANION_SET(&dev->dev, adev);
++ }
++
++ if (adev && acpi_device_power_manageable(adev))
++ return true;
++
+ /*
+ * Look for a special _DSD property for the root port and if it
+ * is set we know the hierarchy behind it supports D3 just fine.
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index a123f6e21f08a..08b9d025a3e81 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1772,8 +1772,6 @@ static int bq27xxx_battery_status(struct bq27xxx_device_info *di,
+ status = POWER_SUPPLY_STATUS_FULL;
+ else if (di->cache.flags & BQ27000_FLAG_CHGS)
+ status = POWER_SUPPLY_STATUS_CHARGING;
+- else if (power_supply_am_i_supplied(di->bat) > 0)
+- status = POWER_SUPPLY_STATUS_NOT_CHARGING;
+ else
+ status = POWER_SUPPLY_STATUS_DISCHARGING;
+ } else if (di->opts & BQ27Z561_O_BITS) {
+@@ -1792,6 +1790,10 @@ static int bq27xxx_battery_status(struct bq27xxx_device_info *di,
+ status = POWER_SUPPLY_STATUS_CHARGING;
+ }
+
++ if ((status == POWER_SUPPLY_STATUS_DISCHARGING) &&
++ (power_supply_am_i_supplied(di->bat) > 0))
++ status = POWER_SUPPLY_STATUS_NOT_CHARGING;
++
+ val->intval = status;
+
+ return 0;
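The bq27xxx hunk moves the power_supply_am_i_supplied() check out of the per-chip-family branch and applies it once to the computed status, so every family reports NOT_CHARGING instead of DISCHARGING when an external supply is present. A small standalone sketch of the reordered decision, with illustrative enum values rather than the power-supply ABI constants:

#include <stdbool.h>
#include <stdio.h>

enum status { DISCHARGING, CHARGING, FULL, NOT_CHARGING };

static enum status battery_status(bool full, bool charging, bool supplied)
{
        enum status st;

        if (full)
                st = FULL;
        else if (charging)
                st = CHARGING;
        else
                st = DISCHARGING;

        /* one common override instead of duplicating the supplied
         * check inside each chip-family branch */
        if (st == DISCHARGING && supplied)
                st = NOT_CHARGING;

        return st;
}

int main(void)
{
        /* discharging flags but a supply present: NOT_CHARGING (3) */
        printf("%d\n", battery_status(false, false, true));
        return 0;
}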
+diff --git a/drivers/power/supply/test_power.c b/drivers/power/supply/test_power.c
+index 04acd76bbaa12..4895ee5e63a9a 100644
+--- a/drivers/power/supply/test_power.c
++++ b/drivers/power/supply/test_power.c
+@@ -353,6 +353,7 @@ static int param_set_ac_online(const char *key, const struct kernel_param *kp)
+ static int param_get_ac_online(char *buffer, const struct kernel_param *kp)
+ {
+ strcpy(buffer, map_get_key(map_ac_online, ac_online, "unknown"));
++ strcat(buffer, "\n");
+ return strlen(buffer);
+ }
+
+@@ -366,6 +367,7 @@ static int param_set_usb_online(const char *key, const struct kernel_param *kp)
+ static int param_get_usb_online(char *buffer, const struct kernel_param *kp)
+ {
+ strcpy(buffer, map_get_key(map_ac_online, usb_online, "unknown"));
++ strcat(buffer, "\n");
+ return strlen(buffer);
+ }
+
+@@ -380,6 +382,7 @@ static int param_set_battery_status(const char *key,
+ static int param_get_battery_status(char *buffer, const struct kernel_param *kp)
+ {
+ strcpy(buffer, map_get_key(map_status, battery_status, "unknown"));
++ strcat(buffer, "\n");
+ return strlen(buffer);
+ }
+
+@@ -394,6 +397,7 @@ static int param_set_battery_health(const char *key,
+ static int param_get_battery_health(char *buffer, const struct kernel_param *kp)
+ {
+ strcpy(buffer, map_get_key(map_health, battery_health, "unknown"));
++ strcat(buffer, "\n");
+ return strlen(buffer);
+ }
+
+@@ -409,6 +413,7 @@ static int param_get_battery_present(char *buffer,
+ const struct kernel_param *kp)
+ {
+ strcpy(buffer, map_get_key(map_present, battery_present, "unknown"));
++ strcat(buffer, "\n");
+ return strlen(buffer);
+ }
+
+@@ -426,6 +431,7 @@ static int param_get_battery_technology(char *buffer,
+ {
+ strcpy(buffer,
+ map_get_key(map_technology, battery_technology, "unknown"));
++ strcat(buffer, "\n");
+ return strlen(buffer);
+ }
+
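Each of the test_power getters above gains a trailing newline because whatever a module-parameter "get" callback writes is returned to userspace verbatim, so reading the parameter file otherwise prints without a line break. A trivial standalone model of the fix (the function name is illustrative):

#include <stdio.h>
#include <string.h>

static int param_get(char *buffer, const char *value)
{
        strcpy(buffer, value);
        strcat(buffer, "\n");            /* the one-line fix */
        return strlen(buffer);
}

int main(void)
{
        char buf[32];

        param_get(buf, "charging");
        fputs(buf, stdout);              /* "charging" plus newline */
        return 0;
}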
+diff --git a/drivers/remoteproc/remoteproc_debugfs.c b/drivers/remoteproc/remoteproc_debugfs.c
+index 2e3b3e22e1d01..7ca823f6aa638 100644
+--- a/drivers/remoteproc/remoteproc_debugfs.c
++++ b/drivers/remoteproc/remoteproc_debugfs.c
+@@ -94,7 +94,7 @@ static ssize_t rproc_coredump_write(struct file *filp,
+ goto out;
+ }
+
+- if (!strncmp(buf, "disable", count)) {
++ if (!strncmp(buf, "disabled", count)) {
+ rproc->dump_conf = RPROC_COREDUMP_DISABLED;
+ } else if (!strncmp(buf, "inline", count)) {
+ rproc->dump_conf = RPROC_COREDUMP_INLINE;
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index f40312b16da06..b5570c83a28c6 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -970,7 +970,7 @@ static int qcom_glink_rx_open_ack(struct qcom_glink *glink, unsigned int lcid)
+ return -EINVAL;
+ }
+
+- complete(&channel->open_ack);
++ complete_all(&channel->open_ack);
+
+ return 0;
+ }
+@@ -1178,7 +1178,7 @@ static int qcom_glink_announce_create(struct rpmsg_device *rpdev)
+ __be32 *val = defaults;
+ int size;
+
+- if (glink->intentless)
++ if (glink->intentless || !completion_done(&channel->open_ack))
+ return 0;
+
+ prop = of_find_property(np, "qcom,intents", NULL);
+@@ -1413,7 +1413,7 @@ static int qcom_glink_rx_open(struct qcom_glink *glink, unsigned int rcid,
+ channel->rcid = ret;
+ spin_unlock_irqrestore(&glink->idr_lock, flags);
+
+- complete(&channel->open_req);
++ complete_all(&channel->open_req);
+
+ if (create_device) {
+ rpdev = kzalloc(sizeof(*rpdev), GFP_KERNEL);
+diff --git a/drivers/rtc/rtc-rx8010.c b/drivers/rtc/rtc-rx8010.c
+index fe010151ec8f2..08c93d4924946 100644
+--- a/drivers/rtc/rtc-rx8010.c
++++ b/drivers/rtc/rtc-rx8010.c
+@@ -407,16 +407,26 @@ static int rx8010_ioctl(struct device *dev, unsigned int cmd, unsigned long arg)
+ }
+ }
+
+-static struct rtc_class_ops rx8010_rtc_ops = {
++static const struct rtc_class_ops rx8010_rtc_ops_default = {
+ .read_time = rx8010_get_time,
+ .set_time = rx8010_set_time,
+ .ioctl = rx8010_ioctl,
+ };
+
++static const struct rtc_class_ops rx8010_rtc_ops_alarm = {
++ .read_time = rx8010_get_time,
++ .set_time = rx8010_set_time,
++ .ioctl = rx8010_ioctl,
++ .read_alarm = rx8010_read_alarm,
++ .set_alarm = rx8010_set_alarm,
++ .alarm_irq_enable = rx8010_alarm_irq_enable,
++};
++
+ static int rx8010_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+ {
+ struct i2c_adapter *adapter = client->adapter;
++ const struct rtc_class_ops *rtc_ops;
+ struct rx8010_data *rx8010;
+ int err = 0;
+
+@@ -447,16 +457,16 @@ static int rx8010_probe(struct i2c_client *client,
+
+ if (err) {
+ dev_err(&client->dev, "unable to request IRQ\n");
+- client->irq = 0;
+- } else {
+- rx8010_rtc_ops.read_alarm = rx8010_read_alarm;
+- rx8010_rtc_ops.set_alarm = rx8010_set_alarm;
+- rx8010_rtc_ops.alarm_irq_enable = rx8010_alarm_irq_enable;
++ return err;
+ }
++
++ rtc_ops = &rx8010_rtc_ops_alarm;
++ } else {
++ rtc_ops = &rx8010_rtc_ops_default;
+ }
+
+ rx8010->rtc = devm_rtc_device_register(&client->dev, client->name,
+- &rx8010_rtc_ops, THIS_MODULE);
++ rtc_ops, THIS_MODULE);
+
+ if (IS_ERR(rx8010->rtc)) {
+ dev_err(&client->dev, "unable to register the class device\n");
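The rx8010 change replaces run-time mutation of a single shared rtc_class_ops (which breaks with multiple instances and prevents marking it const) with two const tables selected at probe time. A standalone sketch of the selection pattern, with made-up ops and a flag standing in for a successful IRQ request:

#include <stdio.h>

struct ops {
        void (*read_time)(void);
        void (*read_alarm)(void);        /* NULL: no alarm support */
};

static void read_time(void)  { puts("read_time"); }
static void read_alarm(void) { puts("read_alarm"); }

static const struct ops ops_default = {
        .read_time = read_time,
};

static const struct ops ops_alarm = {
        .read_time  = read_time,
        .read_alarm = read_alarm,
};

static const struct ops *probe(int have_irq)
{
        /* no writable global state: each device picks the right table */
        return have_irq ? &ops_alarm : &ops_default;
}

int main(void)
{
        probe(1)->read_alarm();
        return 0;
}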
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 1ea046324e8f6..c4afca0d773c6 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -50,6 +50,7 @@ static inline int ap_test_bit(unsigned int *ptr, unsigned int nr)
+ #define AP_RESPONSE_NO_FIRST_PART 0x13
+ #define AP_RESPONSE_MESSAGE_TOO_BIG 0x15
+ #define AP_RESPONSE_REQ_FAC_NOT_INST 0x16
++#define AP_RESPONSE_INVALID_DOMAIN 0x42
+
+ /*
+ * Known device types
+diff --git a/drivers/s390/crypto/ap_queue.c b/drivers/s390/crypto/ap_queue.c
+index 688ebebbf98cb..99f73bbb1c751 100644
+--- a/drivers/s390/crypto/ap_queue.c
++++ b/drivers/s390/crypto/ap_queue.c
+@@ -237,6 +237,9 @@ static enum ap_sm_wait ap_sm_write(struct ap_queue *aq)
+ case AP_RESPONSE_RESET_IN_PROGRESS:
+ aq->sm_state = AP_SM_STATE_RESET_WAIT;
+ return AP_SM_WAIT_TIMEOUT;
++ case AP_RESPONSE_INVALID_DOMAIN:
++ AP_DBF(DBF_WARN, "AP_RESPONSE_INVALID_DOMAIN on NQAP\n");
++ fallthrough;
+ case AP_RESPONSE_MESSAGE_TOO_BIG:
+ case AP_RESPONSE_REQ_FAC_NOT_INST:
+ list_del_init(&ap_msg->list);
+@@ -278,11 +281,6 @@ static enum ap_sm_wait ap_sm_reset(struct ap_queue *aq)
+ aq->sm_state = AP_SM_STATE_RESET_WAIT;
+ aq->interrupt = AP_INTR_DISABLED;
+ return AP_SM_WAIT_TIMEOUT;
+- case AP_RESPONSE_BUSY:
+- return AP_SM_WAIT_TIMEOUT;
+- case AP_RESPONSE_Q_NOT_AVAIL:
+- case AP_RESPONSE_DECONFIGURED:
+- case AP_RESPONSE_CHECKSTOPPED:
+ default:
+ aq->sm_state = AP_SM_STATE_BORKED;
+ return AP_SM_WAIT_NONE;
+diff --git a/drivers/s390/crypto/zcrypt_debug.h b/drivers/s390/crypto/zcrypt_debug.h
+index 241dbb5f75bf3..3225489a1c411 100644
+--- a/drivers/s390/crypto/zcrypt_debug.h
++++ b/drivers/s390/crypto/zcrypt_debug.h
+@@ -21,6 +21,14 @@
+
+ #define ZCRYPT_DBF(...) \
+ debug_sprintf_event(zcrypt_dbf_info, ##__VA_ARGS__)
++#define ZCRYPT_DBF_ERR(...) \
++ debug_sprintf_event(zcrypt_dbf_info, DBF_ERR, ##__VA_ARGS__)
++#define ZCRYPT_DBF_WARN(...) \
++ debug_sprintf_event(zcrypt_dbf_info, DBF_WARN, ##__VA_ARGS__)
++#define ZCRYPT_DBF_INFO(...) \
++ debug_sprintf_event(zcrypt_dbf_info, DBF_INFO, ##__VA_ARGS__)
++#define ZCRYPT_DBF_DBG(...) \
++ debug_sprintf_event(zcrypt_dbf_info, DBF_DEBUG, ##__VA_ARGS__)
+
+ extern debug_info_t *zcrypt_dbf_info;
+
+diff --git a/drivers/s390/crypto/zcrypt_error.h b/drivers/s390/crypto/zcrypt_error.h
+index 54a04f8c38ef9..39e626e3a3794 100644
+--- a/drivers/s390/crypto/zcrypt_error.h
++++ b/drivers/s390/crypto/zcrypt_error.h
+@@ -52,7 +52,6 @@ struct error_hdr {
+ #define REP82_ERROR_INVALID_COMMAND 0x30
+ #define REP82_ERROR_MALFORMED_MSG 0x40
+ #define REP82_ERROR_INVALID_SPECIAL_CMD 0x41
+-#define REP82_ERROR_INVALID_DOMAIN_PRECHECK 0x42
+ #define REP82_ERROR_RESERVED_FIELDO 0x50 /* old value */
+ #define REP82_ERROR_WORD_ALIGNMENT 0x60
+ #define REP82_ERROR_MESSAGE_LENGTH 0x80
+@@ -67,7 +66,6 @@ struct error_hdr {
+ #define REP82_ERROR_ZERO_BUFFER_LEN 0xB0
+
+ #define REP88_ERROR_MODULE_FAILURE 0x10
+-
+ #define REP88_ERROR_MESSAGE_TYPE 0x20
+ #define REP88_ERROR_MESSAGE_MALFORMD 0x22
+ #define REP88_ERROR_MESSAGE_LENGTH 0x23
+@@ -85,78 +83,56 @@ static inline int convert_error(struct zcrypt_queue *zq,
+ int queue = AP_QID_QUEUE(zq->queue->qid);
+
+ switch (ehdr->reply_code) {
+- case REP82_ERROR_OPERAND_INVALID:
+- case REP82_ERROR_OPERAND_SIZE:
+- case REP82_ERROR_EVEN_MOD_IN_OPND:
+- case REP88_ERROR_MESSAGE_MALFORMD:
+- case REP82_ERROR_INVALID_DOMAIN_PRECHECK:
+- case REP82_ERROR_INVALID_DOMAIN_PENDING:
+- case REP82_ERROR_INVALID_SPECIAL_CMD:
+- case REP82_ERROR_FILTERED_BY_HYPERVISOR:
+- // REP88_ERROR_INVALID_KEY // '82' CEX2A
+- // REP88_ERROR_OPERAND // '84' CEX2A
+- // REP88_ERROR_OPERAND_EVEN_MOD // '85' CEX2A
+- /* Invalid input data. */
++ case REP82_ERROR_INVALID_MSG_LEN: /* 0x23 */
++ case REP82_ERROR_RESERVD_FIELD: /* 0x24 */
++ case REP82_ERROR_FORMAT_FIELD: /* 0x29 */
++ case REP82_ERROR_MALFORMED_MSG: /* 0x40 */
++ case REP82_ERROR_INVALID_SPECIAL_CMD: /* 0x41 */
++ case REP82_ERROR_MESSAGE_LENGTH: /* 0x80 */
++ case REP82_ERROR_OPERAND_INVALID: /* 0x82 */
++ case REP82_ERROR_OPERAND_SIZE: /* 0x84 */
++ case REP82_ERROR_EVEN_MOD_IN_OPND: /* 0x85 */
++ case REP82_ERROR_INVALID_DOMAIN_PENDING: /* 0x8A */
++ case REP82_ERROR_FILTERED_BY_HYPERVISOR: /* 0x8B */
++ case REP82_ERROR_PACKET_TRUNCATED: /* 0xA0 */
++ case REP88_ERROR_MESSAGE_MALFORMD: /* 0x22 */
++ case REP88_ERROR_KEY_TYPE: /* 0x34 */
++ /* RY indicates malformed request */
+ ZCRYPT_DBF(DBF_WARN,
+- "device=%02x.%04x reply=0x%02x => rc=EINVAL\n",
++ "dev=%02x.%04x RY=0x%02x => rc=EINVAL\n",
+ card, queue, ehdr->reply_code);
+ return -EINVAL;
+- case REP82_ERROR_MESSAGE_TYPE:
+- // REP88_ERROR_MESSAGE_TYPE // '20' CEX2A
++ case REP82_ERROR_MACHINE_FAILURE: /* 0x10 */
++ case REP82_ERROR_MESSAGE_TYPE: /* 0x20 */
++ case REP82_ERROR_TRANSPORT_FAIL: /* 0x90 */
+ /*
+- * To sent a message of the wrong type is a bug in the
+- * device driver. Send error msg, disable the device
+- * and then repeat the request.
++ * Msg to wrong type or card/infrastructure failure.
++ * Trigger rescan of the ap bus, trigger retry request.
+ */
+ atomic_set(&zcrypt_rescan_req, 1);
+- zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
+- card, queue);
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x reply=0x%02x => online=0 rc=EAGAIN\n",
+- card, queue, ehdr->reply_code);
+- return -EAGAIN;
+- case REP82_ERROR_TRANSPORT_FAIL:
+- /* Card or infrastructure failure, disable card */
+- atomic_set(&zcrypt_rescan_req, 1);
+- zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
+- card, queue);
+ /* For type 86 response show the apfs value (failure reason) */
+- if (ehdr->type == TYPE86_RSP_CODE) {
++ if (ehdr->reply_code == REP82_ERROR_TRANSPORT_FAIL &&
++ ehdr->type == TYPE86_RSP_CODE) {
+ struct {
+ struct type86_hdr hdr;
+ struct type86_fmt2_ext fmt2;
+ } __packed * head = reply->msg;
+ unsigned int apfs = *((u32 *)head->fmt2.apfs);
+
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x reply=0x%02x apfs=0x%x => online=0 rc=EAGAIN\n",
+- card, queue, apfs, ehdr->reply_code);
++ ZCRYPT_DBF(DBF_WARN,
++ "dev=%02x.%04x RY=0x%02x apfs=0x%x => bus rescan, rc=EAGAIN\n",
++ card, queue, ehdr->reply_code, apfs);
+ } else
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x reply=0x%02x => online=0 rc=EAGAIN\n",
++ ZCRYPT_DBF(DBF_WARN,
++ "dev=%02x.%04x RY=0x%02x => bus rescan, rc=EAGAIN\n",
+ card, queue, ehdr->reply_code);
+ return -EAGAIN;
+- case REP82_ERROR_MACHINE_FAILURE:
+- // REP88_ERROR_MODULE_FAILURE // '10' CEX2A
+- /* If a card fails disable it and repeat the request. */
+- atomic_set(&zcrypt_rescan_req, 1);
+- zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
+- card, queue);
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x reply=0x%02x => online=0 rc=EAGAIN\n",
+- card, queue, ehdr->reply_code);
+- return -EAGAIN;
+ default:
+- zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
+- card, queue);
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x reply=0x%02x => online=0 rc=EAGAIN\n",
++ /* Assume request is valid and a retry will be worth it */
++ ZCRYPT_DBF(DBF_WARN,
++ "dev=%02x.%04x RY=0x%02x => rc=EAGAIN\n",
+ card, queue, ehdr->reply_code);
+- return -EAGAIN; /* repeat the request on a different device. */
++ return -EAGAIN;
+ }
+ }
+
+diff --git a/drivers/s390/crypto/zcrypt_msgtype50.c b/drivers/s390/crypto/zcrypt_msgtype50.c
+index 7aedc338b4459..88916addd513e 100644
+--- a/drivers/s390/crypto/zcrypt_msgtype50.c
++++ b/drivers/s390/crypto/zcrypt_msgtype50.c
+@@ -356,15 +356,15 @@ static int convert_type80(struct zcrypt_queue *zq,
+ if (t80h->len < sizeof(*t80h) + outputdatalength) {
+ /* The result is too short, the CEXxA card may not do that.. */
+ zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
++ pr_err("Crypto dev=%02x.%04x code=0x%02x => online=0 rc=EAGAIN\n",
+ AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid));
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x code=0x%02x => online=0 rc=EAGAIN\n",
+- AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid),
+- t80h->code);
+- return -EAGAIN; /* repeat the request on a different device. */
++ AP_QID_QUEUE(zq->queue->qid),
++ t80h->code);
++ ZCRYPT_DBF_ERR("dev=%02x.%04x code=0x%02x => online=0 rc=EAGAIN\n",
++ AP_QID_CARD(zq->queue->qid),
++ AP_QID_QUEUE(zq->queue->qid),
++ t80h->code);
++ return -EAGAIN;
+ }
+ if (zq->zcard->user_space_type == ZCRYPT_CEX2A)
+ BUG_ON(t80h->len > CEX2A_MAX_RESPONSE_SIZE);
+@@ -376,10 +376,10 @@ static int convert_type80(struct zcrypt_queue *zq,
+ return 0;
+ }
+
+-static int convert_response(struct zcrypt_queue *zq,
+- struct ap_message *reply,
+- char __user *outputdata,
+- unsigned int outputdatalength)
++static int convert_response_cex2a(struct zcrypt_queue *zq,
++ struct ap_message *reply,
++ char __user *outputdata,
++ unsigned int outputdatalength)
+ {
+ /* Response type byte is the second byte in the response. */
+ unsigned char rtype = ((unsigned char *) reply->msg)[1];
+@@ -393,15 +393,15 @@ static int convert_response(struct zcrypt_queue *zq,
+ outputdata, outputdatalength);
+ default: /* Unknown response type, this should NEVER EVER happen */
+ zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
++ pr_err("Crypto dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n",
+ AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid));
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x rtype=0x%02x => online=0 rc=EAGAIN\n",
+- AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid),
+- (unsigned int) rtype);
+- return -EAGAIN; /* repeat the request on a different device. */
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) rtype);
++ ZCRYPT_DBF_ERR("dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n",
++ AP_QID_CARD(zq->queue->qid),
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) rtype);
++ return -EAGAIN;
+ }
+ }
+
+@@ -476,8 +476,9 @@ static long zcrypt_cex2a_modexpo(struct zcrypt_queue *zq,
+ if (rc == 0) {
+ rc = ap_msg.rc;
+ if (rc == 0)
+- rc = convert_response(zq, &ap_msg, mex->outputdata,
+- mex->outputdatalength);
++ rc = convert_response_cex2a(zq, &ap_msg,
++ mex->outputdata,
++ mex->outputdatalength);
+ } else
+ /* Signal pending. */
+ ap_cancel_message(zq->queue, &ap_msg);
+@@ -520,8 +521,9 @@ static long zcrypt_cex2a_modexpo_crt(struct zcrypt_queue *zq,
+ if (rc == 0) {
+ rc = ap_msg.rc;
+ if (rc == 0)
+- rc = convert_response(zq, &ap_msg, crt->outputdata,
+- crt->outputdatalength);
++ rc = convert_response_cex2a(zq, &ap_msg,
++ crt->outputdata,
++ crt->outputdatalength);
+ } else
+ /* Signal pending. */
+ ap_cancel_message(zq->queue, &ap_msg);
+diff --git a/drivers/s390/crypto/zcrypt_msgtype6.c b/drivers/s390/crypto/zcrypt_msgtype6.c
+index d77991c74c252..21ea3b73c8674 100644
+--- a/drivers/s390/crypto/zcrypt_msgtype6.c
++++ b/drivers/s390/crypto/zcrypt_msgtype6.c
+@@ -650,23 +650,22 @@ static int convert_type86_ica(struct zcrypt_queue *zq,
+ (service_rc == 8 && service_rs == 72) ||
+ (service_rc == 8 && service_rs == 770) ||
+ (service_rc == 12 && service_rs == 769)) {
+- ZCRYPT_DBF(DBF_DEBUG,
+- "device=%02x.%04x rc/rs=%d/%d => rc=EINVAL\n",
+- AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid),
+- (int) service_rc, (int) service_rs);
++ ZCRYPT_DBF_WARN("dev=%02x.%04x rc/rs=%d/%d => rc=EINVAL\n",
++ AP_QID_CARD(zq->queue->qid),
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) service_rc, (int) service_rs);
+ return -EINVAL;
+ }
+ zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
++ pr_err("Crypto dev=%02x.%04x rc/rs=%d/%d online=0 rc=EAGAIN\n",
+ AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid));
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x rc/rs=%d/%d => online=0 rc=EAGAIN\n",
+- AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid),
+- (int) service_rc, (int) service_rs);
+- return -EAGAIN; /* repeat the request on a different device. */
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) service_rc, (int) service_rs);
++ ZCRYPT_DBF_ERR("dev=%02x.%04x rc/rs=%d/%d => online=0 rc=EAGAIN\n",
++ AP_QID_CARD(zq->queue->qid),
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) service_rc, (int) service_rs);
++ return -EAGAIN;
+ }
+ data = msg->text;
+ reply_len = msg->length - 2;
+@@ -800,17 +799,18 @@ static int convert_response_ica(struct zcrypt_queue *zq,
+ return convert_type86_ica(zq, reply,
+ outputdata, outputdatalength);
+ fallthrough; /* wrong cprb version is an unknown response */
+- default: /* Unknown response type, this should NEVER EVER happen */
++ default:
++ /* Unknown response type, this should NEVER EVER happen */
+ zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
++ pr_err("Crypto dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n",
+ AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid));
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x rtype=0x%02x => online=0 rc=EAGAIN\n",
+- AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid),
+- (int) msg->hdr.type);
+- return -EAGAIN; /* repeat the request on a different device. */
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) msg->hdr.type);
++ ZCRYPT_DBF_ERR("dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n",
++ AP_QID_CARD(zq->queue->qid),
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) msg->hdr.type);
++ return -EAGAIN;
+ }
+ }
+
+@@ -836,15 +836,15 @@ static int convert_response_xcrb(struct zcrypt_queue *zq,
+ default: /* Unknown response type, this should NEVER EVER happen */
+ xcRB->status = 0x0008044DL; /* HDD_InvalidParm */
+ zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
++ pr_err("Crypto dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n",
+ AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid));
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x rtype=0x%02x => online=0 rc=EAGAIN\n",
+- AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid),
+- (int) msg->hdr.type);
+- return -EAGAIN; /* repeat the request on a different device. */
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) msg->hdr.type);
++ ZCRYPT_DBF_ERR("dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n",
++ AP_QID_CARD(zq->queue->qid),
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) msg->hdr.type);
++ return -EAGAIN;
+ }
+ }
+
+@@ -865,15 +865,15 @@ static int convert_response_ep11_xcrb(struct zcrypt_queue *zq,
+ fallthrough; /* wrong cprb version is an unknown resp */
+ default: /* Unknown response type, this should NEVER EVER happen */
+ zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
++ pr_err("Crypto dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n",
+ AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid));
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x rtype=0x%02x => online=0 rc=EAGAIN\n",
+- AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid),
+- (int) msg->hdr.type);
+- return -EAGAIN; /* repeat the request on a different device. */
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) msg->hdr.type);
++ ZCRYPT_DBF_ERR("dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n",
++ AP_QID_CARD(zq->queue->qid),
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) msg->hdr.type);
++ return -EAGAIN;
+ }
+ }
+
+@@ -895,15 +895,15 @@ static int convert_response_rng(struct zcrypt_queue *zq,
+ fallthrough; /* wrong cprb version is an unknown response */
+ default: /* Unknown response type, this should NEVER EVER happen */
+ zq->online = 0;
+- pr_err("Cryptographic device %02x.%04x failed and was set offline\n",
++ pr_err("Crypto dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n",
+ AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid));
+- ZCRYPT_DBF(DBF_ERR,
+- "device=%02x.%04x rtype=0x%02x => online=0 rc=EAGAIN\n",
+- AP_QID_CARD(zq->queue->qid),
+- AP_QID_QUEUE(zq->queue->qid),
+- (int) msg->hdr.type);
+- return -EAGAIN; /* repeat the request on a different device. */
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) msg->hdr.type);
++ ZCRYPT_DBF_ERR("dev=%02x.%04x unknown response type 0x%02x => online=0 rc=EAGAIN\n",
++ AP_QID_CARD(zq->queue->qid),
++ AP_QID_QUEUE(zq->queue->qid),
++ (int) msg->hdr.type);
++ return -EAGAIN;
+ }
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 5d93ccc731535..5ab955007a07b 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -157,6 +157,14 @@ qla2x00_sysfs_write_fw_dump(struct file *filp, struct kobject *kobj,
+ vha->host_no);
+ }
+ break;
++ case 10:
++ if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
++ ql_log(ql_log_info, vha, 0x70e9,
++ "Issuing MPI firmware dump on host#%ld.\n",
++ vha->host_no);
++ ha->isp_ops->mpi_fw_dump(vha, 0);
++ }
++ break;
+ }
+ return count;
+ }
+@@ -744,8 +752,6 @@ qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
+ qla83xx_idc_audit(vha, IDC_AUDIT_TIMESTAMP);
+ qla83xx_idc_unlock(vha, 0);
+ break;
+- } else if (IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+- qla27xx_reset_mpi(vha);
+ } else {
+ /* Make sure FC side is not in reset */
+ WARN_ON_ONCE(qla2x00_wait_for_hba_online(vha) !=
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 0ced18f3104e5..76711b34643a8 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -938,6 +938,5 @@ extern void qla24xx_process_purex_list(struct purex_list *);
+
+ /* nvme.c */
+ void qla_nvme_unregister_remote_port(struct fc_port *fcport);
+-void qla27xx_reset_mpi(scsi_qla_host_t *vha);
+ void qla_handle_els_plogi_done(scsi_qla_host_t *vha, struct event_arg *ea);
+ #endif /* _QLA_GBL_H */
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 8d4b651e14422..91f2cfc12aaa2 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -3298,6 +3298,8 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ j, fwdt->dump_size);
+ dump_size += fwdt->dump_size;
+ }
++ /* Add space for spare MPI fw dump. */
++ dump_size += ha->fwdt[1].dump_size;
+ } else {
+ req_q_size = req->length * sizeof(request_t);
+ rsp_q_size = rsp->length * sizeof(response_t);
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 25e0a16847632..96db78c882009 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -767,7 +767,7 @@ qla27xx_handle_8200_aen(scsi_qla_host_t *vha, uint16_t *mb)
+ ql_log(ql_log_warn, vha, 0x02f0,
+ "MPI Heartbeat stop. MPI reset is%s needed. "
+ "MB0[%xh] MB1[%xh] MB2[%xh] MB3[%xh]\n",
+- mb[0] & BIT_8 ? "" : " not",
++ mb[1] & BIT_8 ? "" : " not",
+ mb[0], mb[1], mb[2], mb[3]);
+
+ if ((mb[1] & BIT_8) == 0)
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 2a88e7e79bd50..9028bcddc98c9 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -1229,14 +1229,15 @@ void qlt_schedule_sess_for_deletion(struct fc_port *sess)
+ case DSC_DELETE_PEND:
+ return;
+ case DSC_DELETED:
+- if (tgt && tgt->tgt_stop && (tgt->sess_count == 0))
+- wake_up_all(&tgt->waitQ);
+- if (sess->vha->fcport_count == 0)
+- wake_up_all(&sess->vha->fcport_waitQ);
+-
+ if (!sess->plogi_link[QLT_PLOGI_LINK_SAME_WWN] &&
+- !sess->plogi_link[QLT_PLOGI_LINK_CONFLICT])
++ !sess->plogi_link[QLT_PLOGI_LINK_CONFLICT]) {
++ if (tgt && tgt->tgt_stop && tgt->sess_count == 0)
++ wake_up_all(&tgt->waitQ);
++
++ if (sess->vha->fcport_count == 0)
++ wake_up_all(&sess->vha->fcport_waitQ);
+ return;
++ }
+ break;
+ case DSC_UPD_FCPORT:
+ /*
+diff --git a/drivers/scsi/qla2xxx/qla_tmpl.c b/drivers/scsi/qla2xxx/qla_tmpl.c
+index 8dc82cfd38b27..2847243f6cfd3 100644
+--- a/drivers/scsi/qla2xxx/qla_tmpl.c
++++ b/drivers/scsi/qla2xxx/qla_tmpl.c
+@@ -12,33 +12,6 @@
+ #define IOBASE(vha) IOBAR(ISPREG(vha))
+ #define INVALID_ENTRY ((struct qla27xx_fwdt_entry *)0xffffffffffffffffUL)
+
+-/* hardware_lock assumed held. */
+-static void
+-qla27xx_write_remote_reg(struct scsi_qla_host *vha,
+- u32 addr, u32 data)
+-{
+- struct device_reg_24xx __iomem *reg = &vha->hw->iobase->isp24;
+-
+- ql_dbg(ql_dbg_misc, vha, 0xd300,
+- "%s: addr/data = %xh/%xh\n", __func__, addr, data);
+-
+-	wrt_reg_dword(&reg->iobase_addr, 0x40);
+-	wrt_reg_dword(&reg->iobase_c4, data);
+-	wrt_reg_dword(&reg->iobase_window, addr);
+-}
+-
+-void
+-qla27xx_reset_mpi(scsi_qla_host_t *vha)
+-{
+- ql_dbg(ql_dbg_misc + ql_dbg_verbose, vha, 0xd301,
+- "Entered %s.\n", __func__);
+-
+- qla27xx_write_remote_reg(vha, 0x104050, 0x40004);
+- qla27xx_write_remote_reg(vha, 0x10405c, 0x4);
+-
+- vha->hw->stat.num_mpi_reset++;
+-}
+-
+ static inline void
+ qla27xx_insert16(uint16_t value, void *buf, ulong *len)
+ {
+@@ -1028,7 +1001,6 @@ void
+ qla27xx_mpi_fwdump(scsi_qla_host_t *vha, int hardware_locked)
+ {
+ ulong flags = 0;
+- bool need_mpi_reset = true;
+
+ #ifndef __CHECKER__
+ if (!hardware_locked)
+@@ -1036,14 +1008,20 @@ qla27xx_mpi_fwdump(scsi_qla_host_t *vha, int hardware_locked)
+ #endif
+ if (!vha->hw->mpi_fw_dump) {
+ ql_log(ql_log_warn, vha, 0x02f3, "-> mpi_fwdump no buffer\n");
+- } else if (vha->hw->mpi_fw_dumped) {
+- ql_log(ql_log_warn, vha, 0x02f4,
+- "-> MPI firmware already dumped (%p) -- ignoring request\n",
+- vha->hw->mpi_fw_dump);
+ } else {
+ struct fwdt *fwdt = &vha->hw->fwdt[1];
+ ulong len;
+ void *buf = vha->hw->mpi_fw_dump;
++ bool walk_template_only = false;
++
++ if (vha->hw->mpi_fw_dumped) {
++ /* Use the spare area for any further dumps. */
++ buf += fwdt->dump_size;
++ walk_template_only = true;
++ ql_log(ql_log_warn, vha, 0x02f4,
++ "-> MPI firmware already dumped -- dump saving to temporary buffer %p.\n",
++ buf);
++ }
+
+ ql_log(ql_log_warn, vha, 0x02f5, "-> fwdt1 running...\n");
+ if (!fwdt->template) {
+@@ -1058,9 +1036,10 @@ qla27xx_mpi_fwdump(scsi_qla_host_t *vha, int hardware_locked)
+ ql_log(ql_log_warn, vha, 0x02f7,
+ "-> fwdt1 fwdump residual=%+ld\n",
+ fwdt->dump_size - len);
+- } else {
+- need_mpi_reset = false;
+ }
++ vha->hw->stat.num_mpi_reset++;
++ if (walk_template_only)
++ goto bailout;
+
+ vha->hw->mpi_fw_dump_len = len;
+ vha->hw->mpi_fw_dumped = 1;
+@@ -1072,8 +1051,6 @@ qla27xx_mpi_fwdump(scsi_qla_host_t *vha, int hardware_locked)
+ }
+
+ bailout:
+- if (need_mpi_reset)
+- qla27xx_reset_mpi(vha);
+ #ifndef __CHECKER__
+ if (!hardware_locked)
+ spin_unlock_irqrestore(&vha->hw->hardware_lock, flags);
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 7affaaf8b98e0..198130b6a9963 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -530,7 +530,7 @@ static void scsi_uninit_cmd(struct scsi_cmnd *cmd)
+ }
+ }
+
+-static void scsi_free_sgtables(struct scsi_cmnd *cmd)
++void scsi_free_sgtables(struct scsi_cmnd *cmd)
+ {
+ if (cmd->sdb.table.nents)
+ sg_free_table_chained(&cmd->sdb.table,
+@@ -539,6 +539,7 @@ static void scsi_free_sgtables(struct scsi_cmnd *cmd)
+ sg_free_table_chained(&cmd->prot_sdb->table,
+ SCSI_INLINE_PROT_SG_CNT);
+ }
++EXPORT_SYMBOL_GPL(scsi_free_sgtables);
+
+ static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd)
+ {
+@@ -966,7 +967,7 @@ static inline bool scsi_cmd_needs_dma_drain(struct scsi_device *sdev,
+ }
+
+ /**
+- * scsi_init_io - SCSI I/O initialization function.
++ * scsi_alloc_sgtables - allocate S/G tables for a command
+ * @cmd: command descriptor we wish to initialize
+ *
+ * Returns:
+@@ -974,7 +975,7 @@ static inline bool scsi_cmd_needs_dma_drain(struct scsi_device *sdev,
+ * * BLK_STS_RESOURCE - if the failure is retryable
+ * * BLK_STS_IOERR - if the failure is fatal
+ */
+-blk_status_t scsi_init_io(struct scsi_cmnd *cmd)
++blk_status_t scsi_alloc_sgtables(struct scsi_cmnd *cmd)
+ {
+ struct scsi_device *sdev = cmd->device;
+ struct request *rq = cmd->request;
+@@ -1066,7 +1067,7 @@ out_free_sgtables:
+ scsi_free_sgtables(cmd);
+ return ret;
+ }
+-EXPORT_SYMBOL(scsi_init_io);
++EXPORT_SYMBOL(scsi_alloc_sgtables);
+
+ /**
+ * scsi_initialize_rq - initialize struct scsi_cmnd partially
+@@ -1154,7 +1155,7 @@ static blk_status_t scsi_setup_scsi_cmnd(struct scsi_device *sdev,
+ * submit a request without an attached bio.
+ */
+ if (req->bio) {
+- blk_status_t ret = scsi_init_io(cmd);
++ blk_status_t ret = scsi_alloc_sgtables(cmd);
+ if (unlikely(ret != BLK_STS_OK))
+ return ret;
+ } else {
+@@ -1194,7 +1195,6 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev,
+ struct request *req)
+ {
+ struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);
+- blk_status_t ret;
+
+ if (!blk_rq_bytes(req))
+ cmd->sc_data_direction = DMA_NONE;
+@@ -1204,14 +1204,8 @@ static blk_status_t scsi_setup_cmnd(struct scsi_device *sdev,
+ cmd->sc_data_direction = DMA_FROM_DEVICE;
+
+ if (blk_rq_is_scsi(req))
+- ret = scsi_setup_scsi_cmnd(sdev, req);
+- else
+- ret = scsi_setup_fs_cmnd(sdev, req);
+-
+- if (ret != BLK_STS_OK)
+- scsi_free_sgtables(cmd);
+-
+- return ret;
++ return scsi_setup_scsi_cmnd(sdev, req);
++ return scsi_setup_fs_cmnd(sdev, req);
+ }
+
+ static blk_status_t
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 16503e22691ed..e93a9a874004f 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -866,7 +866,7 @@ static blk_status_t sd_setup_unmap_cmnd(struct scsi_cmnd *cmd)
+ cmd->transfersize = data_len;
+ rq->timeout = SD_TIMEOUT;
+
+- return scsi_init_io(cmd);
++ return scsi_alloc_sgtables(cmd);
+ }
+
+ static blk_status_t sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd,
+@@ -897,7 +897,7 @@ static blk_status_t sd_setup_write_same16_cmnd(struct scsi_cmnd *cmd,
+ cmd->transfersize = data_len;
+ rq->timeout = unmap ? SD_TIMEOUT : SD_WRITE_SAME_TIMEOUT;
+
+- return scsi_init_io(cmd);
++ return scsi_alloc_sgtables(cmd);
+ }
+
+ static blk_status_t sd_setup_write_same10_cmnd(struct scsi_cmnd *cmd,
+@@ -928,7 +928,7 @@ static blk_status_t sd_setup_write_same10_cmnd(struct scsi_cmnd *cmd,
+ cmd->transfersize = data_len;
+ rq->timeout = unmap ? SD_TIMEOUT : SD_WRITE_SAME_TIMEOUT;
+
+- return scsi_init_io(cmd);
++ return scsi_alloc_sgtables(cmd);
+ }
+
+ static blk_status_t sd_setup_write_zeroes_cmnd(struct scsi_cmnd *cmd)
+@@ -1069,7 +1069,7 @@ static blk_status_t sd_setup_write_same_cmnd(struct scsi_cmnd *cmd)
+ * knows how much to actually write.
+ */
+ rq->__data_len = sdp->sector_size;
+- ret = scsi_init_io(cmd);
++ ret = scsi_alloc_sgtables(cmd);
+ rq->__data_len = blk_rq_bytes(rq);
+
+ return ret;
+@@ -1187,23 +1187,24 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
+ unsigned int dif;
+ bool dix;
+
+- ret = scsi_init_io(cmd);
++ ret = scsi_alloc_sgtables(cmd);
+ if (ret != BLK_STS_OK)
+ return ret;
+
++ ret = BLK_STS_IOERR;
+ if (!scsi_device_online(sdp) || sdp->changed) {
+ scmd_printk(KERN_ERR, cmd, "device offline or changed\n");
+- return BLK_STS_IOERR;
++ goto fail;
+ }
+
+ if (blk_rq_pos(rq) + blk_rq_sectors(rq) > get_capacity(rq->rq_disk)) {
+ scmd_printk(KERN_ERR, cmd, "access beyond end of device\n");
+- return BLK_STS_IOERR;
++ goto fail;
+ }
+
+ if ((blk_rq_pos(rq) & mask) || (blk_rq_sectors(rq) & mask)) {
+ scmd_printk(KERN_ERR, cmd, "request not aligned to the logical block size\n");
+- return BLK_STS_IOERR;
++ goto fail;
+ }
+
+ /*
+@@ -1225,7 +1226,7 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
+ if (req_op(rq) == REQ_OP_ZONE_APPEND) {
+ ret = sd_zbc_prepare_zone_append(cmd, &lba, nr_blocks);
+ if (ret)
+- return ret;
++ goto fail;
+ }
+
+ fua = rq->cmd_flags & REQ_FUA ? 0x8 : 0;
+@@ -1253,7 +1254,7 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
+ }
+
+ if (unlikely(ret != BLK_STS_OK))
+- return ret;
++ goto fail;
+
+ /*
+ * We shouldn't disconnect in the middle of a sector, so with a dumb
+@@ -1277,10 +1278,12 @@ static blk_status_t sd_setup_read_write_cmnd(struct scsi_cmnd *cmd)
+ blk_rq_sectors(rq)));
+
+ /*
+- * This indicates that the command is ready from our end to be
+- * queued.
++ * This indicates that the command is ready from our end to be queued.
+ */
+ return BLK_STS_OK;
++fail:
++ scsi_free_sgtables(cmd);
++ return ret;
+ }
+
+ static blk_status_t sd_init_command(struct scsi_cmnd *cmd)
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index 3b3a53c6a0de5..7e8fe55f3b339 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -392,15 +392,11 @@ static blk_status_t sr_init_command(struct scsi_cmnd *SCpnt)
+ struct request *rq = SCpnt->request;
+ blk_status_t ret;
+
+- ret = scsi_init_io(SCpnt);
++ ret = scsi_alloc_sgtables(SCpnt);
+ if (ret != BLK_STS_OK)
+- goto out;
++ return ret;
+ cd = scsi_cd(rq->rq_disk);
+
+- /* from here on until we're complete, any goto out
+- * is used for a killable error condition */
+- ret = BLK_STS_IOERR;
+-
+ SCSI_LOG_HLQUEUE(1, scmd_printk(KERN_INFO, SCpnt,
+ "Doing sr request, block = %d\n", block));
+
+@@ -509,12 +505,12 @@ static blk_status_t sr_init_command(struct scsi_cmnd *SCpnt)
+ SCpnt->allowed = MAX_RETRIES;
+
+ /*
+- * This indicates that the command is ready from our end to be
+- * queued.
++ * This indicates that the command is ready from our end to be queued.
+ */
+- ret = BLK_STS_OK;
++ return BLK_STS_OK;
+ out:
+- return ret;
++ scsi_free_sgtables(SCpnt);
++ return BLK_STS_IOERR;
+ }
+
+ static int sr_block_open(struct block_device *bdev, fmode_t mode)
+diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
+index ef60e790a750a..344ba687c13be 100644
+--- a/drivers/soc/qcom/rpmh-internal.h
++++ b/drivers/soc/qcom/rpmh-internal.h
+@@ -8,6 +8,7 @@
+ #define __RPM_INTERNAL_H__
+
+ #include <linux/bitmap.h>
++#include <linux/wait.h>
+ #include <soc/qcom/tcs.h>
+
+ #define TCS_TYPE_NR 4
+@@ -106,6 +107,8 @@ struct rpmh_ctrlr {
+ * @lock: Synchronize state of the controller. If RPMH's cache
+ * lock will also be held, the order is: drv->lock then
+ * cache_lock.
++ * @tcs_wait: Wait queue used to wait for @tcs_in_use to free up a
++ * slot
+ * @client: Handle to the DRV's client.
+ */
+ struct rsc_drv {
+@@ -118,6 +121,7 @@ struct rsc_drv {
+ struct tcs_group tcs[TCS_TYPE_NR];
+ DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
+ spinlock_t lock;
++ wait_queue_head_t tcs_wait;
+ struct rpmh_ctrlr client;
+ };
+
+diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
+index ae66757825813..a297911afe571 100644
+--- a/drivers/soc/qcom/rpmh-rsc.c
++++ b/drivers/soc/qcom/rpmh-rsc.c
+@@ -19,6 +19,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
++#include <linux/wait.h>
+
+ #include <soc/qcom/cmd-db.h>
+ #include <soc/qcom/tcs.h>
+@@ -453,6 +454,7 @@ skip:
+ if (!drv->tcs[ACTIVE_TCS].num_tcs)
+ enable_tcs_irq(drv, i, false);
+ spin_unlock(&drv->lock);
++ wake_up(&drv->tcs_wait);
+ if (req)
+ rpmh_tx_done(req, err);
+ }
+@@ -571,73 +573,34 @@ static int find_free_tcs(struct tcs_group *tcs)
+ }
+
+ /**
+- * tcs_write() - Store messages into a TCS right now, or return -EBUSY.
++ * claim_tcs_for_req() - Claim a tcs in the given tcs_group; only for active.
+ * @drv: The controller.
++ * @tcs: The tcs_group used for ACTIVE_ONLY transfers.
+ * @msg: The data to be sent.
+ *
+- * Grabs a TCS for ACTIVE_ONLY transfers and writes the messages to it.
++ * Claims a tcs in the given tcs_group while making sure that no existing cmd
++ * is in flight that would conflict with the one in @msg.
+ *
+- * If there are no free TCSes for ACTIVE_ONLY transfers or if a command for
+- * the same address is already transferring returns -EBUSY which means the
+- * client should retry shortly.
++ * Context: Must be called with the drv->lock held since that protects
++ * tcs_in_use.
+ *
+- * Return: 0 on success, -EBUSY if client should retry, or an error.
+- * Client should have interrupts enabled for a bit before retrying.
++ * Return: The id of the claimed tcs or -EBUSY if a matching msg is in flight
++ * or the tcs_group is full.
+ */
+-static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
++static int claim_tcs_for_req(struct rsc_drv *drv, struct tcs_group *tcs,
++ const struct tcs_request *msg)
+ {
+- struct tcs_group *tcs;
+- int tcs_id;
+- unsigned long flags;
+ int ret;
+
+- tcs = get_tcs_for_msg(drv, msg);
+- if (IS_ERR(tcs))
+- return PTR_ERR(tcs);
+-
+- spin_lock_irqsave(&drv->lock, flags);
+ /*
+ * The h/w does not like if we send a request to the same address,
+ * when one is already in-flight or being processed.
+ */
+ ret = check_for_req_inflight(drv, tcs, msg);
+ if (ret)
+- goto unlock;
+-
+- ret = find_free_tcs(tcs);
+- if (ret < 0)
+- goto unlock;
+- tcs_id = ret;
+-
+- tcs->req[tcs_id - tcs->offset] = msg;
+- set_bit(tcs_id, drv->tcs_in_use);
+- if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS) {
+- /*
+- * Clear previously programmed WAKE commands in selected
+- * repurposed TCS to avoid triggering them. tcs->slots will be
+- * cleaned from rpmh_flush() by invoking rpmh_rsc_invalidate()
+- */
+- write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0);
+- write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0);
+- enable_tcs_irq(drv, tcs_id, true);
+- }
+- spin_unlock_irqrestore(&drv->lock, flags);
+-
+- /*
+- * These two can be done after the lock is released because:
+- * - We marked "tcs_in_use" under lock.
+- * - Once "tcs_in_use" has been marked nobody else could be writing
+- * to these registers until the interrupt goes off.
+- * - The interrupt can't go off until we trigger w/ the last line
+- * of __tcs_set_trigger() below.
+- */
+- __tcs_buffer_write(drv, tcs_id, 0, msg);
+- __tcs_set_trigger(drv, tcs_id, true);
++ return ret;
+
+- return 0;
+-unlock:
+- spin_unlock_irqrestore(&drv->lock, flags);
+- return ret;
++ return find_free_tcs(tcs);
+ }
+
+ /**
+@@ -664,18 +627,47 @@ unlock:
+ */
+ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
+ {
+- int ret;
++ struct tcs_group *tcs;
++ int tcs_id;
++ unsigned long flags;
+
+- do {
+- ret = tcs_write(drv, msg);
+- if (ret == -EBUSY) {
+- pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
+- msg->cmds[0].addr);
+- udelay(10);
+- }
+- } while (ret == -EBUSY);
++ tcs = get_tcs_for_msg(drv, msg);
++ if (IS_ERR(tcs))
++ return PTR_ERR(tcs);
+
+- return ret;
++ spin_lock_irqsave(&drv->lock, flags);
++
++ /* Wait forever for a free tcs. It better be there eventually! */
++ wait_event_lock_irq(drv->tcs_wait,
++ (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,
++ drv->lock);
++
++ tcs->req[tcs_id - tcs->offset] = msg;
++ set_bit(tcs_id, drv->tcs_in_use);
++ if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS) {
++ /*
++ * Clear previously programmed WAKE commands in selected
++ * repurposed TCS to avoid triggering them. tcs->slots will be
++ * cleaned from rpmh_flush() by invoking rpmh_rsc_invalidate()
++ */
++ write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0);
++ write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0);
++ enable_tcs_irq(drv, tcs_id, true);
++ }
++ spin_unlock_irqrestore(&drv->lock, flags);
++
++ /*
++ * These two can be done after the lock is released because:
++ * - We marked "tcs_in_use" under lock.
++ * - Once "tcs_in_use" has been marked nobody else could be writing
++ * to these registers until the interrupt goes off.
++ * - The interrupt can't go off until we trigger w/ the last line
++ * of __tcs_set_trigger() below.
++ */
++ __tcs_buffer_write(drv, tcs_id, 0, msg);
++ __tcs_set_trigger(drv, tcs_id, true);
++
++ return 0;
+ }
+
+ /**
+@@ -983,6 +975,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
+ return ret;
+
+ spin_lock_init(&drv->lock);
++ init_waitqueue_head(&drv->tcs_wait);
+ bitmap_zero(drv->tcs_in_use, MAX_TCS_NR);
+
+ irq = platform_get_irq(pdev, drv->id);
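The rpmh-rsc rework replaces the old udelay(10)-and-retry loop on -EBUSY with sleeping on drv->tcs_wait under drv->lock via wait_event_lock_irq(), woken from the completion IRQ. A userspace model of that handoff using pthreads (build with -pthread); the mutex plays drv->lock and the condition variable plays the wait queue:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t tcs_wait = PTHREAD_COND_INITIALIZER;
static int tcs_in_use = 1;               /* the only slot starts busy */

static void *irq_handler(void *arg)
{
        pthread_mutex_lock(&lock);
        tcs_in_use = 0;                  /* transfer finished */
        pthread_mutex_unlock(&lock);
        pthread_cond_signal(&tcs_wait);  /* wake_up(&drv->tcs_wait) */
        return NULL;
}

static void send_data(void)
{
        pthread_mutex_lock(&lock);
        while (tcs_in_use)               /* wait_event_lock_irq(...) */
                pthread_cond_wait(&tcs_wait, &lock);
        tcs_in_use = 1;                  /* claim the slot */
        pthread_mutex_unlock(&lock);
        puts("slot claimed, message written");
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, irq_handler, NULL);
        send_data();
        pthread_join(t, NULL);
        return 0;
}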
+diff --git a/drivers/soc/ti/k3-ringacc.c b/drivers/soc/ti/k3-ringacc.c
+index 6dcc21dde0cb7..1147dc4c1d596 100644
+--- a/drivers/soc/ti/k3-ringacc.c
++++ b/drivers/soc/ti/k3-ringacc.c
+@@ -10,6 +10,7 @@
+ #include <linux/init.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
++#include <linux/sys_soc.h>
+ #include <linux/soc/ti/k3-ringacc.h>
+ #include <linux/soc/ti/ti_sci_protocol.h>
+ #include <linux/soc/ti/ti_sci_inta_msi.h>
+@@ -208,6 +209,15 @@ struct k3_ringacc {
+ const struct k3_ringacc_ops *ops;
+ };
+
++/**
++ * struct k3_ringacc_soc_data - Rings accelerator SoC data
++ *
++ * @dma_ring_reset_quirk:  enable the DMA ring reset workaround
++ */
++struct k3_ringacc_soc_data {
++ unsigned dma_ring_reset_quirk:1;
++};
++
+ static long k3_ringacc_ring_get_fifo_pos(struct k3_ring *ring)
+ {
+ return K3_RINGACC_FIFO_WINDOW_SIZE_BYTES -
+@@ -1051,9 +1061,6 @@ static int k3_ringacc_probe_dt(struct k3_ringacc *ringacc)
+ return ret;
+ }
+
+- ringacc->dma_ring_reset_quirk =
+- of_property_read_bool(node, "ti,dma-ring-reset-quirk");
+-
+ ringacc->tisci = ti_sci_get_by_phandle(node, "ti,sci");
+ if (IS_ERR(ringacc->tisci)) {
+ ret = PTR_ERR(ringacc->tisci);
+@@ -1084,9 +1091,22 @@ static int k3_ringacc_probe_dt(struct k3_ringacc *ringacc)
+ ringacc->rm_gp_range);
+ }
+
++static const struct k3_ringacc_soc_data k3_ringacc_soc_data_sr1 = {
++ .dma_ring_reset_quirk = 1,
++};
++
++static const struct soc_device_attribute k3_ringacc_socinfo[] = {
++ { .family = "AM65X",
++ .revision = "SR1.0",
++ .data = &k3_ringacc_soc_data_sr1
++ },
++ {/* sentinel */}
++};
++
+ static int k3_ringacc_init(struct platform_device *pdev,
+ struct k3_ringacc *ringacc)
+ {
++ const struct soc_device_attribute *soc;
+ void __iomem *base_fifo, *base_rt;
+ struct device *dev = &pdev->dev;
+ struct resource *res;
+@@ -1103,6 +1123,13 @@ static int k3_ringacc_init(struct platform_device *pdev,
+ if (ret)
+ return ret;
+
++ soc = soc_device_match(k3_ringacc_socinfo);
++ if (soc && soc->data) {
++ const struct k3_ringacc_soc_data *soc_data = soc->data;
++
++ ringacc->dma_ring_reset_quirk = soc_data->dma_ring_reset_quirk;
++ }
++
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rt");
+ base_rt = devm_ioremap_resource(dev, res);
+ if (IS_ERR(base_rt))
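Instead of a devicetree property, the quirk is now keyed on the silicon itself: a table of family/revision attributes is matched at init and carries per-revision data. A standalone model of the soc_device_match() lookup (the strings and data mirror the hunk; the matcher itself is a simplification):

#include <stdio.h>
#include <string.h>

struct soc_data { int dma_ring_reset_quirk; };

struct soc_attr {
        const char *family;
        const char *revision;
        const struct soc_data *data;
};

static const struct soc_data sr1_data = { .dma_ring_reset_quirk = 1 };

static const struct soc_attr socinfo[] = {
        { "AM65X", "SR1.0", &sr1_data },
        { 0 }                            /* sentinel */
};

static const struct soc_attr *soc_match(const char *fam, const char *rev)
{
        for (const struct soc_attr *a = socinfo; a->family; a++)
                if (!strcmp(a->family, fam) && !strcmp(a->revision, rev))
                        return a;
        return NULL;
}

int main(void)
{
        const struct soc_attr *soc = soc_match("AM65X", "SR1.0");
        int quirk = soc ? soc->data->dma_ring_reset_quirk : 0;

        printf("dma_ring_reset_quirk=%d\n", quirk);
        return 0;
}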
+diff --git a/drivers/spi/spi-mtk-nor.c b/drivers/spi/spi-mtk-nor.c
+index b08d8e9a8ee98..89531587d9cc0 100644
+--- a/drivers/spi/spi-mtk-nor.c
++++ b/drivers/spi/spi-mtk-nor.c
+@@ -89,7 +89,7 @@
+ // Buffered page program can do one 128-byte transfer
+ #define MTK_NOR_PP_SIZE 128
+
+-#define CLK_TO_US(sp, clkcnt) ((clkcnt) * 1000000 / sp->spi_freq)
++#define CLK_TO_US(sp, clkcnt) DIV_ROUND_UP(clkcnt, sp->spi_freq / 1000000)
+
+ struct mtk_nor {
+ struct spi_controller *ctlr;
+@@ -177,6 +177,10 @@ static int mtk_nor_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)
+ if ((op->addr.nbytes == 3) || (op->addr.nbytes == 4)) {
+ if ((op->data.dir == SPI_MEM_DATA_IN) &&
+ mtk_nor_match_read(op)) {
++ // limit size to prevent timeout calculation overflow
++ if (op->data.nbytes > 0x400000)
++ op->data.nbytes = 0x400000;
++
+ if ((op->addr.val & MTK_NOR_DMA_ALIGN_MASK) ||
+ (op->data.nbytes < MTK_NOR_DMA_ALIGN))
+ op->data.nbytes = 1;
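The CLK_TO_US rewrite matters because clkcnt * 1000000 overflows 32-bit arithmetic for large DMA reads, while dividing the clock rate down to ticks-per-microsecond first cannot; the 4 MiB clamp added to adjust_op_size keeps even the new form in range. A standalone demonstration (26 MHz is an illustrative clock; uint32_t makes the wraparound explicit):

#include <stdint.h>
#include <stdio.h>

#define SPI_FREQ 26000000u               /* illustrative, in Hz */

static uint32_t clk_to_us_old(uint32_t clkcnt)
{
        return clkcnt * 1000000u / SPI_FREQ;     /* wraps for big counts */
}

static uint32_t clk_to_us_new(uint32_t clkcnt)
{
        /* DIV_ROUND_UP(clkcnt, freq / 1000000) */
        uint32_t ticks_per_us = SPI_FREQ / 1000000u;

        return (clkcnt + ticks_per_us - 1) / ticks_per_us;
}

int main(void)
{
        uint32_t clkcnt = 0x400000u * 8u;        /* clocks for a 4 MiB read */

        /* old result is garbage (the product exceeds 2^32), new is ~1.29 s */
        printf("old=%u new=%u\n", clk_to_us_old(clkcnt),
               clk_to_us_new(clkcnt));
        return 0;
}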
+diff --git a/drivers/spi/spi-sprd.c b/drivers/spi/spi-sprd.c
+index 6678f1cbc5660..0443fec3a6ab5 100644
+--- a/drivers/spi/spi-sprd.c
++++ b/drivers/spi/spi-sprd.c
+@@ -563,11 +563,11 @@ static int sprd_spi_dma_request(struct sprd_spi *ss)
+
+ ss->dma.dma_chan[SPRD_SPI_TX] = dma_request_chan(ss->dev, "tx_chn");
+ if (IS_ERR_OR_NULL(ss->dma.dma_chan[SPRD_SPI_TX])) {
++ dma_release_channel(ss->dma.dma_chan[SPRD_SPI_RX]);
+ if (PTR_ERR(ss->dma.dma_chan[SPRD_SPI_TX]) == -EPROBE_DEFER)
+ return PTR_ERR(ss->dma.dma_chan[SPRD_SPI_TX]);
+
+ dev_err(ss->dev, "request TX DMA channel failed!\n");
+- dma_release_channel(ss->dma.dma_chan[SPRD_SPI_RX]);
+ return PTR_ERR(ss->dma.dma_chan[SPRD_SPI_TX]);
+ }
+
+diff --git a/drivers/staging/comedi/drivers/cb_pcidas.c b/drivers/staging/comedi/drivers/cb_pcidas.c
+index 48ec2ee953dc5..d740c47827751 100644
+--- a/drivers/staging/comedi/drivers/cb_pcidas.c
++++ b/drivers/staging/comedi/drivers/cb_pcidas.c
+@@ -1342,6 +1342,7 @@ static int cb_pcidas_auto_attach(struct comedi_device *dev,
+ if (dev->irq && board->has_ao_fifo) {
+ dev->write_subdev = s;
+ s->subdev_flags |= SDF_CMD_WRITE;
++ s->len_chanlist = s->n_chan;
+ s->do_cmdtest = cb_pcidas_ao_cmdtest;
+ s->do_cmd = cb_pcidas_ao_cmd;
+ s->cancel = cb_pcidas_ao_cancel;
+diff --git a/drivers/staging/fieldbus/anybuss/arcx-anybus.c b/drivers/staging/fieldbus/anybuss/arcx-anybus.c
+index 5b8d0bae9ff3d..b5fded15e8a69 100644
+--- a/drivers/staging/fieldbus/anybuss/arcx-anybus.c
++++ b/drivers/staging/fieldbus/anybuss/arcx-anybus.c
+@@ -293,7 +293,7 @@ static int controller_probe(struct platform_device *pdev)
+ regulator = devm_regulator_register(dev, &can_power_desc, &config);
+ if (IS_ERR(regulator)) {
+ err = PTR_ERR(regulator);
+- goto out_reset;
++ goto out_ida;
+ }
+ /* make controller info visible to userspace */
+ cd->class_dev = kzalloc(sizeof(*cd->class_dev), GFP_KERNEL);
+diff --git a/drivers/staging/octeon/ethernet-mdio.c b/drivers/staging/octeon/ethernet-mdio.c
+index cfb673a52b257..0bf545849b119 100644
+--- a/drivers/staging/octeon/ethernet-mdio.c
++++ b/drivers/staging/octeon/ethernet-mdio.c
+@@ -147,12 +147,6 @@ int cvm_oct_phy_setup_device(struct net_device *dev)
+
+ phy_node = of_parse_phandle(priv->of_node, "phy-handle", 0);
+ if (!phy_node && of_phy_is_fixed_link(priv->of_node)) {
+- int rc;
+-
+- rc = of_phy_register_fixed_link(priv->of_node);
+- if (rc)
+- return rc;
+-
+ phy_node = of_node_get(priv->of_node);
+ }
+ if (!phy_node)
+diff --git a/drivers/staging/octeon/ethernet-rx.c b/drivers/staging/octeon/ethernet-rx.c
+index 2c16230f993cb..9ebd665e5d427 100644
+--- a/drivers/staging/octeon/ethernet-rx.c
++++ b/drivers/staging/octeon/ethernet-rx.c
+@@ -69,15 +69,17 @@ static inline int cvm_oct_check_rcv_error(struct cvmx_wqe *work)
+ else
+ port = work->word1.cn38xx.ipprt;
+
+- if ((work->word2.snoip.err_code == 10) && (work->word1.len <= 64)) {
++ if ((work->word2.snoip.err_code == 10) && (work->word1.len <= 64))
+ /*
+ * Ignore length errors on min size packets. Some
+ * equipment incorrectly pads packets to 64+4FCS
+ * instead of 60+4FCS. Note these packets still get
+ * counted as frame errors.
+ */
+- } else if (work->word2.snoip.err_code == 5 ||
+- work->word2.snoip.err_code == 7) {
++ return 0;
++
++ if (work->word2.snoip.err_code == 5 ||
++ work->word2.snoip.err_code == 7) {
+ /*
+ * We received a packet with either an alignment error
+ * or a FCS error. This may be signalling that we are
+@@ -108,7 +110,10 @@ static inline int cvm_oct_check_rcv_error(struct cvmx_wqe *work)
+ /* Port received 0xd5 preamble */
+ work->packet_ptr.s.addr += i + 1;
+ work->word1.len -= i + 5;
+- } else if ((*ptr & 0xf) == 0xd) {
++ return 0;
++ }
++
++ if ((*ptr & 0xf) == 0xd) {
+ /* Port received 0xd preamble */
+ work->packet_ptr.s.addr += i;
+ work->word1.len -= i + 4;
+@@ -118,21 +123,20 @@ static inline int cvm_oct_check_rcv_error(struct cvmx_wqe *work)
+ ((*(ptr + 1) & 0xf) << 4);
+ ptr++;
+ }
+- } else {
+- printk_ratelimited("Port %d unknown preamble, packet dropped\n",
+- port);
+- cvm_oct_free_work(work);
+- return 1;
++ return 0;
+ }
++
++ printk_ratelimited("Port %d unknown preamble, packet dropped\n",
++ port);
++ cvm_oct_free_work(work);
++ return 1;
+ }
+- } else {
+- printk_ratelimited("Port %d receive error code %d, packet dropped\n",
+- port, work->word2.snoip.err_code);
+- cvm_oct_free_work(work);
+- return 1;
+ }
+
+- return 0;
++ printk_ratelimited("Port %d receive error code %d, packet dropped\n",
++ port, work->word2.snoip.err_code);
++ cvm_oct_free_work(work);
++ return 1;
+ }
+
+ static void copy_segments_to_skb(struct cvmx_wqe *work, struct sk_buff *skb)
+diff --git a/drivers/staging/octeon/ethernet.c b/drivers/staging/octeon/ethernet.c
+index 204f0b1e27397..5dea6e96ec901 100644
+--- a/drivers/staging/octeon/ethernet.c
++++ b/drivers/staging/octeon/ethernet.c
+@@ -13,6 +13,7 @@
+ #include <linux/phy.h>
+ #include <linux/slab.h>
+ #include <linux/interrupt.h>
++#include <linux/of_mdio.h>
+ #include <linux/of_net.h>
+ #include <linux/if_ether.h>
+ #include <linux/if_vlan.h>
+@@ -892,6 +893,14 @@ static int cvm_oct_probe(struct platform_device *pdev)
+ break;
+ }
+
++ if (priv->of_node && of_phy_is_fixed_link(priv->of_node)) {
++ if (of_phy_register_fixed_link(priv->of_node)) {
++ netdev_err(dev, "Failed to register fixed link for interface %d, port %d\n",
++ interface, priv->port);
++ dev->netdev_ops = NULL;
++ }
++ }
++
+ if (!dev->netdev_ops) {
+ free_netdev(dev);
+ } else if (register_netdev(dev) < 0) {
+diff --git a/drivers/staging/wfx/sta.c b/drivers/staging/wfx/sta.c
+index 7dace7c17bf5c..536c62001c709 100644
+--- a/drivers/staging/wfx/sta.c
++++ b/drivers/staging/wfx/sta.c
+@@ -761,17 +761,6 @@ int wfx_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ return -EOPNOTSUPP;
+ }
+
+- for (i = 0; i < ARRAY_SIZE(wdev->vif); i++) {
+- if (!wdev->vif[i]) {
+- wdev->vif[i] = vif;
+- wvif->id = i;
+- break;
+- }
+- }
+- if (i == ARRAY_SIZE(wdev->vif)) {
+- mutex_unlock(&wdev->conf_mutex);
+- return -EOPNOTSUPP;
+- }
+ // FIXME: prefer use of container_of() to get vif
+ wvif->vif = vif;
+ wvif->wdev = wdev;
+@@ -788,12 +777,22 @@ int wfx_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ init_completion(&wvif->scan_complete);
+ INIT_WORK(&wvif->scan_work, wfx_hw_scan_work);
+
+- mutex_unlock(&wdev->conf_mutex);
++ wfx_tx_queues_init(wvif);
++ wfx_tx_policy_init(wvif);
++
++ for (i = 0; i < ARRAY_SIZE(wdev->vif); i++) {
++ if (!wdev->vif[i]) {
++ wdev->vif[i] = vif;
++ wvif->id = i;
++ break;
++ }
++ }
++	WARN(i == ARRAY_SIZE(wdev->vif), "trying to instantiate more vifs than supported");
+
+ hif_set_macaddr(wvif, vif->addr);
+
+- wfx_tx_queues_init(wvif);
+- wfx_tx_policy_init(wvif);
++ mutex_unlock(&wdev->conf_mutex);
++
+ wvif = NULL;
+ while ((wvif = wvif_iterate(wdev, wvif)) != NULL) {
+ // Combo mode does not support Block Acks. We can re-enable them
+@@ -825,6 +824,7 @@ void wfx_remove_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ wvif->vif = NULL;
+
+ mutex_unlock(&wdev->conf_mutex);
++
+ wvif = NULL;
+ while ((wvif = wvif_iterate(wdev, wvif)) != NULL) {
+ // Combo mode does not support Block Acks. We can re-enable them
+diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
+index 64637e09a0953..2f6199ebf7698 100644
+--- a/drivers/tee/tee_core.c
++++ b/drivers/tee/tee_core.c
+@@ -200,7 +200,8 @@ int tee_session_calc_client_uuid(uuid_t *uuid, u32 connection_method,
+ int name_len;
+ int rc;
+
+- if (connection_method == TEE_IOCTL_LOGIN_PUBLIC) {
++ if (connection_method == TEE_IOCTL_LOGIN_PUBLIC ||
++ connection_method == TEE_IOCTL_LOGIN_REE_KERNEL) {
+ /* Nil UUID to be passed to TEE environment */
+ uuid_copy(uuid, &uuid_null);
+ return 0;
+diff --git a/drivers/tty/serial/21285.c b/drivers/tty/serial/21285.c
+index 718e010fcb048..09baef4ccc39a 100644
+--- a/drivers/tty/serial/21285.c
++++ b/drivers/tty/serial/21285.c
+@@ -50,25 +50,25 @@ static const char serial21285_name[] = "Footbridge UART";
+
+ static bool is_enabled(struct uart_port *port, int bit)
+ {
+- unsigned long private_data = (unsigned long)port->private_data;
++ unsigned long *private_data = (unsigned long *)&port->private_data;
+
+- if (test_bit(bit, &private_data))
++ if (test_bit(bit, private_data))
+ return true;
+ return false;
+ }
+
+ static void enable(struct uart_port *port, int bit)
+ {
+- unsigned long private_data = (unsigned long)port->private_data;
++ unsigned long *private_data = (unsigned long *)&port->private_data;
+
+- set_bit(bit, &private_data);
++ set_bit(bit, private_data);
+ }
+
+ static void disable(struct uart_port *port, int bit)
+ {
+- unsigned long private_data = (unsigned long)port->private_data;
++ unsigned long *private_data = (unsigned long *)&port->private_data;
+
+- clear_bit(bit, &private_data);
++ clear_bit(bit, private_data);
+ }
+
+ #define is_tx_enabled(port) is_enabled(port, tx_enabled_bit)
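
The fix above addresses a classic C pitfall: the old helpers copied port->private_data into a local unsigned long and handed the address of that copy to set_bit()/clear_bit(), so the atomic bit ops mutated a stack variable and the stored flags never changed. A minimal userspace sketch of the same bug pattern (plain C, hypothetical names, not part of the patch):

    #include <stdio.h>

    struct port { void *private_data; };

    /* Buggy: operates on a stack copy, leaving p->private_data untouched. */
    static void enable_bit_copy(struct port *p, int bit)
    {
            unsigned long v = (unsigned long)p->private_data;
            v |= 1UL << bit;                /* lost when the function returns */
    }

    /* Fixed: treat the stored field itself as the bitmask. */
    static void enable_bit_inplace(struct port *p, int bit)
    {
            unsigned long *v = (unsigned long *)&p->private_data;
            *v |= 1UL << bit;               /* persists in the structure */
    }

    int main(void)
    {
            struct port p = { 0 };

            enable_bit_copy(&p, 0);
            printf("copy version:     %lu\n", (unsigned long)p.private_data); /* 0 */
            enable_bit_inplace(&p, 0);
            printf("in-place version: %lu\n", (unsigned long)p.private_data); /* 1 */
            return 0;
    }
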
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index e17465a8a773c..ffa90a1c4c0a4 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -314,9 +314,10 @@ MODULE_DEVICE_TABLE(of, lpuart_dt_ids);
+ /* Forward declare this for the dma callbacks*/
+ static void lpuart_dma_tx_complete(void *arg);
+
+-static inline bool is_ls1028a_lpuart(struct lpuart_port *sport)
++static inline bool is_layerscape_lpuart(struct lpuart_port *sport)
+ {
+- return sport->devtype == LS1028A_LPUART;
++ return (sport->devtype == LS1021A_LPUART ||
++ sport->devtype == LS1028A_LPUART);
+ }
+
+ static inline bool is_imx8qxp_lpuart(struct lpuart_port *sport)
+@@ -1644,11 +1645,11 @@ static int lpuart32_startup(struct uart_port *port)
+ UARTFIFO_FIFOSIZE_MASK);
+
+ /*
+- * The LS1028A has a fixed length of 16 words. Although it supports the
+- * RX/TXSIZE fields their encoding is different. Eg the reference manual
+- * states 0b101 is 16 words.
++ * The LS1021A and LS1028A have a fixed FIFO depth of 16 words.
++ * Although they support the RX/TXSIZE fields, their encoding is
++ * different. Eg the reference manual states 0b101 is 16 words.
+ */
+- if (is_ls1028a_lpuart(sport)) {
++ if (is_layerscape_lpuart(sport)) {
+ sport->rxfifo_size = 16;
+ sport->txfifo_size = 16;
+ sport->port.fifosize = sport->txfifo_size;
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index 0db53b5b3acf6..78acc270e39ac 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -743,8 +743,13 @@ static void k_fn(struct vc_data *vc, unsigned char value, char up_flag)
+ return;
+
+ if ((unsigned)value < ARRAY_SIZE(func_table)) {
++ unsigned long flags;
++
++ spin_lock_irqsave(&func_buf_lock, flags);
+ if (func_table[value])
+ puts_queue(vc, func_table[value]);
++ spin_unlock_irqrestore(&func_buf_lock, flags);
++
+ } else
+ pr_err("k_fn called with value=%d\n", value);
+ }
+@@ -1991,13 +1996,11 @@ out:
+ #undef s
+ #undef v
+
+-/* FIXME: This one needs untangling and locking */
++/* FIXME: This one needs untangling */
+ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ {
+ struct kbsentry *kbs;
+- char *p;
+ u_char *q;
+- u_char __user *up;
+ int sz, fnw_sz;
+ int delta;
+ char *first_free, *fj, *fnw;
+@@ -2023,23 +2026,19 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ i = array_index_nospec(kbs->kb_func, MAX_NR_FUNC);
+
+ switch (cmd) {
+- case KDGKBSENT:
+- sz = sizeof(kbs->kb_string) - 1; /* sz should have been
+- a struct member */
+- up = user_kdgkb->kb_string;
+- p = func_table[i];
+- if(p)
+- for ( ; *p && sz; p++, sz--)
+- if (put_user(*p, up++)) {
+- ret = -EFAULT;
+- goto reterr;
+- }
+- if (put_user('\0', up)) {
+- ret = -EFAULT;
+- goto reterr;
+- }
+- kfree(kbs);
+- return ((p && *p) ? -EOVERFLOW : 0);
++ case KDGKBSENT: {
++ /* size should have been a struct member */
++ ssize_t len = sizeof(user_kdgkb->kb_string);
++
++ spin_lock_irqsave(&func_buf_lock, flags);
++ len = strlcpy(kbs->kb_string, func_table[i] ? : "", len);
++ spin_unlock_irqrestore(&func_buf_lock, flags);
++
++ ret = copy_to_user(user_kdgkb->kb_string, kbs->kb_string,
++ len + 1) ? -EFAULT : 0;
++
++ goto reterr;
++ }
+ case KDSKBSENT:
+ if (!perm) {
+ ret = -EPERM;
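
The KDGKBSENT rework above snapshots the function-key string into a private buffer while holding func_buf_lock and only then calls copy_to_user(); copy_to_user() may fault and sleep, so it must never run under a spinlock. A userspace analogue of the lock-copy-unlock-then-publish pattern, with a mutex standing in for the spinlock (hypothetical names, not part of the patch):

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
    static char shared[64] = "F1 macro";    /* mutated by other threads */

    /* Snapshot under the lock, do the slow/faulting work on the copy. */
    static void publish(void)
    {
            char snap[sizeof(shared)];

            pthread_mutex_lock(&buf_lock);
            strncpy(snap, shared, sizeof(snap) - 1);
            snap[sizeof(snap) - 1] = '\0';
            pthread_mutex_unlock(&buf_lock);

            /* the slow consumer runs outside the critical section */
            printf("%s\n", snap);
    }

    int main(void)
    {
            publish();
            return 0;
    }
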
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index bc33938e2f20e..21bc7dd4ad7ee 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -485,7 +485,7 @@ static int vt_k_ioctl(struct tty_struct *tty, unsigned int cmd,
+ return 0;
+ }
+
+-static inline int do_fontx_ioctl(int cmd,
++static inline int do_fontx_ioctl(struct vc_data *vc, int cmd,
+ struct consolefontdesc __user *user_cfd,
+ struct console_font_op *op)
+ {
+@@ -503,15 +503,16 @@ static inline int do_fontx_ioctl(int cmd,
+ op->height = cfdarg.charheight;
+ op->charcount = cfdarg.charcount;
+ op->data = cfdarg.chardata;
+- return con_font_op(vc_cons[fg_console].d, op);
+- case GIO_FONTX: {
++ return con_font_op(vc, op);
++
++ case GIO_FONTX:
+ op->op = KD_FONT_OP_GET;
+ op->flags = KD_FONT_FLAG_OLD;
+ op->width = 8;
+ op->height = cfdarg.charheight;
+ op->charcount = cfdarg.charcount;
+ op->data = cfdarg.chardata;
+- i = con_font_op(vc_cons[fg_console].d, op);
++ i = con_font_op(vc, op);
+ if (i)
+ return i;
+ cfdarg.charheight = op->height;
+@@ -519,12 +520,11 @@ static inline int do_fontx_ioctl(int cmd,
+ if (copy_to_user(user_cfd, &cfdarg, sizeof(struct consolefontdesc)))
+ return -EFAULT;
+ return 0;
+- }
+ }
+ return -EINVAL;
+ }
+
+-static int vt_io_fontreset(struct console_font_op *op)
++static int vt_io_fontreset(struct vc_data *vc, struct console_font_op *op)
+ {
+ int ret;
+
+@@ -538,19 +538,19 @@ static int vt_io_fontreset(struct console_font_op *op)
+
+ op->op = KD_FONT_OP_SET_DEFAULT;
+ op->data = NULL;
+- ret = con_font_op(vc_cons[fg_console].d, op);
++ ret = con_font_op(vc, op);
+ if (ret)
+ return ret;
+
+ console_lock();
+- con_set_default_unimap(vc_cons[fg_console].d);
++ con_set_default_unimap(vc);
+ console_unlock();
+
+ return 0;
+ }
+
+ static inline int do_unimap_ioctl(int cmd, struct unimapdesc __user *user_ud,
+- struct vc_data *vc)
++ bool perm, struct vc_data *vc)
+ {
+ struct unimapdesc tmp;
+
+@@ -558,9 +558,11 @@ static inline int do_unimap_ioctl(int cmd, struct unimapdesc __user *user_ud,
+ return -EFAULT;
+ switch (cmd) {
+ case PIO_UNIMAP:
++ if (!perm)
++ return -EPERM;
+ return con_set_unimap(vc, tmp.entry_ct, tmp.entries);
+ case GIO_UNIMAP:
+- if (fg_console != vc->vc_num)
++ if (!perm && fg_console != vc->vc_num)
+ return -EPERM;
+ return con_get_unimap(vc, tmp.entry_ct, &(user_ud->entry_ct),
+ tmp.entries);
+@@ -583,7 +585,7 @@ static int vt_io_ioctl(struct vc_data *vc, unsigned int cmd, void __user *up,
+ op.height = 0;
+ op.charcount = 256;
+ op.data = up;
+- return con_font_op(vc_cons[fg_console].d, &op);
++ return con_font_op(vc, &op);
+
+ case GIO_FONT:
+ op.op = KD_FONT_OP_GET;
+@@ -592,7 +594,7 @@ static int vt_io_ioctl(struct vc_data *vc, unsigned int cmd, void __user *up,
+ op.height = 32;
+ op.charcount = 256;
+ op.data = up;
+- return con_font_op(vc_cons[fg_console].d, &op);
++ return con_font_op(vc, &op);
+
+ case PIO_CMAP:
+ if (!perm)
+@@ -608,13 +610,13 @@ static int vt_io_ioctl(struct vc_data *vc, unsigned int cmd, void __user *up,
+
+ fallthrough;
+ case GIO_FONTX:
+- return do_fontx_ioctl(cmd, up, &op);
++ return do_fontx_ioctl(vc, cmd, up, &op);
+
+ case PIO_FONTRESET:
+ if (!perm)
+ return -EPERM;
+
+- return vt_io_fontreset(&op);
++ return vt_io_fontreset(vc, &op);
+
+ case PIO_SCRNMAP:
+ if (!perm)
+@@ -640,10 +642,7 @@ static int vt_io_ioctl(struct vc_data *vc, unsigned int cmd, void __user *up,
+
+ case PIO_UNIMAP:
+ case GIO_UNIMAP:
+- if (!perm)
+- return -EPERM;
+-
+- return do_unimap_ioctl(cmd, up, vc);
++ return do_unimap_ioctl(cmd, up, perm, vc);
+
+ default:
+ return -ENOIOCTLCMD;
+@@ -1068,8 +1067,9 @@ struct compat_consolefontdesc {
+ };
+
+ static inline int
+-compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd,
+- int perm, struct console_font_op *op)
++compat_fontx_ioctl(struct vc_data *vc, int cmd,
++ struct compat_consolefontdesc __user *user_cfd,
++ int perm, struct console_font_op *op)
+ {
+ struct compat_consolefontdesc cfdarg;
+ int i;
+@@ -1087,7 +1087,8 @@ compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd,
+ op->height = cfdarg.charheight;
+ op->charcount = cfdarg.charcount;
+ op->data = compat_ptr(cfdarg.chardata);
+- return con_font_op(vc_cons[fg_console].d, op);
++ return con_font_op(vc, op);
++
+ case GIO_FONTX:
+ op->op = KD_FONT_OP_GET;
+ op->flags = KD_FONT_FLAG_OLD;
+@@ -1095,7 +1096,7 @@ compat_fontx_ioctl(int cmd, struct compat_consolefontdesc __user *user_cfd,
+ op->height = cfdarg.charheight;
+ op->charcount = cfdarg.charcount;
+ op->data = compat_ptr(cfdarg.chardata);
+- i = con_font_op(vc_cons[fg_console].d, op);
++ i = con_font_op(vc, op);
+ if (i)
+ return i;
+ cfdarg.charheight = op->height;
+@@ -1185,7 +1186,7 @@ long vt_compat_ioctl(struct tty_struct *tty,
+ */
+ case PIO_FONTX:
+ case GIO_FONTX:
+- return compat_fontx_ioctl(cmd, up, perm, &op);
++ return compat_fontx_ioctl(vc, cmd, up, perm, &op);
+
+ case KDFONTOP:
+ return compat_kdfontop_ioctl(up, perm, &op, vc);
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index 73efb80815db8..6dca744e39e95 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -1048,8 +1048,6 @@ void uio_unregister_device(struct uio_info *info)
+
+ idev = info->uio_dev;
+
+- uio_free_minor(idev);
+-
+ mutex_lock(&idev->info_lock);
+ uio_dev_del_attributes(idev);
+
+@@ -1064,6 +1062,8 @@ void uio_unregister_device(struct uio_info *info)
+
+ device_unregister(&idev->dev);
+
++ uio_free_minor(idev);
++
+ return;
+ }
+ EXPORT_SYMBOL_GPL(uio_unregister_device);
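
The uio reordering frees the minor number only after device_unregister(), closing a window in which the minor could be reallocated while the old device was still visible. A sketch of the same rule using the generic ida allocator (hypothetical struct, not the uio-internal helpers):

    static DEFINE_IDA(minor_ida);

    static void teardown(struct my_dev *d)
    {
            /* 1. make the device unreachable first ... */
            device_unregister(&d->dev);

            /*
             * 2. ... and only then recycle its identifier, so a concurrent
             *    probe cannot observe two devices claiming the same minor.
             */
            ida_free(&minor_ida, d->minor);
    }
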
+diff --git a/drivers/usb/cdns3/ep0.c b/drivers/usb/cdns3/ep0.c
+index d9779abc65b2b..89b94e45ee15d 100644
+--- a/drivers/usb/cdns3/ep0.c
++++ b/drivers/usb/cdns3/ep0.c
+@@ -137,48 +137,36 @@ static int cdns3_req_ep0_set_configuration(struct cdns3_device *priv_dev,
+ struct usb_ctrlrequest *ctrl_req)
+ {
+ enum usb_device_state device_state = priv_dev->gadget.state;
+- struct cdns3_endpoint *priv_ep;
+ u32 config = le16_to_cpu(ctrl_req->wValue);
+ int result = 0;
+- int i;
+
+ switch (device_state) {
+ case USB_STATE_ADDRESS:
+- /* Configure non-control EPs */
+- for (i = 0; i < CDNS3_ENDPOINTS_MAX_COUNT; i++) {
+- priv_ep = priv_dev->eps[i];
+- if (!priv_ep)
+- continue;
+-
+- if (priv_ep->flags & EP_CLAIMED)
+- cdns3_ep_config(priv_ep);
+- }
+-
+ result = cdns3_ep0_delegate_req(priv_dev, ctrl_req);
+
+- if (result)
+- return result;
+-
+- if (!config) {
+- cdns3_hw_reset_eps_config(priv_dev);
+- usb_gadget_set_state(&priv_dev->gadget,
+- USB_STATE_ADDRESS);
+- }
++ if (result || !config)
++ goto reset_config;
+
+ break;
+ case USB_STATE_CONFIGURED:
+ result = cdns3_ep0_delegate_req(priv_dev, ctrl_req);
++ if (!config && !result)
++ goto reset_config;
+
+- if (!config && !result) {
+- cdns3_hw_reset_eps_config(priv_dev);
+- usb_gadget_set_state(&priv_dev->gadget,
+- USB_STATE_ADDRESS);
+- }
+ break;
+ default:
+- result = -EINVAL;
++ return -EINVAL;
+ }
+
++ return 0;
++
++reset_config:
++ if (result != USB_GADGET_DELAYED_STATUS)
++ cdns3_hw_reset_eps_config(priv_dev);
++
++ usb_gadget_set_state(&priv_dev->gadget,
++ USB_STATE_ADDRESS);
++
+ return result;
+ }
+
+@@ -705,6 +693,7 @@ static int cdns3_gadget_ep0_queue(struct usb_ep *ep,
+ unsigned long flags;
+ int ret = 0;
+ u8 zlp = 0;
++ int i;
+
+ spin_lock_irqsave(&priv_dev->lock, flags);
+ trace_cdns3_ep0_queue(priv_dev, request);
+@@ -718,6 +707,17 @@ static int cdns3_gadget_ep0_queue(struct usb_ep *ep,
+ /* send STATUS stage. Should be called only for SET_CONFIGURATION */
+ if (priv_dev->ep0_stage == CDNS3_STATUS_STAGE) {
+ cdns3_select_ep(priv_dev, 0x00);
++
++ /*
++ * Configure all non-control EPs which are not enabled by class driver
++ */
++ for (i = 0; i < CDNS3_ENDPOINTS_MAX_COUNT; i++) {
++ priv_ep = priv_dev->eps[i];
++ if (priv_ep && priv_ep->flags & EP_CLAIMED &&
++ !(priv_ep->flags & EP_ENABLED))
++ cdns3_ep_config(priv_ep, 0);
++ }
++
+ cdns3_set_hw_configuration(priv_dev);
+ cdns3_ep0_complete_setup(priv_dev, 0, 1);
+ request->actual = 0;
+@@ -803,6 +803,7 @@ void cdns3_ep0_config(struct cdns3_device *priv_dev)
+ struct cdns3_usb_regs __iomem *regs;
+ struct cdns3_endpoint *priv_ep;
+ u32 max_packet_size = 64;
++ u32 ep_cfg;
+
+ regs = priv_dev->regs;
+
+@@ -834,8 +835,10 @@ void cdns3_ep0_config(struct cdns3_device *priv_dev)
+ BIT(0) | BIT(16));
+ }
+
+- writel(EP_CFG_ENABLE | EP_CFG_MAXPKTSIZE(max_packet_size),
+- &regs->ep_cfg);
++ ep_cfg = EP_CFG_ENABLE | EP_CFG_MAXPKTSIZE(max_packet_size);
++
++ if (!(priv_ep->flags & EP_CONFIGURED))
++ writel(ep_cfg, &regs->ep_cfg);
+
+ writel(EP_STS_EN_SETUPEN | EP_STS_EN_DESCMISEN | EP_STS_EN_TRBERREN,
+ &regs->ep_sts_en);
+@@ -843,8 +846,10 @@ void cdns3_ep0_config(struct cdns3_device *priv_dev)
+ /* init ep in */
+ cdns3_select_ep(priv_dev, USB_DIR_IN);
+
+- writel(EP_CFG_ENABLE | EP_CFG_MAXPKTSIZE(max_packet_size),
+- &regs->ep_cfg);
++ if (!(priv_ep->flags & EP_CONFIGURED))
++ writel(ep_cfg, &regs->ep_cfg);
++
++ priv_ep->flags |= EP_CONFIGURED;
+
+ writel(EP_STS_EN_SETUPEN | EP_STS_EN_TRBERREN, &regs->ep_sts_en);
+
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index 02a69e20014b1..e0e1cb907ffd8 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -296,6 +296,8 @@ static void cdns3_ep_stall_flush(struct cdns3_endpoint *priv_ep)
+ */
+ void cdns3_hw_reset_eps_config(struct cdns3_device *priv_dev)
+ {
++ int i;
++
+ writel(USB_CONF_CFGRST, &priv_dev->regs->usb_conf);
+
+ cdns3_allow_enable_l1(priv_dev, 0);
+@@ -304,6 +306,10 @@ void cdns3_hw_reset_eps_config(struct cdns3_device *priv_dev)
+ priv_dev->out_mem_is_allocated = 0;
+ priv_dev->wait_for_setup = 0;
+ priv_dev->using_streams = 0;
++
++ for (i = 0; i < CDNS3_ENDPOINTS_MAX_COUNT; i++)
++ if (priv_dev->eps[i])
++ priv_dev->eps[i]->flags &= ~EP_CONFIGURED;
+ }
+
+ /**
+@@ -1907,27 +1913,6 @@ static int cdns3_ep_onchip_buffer_reserve(struct cdns3_device *priv_dev,
+ return 0;
+ }
+
+-static void cdns3_stream_ep_reconfig(struct cdns3_device *priv_dev,
+- struct cdns3_endpoint *priv_ep)
+-{
+- if (!priv_ep->use_streams || priv_dev->gadget.speed < USB_SPEED_SUPER)
+- return;
+-
+- if (priv_dev->dev_ver >= DEV_VER_V3) {
+- u32 mask = BIT(priv_ep->num + (priv_ep->dir ? 16 : 0));
+-
+- /*
+- * Stream capable endpoints are handled by using ep_tdl
+- * register. Other endpoints use TDL from TRB feature.
+- */
+- cdns3_clear_register_bit(&priv_dev->regs->tdl_from_trb, mask);
+- }
+-
+- /* Enable Stream Bit TDL chk and SID chk */
+- cdns3_set_register_bit(&priv_dev->regs->ep_cfg, EP_CFG_STREAM_EN |
+- EP_CFG_TDL_CHK | EP_CFG_SID_CHK);
+-}
+-
+ static void cdns3_configure_dmult(struct cdns3_device *priv_dev,
+ struct cdns3_endpoint *priv_ep)
+ {
+@@ -1965,8 +1950,9 @@ static void cdns3_configure_dmult(struct cdns3_device *priv_dev,
+ /**
+ * cdns3_ep_config Configure hardware endpoint
+ * @priv_ep: extended endpoint object
++ * @enable: set EP_CFG_ENABLE bit in ep_cfg register.
+ */
+-void cdns3_ep_config(struct cdns3_endpoint *priv_ep)
++int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable)
+ {
+ bool is_iso_ep = (priv_ep->type == USB_ENDPOINT_XFER_ISOC);
+ struct cdns3_device *priv_dev = priv_ep->cdns3_dev;
+@@ -2027,7 +2013,7 @@ void cdns3_ep_config(struct cdns3_endpoint *priv_ep)
+ break;
+ default:
+ /* all other speed are not supported */
+- return;
++ return -EINVAL;
+ }
+
+ if (max_packet_size == 1024)
+@@ -2037,11 +2023,33 @@ void cdns3_ep_config(struct cdns3_endpoint *priv_ep)
+ else
+ priv_ep->trb_burst_size = 16;
+
+- ret = cdns3_ep_onchip_buffer_reserve(priv_dev, buffering + 1,
+- !!priv_ep->dir);
+- if (ret) {
+- dev_err(priv_dev->dev, "onchip mem is full, ep is invalid\n");
+- return;
++ /* onchip buffer is only allocated before configuration */
++ if (!priv_dev->hw_configured_flag) {
++ ret = cdns3_ep_onchip_buffer_reserve(priv_dev, buffering + 1,
++ !!priv_ep->dir);
++ if (ret) {
++ dev_err(priv_dev->dev, "onchip mem is full, ep is invalid\n");
++ return ret;
++ }
++ }
++
++ if (enable)
++ ep_cfg |= EP_CFG_ENABLE;
++
++ if (priv_ep->use_streams && priv_dev->gadget.speed >= USB_SPEED_SUPER) {
++ if (priv_dev->dev_ver >= DEV_VER_V3) {
++ u32 mask = BIT(priv_ep->num + (priv_ep->dir ? 16 : 0));
++
++ /*
++ * Stream capable endpoints are handled by using ep_tdl
++ * register. Other endpoints use TDL from TRB feature.
++ */
++ cdns3_clear_register_bit(&priv_dev->regs->tdl_from_trb,
++ mask);
++ }
++
++ /* Enable Stream Bit TDL chk and SID chk */
++ ep_cfg |= EP_CFG_STREAM_EN | EP_CFG_TDL_CHK | EP_CFG_SID_CHK;
+ }
+
+ ep_cfg |= EP_CFG_MAXPKTSIZE(max_packet_size) |
+@@ -2051,9 +2059,12 @@ void cdns3_ep_config(struct cdns3_endpoint *priv_ep)
+
+ cdns3_select_ep(priv_dev, bEndpointAddress);
+ writel(ep_cfg, &priv_dev->regs->ep_cfg);
++ priv_ep->flags |= EP_CONFIGURED;
+
+ dev_dbg(priv_dev->dev, "Configure %s: with val %08x\n",
+ priv_ep->name, ep_cfg);
++
++ return 0;
+ }
+
+ /* Find correct direction for HW endpoint according to description */
+@@ -2194,7 +2205,7 @@ static int cdns3_gadget_ep_enable(struct usb_ep *ep,
+ u32 bEndpointAddress;
+ unsigned long flags;
+ int enable = 1;
+- int ret;
++ int ret = 0;
+ int val;
+
+ priv_ep = ep_to_cdns3_ep(ep);
+@@ -2233,6 +2244,17 @@ static int cdns3_gadget_ep_enable(struct usb_ep *ep,
+ bEndpointAddress = priv_ep->num | priv_ep->dir;
+ cdns3_select_ep(priv_dev, bEndpointAddress);
+
++ /*
++ * On some controller versions, DMA may at some point during ISO OUT
++ * traffic read the Transfer Ring of an EP that has never received a
++ * doorbell. The issue was only observed in simulation, but to guard
++ * against it the driver enables ISO OUT endpoints before setting
++ * DRBL. This special treatment of ISO OUT endpoints is recommended
++ * by the controller specification.
++ */
++ if (priv_ep->type == USB_ENDPOINT_XFER_ISOC && !priv_ep->dir)
++ enable = 0;
++
+ if (usb_ss_max_streams(comp_desc) && usb_endpoint_xfer_bulk(desc)) {
+ /*
+ * Enable stream support (SS mode) related interrupts
+@@ -2243,13 +2265,17 @@ static int cdns3_gadget_ep_enable(struct usb_ep *ep,
+ EP_STS_EN_SIDERREN | EP_STS_EN_MD_EXITEN |
+ EP_STS_EN_STREAMREN;
+ priv_ep->use_streams = true;
+- cdns3_stream_ep_reconfig(priv_dev, priv_ep);
++ ret = cdns3_ep_config(priv_ep, enable);
+ priv_dev->using_streams |= true;
+ }
++ } else {
++ ret = cdns3_ep_config(priv_ep, enable);
+ }
+
+- ret = cdns3_allocate_trb_pool(priv_ep);
++ if (ret)
++ goto exit;
+
++ ret = cdns3_allocate_trb_pool(priv_ep);
+ if (ret)
+ goto exit;
+
+@@ -2279,20 +2305,6 @@ static int cdns3_gadget_ep_enable(struct usb_ep *ep,
+
+ writel(reg, &priv_dev->regs->ep_sts_en);
+
+- /*
+- * For some versions of controller at some point during ISO OUT traffic
+- * DMA reads Transfer Ring for the EP which has never got doorbell.
+- * This issue was detected only on simulation, but to avoid this issue
+- * driver add protection against it. To fix it driver enable ISO OUT
+- * endpoint before setting DRBL. This special treatment of ISO OUT
+- * endpoints are recommended by controller specification.
+- */
+- if (priv_ep->type == USB_ENDPOINT_XFER_ISOC && !priv_ep->dir)
+- enable = 0;
+-
+- if (enable)
+- cdns3_set_register_bit(&priv_dev->regs->ep_cfg, EP_CFG_ENABLE);
+-
+ ep->desc = desc;
+ priv_ep->flags &= ~(EP_PENDING_REQUEST | EP_STALLED | EP_STALL_PENDING |
+ EP_QUIRK_ISO_OUT_EN | EP_QUIRK_EXTRA_BUF_EN);
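
The cdns3 rework defers endpoint programming to the SET_CONFIGURATION status stage and guards it with the new EP_CONFIGURED flag, so ep_cfg is written once per configuration cycle and cdns3_hw_reset_eps_config() re-arms it. The "configure once, until reset" idiom in isolation (hypothetical structures, a sketch rather than the driver code):

    #define EP_CONFIGURED   BIT(16)

    struct my_ep {
            u32 flags;
            void __iomem *cfg_reg;
    };

    static void ep_config_once(struct my_ep *ep, u32 cfg)
    {
            if (ep->flags & EP_CONFIGURED)
                    return;                 /* already programmed this cycle */

            writel(cfg, ep->cfg_reg);       /* one-time hardware write */
            ep->flags |= EP_CONFIGURED;
    }

    static void hw_reset_all(struct my_ep **eps, int count)
    {
            int i;

            /* a controller reset invalidates the programming, so re-arm */
            for (i = 0; i < count; i++)
                    if (eps[i])
                            eps[i]->flags &= ~EP_CONFIGURED;
    }
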
+diff --git a/drivers/usb/cdns3/gadget.h b/drivers/usb/cdns3/gadget.h
+index 52765b098b9e1..8212bddf6c8d1 100644
+--- a/drivers/usb/cdns3/gadget.h
++++ b/drivers/usb/cdns3/gadget.h
+@@ -1154,6 +1154,7 @@ struct cdns3_endpoint {
+ #define EP_QUIRK_EXTRA_BUF_DET BIT(12)
+ #define EP_QUIRK_EXTRA_BUF_EN BIT(13)
+ #define EP_TDLCHK_EN BIT(15)
++#define EP_CONFIGURED BIT(16)
+ u32 flags;
+
+ struct cdns3_request *descmis_req;
+@@ -1351,7 +1352,7 @@ void cdns3_gadget_giveback(struct cdns3_endpoint *priv_ep,
+ int cdns3_init_ep0(struct cdns3_device *priv_dev,
+ struct cdns3_endpoint *priv_ep);
+ void cdns3_ep0_config(struct cdns3_device *priv_dev);
+-void cdns3_ep_config(struct cdns3_endpoint *priv_ep);
++int cdns3_ep_config(struct cdns3_endpoint *priv_ep, bool enable);
+ void cdns3_check_ep0_interrupt_proceed(struct cdns3_device *priv_dev, int dir);
+ int __cdns3_gadget_wakeup(struct cdns3_device *priv_dev);
+
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 24d79eec6654e..71664bfcf1bd8 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -507,6 +507,7 @@ static void acm_read_bulk_callback(struct urb *urb)
+ "%s - cooling babbling device\n", __func__);
+ usb_mark_last_busy(acm->dev);
+ set_bit(rb->index, &acm->urbs_in_error_delay);
++ set_bit(ACM_ERROR_DELAY, &acm->flags);
+ cooldown = true;
+ break;
+ default:
+@@ -532,7 +533,7 @@ static void acm_read_bulk_callback(struct urb *urb)
+
+ if (stopped || stalled || cooldown) {
+ if (stalled)
+- schedule_work(&acm->work);
++ schedule_delayed_work(&acm->dwork, 0);
+ else if (cooldown)
+ schedule_delayed_work(&acm->dwork, HZ / 2);
+ return;
+@@ -562,13 +563,13 @@ static void acm_write_bulk(struct urb *urb)
+ acm_write_done(acm, wb);
+ spin_unlock_irqrestore(&acm->write_lock, flags);
+ set_bit(EVENT_TTY_WAKEUP, &acm->flags);
+- schedule_work(&acm->work);
++ schedule_delayed_work(&acm->dwork, 0);
+ }
+
+ static void acm_softint(struct work_struct *work)
+ {
+ int i;
+- struct acm *acm = container_of(work, struct acm, work);
++ struct acm *acm = container_of(work, struct acm, dwork.work);
+
+ if (test_bit(EVENT_RX_STALL, &acm->flags)) {
+ smp_mb(); /* against acm_suspend() */
+@@ -584,7 +585,7 @@ static void acm_softint(struct work_struct *work)
+ if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) {
+ for (i = 0; i < acm->rx_buflimit; i++)
+ if (test_and_clear_bit(i, &acm->urbs_in_error_delay))
+- acm_submit_read_urb(acm, i, GFP_NOIO);
++ acm_submit_read_urb(acm, i, GFP_KERNEL);
+ }
+
+ if (test_and_clear_bit(EVENT_TTY_WAKEUP, &acm->flags))
+@@ -1364,7 +1365,6 @@ made_compressed_probe:
+ acm->ctrlsize = ctrlsize;
+ acm->readsize = readsize;
+ acm->rx_buflimit = num_rx_buf;
+- INIT_WORK(&acm->work, acm_softint);
+ INIT_DELAYED_WORK(&acm->dwork, acm_softint);
+ init_waitqueue_head(&acm->wioctl);
+ spin_lock_init(&acm->write_lock);
+@@ -1574,7 +1574,6 @@ static void acm_disconnect(struct usb_interface *intf)
+ }
+
+ acm_kill_urbs(acm);
+- cancel_work_sync(&acm->work);
+ cancel_delayed_work_sync(&acm->dwork);
+
+ tty_unregister_device(acm_tty_driver, acm->minor);
+@@ -1617,7 +1616,6 @@ static int acm_suspend(struct usb_interface *intf, pm_message_t message)
+ return 0;
+
+ acm_kill_urbs(acm);
+- cancel_work_sync(&acm->work);
+ cancel_delayed_work_sync(&acm->dwork);
+ acm->urbs_in_error_delay = 0;
+
+diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h
+index cd5e9d8ab2375..b95ff769072e7 100644
+--- a/drivers/usb/class/cdc-acm.h
++++ b/drivers/usb/class/cdc-acm.h
+@@ -112,8 +112,7 @@ struct acm {
+ # define ACM_ERROR_DELAY 3
+ unsigned long urbs_in_error_delay; /* these need to be restarted after a delay */
+ struct usb_cdc_line_coding line; /* bits, stop, parity */
+- struct work_struct work; /* work queue entry for various purposes*/
+- struct delayed_work dwork; /* for cool downs needed in error recovery */
++ struct delayed_work dwork; /* work queue entry for various purposes */
+ unsigned int ctrlin; /* input control lines (DCD, DSR, RI, break, overruns) */
+ unsigned int ctrlout; /* output control lines (DTR, RTS) */
+ struct async_icount iocount; /* counters for control line changes */
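
The cdc-acm consolidation works because schedule_delayed_work(&dwork, 0) queues the item immediately, exactly like schedule_work(), so a single delayed_work can serve both the "run now" stall-recovery path and the "run after a cool-down" babble path. A minimal sketch (hypothetical driver struct, not part of the patch):

    struct my_drv {
            struct delayed_work dwork;      /* one item, two uses */
    };

    static void my_softint(struct work_struct *work)
    {
            struct my_drv *d = container_of(work, struct my_drv, dwork.work);
            /* handle both immediate and delayed events here */
    }

    static void my_init(struct my_drv *d)
    {
            INIT_DELAYED_WORK(&d->dwork, my_softint);
    }

    static void my_event(struct my_drv *d, bool urgent)
    {
            if (urgent)
                    schedule_delayed_work(&d->dwork, 0);       /* == schedule_work() */
            else
                    schedule_delayed_work(&d->dwork, HZ / 2);  /* 500 ms cool-down */
    }
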
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index b351962279e4d..1b53dc9237579 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -839,6 +839,22 @@ const struct usb_device_id *usb_device_match_id(struct usb_device *udev,
+ return NULL;
+ }
+
++bool usb_driver_applicable(struct usb_device *udev,
++ struct usb_device_driver *udrv)
++{
++ if (udrv->id_table && udrv->match)
++ return usb_device_match_id(udev, udrv->id_table) != NULL &&
++ udrv->match(udev);
++
++ if (udrv->id_table)
++ return usb_device_match_id(udev, udrv->id_table) != NULL;
++
++ if (udrv->match)
++ return udrv->match(udev);
++
++ return false;
++}
++
+ static int usb_device_match(struct device *dev, struct device_driver *drv)
+ {
+ /* devices and interfaces are handled separately */
+@@ -853,17 +869,14 @@ static int usb_device_match(struct device *dev, struct device_driver *drv)
+ udev = to_usb_device(dev);
+ udrv = to_usb_device_driver(drv);
+
+- if (udrv->id_table)
+- return usb_device_match_id(udev, udrv->id_table) != NULL;
+-
+- if (udrv->match)
+- return udrv->match(udev);
+-
+ /* If the device driver under consideration does not have a
+ * id_table or a match function, then let the driver's probe
+ * function decide.
+ */
+- return 1;
++ if (!udrv->id_table && !udrv->match)
++ return 1;
++
++ return usb_driver_applicable(udev, udrv);
+
+ } else if (is_usb_interface(dev)) {
+ struct usb_interface *intf;
+@@ -941,8 +954,7 @@ static int __usb_bus_reprobe_drivers(struct device *dev, void *data)
+ return 0;
+
+ udev = to_usb_device(dev);
+- if (usb_device_match_id(udev, new_udriver->id_table) == NULL &&
+- (!new_udriver->match || new_udriver->match(udev) == 0))
++ if (!usb_driver_applicable(udev, new_udriver))
+ return 0;
+
+ ret = device_reprobe(dev);
+diff --git a/drivers/usb/core/generic.c b/drivers/usb/core/generic.c
+index 2b2f1ab6e36aa..d87175fc8a98d 100644
+--- a/drivers/usb/core/generic.c
++++ b/drivers/usb/core/generic.c
+@@ -205,9 +205,7 @@ static int __check_usb_generic(struct device_driver *drv, void *data)
+ udrv = to_usb_device_driver(drv);
+ if (udrv == &usb_generic_driver)
+ return 0;
+- if (usb_device_match_id(udev, udrv->id_table) != NULL)
+- return 1;
+- return (udrv->match && udrv->match(udev));
++ return usb_driver_applicable(udev, udrv);
+ }
+
+ static bool usb_generic_driver_match(struct usb_device *udev)
+diff --git a/drivers/usb/core/usb.h b/drivers/usb/core/usb.h
+index 98e7d1ee63dc3..0ebaf8a784f76 100644
+--- a/drivers/usb/core/usb.h
++++ b/drivers/usb/core/usb.h
+@@ -74,6 +74,8 @@ extern int usb_match_device(struct usb_device *dev,
+ const struct usb_device_id *id);
+ extern const struct usb_device_id *usb_device_match_id(struct usb_device *udev,
+ const struct usb_device_id *id);
++extern bool usb_driver_applicable(struct usb_device *udev,
++ struct usb_device_driver *udrv);
+ extern void usb_forced_unbind_intf(struct usb_interface *intf);
+ extern void usb_unbind_and_rebind_marked_interfaces(struct usb_device *udev);
+
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 2f9f4ad562d4e..60b5a69409737 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -121,9 +121,6 @@ static void __dwc3_set_mode(struct work_struct *work)
+ int ret;
+ u32 reg;
+
+- if (dwc->dr_mode != USB_DR_MODE_OTG)
+- return;
+-
+ pm_runtime_get_sync(dwc->dev);
+
+ if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_OTG)
+@@ -209,6 +206,9 @@ void dwc3_set_mode(struct dwc3 *dwc, u32 mode)
+ {
+ unsigned long flags;
+
++ if (dwc->dr_mode != USB_DR_MODE_OTG)
++ return;
++
+ spin_lock_irqsave(&dwc->lock, flags);
+ dwc->desired_dr_role = mode;
+ spin_unlock_irqrestore(&dwc->lock, flags);
+@@ -1564,6 +1564,17 @@ static int dwc3_probe(struct platform_device *pdev)
+
+ err5:
+ dwc3_event_buffers_cleanup(dwc);
++
++ usb_phy_shutdown(dwc->usb2_phy);
++ usb_phy_shutdown(dwc->usb3_phy);
++ phy_exit(dwc->usb2_generic_phy);
++ phy_exit(dwc->usb3_generic_phy);
++
++ usb_phy_set_suspend(dwc->usb2_phy, 1);
++ usb_phy_set_suspend(dwc->usb3_phy, 1);
++ phy_power_off(dwc->usb2_generic_phy);
++ phy_power_off(dwc->usb3_generic_phy);
++
+ dwc3_ulpi_exit(dwc);
+
+ err4:
+@@ -1599,9 +1610,9 @@ static int dwc3_remove(struct platform_device *pdev)
+ dwc3_core_exit(dwc);
+ dwc3_ulpi_exit(dwc);
+
+- pm_runtime_put_sync(&pdev->dev);
+- pm_runtime_allow(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_put_noidle(&pdev->dev);
++ pm_runtime_set_suspended(&pdev->dev);
+
+ dwc3_free_event_buffers(dwc);
+ dwc3_free_scratch_buffers(dwc);
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index ba0f743f35528..6d843e6c29410 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -710,6 +710,7 @@ struct dwc3_ep {
+ #define DWC3_EP_IGNORE_NEXT_NOSTREAM BIT(8)
+ #define DWC3_EP_FORCE_RESTART_STREAM BIT(9)
+ #define DWC3_EP_FIRST_STREAM_PRIMED BIT(10)
++#define DWC3_EP_PENDING_CLEAR_STALL BIT(11)
+
+ /* This last one is specific to EP0 */
+ #define DWC3_EP0_DIR_IN BIT(31)
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index f5a61f57c74f0..242b6210380a4 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -147,7 +147,8 @@ static int dwc3_pci_quirks(struct dwc3_pci *dwc)
+
+ if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
+ if (pdev->device == PCI_DEVICE_ID_INTEL_BXT ||
+- pdev->device == PCI_DEVICE_ID_INTEL_BXT_M) {
++ pdev->device == PCI_DEVICE_ID_INTEL_BXT_M ||
++ pdev->device == PCI_DEVICE_ID_INTEL_EHLLP) {
+ guid_parse(PCI_INTEL_BXT_DSM_GUID, &dwc->guid);
+ dwc->has_dsm_for_pm = true;
+ }
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index 59f2e8c31bd1b..cc816142eb95e 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -524,6 +524,11 @@ static int dwc3_ep0_handle_endpoint(struct dwc3 *dwc,
+ ret = __dwc3_gadget_ep_set_halt(dep, set, true);
+ if (ret)
+ return -EINVAL;
++
++ /* ClearFeature(Halt) may need delayed status */
++ if (!set && (dep->flags & DWC3_EP_END_TRANSFER_PENDING))
++ return USB_GADGET_DELAYED_STATUS;
++
+ break;
+ default:
+ return -EINVAL;
+@@ -942,12 +947,16 @@ static void dwc3_ep0_xfer_complete(struct dwc3 *dwc,
+ static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,
+ struct dwc3_ep *dep, struct dwc3_request *req)
+ {
++ unsigned int trb_length = 0;
+ int ret;
+
+ req->direction = !!dep->number;
+
+ if (req->request.length == 0) {
+- dwc3_ep0_prepare_one_trb(dep, dwc->ep0_trb_addr, 0,
++ if (!req->direction)
++ trb_length = dep->endpoint.maxpacket;
++
++ dwc3_ep0_prepare_one_trb(dep, dwc->bounce_addr, trb_length,
+ DWC3_TRBCTL_CONTROL_DATA, false);
+ ret = dwc3_ep0_start_trans(dep);
+ } else if (!IS_ALIGNED(req->request.length, dep->endpoint.maxpacket)
+@@ -994,9 +1003,12 @@ static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,
+
+ req->trb = &dwc->ep0_trb[dep->trb_enqueue - 1];
+
++ if (!req->direction)
++ trb_length = dep->endpoint.maxpacket;
++
+ /* Now prepare one extra TRB to align transfer size */
+ dwc3_ep0_prepare_one_trb(dep, dwc->bounce_addr,
+- 0, DWC3_TRBCTL_CONTROL_DATA,
++ trb_length, DWC3_TRBCTL_CONTROL_DATA,
+ false);
+ ret = dwc3_ep0_start_trans(dep);
+ } else {
+@@ -1042,6 +1054,17 @@ static void dwc3_ep0_do_control_status(struct dwc3 *dwc,
+ __dwc3_ep0_do_control_status(dwc, dep);
+ }
+
++void dwc3_ep0_send_delayed_status(struct dwc3 *dwc)
++{
++ unsigned int direction = !dwc->ep0_expect_in;
++
++ if (dwc->ep0state != EP0_STATUS_PHASE)
++ return;
++
++ dwc->delayed_status = false;
++ __dwc3_ep0_do_control_status(dwc, dwc->eps[direction]);
++}
++
+ static void dwc3_ep0_end_control_data(struct dwc3 *dwc, struct dwc3_ep *dep)
+ {
+ struct dwc3_gadget_ep_cmd_params params;
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index c2a0f64f8d1e1..e822ba03d3cc3 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1095,6 +1095,8 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
+ struct scatterlist *s;
+ int i;
+ unsigned int length = req->request.length;
++ unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);
++ unsigned int rem = length % maxp;
+ unsigned int remaining = req->request.num_mapped_sgs
+ - req->num_queued_sgs;
+
+@@ -1106,8 +1108,6 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
+ length -= sg_dma_len(s);
+
+ for_each_sg(sg, s, remaining, i) {
+- unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);
+- unsigned int rem = length % maxp;
+ unsigned int trb_length;
+ unsigned chain = true;
+
+@@ -1628,8 +1628,13 @@ static int __dwc3_gadget_ep_queue(struct dwc3_ep *dep, struct dwc3_request *req)
+ if (dep->flags & DWC3_EP_WAIT_TRANSFER_COMPLETE)
+ return 0;
+
+- /* Start the transfer only after the END_TRANSFER is completed */
+- if (dep->flags & DWC3_EP_END_TRANSFER_PENDING) {
++ /*
++ * Start the transfer only after the END_TRANSFER is completed
++ * and endpoint STALL is cleared.
++ */
++ if ((dep->flags & DWC3_EP_END_TRANSFER_PENDING) ||
++ (dep->flags & DWC3_EP_WEDGE) ||
++ (dep->flags & DWC3_EP_STALL)) {
+ dep->flags |= DWC3_EP_DELAY_START;
+ return 0;
+ }
+@@ -1822,6 +1827,18 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
+ return 0;
+ }
+
++ dwc3_stop_active_transfer(dep, true, true);
++
++ list_for_each_entry_safe(req, tmp, &dep->started_list, list)
++ dwc3_gadget_move_cancelled_request(req);
++
++ if (dep->flags & DWC3_EP_END_TRANSFER_PENDING) {
++ dep->flags |= DWC3_EP_PENDING_CLEAR_STALL;
++ return 0;
++ }
++
++ dwc3_gadget_ep_cleanup_cancelled_requests(dep);
++
+ ret = dwc3_send_clear_stall_ep_cmd(dep);
+ if (ret) {
+ dev_err(dwc->dev, "failed to clear STALL on %s\n",
+@@ -1831,18 +1848,11 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
+
+ dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
+
+- dwc3_stop_active_transfer(dep, true, true);
+-
+- list_for_each_entry_safe(req, tmp, &dep->started_list, list)
+- dwc3_gadget_move_cancelled_request(req);
+-
+- list_for_each_entry_safe(req, tmp, &dep->pending_list, list)
+- dwc3_gadget_move_cancelled_request(req);
++ if ((dep->flags & DWC3_EP_DELAY_START) &&
++ !usb_endpoint_xfer_isoc(dep->endpoint.desc))
++ __dwc3_gadget_kick_transfer(dep);
+
+- if (!(dep->flags & DWC3_EP_END_TRANSFER_PENDING)) {
+- dep->flags &= ~DWC3_EP_DELAY_START;
+- dwc3_gadget_ep_cleanup_cancelled_requests(dep);
+- }
++ dep->flags &= ~DWC3_EP_DELAY_START;
+ }
+
+ return ret;
+@@ -2732,6 +2742,11 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+ ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event,
+ status);
+
++ req->request.actual = req->request.length - req->remaining;
++
++ if (!dwc3_gadget_ep_request_completed(req))
++ goto out;
++
+ if (req->needs_extra_trb) {
+ unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);
+
+@@ -2747,11 +2762,6 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+ req->needs_extra_trb = false;
+ }
+
+- req->request.actual = req->request.length - req->remaining;
+-
+- if (!dwc3_gadget_ep_request_completed(req))
+- goto out;
+-
+ dwc3_gadget_giveback(dep, req, status);
+
+ out:
+@@ -2997,6 +3007,26 @@ static void dwc3_endpoint_interrupt(struct dwc3 *dwc,
+ dep->flags &= ~DWC3_EP_END_TRANSFER_PENDING;
+ dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
+ dwc3_gadget_ep_cleanup_cancelled_requests(dep);
++
++ if (dep->flags & DWC3_EP_PENDING_CLEAR_STALL) {
++ struct dwc3 *dwc = dep->dwc;
++
++ dep->flags &= ~DWC3_EP_PENDING_CLEAR_STALL;
++ if (dwc3_send_clear_stall_ep_cmd(dep)) {
++ struct usb_ep *ep0 = &dwc->eps[0]->endpoint;
++
++ dev_err(dwc->dev, "failed to clear STALL on %s\n",
++ dep->name);
++ if (dwc->delayed_status)
++ __dwc3_gadget_ep0_set_halt(ep0, 1);
++ return;
++ }
++
++ dep->flags &= ~(DWC3_EP_STALL | DWC3_EP_WEDGE);
++ if (dwc->delayed_status)
++ dwc3_ep0_send_delayed_status(dwc);
++ }
++
+ if ((dep->flags & DWC3_EP_DELAY_START) &&
+ !usb_endpoint_xfer_isoc(dep->endpoint.desc))
+ __dwc3_gadget_kick_transfer(dep);
+diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h
+index bd85eb7fa9ef8..a7791cb827c49 100644
+--- a/drivers/usb/dwc3/gadget.h
++++ b/drivers/usb/dwc3/gadget.h
+@@ -113,6 +113,7 @@ int dwc3_gadget_ep0_set_halt(struct usb_ep *ep, int value);
+ int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request,
+ gfp_t gfp_flags);
+ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol);
++void dwc3_ep0_send_delayed_status(struct dwc3 *dwc);
+
+ /**
+ * dwc3_gadget_ep_get_transfer_index - Gets transfer index from HW
+diff --git a/drivers/usb/host/ehci-tegra.c b/drivers/usb/host/ehci-tegra.c
+index e077b2ca53c51..869d9c4de5fcd 100644
+--- a/drivers/usb/host/ehci-tegra.c
++++ b/drivers/usb/host/ehci-tegra.c
+@@ -479,8 +479,8 @@ static int tegra_ehci_probe(struct platform_device *pdev)
+ u_phy->otg->host = hcd_to_bus(hcd);
+
+ irq = platform_get_irq(pdev, 0);
+- if (!irq) {
+- err = -ENODEV;
++ if (irq < 0) {
++ err = irq;
+ goto cleanup_phy;
+ }
+
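
The ehci-tegra hunk relies on platform_get_irq() returning the IRQ number on success and a negative errno on failure; it does not return 0 to signal "no IRQ", so the old `if (!irq)` test missed real errors such as -EPROBE_DEFER. The canonical pattern, sketched with a hypothetical handler:

    static int my_probe(struct platform_device *pdev)
    {
            int irq;

            irq = platform_get_irq(pdev, 0);
            if (irq < 0)
                    return irq;     /* propagate -EPROBE_DEFER, -ENXIO, ... */

            /* irq is now a valid Linux IRQ number */
            return devm_request_irq(&pdev->dev, irq, my_isr, 0,
                                    dev_name(&pdev->dev), NULL);
    }
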
+diff --git a/drivers/usb/host/fsl-mph-dr-of.c b/drivers/usb/host/fsl-mph-dr-of.c
+index ae8f60f6e6a5e..44a7e58a26e3d 100644
+--- a/drivers/usb/host/fsl-mph-dr-of.c
++++ b/drivers/usb/host/fsl-mph-dr-of.c
+@@ -94,10 +94,13 @@ static struct platform_device *fsl_usb2_device_register(
+
+ pdev->dev.coherent_dma_mask = ofdev->dev.coherent_dma_mask;
+
+- if (!pdev->dev.dma_mask)
++ if (!pdev->dev.dma_mask) {
+ pdev->dev.dma_mask = &ofdev->dev.coherent_dma_mask;
+- else
+- dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
++ } else {
++ retval = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
++ if (retval)
++ goto error;
++ }
+
+ retval = platform_device_add_data(pdev, pdata, sizeof(*pdata));
+ if (retval)
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 3feaafebfe581..90a1a750c150d 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -22,6 +22,8 @@
+ #define SSIC_PORT_CFG2_OFFSET 0x30
+ #define PROG_DONE (1 << 30)
+ #define SSIC_PORT_UNUSED (1 << 31)
++#define SPARSE_DISABLE_BIT 17
++#define SPARSE_CNTL_ENABLE 0xC12C
+
+ /* Device for a quirk */
+ #define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73
+@@ -160,6 +162,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ (pdev->device == 0x15e0 || pdev->device == 0x15e1))
+ xhci->quirks |= XHCI_SNPS_BROKEN_SUSPEND;
+
++ if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x15e5)
++ xhci->quirks |= XHCI_DISABLE_SPARSE;
++
+ if (pdev->vendor == PCI_VENDOR_ID_AMD)
+ xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+
+@@ -490,6 +495,15 @@ static void xhci_pme_quirk(struct usb_hcd *hcd)
+ readl(reg);
+ }
+
++static void xhci_sparse_control_quirk(struct usb_hcd *hcd)
++{
++ u32 reg;
++
++ reg = readl(hcd->regs + SPARSE_CNTL_ENABLE);
++ reg &= ~BIT(SPARSE_DISABLE_BIT);
++ writel(reg, hcd->regs + SPARSE_CNTL_ENABLE);
++}
++
+ static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup)
+ {
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+@@ -509,6 +523,9 @@ static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup)
+ if (xhci->quirks & XHCI_SSIC_PORT_UNUSED)
+ xhci_ssic_port_unused_quirk(hcd, true);
+
++ if (xhci->quirks & XHCI_DISABLE_SPARSE)
++ xhci_sparse_control_quirk(hcd);
++
+ ret = xhci_suspend(xhci, do_wakeup);
+ if (ret && (xhci->quirks & XHCI_SSIC_PORT_UNUSED))
+ xhci_ssic_port_unused_quirk(hcd, false);
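
xhci_sparse_control_quirk() above is a plain MMIO read-modify-write: read the vendor register, clear one bit, write the rest back untouched. The same idiom as a generic helper (a sketch, not part of the patch):

    /* Clear @bit in the 32-bit MMIO register at @base + @off. */
    static void mmio_clear_bit(void __iomem *base, unsigned long off, u32 bit)
    {
            u32 reg;

            reg = readl(base + off);        /* read the current value */
            reg &= ~BIT(bit);               /* clear just this bit */
            writel(reg, base + off);        /* write back the rest unchanged */
    }

    /* e.g. mmio_clear_bit(hcd->regs, SPARSE_CNTL_ENABLE, SPARSE_DISABLE_BIT); */
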
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index e534f524b7f87..e88f4f9539955 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -982,12 +982,15 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup)
+ xhci->shared_hcd->state != HC_STATE_SUSPENDED)
+ return -EINVAL;
+
+- xhci_dbc_suspend(xhci);
+-
+ /* Clear root port wake on bits if wakeup not allowed. */
+ if (!do_wakeup)
+ xhci_disable_port_wake_on_bits(xhci);
+
++ if (!HCD_HW_ACCESSIBLE(hcd))
++ return 0;
++
++ xhci_dbc_suspend(xhci);
++
+ /* Don't poll the roothubs on bus suspend. */
+ xhci_dbg(xhci, "%s: stopping port polling.\n", __func__);
+ clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index ea1754f185a22..564945eae5022 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1874,6 +1874,7 @@ struct xhci_hcd {
+ #define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34)
+ #define XHCI_SNPS_BROKEN_SUSPEND BIT_ULL(35)
+ #define XHCI_RENESAS_FW_QUIRK BIT_ULL(36)
++#define XHCI_DISABLE_SPARSE BIT_ULL(38)
+
+ unsigned int num_active_eps;
+ unsigned int limit_active_eps;
+diff --git a/drivers/usb/misc/adutux.c b/drivers/usb/misc/adutux.c
+index a7eefe11f31aa..45a3879799352 100644
+--- a/drivers/usb/misc/adutux.c
++++ b/drivers/usb/misc/adutux.c
+@@ -209,6 +209,7 @@ static void adu_interrupt_out_callback(struct urb *urb)
+
+ if (status != 0) {
+ if ((status != -ENOENT) &&
++ (status != -ESHUTDOWN) &&
+ (status != -ECONNRESET)) {
+ dev_dbg(&dev->udev->dev,
+ "%s :nonzero status received: %d\n", __func__,
+diff --git a/drivers/usb/misc/apple-mfi-fastcharge.c b/drivers/usb/misc/apple-mfi-fastcharge.c
+index b403094a6b3a5..579d8c84de42c 100644
+--- a/drivers/usb/misc/apple-mfi-fastcharge.c
++++ b/drivers/usb/misc/apple-mfi-fastcharge.c
+@@ -163,17 +163,23 @@ static const struct power_supply_desc apple_mfi_fc_desc = {
+ .property_is_writeable = apple_mfi_fc_property_is_writeable
+ };
+
++static bool mfi_fc_match(struct usb_device *udev)
++{
++ int idProduct;
++
++ idProduct = le16_to_cpu(udev->descriptor.idProduct);
++ /* See comment above mfi_fc_id_table[] */
++ return (idProduct >= 0x1200 && idProduct <= 0x12ff);
++}
++
+ static int mfi_fc_probe(struct usb_device *udev)
+ {
+ struct power_supply_config battery_cfg = {};
+ struct mfi_device *mfi = NULL;
+- int err, idProduct;
++ int err;
+
+- idProduct = le16_to_cpu(udev->descriptor.idProduct);
+- /* See comment above mfi_fc_id_table[] */
+- if (idProduct < 0x1200 || idProduct > 0x12ff) {
++ if (!mfi_fc_match(udev))
+ return -ENODEV;
+- }
+
+ mfi = kzalloc(sizeof(struct mfi_device), GFP_KERNEL);
+ if (!mfi) {
+@@ -220,6 +226,7 @@ static struct usb_device_driver mfi_fc_driver = {
+ .probe = mfi_fc_probe,
+ .disconnect = mfi_fc_disconnect,
+ .id_table = mfi_fc_id_table,
++ .match = mfi_fc_match,
+ .generic_subclass = 1,
+ };
+
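
With usb_driver_applicable() from the core hunk further up, a USB device driver that supplies both an id_table and a match() callback binds only when both agree; that is why the product-ID range check is factored into mfi_fc_match() and reused from probe. The resulting decision table and driver shape, assuming the patched core (hypothetical names):

    /*
     * usb_device_match() after this patch:
     *
     *   id_table  match()   bound when
     *   --------  -------   -------------------------------
     *   yes       yes       id_table hits AND match() true
     *   yes       no        id_table hits
     *   no        yes       match() true
     *   no        no        left to the probe() function
     */
    static struct usb_device_driver my_udriver = {
            .name             = "my-fastcharge",
            .probe            = my_probe,
            .disconnect       = my_disconnect,
            .id_table         = my_id_table,   /* coarse vendor/class filter */
            .match            = my_match,      /* fine-grained product check */
            .generic_subclass = 1,
    };
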
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index a48e3f90d1961..af1b02f3e35f1 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -2789,6 +2789,9 @@ static void tcpm_reset_port(struct tcpm_port *port)
+
+ static void tcpm_detach(struct tcpm_port *port)
+ {
++ if (tcpm_port_is_disconnected(port))
++ port->hard_reset_count = 0;
++
+ if (!port->attached)
+ return;
+
+@@ -2797,9 +2800,6 @@ static void tcpm_detach(struct tcpm_port *port)
+ port->tcpc->set_bist_data(port->tcpc, false);
+ }
+
+- if (tcpm_port_is_disconnected(port))
+- port->hard_reset_count = 0;
+-
+ tcpm_reset_port(port);
+ }
+
+@@ -3573,7 +3573,7 @@ static void run_state_machine(struct tcpm_port *port)
+ */
+ tcpm_set_pwr_role(port, TYPEC_SOURCE);
+ tcpm_pd_send_control(port, PD_CTRL_PS_RDY);
+- tcpm_set_state(port, SRC_STARTUP, 0);
++ tcpm_set_state(port, SRC_STARTUP, PD_T_SWAP_SRC_START);
+ break;
+
+ case VCONN_SWAP_ACCEPT:
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index ef1c550f82662..4b6195666c589 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -239,7 +239,6 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ u64 paend;
+ struct scatterlist *sg;
+ struct device *dma = mvdev->mdev->device;
+- int ret;
+
+ for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1);
+ map; map = vhost_iotlb_itree_next(map, start, mr->end - 1)) {
+@@ -277,8 +276,8 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ done:
+ mr->log_size = log_entity_size;
+ mr->nsg = nsg;
+- ret = dma_map_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
+- if (!ret)
++ err = dma_map_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
++ if (!err)
+ goto err_map;
+
+ err = create_direct_mr(mvdev, mr);
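
Besides dropping the unused ret, the mlx5 hunk is a reminder that dma_map_sg_attrs() does not follow the negative-errno convention: it returns the number of mapped entries and 0 on failure, and the mapped count may be smaller than the input after IOMMU merging. Checking it, as a minimal sketch:

    static int map_table(struct device *dev, struct scatterlist *sgl, int nsg)
    {
            int nents;

            nents = dma_map_sg_attrs(dev, sgl, nsg, DMA_BIDIRECTIONAL, 0);
            if (!nents)
                    return -ENOMEM;         /* 0 mapped entries == failure */

            /* hardware programming would consume the nents mapped entries */

            dma_unmap_sg_attrs(dev, sgl, nsg, DMA_BIDIRECTIONAL, 0);
            return 0;
    }
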
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index 62d6403271450..995a13244d9c6 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -60,7 +60,8 @@ struct vdpasim_virtqueue {
+
+ static u64 vdpasim_features = (1ULL << VIRTIO_F_ANY_LAYOUT) |
+ (1ULL << VIRTIO_F_VERSION_1) |
+- (1ULL << VIRTIO_F_ACCESS_PLATFORM);
++ (1ULL << VIRTIO_F_ACCESS_PLATFORM) |
++ (1ULL << VIRTIO_NET_F_MAC);
+
+ /* State of each vdpasim device */
+ struct vdpasim {
+@@ -361,7 +362,9 @@ static struct vdpasim *vdpasim_create(void)
+ spin_lock_init(&vdpasim->iommu_lock);
+
+ dev = &vdpasim->vdpa.dev;
+- dev->coherent_dma_mask = DMA_BIT_MASK(64);
++ dev->dma_mask = &dev->coherent_dma_mask;
++ if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
++ goto err_iommu;
+ set_dma_ops(dev, &vdpasim_dma_ops);
+
+ vdpasim->iommu = vhost_iotlb_alloc(2048, 0);
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index 62a9bb0efc558..676175bd9a679 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -428,12 +428,11 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
+ void __user *argp = (void __user *)arg;
+ u64 __user *featurep = argp;
+ u64 features;
+- long r;
++ long r = 0;
+
+ if (cmd == VHOST_SET_BACKEND_FEATURES) {
+- r = copy_from_user(&features, featurep, sizeof(features));
+- if (r)
+- return r;
++ if (copy_from_user(&features, featurep, sizeof(features)))
++ return -EFAULT;
+ if (features & ~VHOST_VDPA_BACKEND_FEATURES)
+ return -EOPNOTSUPP;
+ vhost_set_backend_features(&v->vdev, features);
+@@ -476,7 +475,8 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
+ break;
+ case VHOST_GET_BACKEND_FEATURES:
+ features = VHOST_VDPA_BACKEND_FEATURES;
+- r = copy_to_user(featurep, &features, sizeof(features));
++ if (copy_to_user(featurep, &features, sizeof(features)))
++ r = -EFAULT;
+ break;
+ default:
+ r = vhost_dev_ioctl(&v->vdev, cmd, argp);
+@@ -595,19 +595,21 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+ struct vhost_dev *dev = &v->vdev;
+ struct vhost_iotlb *iotlb = dev->iotlb;
+ struct page **page_list;
+- struct vm_area_struct **vmas;
++ unsigned long list_size = PAGE_SIZE / sizeof(struct page *);
+ unsigned int gup_flags = FOLL_LONGTERM;
+- unsigned long map_pfn, last_pfn = 0;
+- unsigned long npages, lock_limit;
+- unsigned long i, nmap = 0;
++ unsigned long npages, cur_base, map_pfn, last_pfn = 0;
++ unsigned long locked, lock_limit, pinned, i;
+ u64 iova = msg->iova;
+- long pinned;
+ int ret = 0;
+
+ if (vhost_iotlb_itree_first(iotlb, msg->iova,
+ msg->iova + msg->size - 1))
+ return -EEXIST;
+
++ page_list = (struct page **) __get_free_page(GFP_KERNEL);
++ if (!page_list)
++ return -ENOMEM;
++
+ if (msg->perm & VHOST_ACCESS_WO)
+ gup_flags |= FOLL_WRITE;
+
+@@ -615,86 +617,61 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+ if (!npages)
+ return -EINVAL;
+
+- page_list = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+- vmas = kvmalloc_array(npages, sizeof(struct vm_area_struct *),
+- GFP_KERNEL);
+- if (!page_list || !vmas) {
+- ret = -ENOMEM;
+- goto free;
+- }
+-
+ mmap_read_lock(dev->mm);
+
++ locked = atomic64_add_return(npages, &dev->mm->pinned_vm);
+ lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+- if (npages + atomic64_read(&dev->mm->pinned_vm) > lock_limit) {
+- ret = -ENOMEM;
+- goto unlock;
+- }
+
+- pinned = pin_user_pages(msg->uaddr & PAGE_MASK, npages, gup_flags,
+- page_list, vmas);
+- if (npages != pinned) {
+- if (pinned < 0) {
+- ret = pinned;
+- } else {
+- unpin_user_pages(page_list, pinned);
+- ret = -ENOMEM;
+- }
+- goto unlock;
++ if (locked > lock_limit) {
++ ret = -ENOMEM;
++ goto out;
+ }
+
++ cur_base = msg->uaddr & PAGE_MASK;
+ iova &= PAGE_MASK;
+- map_pfn = page_to_pfn(page_list[0]);
+-
+- /* One more iteration to avoid extra vdpa_map() call out of loop. */
+- for (i = 0; i <= npages; i++) {
+- unsigned long this_pfn;
+- u64 csize;
+-
+- /* The last chunk may have no valid PFN next to it */
+- this_pfn = i < npages ? page_to_pfn(page_list[i]) : -1UL;
+-
+- if (last_pfn && (this_pfn == -1UL ||
+- this_pfn != last_pfn + 1)) {
+- /* Pin a contiguous chunk of memory */
+- csize = last_pfn - map_pfn + 1;
+- ret = vhost_vdpa_map(v, iova, csize << PAGE_SHIFT,
+- map_pfn << PAGE_SHIFT,
+- msg->perm);
+- if (ret) {
+- /*
+- * Unpin the rest chunks of memory on the
+- * flight with no corresponding vdpa_map()
+- * calls having been made yet. On the other
+- * hand, vdpa_unmap() in the failure path
+- * is in charge of accounting the number of
+- * pinned pages for its own.
+- * This asymmetrical pattern of accounting
+- * is for efficiency to pin all pages at
+- * once, while there is no other callsite
+- * of vdpa_map() than here above.
+- */
+- unpin_user_pages(&page_list[nmap],
+- npages - nmap);
+- goto out;
++
++ while (npages) {
++ pinned = min_t(unsigned long, npages, list_size);
++ ret = pin_user_pages(cur_base, pinned,
++ gup_flags, page_list, NULL);
++ if (ret != pinned)
++ goto out;
++
++ if (!last_pfn)
++ map_pfn = page_to_pfn(page_list[0]);
++
++ for (i = 0; i < ret; i++) {
++ unsigned long this_pfn = page_to_pfn(page_list[i]);
++ u64 csize;
++
++ if (last_pfn && (this_pfn != last_pfn + 1)) {
++ /* Pin a contiguous chunk of memory */
++ csize = (last_pfn - map_pfn + 1) << PAGE_SHIFT;
++ if (vhost_vdpa_map(v, iova, csize,
++ map_pfn << PAGE_SHIFT,
++ msg->perm))
++ goto out;
++ map_pfn = this_pfn;
++ iova += csize;
+ }
+- atomic64_add(csize, &dev->mm->pinned_vm);
+- nmap += csize;
+- iova += csize << PAGE_SHIFT;
+- map_pfn = this_pfn;
++
++ last_pfn = this_pfn;
+ }
+- last_pfn = this_pfn;
++
++ cur_base += ret << PAGE_SHIFT;
++ npages -= ret;
+ }
+
+- WARN_ON(nmap != npages);
++ /* Map the remaining contiguous chunk */
++ ret = vhost_vdpa_map(v, iova, (last_pfn - map_pfn + 1) << PAGE_SHIFT,
++ map_pfn << PAGE_SHIFT, msg->perm);
+ out:
+- if (ret)
++ if (ret) {
+ vhost_vdpa_unmap(v, msg->iova, msg->size);
+-unlock:
++ atomic64_sub(npages, &dev->mm->pinned_vm);
++ }
+ mmap_read_unlock(dev->mm);
+-free:
+- kvfree(vmas);
+- kvfree(page_list);
++ free_page((unsigned long)page_list);
+ return ret;
+ }
+
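
The rewritten IOTLB update replaces a single huge pin_user_pages() call, which needed npages-sized page and vma arrays, with a loop that pins at most one page's worth of struct page pointers per iteration and coalesces physically contiguous runs into single map calls. The core shape of that loop, reduced to essentials (hypothetical map() callback; error unwinding omitted for brevity):

    #define BATCH   (PAGE_SIZE / sizeof(struct page *))

    static int pin_and_map(unsigned long uaddr, unsigned long npages,
                           struct page **page_list,
                           int (*map)(u64 pfn, u64 count))
    {
            unsigned long base = uaddr & PAGE_MASK;
            unsigned long map_pfn = 0, last_pfn = 0;
            long got;
            int i;

            while (npages) {
                    got = pin_user_pages(base,
                                         min_t(unsigned long, npages, BATCH),
                                         FOLL_LONGTERM, page_list, NULL);
                    if (got <= 0)
                            return got ? got : -EFAULT;

                    for (i = 0; i < got; i++) {
                            unsigned long pfn = page_to_pfn(page_list[i]);

                            if (!last_pfn) {
                                    map_pfn = pfn;  /* first page overall */
                            } else if (pfn != last_pfn + 1) {
                                    /* contiguity broke: flush finished run */
                                    map(map_pfn, last_pfn - map_pfn + 1);
                                    map_pfn = pfn;
                            }
                            last_pfn = pfn;
                    }
                    base += got << PAGE_SHIFT;
                    npages -= got;
            }
            /* flush the final run */
            return map(map_pfn, last_pfn - map_pfn + 1);
    }
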
+diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
+index e059a9a47cdf1..8bd8b403f0872 100644
+--- a/drivers/vhost/vringh.c
++++ b/drivers/vhost/vringh.c
+@@ -284,13 +284,14 @@ __vringh_iov(struct vringh *vrh, u16 i,
+ desc_max = vrh->vring.num;
+ up_next = -1;
+
++ /* You must want something! */
++ if (WARN_ON(!riov && !wiov))
++ return -EINVAL;
++
+ if (riov)
+ riov->i = riov->used = 0;
+- else if (wiov)
++ if (wiov)
+ wiov->i = wiov->used = 0;
+- else
+- /* You must want something! */
+- BUG();
+
+ for (;;) {
+ void *addr;
+diff --git a/drivers/video/fbdev/pvr2fb.c b/drivers/video/fbdev/pvr2fb.c
+index 2d9f69b93392a..f4add36cb5f4d 100644
+--- a/drivers/video/fbdev/pvr2fb.c
++++ b/drivers/video/fbdev/pvr2fb.c
+@@ -1028,6 +1028,8 @@ static int __init pvr2fb_setup(char *options)
+ if (!options || !*options)
+ return 0;
+
++ cable_arg[0] = output_arg[0] = 0;
++
+ while ((this_opt = strsep(&options, ","))) {
+ if (!*this_opt)
+ continue;
+diff --git a/drivers/w1/masters/mxc_w1.c b/drivers/w1/masters/mxc_w1.c
+index 1ca880e014769..090cbbf9e1e22 100644
+--- a/drivers/w1/masters/mxc_w1.c
++++ b/drivers/w1/masters/mxc_w1.c
+@@ -7,7 +7,7 @@
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/io.h>
+-#include <linux/jiffies.h>
++#include <linux/ktime.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/platform_device.h>
+@@ -40,12 +40,12 @@ struct mxc_w1_device {
+ static u8 mxc_w1_ds2_reset_bus(void *data)
+ {
+ struct mxc_w1_device *dev = data;
+- unsigned long timeout;
++ ktime_t timeout;
+
+ writeb(MXC_W1_CONTROL_RPP, dev->regs + MXC_W1_CONTROL);
+
+ /* Wait for reset sequence 511+512us, use 1500us for sure */
+- timeout = jiffies + usecs_to_jiffies(1500);
++ timeout = ktime_add_us(ktime_get(), 1500);
+
+ udelay(511 + 512);
+
+@@ -55,7 +55,7 @@ static u8 mxc_w1_ds2_reset_bus(void *data)
+ /* PST bit is valid after the RPP bit is self-cleared */
+ if (!(ctrl & MXC_W1_CONTROL_RPP))
+ return !(ctrl & MXC_W1_CONTROL_PST);
+- } while (time_is_after_jiffies(timeout));
++ } while (ktime_before(ktime_get(), timeout));
+
+ return 1;
+ }
+@@ -68,12 +68,12 @@ static u8 mxc_w1_ds2_reset_bus(void *data)
+ static u8 mxc_w1_ds2_touch_bit(void *data, u8 bit)
+ {
+ struct mxc_w1_device *dev = data;
+- unsigned long timeout;
++ ktime_t timeout;
+
+ writeb(MXC_W1_CONTROL_WR(bit), dev->regs + MXC_W1_CONTROL);
+
+ /* Wait for read/write bit (60us, Max 120us), use 200us for sure */
+- timeout = jiffies + usecs_to_jiffies(200);
++ timeout = ktime_add_us(ktime_get(), 200);
+
+ udelay(60);
+
+@@ -83,7 +83,7 @@ static u8 mxc_w1_ds2_touch_bit(void *data, u8 bit)
+ /* RDST bit is valid after the WR1/RD bit is self-cleared */
+ if (!(ctrl & MXC_W1_CONTROL_WR(bit)))
+ return !!(ctrl & MXC_W1_CONTROL_RDST);
+- } while (time_is_after_jiffies(timeout));
++ } while (ktime_before(ktime_get(), timeout));
+
+ return 0;
+ }
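
The w1 conversion matters because usecs_to_jiffies(200) rounds up to at least one jiffy, 10 ms at HZ=100, so the old "200 us" timeout could silently stretch by two orders of magnitude; ktime_t keeps nanosecond resolution. The resulting polling idiom (hypothetical helper, not part of the patch):

    /* Poll @done for up to @us microseconds. */
    static int poll_us(struct my_dev *dev, bool (*done)(struct my_dev *),
                       unsigned int us)
    {
            ktime_t timeout = ktime_add_us(ktime_get(), us);  /* really @us */

            do {
                    if (done(dev))
                            return 0;
                    udelay(5);
            } while (ktime_before(ktime_get(), timeout));

            return -ETIMEDOUT;
    }
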
+diff --git a/drivers/watchdog/rdc321x_wdt.c b/drivers/watchdog/rdc321x_wdt.c
+index 57187efeb86f1..f0c94ea51c3e4 100644
+--- a/drivers/watchdog/rdc321x_wdt.c
++++ b/drivers/watchdog/rdc321x_wdt.c
+@@ -231,6 +231,8 @@ static int rdc321x_wdt_probe(struct platform_device *pdev)
+
+ rdc321x_wdt_device.sb_pdev = pdata->sb_pdev;
+ rdc321x_wdt_device.base_reg = r->start;
++ rdc321x_wdt_device.queue = 0;
++ rdc321x_wdt_device.default_ticks = ticks;
+
+ err = misc_register(&rdc321x_wdt_misc);
+ if (err < 0) {
+@@ -245,14 +247,11 @@ static int rdc321x_wdt_probe(struct platform_device *pdev)
+ rdc321x_wdt_device.base_reg, RDC_WDT_RST);
+
+ init_completion(&rdc321x_wdt_device.stop);
+- rdc321x_wdt_device.queue = 0;
+
+ clear_bit(0, &rdc321x_wdt_device.inuse);
+
+ timer_setup(&rdc321x_wdt_device.timer, rdc321x_wdt_trigger, 0);
+
+- rdc321x_wdt_device.default_ticks = ticks;
+-
+ dev_info(&pdev->dev, "watchdog init success\n");
+
+ return 0;
+diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
+index 64df919a2111b..fe5ad0e89cd8a 100644
+--- a/drivers/xen/events/events_2l.c
++++ b/drivers/xen/events/events_2l.c
+@@ -91,6 +91,8 @@ static void evtchn_2l_unmask(evtchn_port_t port)
+
+ BUG_ON(!irqs_disabled());
+
++ smp_wmb(); /* All writes before unmask must be visible. */
++
+ if (unlikely((cpu != cpu_from_evtchn(port))))
+ do_hypercall = 1;
+ else {
+@@ -159,7 +161,7 @@ static inline xen_ulong_t active_evtchns(unsigned int cpu,
+ * a bitset of words which contain pending event bits. The second
+ * level is a bitset of pending events themselves.
+ */
+-static void evtchn_2l_handle_events(unsigned cpu)
++static void evtchn_2l_handle_events(unsigned cpu, struct evtchn_loop_ctrl *ctrl)
+ {
+ int irq;
+ xen_ulong_t pending_words;
+@@ -240,10 +242,7 @@ static void evtchn_2l_handle_events(unsigned cpu)
+
+ /* Process port. */
+ port = (word_idx * BITS_PER_EVTCHN_WORD) + bit_idx;
+- irq = get_evtchn_to_irq(port);
+-
+- if (irq != -1)
+- generic_handle_irq(irq);
++ handle_irq_for_port(port, ctrl);
+
+ bit_idx = (bit_idx + 1) % BITS_PER_EVTCHN_WORD;
+
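
The smp_wmb() added to evtchn_2l_unmask() orders every store issued before the unmask ahead of the store that performs it, so the remote end can never observe the unmasked channel without the data it protects. The classic barrier pairing this relies on, sketched with hypothetical variables:

    /* Producer: publish the data, then the flag. */
    WRITE_ONCE(shared_data, value);
    smp_wmb();                      /* data visible before the flag */
    WRITE_ONCE(ready_flag, 1);

    /* Consumer: check the flag, then read the data. */
    if (READ_ONCE(ready_flag)) {
            smp_rmb();              /* pairs with the smp_wmb() above */
            use(READ_ONCE(shared_data));
    }
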
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index 6f02c18fa65c8..cc317739e7860 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -33,6 +33,10 @@
+ #include <linux/slab.h>
+ #include <linux/irqnr.h>
+ #include <linux/pci.h>
++#include <linux/spinlock.h>
++#include <linux/cpuhotplug.h>
++#include <linux/atomic.h>
++#include <linux/ktime.h>
+
+ #ifdef CONFIG_X86
+ #include <asm/desc.h>
+@@ -63,6 +67,15 @@
+
+ #include "events_internal.h"
+
++#undef MODULE_PARAM_PREFIX
++#define MODULE_PARAM_PREFIX "xen."
++
++static uint __read_mostly event_loop_timeout = 2;
++module_param(event_loop_timeout, uint, 0644);
++
++static uint __read_mostly event_eoi_delay = 10;
++module_param(event_eoi_delay, uint, 0644);
++
+ const struct evtchn_ops *evtchn_ops;
+
+ /*
+@@ -71,6 +84,24 @@ const struct evtchn_ops *evtchn_ops;
+ */
+ static DEFINE_MUTEX(irq_mapping_update_lock);
+
++/*
++ * Lock protecting event handling loop against removing event channels.
++ * Adding of event channels is no issue as the associated IRQ becomes active
++ * only after everything is setup (before request_[threaded_]irq() the handler
++ * can't be entered for an event, as the event channel will be unmasked only
++ * then).
++ */
++static DEFINE_RWLOCK(evtchn_rwlock);
++
++/*
++ * Lock hierarchy:
++ *
++ * irq_mapping_update_lock
++ * evtchn_rwlock
++ * IRQ-desc lock
++ * percpu eoi_list_lock
++ */
++
+ static LIST_HEAD(xen_irq_list_head);
+
+ /* IRQ <-> VIRQ mapping. */
+@@ -95,17 +126,20 @@ static bool (*pirq_needs_eoi)(unsigned irq);
+ static struct irq_info *legacy_info_ptrs[NR_IRQS_LEGACY];
+
+ static struct irq_chip xen_dynamic_chip;
++static struct irq_chip xen_lateeoi_chip;
+ static struct irq_chip xen_percpu_chip;
+ static struct irq_chip xen_pirq_chip;
+ static void enable_dynirq(struct irq_data *data);
+ static void disable_dynirq(struct irq_data *data);
+
++static DEFINE_PER_CPU(unsigned int, irq_epoch);
++
+ static void clear_evtchn_to_irq_row(unsigned row)
+ {
+ unsigned col;
+
+ for (col = 0; col < EVTCHN_PER_ROW; col++)
+- evtchn_to_irq[row][col] = -1;
++ WRITE_ONCE(evtchn_to_irq[row][col], -1);
+ }
+
+ static void clear_evtchn_to_irq_all(void)
+@@ -142,7 +176,7 @@ static int set_evtchn_to_irq(evtchn_port_t evtchn, unsigned int irq)
+ clear_evtchn_to_irq_row(row);
+ }
+
+- evtchn_to_irq[row][col] = irq;
++ WRITE_ONCE(evtchn_to_irq[row][col], irq);
+ return 0;
+ }
+
+@@ -152,7 +186,7 @@ int get_evtchn_to_irq(evtchn_port_t evtchn)
+ return -1;
+ if (evtchn_to_irq[EVTCHN_ROW(evtchn)] == NULL)
+ return -1;
+- return evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)];
++ return READ_ONCE(evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)]);
+ }
+
+ /* Get info for IRQ */
+@@ -261,10 +295,14 @@ static void xen_irq_info_cleanup(struct irq_info *info)
+ */
+ evtchn_port_t evtchn_from_irq(unsigned irq)
+ {
+- if (WARN(irq >= nr_irqs, "Invalid irq %d!\n", irq))
++ const struct irq_info *info = NULL;
++
++ if (likely(irq < nr_irqs))
++ info = info_for_irq(irq);
++ if (!info)
+ return 0;
+
+- return info_for_irq(irq)->evtchn;
++ return info->evtchn;
+ }
+
+ unsigned int irq_from_evtchn(evtchn_port_t evtchn)
+@@ -375,9 +413,157 @@ void notify_remote_via_irq(int irq)
+ }
+ EXPORT_SYMBOL_GPL(notify_remote_via_irq);
+
++struct lateeoi_work {
++ struct delayed_work delayed;
++ spinlock_t eoi_list_lock;
++ struct list_head eoi_list;
++};
++
++static DEFINE_PER_CPU(struct lateeoi_work, lateeoi);
++
++static void lateeoi_list_del(struct irq_info *info)
++{
++ struct lateeoi_work *eoi = &per_cpu(lateeoi, info->eoi_cpu);
++ unsigned long flags;
++
++ spin_lock_irqsave(&eoi->eoi_list_lock, flags);
++ list_del_init(&info->eoi_list);
++ spin_unlock_irqrestore(&eoi->eoi_list_lock, flags);
++}
++
++static void lateeoi_list_add(struct irq_info *info)
++{
++ struct lateeoi_work *eoi = &per_cpu(lateeoi, info->eoi_cpu);
++ struct irq_info *elem;
++ u64 now = get_jiffies_64();
++ unsigned long delay;
++ unsigned long flags;
++
++ if (now < info->eoi_time)
++ delay = info->eoi_time - now;
++ else
++ delay = 1;
++
++ spin_lock_irqsave(&eoi->eoi_list_lock, flags);
++
++ if (list_empty(&eoi->eoi_list)) {
++ list_add(&info->eoi_list, &eoi->eoi_list);
++ mod_delayed_work_on(info->eoi_cpu, system_wq,
++ &eoi->delayed, delay);
++ } else {
++ list_for_each_entry_reverse(elem, &eoi->eoi_list, eoi_list) {
++ if (elem->eoi_time <= info->eoi_time)
++ break;
++ }
++ list_add(&info->eoi_list, &elem->eoi_list);
++ }
++
++ spin_unlock_irqrestore(&eoi->eoi_list_lock, flags);
++}
++
++static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious)
++{
++ evtchn_port_t evtchn;
++ unsigned int cpu;
++ unsigned int delay = 0;
++
++ evtchn = info->evtchn;
++ if (!VALID_EVTCHN(evtchn) || !list_empty(&info->eoi_list))
++ return;
++
++ if (spurious) {
++ if ((1 << info->spurious_cnt) < (HZ << 2))
++ info->spurious_cnt++;
++ if (info->spurious_cnt > 1) {
++ delay = 1 << (info->spurious_cnt - 2);
++ if (delay > HZ)
++ delay = HZ;
++ if (!info->eoi_time)
++ info->eoi_cpu = smp_processor_id();
++ info->eoi_time = get_jiffies_64() + delay;
++ }
++ } else {
++ info->spurious_cnt = 0;
++ }
++
++ cpu = info->eoi_cpu;
++ if (info->eoi_time &&
++ (info->irq_epoch == per_cpu(irq_epoch, cpu) || delay)) {
++ lateeoi_list_add(info);
++ return;
++ }
++
++ info->eoi_time = 0;
++ unmask_evtchn(evtchn);
++}
++
++static void xen_irq_lateeoi_worker(struct work_struct *work)
++{
++ struct lateeoi_work *eoi;
++ struct irq_info *info;
++ u64 now = get_jiffies_64();
++ unsigned long flags;
++
++ eoi = container_of(to_delayed_work(work), struct lateeoi_work, delayed);
++
++ read_lock_irqsave(&evtchn_rwlock, flags);
++
++ while (true) {
++ spin_lock(&eoi->eoi_list_lock);
++
++ info = list_first_entry_or_null(&eoi->eoi_list, struct irq_info,
++ eoi_list);
++
++ if (info == NULL || now < info->eoi_time) {
++ spin_unlock(&eoi->eoi_list_lock);
++ break;
++ }
++
++ list_del_init(&info->eoi_list);
++
++ spin_unlock(&eoi->eoi_list_lock);
++
++ info->eoi_time = 0;
++
++ xen_irq_lateeoi_locked(info, false);
++ }
++
++ if (info)
++ mod_delayed_work_on(info->eoi_cpu, system_wq,
++ &eoi->delayed, info->eoi_time - now);
++
++ read_unlock_irqrestore(&evtchn_rwlock, flags);
++}
++
++static void xen_cpu_init_eoi(unsigned int cpu)
++{
++ struct lateeoi_work *eoi = &per_cpu(lateeoi, cpu);
++
++ INIT_DELAYED_WORK(&eoi->delayed, xen_irq_lateeoi_worker);
++ spin_lock_init(&eoi->eoi_list_lock);
++ INIT_LIST_HEAD(&eoi->eoi_list);
++}
++
++void xen_irq_lateeoi(unsigned int irq, unsigned int eoi_flags)
++{
++ struct irq_info *info;
++ unsigned long flags;
++
++ read_lock_irqsave(&evtchn_rwlock, flags);
++
++ info = info_for_irq(irq);
++
++ if (info)
++ xen_irq_lateeoi_locked(info, eoi_flags & XEN_EOI_FLAG_SPURIOUS);
++
++ read_unlock_irqrestore(&evtchn_rwlock, flags);
++}
++EXPORT_SYMBOL_GPL(xen_irq_lateeoi);
++
+ static void xen_irq_init(unsigned irq)
+ {
+ struct irq_info *info;
++
+ #ifdef CONFIG_SMP
+ /* By default all event channels notify CPU#0. */
+ cpumask_copy(irq_get_affinity_mask(irq), cpumask_of(0));
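Concretely, the spurious-event path in xen_irq_lateeoi_locked() above implements a geometric backoff: the first spurious event is EOIed immediately, the second has its EOI delayed by one jiffy, and each further consecutive one doubles the delay until it saturates at HZ. A user-space sketch of just that arithmetic, with an arbitrary HZ value for illustration:

    /* Mirrors the spurious_cnt backoff above; plain user-space code,
     * HZ chosen arbitrarily. */
    #include <stdio.h>

    #define HZ 250

    int main(void)
    {
            unsigned int spurious_cnt = 0, delay, i;

            for (i = 1; i <= 12; i++) {     /* 12 consecutive spurious events */
                    if ((1 << spurious_cnt) < (HZ << 2))
                            spurious_cnt++;
                    delay = 0;
                    if (spurious_cnt > 1) {
                            delay = 1 << (spurious_cnt - 2);
                            if (delay > HZ)
                                    delay = HZ;
                    }
                    printf("spurious event %2u -> EOI delayed %u jiffies\n",
                           i, delay);
            }
            return 0;
    }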
+@@ -392,6 +578,7 @@ static void xen_irq_init(unsigned irq)
+
+ set_info_for_irq(irq, info);
+
++ INIT_LIST_HEAD(&info->eoi_list);
+ list_add_tail(&info->list, &xen_irq_list_head);
+ }
+
+@@ -440,16 +627,24 @@ static int __must_check xen_allocate_irq_gsi(unsigned gsi)
+ static void xen_free_irq(unsigned irq)
+ {
+ struct irq_info *info = info_for_irq(irq);
++ unsigned long flags;
+
+ if (WARN_ON(!info))
+ return;
+
++ write_lock_irqsave(&evtchn_rwlock, flags);
++
++ if (!list_empty(&info->eoi_list))
++ lateeoi_list_del(info);
++
+ list_del(&info->list);
+
+ set_info_for_irq(irq, NULL);
+
+ WARN_ON(info->refcnt > 0);
+
++ write_unlock_irqrestore(&evtchn_rwlock, flags);
++
+ kfree(info);
+
+ /* Legacy IRQ descriptors are managed by the arch. */
+@@ -841,7 +1036,7 @@ int xen_pirq_from_irq(unsigned irq)
+ }
+ EXPORT_SYMBOL_GPL(xen_pirq_from_irq);
+
+-int bind_evtchn_to_irq(evtchn_port_t evtchn)
++static int bind_evtchn_to_irq_chip(evtchn_port_t evtchn, struct irq_chip *chip)
+ {
+ int irq;
+ int ret;
+@@ -858,7 +1053,7 @@ int bind_evtchn_to_irq(evtchn_port_t evtchn)
+ if (irq < 0)
+ goto out;
+
+- irq_set_chip_and_handler_name(irq, &xen_dynamic_chip,
++ irq_set_chip_and_handler_name(irq, chip,
+ handle_edge_irq, "event");
+
+ ret = xen_irq_info_evtchn_setup(irq, evtchn);
+@@ -879,8 +1074,19 @@ out:
+
+ return irq;
+ }
++
++int bind_evtchn_to_irq(evtchn_port_t evtchn)
++{
++ return bind_evtchn_to_irq_chip(evtchn, &xen_dynamic_chip);
++}
+ EXPORT_SYMBOL_GPL(bind_evtchn_to_irq);
+
++int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn)
++{
++ return bind_evtchn_to_irq_chip(evtchn, &xen_lateeoi_chip);
++}
++EXPORT_SYMBOL_GPL(bind_evtchn_to_irq_lateeoi);
++
+ static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu)
+ {
+ struct evtchn_bind_ipi bind_ipi;
+@@ -922,8 +1128,9 @@ static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu)
+ return irq;
+ }
+
+-int bind_interdomain_evtchn_to_irq(unsigned int remote_domain,
+- evtchn_port_t remote_port)
++static int bind_interdomain_evtchn_to_irq_chip(unsigned int remote_domain,
++ evtchn_port_t remote_port,
++ struct irq_chip *chip)
+ {
+ struct evtchn_bind_interdomain bind_interdomain;
+ int err;
+@@ -934,10 +1141,26 @@ int bind_interdomain_evtchn_to_irq(unsigned int remote_domain,
+ err = HYPERVISOR_event_channel_op(EVTCHNOP_bind_interdomain,
+ &bind_interdomain);
+
+- return err ? : bind_evtchn_to_irq(bind_interdomain.local_port);
++ return err ? : bind_evtchn_to_irq_chip(bind_interdomain.local_port,
++ chip);
++}
++
++int bind_interdomain_evtchn_to_irq(unsigned int remote_domain,
++ evtchn_port_t remote_port)
++{
++ return bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
++ &xen_dynamic_chip);
+ }
+ EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irq);
+
++int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
++ evtchn_port_t remote_port)
++{
++ return bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
++ &xen_lateeoi_chip);
++}
++EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irq_lateeoi);
++
+ static int find_virq(unsigned int virq, unsigned int cpu, evtchn_port_t *evtchn)
+ {
+ struct evtchn_status status;
+@@ -1034,14 +1257,15 @@ static void unbind_from_irq(unsigned int irq)
+ mutex_unlock(&irq_mapping_update_lock);
+ }
+
+-int bind_evtchn_to_irqhandler(evtchn_port_t evtchn,
+- irq_handler_t handler,
+- unsigned long irqflags,
+- const char *devname, void *dev_id)
++static int bind_evtchn_to_irqhandler_chip(evtchn_port_t evtchn,
++ irq_handler_t handler,
++ unsigned long irqflags,
++ const char *devname, void *dev_id,
++ struct irq_chip *chip)
+ {
+ int irq, retval;
+
+- irq = bind_evtchn_to_irq(evtchn);
++ irq = bind_evtchn_to_irq_chip(evtchn, chip);
+ if (irq < 0)
+ return irq;
+ retval = request_irq(irq, handler, irqflags, devname, dev_id);
+@@ -1052,18 +1276,38 @@ int bind_evtchn_to_irqhandler(evtchn_port_t evtchn,
+
+ return irq;
+ }
++
++int bind_evtchn_to_irqhandler(evtchn_port_t evtchn,
++ irq_handler_t handler,
++ unsigned long irqflags,
++ const char *devname, void *dev_id)
++{
++ return bind_evtchn_to_irqhandler_chip(evtchn, handler, irqflags,
++ devname, dev_id,
++ &xen_dynamic_chip);
++}
+ EXPORT_SYMBOL_GPL(bind_evtchn_to_irqhandler);
+
+-int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
+- evtchn_port_t remote_port,
+- irq_handler_t handler,
+- unsigned long irqflags,
+- const char *devname,
+- void *dev_id)
++int bind_evtchn_to_irqhandler_lateeoi(evtchn_port_t evtchn,
++ irq_handler_t handler,
++ unsigned long irqflags,
++ const char *devname, void *dev_id)
++{
++ return bind_evtchn_to_irqhandler_chip(evtchn, handler, irqflags,
++ devname, dev_id,
++ &xen_lateeoi_chip);
++}
++EXPORT_SYMBOL_GPL(bind_evtchn_to_irqhandler_lateeoi);
++
++static int bind_interdomain_evtchn_to_irqhandler_chip(
++ unsigned int remote_domain, evtchn_port_t remote_port,
++ irq_handler_t handler, unsigned long irqflags,
++ const char *devname, void *dev_id, struct irq_chip *chip)
+ {
+ int irq, retval;
+
+- irq = bind_interdomain_evtchn_to_irq(remote_domain, remote_port);
++ irq = bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
++ chip);
+ if (irq < 0)
+ return irq;
+
+@@ -1075,8 +1319,33 @@ int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
+
+ return irq;
+ }
++
++int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
++ evtchn_port_t remote_port,
++ irq_handler_t handler,
++ unsigned long irqflags,
++ const char *devname,
++ void *dev_id)
++{
++ return bind_interdomain_evtchn_to_irqhandler_chip(remote_domain,
++ remote_port, handler, irqflags, devname,
++ dev_id, &xen_dynamic_chip);
++}
+ EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irqhandler);
+
++int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
++ evtchn_port_t remote_port,
++ irq_handler_t handler,
++ unsigned long irqflags,
++ const char *devname,
++ void *dev_id)
++{
++ return bind_interdomain_evtchn_to_irqhandler_chip(remote_domain,
++ remote_port, handler, irqflags, devname,
++ dev_id, &xen_lateeoi_chip);
++}
++EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irqhandler_lateeoi);
++
+ int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu,
+ irq_handler_t handler,
+ unsigned long irqflags, const char *devname, void *dev_id)
+@@ -1189,7 +1458,7 @@ int evtchn_get(evtchn_port_t evtchn)
+ goto done;
+
+ err = -EINVAL;
+- if (info->refcnt <= 0)
++ if (info->refcnt <= 0 || info->refcnt == SHRT_MAX)
+ goto done;
+
+ info->refcnt++;
+@@ -1228,21 +1497,81 @@ void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector)
+ notify_remote_via_irq(irq);
+ }
+
++struct evtchn_loop_ctrl {
++ ktime_t timeout;
++ unsigned count;
++ bool defer_eoi;
++};
++
++void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
++{
++ int irq;
++ struct irq_info *info;
++
++ irq = get_evtchn_to_irq(port);
++ if (irq == -1)
++ return;
++
++ /*
++ * Check for timeout every 256 events.
++ * We are setting the timeout value only after the first 256
++ * events in order to not hurt the common case of few loop
++ * iterations. The 256 is basically an arbitrary value.
++ *
++ * In case we are hitting the timeout we need to defer all further
++	 * In case we are hitting the timeout we need to defer all further
++	 * EOIs in order to ensure we leave the event handling loop sooner
++	 * rather than later.
++ if (!ctrl->defer_eoi && !(++ctrl->count & 0xff)) {
++ ktime_t kt = ktime_get();
++
++ if (!ctrl->timeout) {
++ kt = ktime_add_ms(kt,
++ jiffies_to_msecs(event_loop_timeout));
++ ctrl->timeout = kt;
++ } else if (kt > ctrl->timeout) {
++ ctrl->defer_eoi = true;
++ }
++ }
++
++ info = info_for_irq(irq);
++
++ if (ctrl->defer_eoi) {
++ info->eoi_cpu = smp_processor_id();
++ info->irq_epoch = __this_cpu_read(irq_epoch);
++ info->eoi_time = get_jiffies_64() + event_eoi_delay;
++ }
++
++ generic_handle_irq(irq);
++}
++
+ static void __xen_evtchn_do_upcall(void)
+ {
+ struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
+ int cpu = smp_processor_id();
++ struct evtchn_loop_ctrl ctrl = { 0 };
++
++ read_lock(&evtchn_rwlock);
+
+ do {
+ vcpu_info->evtchn_upcall_pending = 0;
+
+- xen_evtchn_handle_events(cpu);
++ xen_evtchn_handle_events(cpu, &ctrl);
+
+ BUG_ON(!irqs_disabled());
+
+ virt_rmb(); /* Hypervisor can set upcall pending. */
+
+ } while (vcpu_info->evtchn_upcall_pending);
++
++ read_unlock(&evtchn_rwlock);
++
++ /*
++ * Increment irq_epoch only now to defer EOIs only for
++ * xen_irq_lateeoi() invocations occurring from inside the loop
++ * above.
++ */
++ __this_cpu_inc(irq_epoch);
+ }
+
+ void xen_evtchn_do_upcall(struct pt_regs *regs)
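The !(++ctrl->count & 0xff) test in handle_irq_for_port() above is the usual trick of amortizing a clock read over 256 loop iterations, with the deadline armed lazily at the first check so that short bursts never touch the clock at all. A minimal user-space rendering of the pattern; the names and the use of CLOCK_MONOTONIC are illustrative, not the kernel's:

    /* Amortized deadline check: read the clock only every 256th event. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    struct loop_ctrl {
            uint64_t deadline_ns;   /* 0 until armed after the first 256 events */
            unsigned int count;
            bool defer;
    };

    static uint64_t now_ns(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    static void account_event(struct loop_ctrl *ctrl, uint64_t budget_ns)
    {
            if (ctrl->defer || (++ctrl->count & 0xff))
                    return;                 /* common case: no clock read */

            if (!ctrl->deadline_ns)
                    ctrl->deadline_ns = now_ns() + budget_ns;  /* arm lazily */
            else if (now_ns() > ctrl->deadline_ns)
                    ctrl->defer = true;     /* defer all further EOIs */
    }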
+@@ -1606,6 +1935,21 @@ static struct irq_chip xen_dynamic_chip __read_mostly = {
+ .irq_retrigger = retrigger_dynirq,
+ };
+
++static struct irq_chip xen_lateeoi_chip __read_mostly = {
++ /* The chip name needs to contain "xen-dyn" for irqbalance to work. */
++ .name = "xen-dyn-lateeoi",
++
++ .irq_disable = disable_dynirq,
++ .irq_mask = disable_dynirq,
++ .irq_unmask = enable_dynirq,
++
++ .irq_ack = mask_ack_dynirq,
++ .irq_mask_ack = mask_ack_dynirq,
++
++ .irq_set_affinity = set_affinity_irq,
++ .irq_retrigger = retrigger_dynirq,
++};
++
+ static struct irq_chip xen_pirq_chip __read_mostly = {
+ .name = "xen-pirq",
+
+@@ -1676,12 +2020,31 @@ void xen_setup_callback_vector(void) {}
+ static inline void xen_alloc_callback_vector(void) {}
+ #endif
+
+-#undef MODULE_PARAM_PREFIX
+-#define MODULE_PARAM_PREFIX "xen."
+-
+ static bool fifo_events = true;
+ module_param(fifo_events, bool, 0);
+
++static int xen_evtchn_cpu_prepare(unsigned int cpu)
++{
++ int ret = 0;
++
++ xen_cpu_init_eoi(cpu);
++
++ if (evtchn_ops->percpu_init)
++ ret = evtchn_ops->percpu_init(cpu);
++
++ return ret;
++}
++
++static int xen_evtchn_cpu_dead(unsigned int cpu)
++{
++ int ret = 0;
++
++ if (evtchn_ops->percpu_deinit)
++ ret = evtchn_ops->percpu_deinit(cpu);
++
++ return ret;
++}
++
+ void __init xen_init_IRQ(void)
+ {
+ int ret = -EINVAL;
+@@ -1692,6 +2055,12 @@ void __init xen_init_IRQ(void)
+ if (ret < 0)
+ xen_evtchn_2l_init();
+
++ xen_cpu_init_eoi(smp_processor_id());
++
++ cpuhp_setup_state_nocalls(CPUHP_XEN_EVTCHN_PREPARE,
++ "xen/evtchn:prepare",
++ xen_evtchn_cpu_prepare, xen_evtchn_cpu_dead);
++
+ evtchn_to_irq = kcalloc(EVTCHN_ROW(xen_evtchn_max_channels()),
+ sizeof(*evtchn_to_irq), GFP_KERNEL);
+ BUG_ON(!evtchn_to_irq);
+diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
+index c60ee0450173e..6085a808da95c 100644
+--- a/drivers/xen/events/events_fifo.c
++++ b/drivers/xen/events/events_fifo.c
+@@ -227,19 +227,25 @@ static bool evtchn_fifo_is_masked(evtchn_port_t port)
+ return sync_test_bit(EVTCHN_FIFO_BIT(MASKED, word), BM(word));
+ }
+ /*
+- * Clear MASKED, spinning if BUSY is set.
++ * Clear MASKED if not PENDING, spinning if BUSY is set.
++ * Return true if mask was cleared.
+ */
+-static void clear_masked(volatile event_word_t *word)
++static bool clear_masked_cond(volatile event_word_t *word)
+ {
+ event_word_t new, old, w;
+
+ w = *word;
+
+ do {
++ if (w & (1 << EVTCHN_FIFO_PENDING))
++ return false;
++
+ old = w & ~(1 << EVTCHN_FIFO_BUSY);
+ new = old & ~(1 << EVTCHN_FIFO_MASKED);
+ w = sync_cmpxchg(word, old, new);
+ } while (w != old);
++
++ return true;
+ }
+
+ static void evtchn_fifo_unmask(evtchn_port_t port)
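clear_masked_cond() above is a standard compare-and-swap retry loop: compute the desired word from the last value observed, then retry until the swap actually saw that value. A user-space analogue, with a GCC builtin standing in for sync_cmpxchg and invented bit names:

    #include <stdbool.h>
    #include <stdint.h>

    #define F_PENDING (1u << 0)
    #define F_MASKED  (1u << 1)
    #define F_BUSY    (1u << 2)

    static bool clear_masked_if_not_pending(volatile uint32_t *word)
    {
            uint32_t new, old, w = *word;

            do {
                    if (w & F_PENDING)
                            return false;   /* caller takes the slow path */

                    old = w & ~F_BUSY;      /* loop again while BUSY is set */
                    new = old & ~F_MASKED;
                    w = __sync_val_compare_and_swap(word, old, new);
            } while (w != old);

            return true;
    }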
+@@ -248,8 +254,7 @@ static void evtchn_fifo_unmask(evtchn_port_t port)
+
+ BUG_ON(!irqs_disabled());
+
+- clear_masked(word);
+- if (evtchn_fifo_is_pending(port)) {
++ if (!clear_masked_cond(word)) {
+ struct evtchn_unmask unmask = { .port = port };
+ (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask);
+ }
+@@ -270,19 +275,9 @@ static uint32_t clear_linked(volatile event_word_t *word)
+ return w & EVTCHN_FIFO_LINK_MASK;
+ }
+
+-static void handle_irq_for_port(evtchn_port_t port)
+-{
+- int irq;
+-
+- irq = get_evtchn_to_irq(port);
+- if (irq != -1)
+- generic_handle_irq(irq);
+-}
+-
+-static void consume_one_event(unsigned cpu,
++static void consume_one_event(unsigned cpu, struct evtchn_loop_ctrl *ctrl,
+ struct evtchn_fifo_control_block *control_block,
+- unsigned priority, unsigned long *ready,
+- bool drop)
++ unsigned priority, unsigned long *ready)
+ {
+ struct evtchn_fifo_queue *q = &per_cpu(cpu_queue, cpu);
+ uint32_t head;
+@@ -315,16 +310,17 @@ static void consume_one_event(unsigned cpu,
+ clear_bit(priority, ready);
+
+ if (evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port)) {
+- if (unlikely(drop))
++ if (unlikely(!ctrl))
+ pr_warn("Dropping pending event for port %u\n", port);
+ else
+- handle_irq_for_port(port);
++ handle_irq_for_port(port, ctrl);
+ }
+
+ q->head[priority] = head;
+ }
+
+-static void __evtchn_fifo_handle_events(unsigned cpu, bool drop)
++static void __evtchn_fifo_handle_events(unsigned cpu,
++ struct evtchn_loop_ctrl *ctrl)
+ {
+ struct evtchn_fifo_control_block *control_block;
+ unsigned long ready;
+@@ -336,14 +332,15 @@ static void __evtchn_fifo_handle_events(unsigned cpu, bool drop)
+
+ while (ready) {
+ q = find_first_bit(&ready, EVTCHN_FIFO_MAX_QUEUES);
+- consume_one_event(cpu, control_block, q, &ready, drop);
++ consume_one_event(cpu, ctrl, control_block, q, &ready);
+ ready |= xchg(&control_block->ready, 0);
+ }
+ }
+
+-static void evtchn_fifo_handle_events(unsigned cpu)
++static void evtchn_fifo_handle_events(unsigned cpu,
++ struct evtchn_loop_ctrl *ctrl)
+ {
+- __evtchn_fifo_handle_events(cpu, false);
++ __evtchn_fifo_handle_events(cpu, ctrl);
+ }
+
+ static void evtchn_fifo_resume(void)
+@@ -380,21 +377,6 @@ static void evtchn_fifo_resume(void)
+ event_array_pages = 0;
+ }
+
+-static const struct evtchn_ops evtchn_ops_fifo = {
+- .max_channels = evtchn_fifo_max_channels,
+- .nr_channels = evtchn_fifo_nr_channels,
+- .setup = evtchn_fifo_setup,
+- .bind_to_cpu = evtchn_fifo_bind_to_cpu,
+- .clear_pending = evtchn_fifo_clear_pending,
+- .set_pending = evtchn_fifo_set_pending,
+- .is_pending = evtchn_fifo_is_pending,
+- .test_and_set_mask = evtchn_fifo_test_and_set_mask,
+- .mask = evtchn_fifo_mask,
+- .unmask = evtchn_fifo_unmask,
+- .handle_events = evtchn_fifo_handle_events,
+- .resume = evtchn_fifo_resume,
+-};
+-
+ static int evtchn_fifo_alloc_control_block(unsigned cpu)
+ {
+ void *control_block = NULL;
+@@ -417,19 +399,36 @@ static int evtchn_fifo_alloc_control_block(unsigned cpu)
+ return ret;
+ }
+
+-static int xen_evtchn_cpu_prepare(unsigned int cpu)
++static int evtchn_fifo_percpu_init(unsigned int cpu)
+ {
+ if (!per_cpu(cpu_control_block, cpu))
+ return evtchn_fifo_alloc_control_block(cpu);
+ return 0;
+ }
+
+-static int xen_evtchn_cpu_dead(unsigned int cpu)
++static int evtchn_fifo_percpu_deinit(unsigned int cpu)
+ {
+- __evtchn_fifo_handle_events(cpu, true);
++ __evtchn_fifo_handle_events(cpu, NULL);
+ return 0;
+ }
+
++static const struct evtchn_ops evtchn_ops_fifo = {
++ .max_channels = evtchn_fifo_max_channels,
++ .nr_channels = evtchn_fifo_nr_channels,
++ .setup = evtchn_fifo_setup,
++ .bind_to_cpu = evtchn_fifo_bind_to_cpu,
++ .clear_pending = evtchn_fifo_clear_pending,
++ .set_pending = evtchn_fifo_set_pending,
++ .is_pending = evtchn_fifo_is_pending,
++ .test_and_set_mask = evtchn_fifo_test_and_set_mask,
++ .mask = evtchn_fifo_mask,
++ .unmask = evtchn_fifo_unmask,
++ .handle_events = evtchn_fifo_handle_events,
++ .resume = evtchn_fifo_resume,
++ .percpu_init = evtchn_fifo_percpu_init,
++ .percpu_deinit = evtchn_fifo_percpu_deinit,
++};
++
+ int __init xen_evtchn_fifo_init(void)
+ {
+ int cpu = smp_processor_id();
+@@ -443,9 +442,5 @@ int __init xen_evtchn_fifo_init(void)
+
+ evtchn_ops = &evtchn_ops_fifo;
+
+- cpuhp_setup_state_nocalls(CPUHP_XEN_EVTCHN_PREPARE,
+- "xen/evtchn:prepare",
+- xen_evtchn_cpu_prepare, xen_evtchn_cpu_dead);
+-
+ return ret;
+ }
+diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
+index 10684feb094e1..82937d90d7d72 100644
+--- a/drivers/xen/events/events_internal.h
++++ b/drivers/xen/events/events_internal.h
+@@ -30,11 +30,16 @@ enum xen_irq_type {
+ */
+ struct irq_info {
+ struct list_head list;
+- int refcnt;
++ struct list_head eoi_list;
++ short refcnt;
++ short spurious_cnt;
+ enum xen_irq_type type; /* type */
+ unsigned irq;
+ evtchn_port_t evtchn; /* event channel */
+ unsigned short cpu; /* cpu bound */
++ unsigned short eoi_cpu; /* EOI must happen on this cpu */
++ unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
++ u64 eoi_time; /* Time in jiffies when to EOI. */
+
+ union {
+ unsigned short virq;
+@@ -53,6 +58,8 @@ struct irq_info {
+ #define PIRQ_SHAREABLE (1 << 1)
+ #define PIRQ_MSI_GROUP (1 << 2)
+
++struct evtchn_loop_ctrl;
++
+ struct evtchn_ops {
+ unsigned (*max_channels)(void);
+ unsigned (*nr_channels)(void);
+@@ -67,14 +74,18 @@ struct evtchn_ops {
+ void (*mask)(evtchn_port_t port);
+ void (*unmask)(evtchn_port_t port);
+
+- void (*handle_events)(unsigned cpu);
++ void (*handle_events)(unsigned cpu, struct evtchn_loop_ctrl *ctrl);
+ void (*resume)(void);
++
++ int (*percpu_init)(unsigned int cpu);
++ int (*percpu_deinit)(unsigned int cpu);
+ };
+
+ extern const struct evtchn_ops *evtchn_ops;
+
+ extern int **evtchn_to_irq;
+ int get_evtchn_to_irq(evtchn_port_t evtchn);
++void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl);
+
+ struct irq_info *info_for_irq(unsigned irq);
+ unsigned cpu_from_irq(unsigned irq);
+@@ -132,9 +143,10 @@ static inline void unmask_evtchn(evtchn_port_t port)
+ return evtchn_ops->unmask(port);
+ }
+
+-static inline void xen_evtchn_handle_events(unsigned cpu)
++static inline void xen_evtchn_handle_events(unsigned cpu,
++ struct evtchn_loop_ctrl *ctrl)
+ {
+- return evtchn_ops->handle_events(cpu);
++ return evtchn_ops->handle_events(cpu, ctrl);
+ }
+
+ static inline void xen_evtchn_resume(void)
+diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
+index 6e0b1dd5573cb..5dc016d68f833 100644
+--- a/drivers/xen/evtchn.c
++++ b/drivers/xen/evtchn.c
+@@ -167,7 +167,6 @@ static irqreturn_t evtchn_interrupt(int irq, void *data)
+ "Interrupt for port %u, but apparently not enabled; per-user %p\n",
+ evtchn->port, u);
+
+- disable_irq_nosync(irq);
+ evtchn->enabled = false;
+
+ spin_lock(&u->ring_prod_lock);
+@@ -293,7 +292,7 @@ static ssize_t evtchn_write(struct file *file, const char __user *buf,
+ evtchn = find_evtchn(u, port);
+ if (evtchn && !evtchn->enabled) {
+ evtchn->enabled = true;
+- enable_irq(irq_from_evtchn(port));
++ xen_irq_lateeoi(irq_from_evtchn(port), 0);
+ }
+ }
+
+@@ -393,8 +392,8 @@ static int evtchn_bind_to_user(struct per_user_data *u, evtchn_port_t port)
+ if (rc < 0)
+ goto err;
+
+- rc = bind_evtchn_to_irqhandler(port, evtchn_interrupt, 0,
+- u->name, evtchn);
++ rc = bind_evtchn_to_irqhandler_lateeoi(port, evtchn_interrupt, 0,
++ u->name, evtchn);
+ if (rc < 0)
+ goto err;
+
+diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
+index b1b6eebafd5de..4c13cbc99896a 100644
+--- a/drivers/xen/gntdev-dmabuf.c
++++ b/drivers/xen/gntdev-dmabuf.c
+@@ -247,10 +247,9 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
+
+ if (sgt) {
+ if (gntdev_dmabuf_attach->dir != DMA_NONE)
+- dma_unmap_sg_attrs(attach->dev, sgt->sgl,
+- sgt->nents,
+- gntdev_dmabuf_attach->dir,
+- DMA_ATTR_SKIP_CPU_SYNC);
++ dma_unmap_sgtable(attach->dev, sgt,
++ gntdev_dmabuf_attach->dir,
++ DMA_ATTR_SKIP_CPU_SYNC);
+ sg_free_table(sgt);
+ }
+
+@@ -288,8 +287,8 @@ dmabuf_exp_ops_map_dma_buf(struct dma_buf_attachment *attach,
+ sgt = dmabuf_pages_to_sgt(gntdev_dmabuf->pages,
+ gntdev_dmabuf->nr_pages);
+ if (!IS_ERR(sgt)) {
+- if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
+- DMA_ATTR_SKIP_CPU_SYNC)) {
++ if (dma_map_sgtable(attach->dev, sgt, dir,
++ DMA_ATTR_SKIP_CPU_SYNC)) {
+ sg_free_table(sgt);
+ kfree(sgt);
+ sgt = ERR_PTR(-ENOMEM);
+@@ -633,7 +632,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
+
+ /* Now convert sgt to array of pages and check for page validity. */
+ i = 0;
+- for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) {
++ for_each_sgtable_page(sgt, &sg_iter, 0) {
+ struct page *page = sg_page_iter_page(&sg_iter);
+ /*
+ * Check if page is valid: this can happen if we are given
+diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
+index 9eae1fceec1e5..a7d293fa8d140 100644
+--- a/drivers/xen/pvcalls-back.c
++++ b/drivers/xen/pvcalls-back.c
+@@ -66,6 +66,7 @@ struct sock_mapping {
+ atomic_t write;
+ atomic_t io;
+ atomic_t release;
++ atomic_t eoi;
+ void (*saved_data_ready)(struct sock *sk);
+ struct pvcalls_ioworker ioworker;
+ };
+@@ -87,7 +88,7 @@ static int pvcalls_back_release_active(struct xenbus_device *dev,
+ struct pvcalls_fedata *fedata,
+ struct sock_mapping *map);
+
+-static void pvcalls_conn_back_read(void *opaque)
++static bool pvcalls_conn_back_read(void *opaque)
+ {
+ struct sock_mapping *map = (struct sock_mapping *)opaque;
+ struct msghdr msg;
+@@ -107,17 +108,17 @@ static void pvcalls_conn_back_read(void *opaque)
+ virt_mb();
+
+ if (error)
+- return;
++ return false;
+
+ size = pvcalls_queued(prod, cons, array_size);
+ if (size >= array_size)
+- return;
++ return false;
+ spin_lock_irqsave(&map->sock->sk->sk_receive_queue.lock, flags);
+ if (skb_queue_empty(&map->sock->sk->sk_receive_queue)) {
+ atomic_set(&map->read, 0);
+ spin_unlock_irqrestore(&map->sock->sk->sk_receive_queue.lock,
+ flags);
+- return;
++ return true;
+ }
+ spin_unlock_irqrestore(&map->sock->sk->sk_receive_queue.lock, flags);
+ wanted = array_size - size;
+@@ -141,7 +142,7 @@ static void pvcalls_conn_back_read(void *opaque)
+ ret = inet_recvmsg(map->sock, &msg, wanted, MSG_DONTWAIT);
+ WARN_ON(ret > wanted);
+ if (ret == -EAGAIN) /* shouldn't happen */
+- return;
++ return true;
+ if (!ret)
+ ret = -ENOTCONN;
+ spin_lock_irqsave(&map->sock->sk->sk_receive_queue.lock, flags);
+@@ -160,10 +161,10 @@ static void pvcalls_conn_back_read(void *opaque)
+ virt_wmb();
+ notify_remote_via_irq(map->irq);
+
+- return;
++ return true;
+ }
+
+-static void pvcalls_conn_back_write(struct sock_mapping *map)
++static bool pvcalls_conn_back_write(struct sock_mapping *map)
+ {
+ struct pvcalls_data_intf *intf = map->ring;
+ struct pvcalls_data *data = &map->data;
+@@ -180,7 +181,7 @@ static void pvcalls_conn_back_write(struct sock_mapping *map)
+ array_size = XEN_FLEX_RING_SIZE(map->ring_order);
+ size = pvcalls_queued(prod, cons, array_size);
+ if (size == 0)
+- return;
++ return false;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.msg_flags |= MSG_DONTWAIT;
+@@ -198,12 +199,11 @@ static void pvcalls_conn_back_write(struct sock_mapping *map)
+
+ atomic_set(&map->write, 0);
+ ret = inet_sendmsg(map->sock, &msg, size);
+- if (ret == -EAGAIN || (ret >= 0 && ret < size)) {
++ if (ret == -EAGAIN) {
+ atomic_inc(&map->write);
+ atomic_inc(&map->io);
++ return true;
+ }
+- if (ret == -EAGAIN)
+- return;
+
+ /* write the data, then update the indexes */
+ virt_wmb();
+@@ -216,9 +216,13 @@ static void pvcalls_conn_back_write(struct sock_mapping *map)
+ }
+ /* update the indexes, then notify the other end */
+ virt_wmb();
+- if (prod != cons + ret)
++ if (prod != cons + ret) {
+ atomic_inc(&map->write);
++ atomic_inc(&map->io);
++ }
+ notify_remote_via_irq(map->irq);
++
++ return true;
+ }
+
+ static void pvcalls_back_ioworker(struct work_struct *work)
+@@ -227,6 +231,7 @@ static void pvcalls_back_ioworker(struct work_struct *work)
+ struct pvcalls_ioworker, register_work);
+ struct sock_mapping *map = container_of(ioworker, struct sock_mapping,
+ ioworker);
++ unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
+
+ while (atomic_read(&map->io) > 0) {
+ if (atomic_read(&map->release) > 0) {
+@@ -234,10 +239,18 @@ static void pvcalls_back_ioworker(struct work_struct *work)
+ return;
+ }
+
+- if (atomic_read(&map->read) > 0)
+- pvcalls_conn_back_read(map);
+- if (atomic_read(&map->write) > 0)
+- pvcalls_conn_back_write(map);
++ if (atomic_read(&map->read) > 0 &&
++ pvcalls_conn_back_read(map))
++ eoi_flags = 0;
++ if (atomic_read(&map->write) > 0 &&
++ pvcalls_conn_back_write(map))
++ eoi_flags = 0;
++
++ if (atomic_read(&map->eoi) > 0 && !atomic_read(&map->write)) {
++ atomic_set(&map->eoi, 0);
++ xen_irq_lateeoi(map->irq, eoi_flags);
++ eoi_flags = XEN_EOI_FLAG_SPURIOUS;
++ }
+
+ atomic_dec(&map->io);
+ }
+@@ -334,12 +347,9 @@ static struct sock_mapping *pvcalls_new_active_socket(
+ goto out;
+ map->bytes = page;
+
+- ret = bind_interdomain_evtchn_to_irqhandler(fedata->dev->otherend_id,
+- evtchn,
+- pvcalls_back_conn_event,
+- 0,
+- "pvcalls-backend",
+- map);
++ ret = bind_interdomain_evtchn_to_irqhandler_lateeoi(
++ fedata->dev->otherend_id, evtchn,
++ pvcalls_back_conn_event, 0, "pvcalls-backend", map);
+ if (ret < 0)
+ goto out;
+ map->irq = ret;
+@@ -873,15 +883,18 @@ static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
+ {
+ struct xenbus_device *dev = dev_id;
+ struct pvcalls_fedata *fedata = NULL;
++ unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
+
+- if (dev == NULL)
+- return IRQ_HANDLED;
++ if (dev) {
++ fedata = dev_get_drvdata(&dev->dev);
++ if (fedata) {
++ pvcalls_back_work(fedata);
++ eoi_flags = 0;
++ }
++ }
+
+- fedata = dev_get_drvdata(&dev->dev);
+- if (fedata == NULL)
+- return IRQ_HANDLED;
++ xen_irq_lateeoi(irq, eoi_flags);
+
+- pvcalls_back_work(fedata);
+ return IRQ_HANDLED;
+ }
+
+@@ -891,12 +904,15 @@ static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map)
+ struct pvcalls_ioworker *iow;
+
+ if (map == NULL || map->sock == NULL || map->sock->sk == NULL ||
+- map->sock->sk->sk_user_data != map)
++ map->sock->sk->sk_user_data != map) {
++ xen_irq_lateeoi(irq, 0);
+ return IRQ_HANDLED;
++ }
+
+ iow = &map->ioworker;
+
+ atomic_inc(&map->write);
++ atomic_inc(&map->eoi);
+ atomic_inc(&map->io);
+ queue_work(iow->wq, &iow->register_work);
+
+@@ -932,7 +948,7 @@ static int backend_connect(struct xenbus_device *dev)
+ goto error;
+ }
+
+- err = bind_interdomain_evtchn_to_irq(dev->otherend_id, evtchn);
++ err = bind_interdomain_evtchn_to_irq_lateeoi(dev->otherend_id, evtchn);
+ if (err < 0)
+ goto error;
+ fedata->irq = err;
+diff --git a/drivers/xen/xen-pciback/pci_stub.c b/drivers/xen/xen-pciback/pci_stub.c
+index e876c3d6dad1f..cb904ac830064 100644
+--- a/drivers/xen/xen-pciback/pci_stub.c
++++ b/drivers/xen/xen-pciback/pci_stub.c
+@@ -734,10 +734,17 @@ static pci_ers_result_t common_process(struct pcistub_device *psdev,
+ wmb();
+ notify_remote_via_irq(pdev->evtchn_irq);
+
++ /* Enable IRQ to signal "request done". */
++ xen_pcibk_lateeoi(pdev, 0);
++
+ ret = wait_event_timeout(xen_pcibk_aer_wait_queue,
+ !(test_bit(_XEN_PCIB_active, (unsigned long *)
+ &sh_info->flags)), 300*HZ);
+
++ /* Enable IRQ for pcifront request if not already active. */
++ if (!test_bit(_PDEVF_op_active, &pdev->flags))
++ xen_pcibk_lateeoi(pdev, 0);
++
+ if (!ret) {
+ if (test_bit(_XEN_PCIB_active,
+ (unsigned long *)&sh_info->flags)) {
+@@ -751,12 +758,6 @@ static pci_ers_result_t common_process(struct pcistub_device *psdev,
+ }
+ clear_bit(_PCIB_op_pending, (unsigned long *)&pdev->flags);
+
+- if (test_bit(_XEN_PCIF_active,
+- (unsigned long *)&sh_info->flags)) {
+- dev_dbg(&psdev->dev->dev, "schedule pci_conf service\n");
+- xen_pcibk_test_and_schedule_op(psdev->pdev);
+- }
+-
+ res = (pci_ers_result_t)aer_op->err;
+ return res;
+ }
+diff --git a/drivers/xen/xen-pciback/pciback.h b/drivers/xen/xen-pciback/pciback.h
+index f1ed2dbf685cb..95e28ee48d52b 100644
+--- a/drivers/xen/xen-pciback/pciback.h
++++ b/drivers/xen/xen-pciback/pciback.h
+@@ -14,6 +14,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/workqueue.h>
+ #include <linux/atomic.h>
++#include <xen/events.h>
+ #include <xen/interface/io/pciif.h>
+
+ #define DRV_NAME "xen-pciback"
+@@ -27,6 +28,8 @@ struct pci_dev_entry {
+ #define PDEVF_op_active (1<<(_PDEVF_op_active))
+ #define _PCIB_op_pending (1)
+ #define PCIB_op_pending (1<<(_PCIB_op_pending))
++#define _EOI_pending (2)
++#define EOI_pending (1<<(_EOI_pending))
+
+ struct xen_pcibk_device {
+ void *pci_dev_data;
+@@ -183,10 +186,15 @@ static inline void xen_pcibk_release_devices(struct xen_pcibk_device *pdev)
+ irqreturn_t xen_pcibk_handle_event(int irq, void *dev_id);
+ void xen_pcibk_do_op(struct work_struct *data);
+
++static inline void xen_pcibk_lateeoi(struct xen_pcibk_device *pdev,
++ unsigned int eoi_flag)
++{
++ if (test_and_clear_bit(_EOI_pending, &pdev->flags))
++ xen_irq_lateeoi(pdev->evtchn_irq, eoi_flag);
++}
++
+ int xen_pcibk_xenbus_register(void);
+ void xen_pcibk_xenbus_unregister(void);
+-
+-void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev);
+ #endif
+
+ /* Handles shared IRQs that can go to the device domain and the control domain. */
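Pairing test_and_set_bit() in the event handler with the test_and_clear_bit() in xen_pcibk_lateeoi() above guarantees that, of the several paths that may try to EOI (the op work loop, the AER path, the nothing-to-do case), exactly one does so per delivered event. A user-space analogue of that handshake using C11 atomics, all names hypothetical:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool eoi_pending;

    static void fake_handler(void)
    {
            /* like test_and_set_bit(): warn if the last event wasn't EOIed */
            if (atomic_exchange(&eoi_pending, true))
                    fprintf(stderr, "IRQ while EOI pending\n");
    }

    static void fake_lateeoi(const char *who)
    {
            /* like test_and_clear_bit(): only the first caller EOIs */
            if (atomic_exchange(&eoi_pending, false))
                    printf("%s issues the EOI\n", who);
    }

    int main(void)
    {
            fake_handler();
            fake_lateeoi("work loop");      /* EOIs */
            fake_lateeoi("AER path");       /* bit already clear: no-op */
            return 0;
    }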
+diff --git a/drivers/xen/xen-pciback/pciback_ops.c b/drivers/xen/xen-pciback/pciback_ops.c
+index e11a7438e1a25..3fbc21466a934 100644
+--- a/drivers/xen/xen-pciback/pciback_ops.c
++++ b/drivers/xen/xen-pciback/pciback_ops.c
+@@ -276,26 +276,41 @@ int xen_pcibk_disable_msix(struct xen_pcibk_device *pdev,
+ return 0;
+ }
+ #endif
++
++static inline bool xen_pcibk_test_op_pending(struct xen_pcibk_device *pdev)
++{
++ return test_bit(_XEN_PCIF_active,
++ (unsigned long *)&pdev->sh_info->flags) &&
++ !test_and_set_bit(_PDEVF_op_active, &pdev->flags);
++}
++
+ /*
+ * Now the same evtchn is used for both pcifront conf_read_write requests
+ * as well as pcie aer front end ack. We use a new work_queue to schedule
+ * the xen_pcibk conf_read_write service to avoid conflicts with the aer_core
+ * do_recovery job, which also uses the system default work_queue.
+ */
+-void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev)
++static void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev)
+ {
++ bool eoi = true;
++
+ /* Check that frontend is requesting an operation and that we are not
+ * already processing a request */
+- if (test_bit(_XEN_PCIF_active, (unsigned long *)&pdev->sh_info->flags)
+- && !test_and_set_bit(_PDEVF_op_active, &pdev->flags)) {
++ if (xen_pcibk_test_op_pending(pdev)) {
+ schedule_work(&pdev->op_work);
++ eoi = false;
+ }
+ /* _XEN_PCIB_active should have been cleared by pcifront. Also make
+ * sure xen_pcibk is waiting for an ack by checking _PCIB_op_pending. */
+ if (!test_bit(_XEN_PCIB_active, (unsigned long *)&pdev->sh_info->flags)
+ && test_bit(_PCIB_op_pending, &pdev->flags)) {
+ wake_up(&xen_pcibk_aer_wait_queue);
++ eoi = false;
+ }
++
++ /* EOI if there was nothing to do. */
++ if (eoi)
++ xen_pcibk_lateeoi(pdev, XEN_EOI_FLAG_SPURIOUS);
+ }
+
+ /* Performing the configuration space reads/writes must not be done in atomic
+@@ -303,10 +318,8 @@ void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev)
+ * use of semaphores). This function is intended to be called from a work
+ * queue in process context taking a struct xen_pcibk_device as a parameter */
+
+-void xen_pcibk_do_op(struct work_struct *data)
++static void xen_pcibk_do_one_op(struct xen_pcibk_device *pdev)
+ {
+- struct xen_pcibk_device *pdev =
+- container_of(data, struct xen_pcibk_device, op_work);
+ struct pci_dev *dev;
+ struct xen_pcibk_dev_data *dev_data = NULL;
+ struct xen_pci_op *op = &pdev->op;
+@@ -379,16 +392,31 @@ void xen_pcibk_do_op(struct work_struct *data)
+ smp_mb__before_atomic(); /* /after/ clearing PCIF_active */
+ clear_bit(_PDEVF_op_active, &pdev->flags);
+ smp_mb__after_atomic(); /* /before/ final check for work */
++}
+
+- /* Check to see if the driver domain tried to start another request in
+- * between clearing _XEN_PCIF_active and clearing _PDEVF_op_active.
+- */
+- xen_pcibk_test_and_schedule_op(pdev);
++void xen_pcibk_do_op(struct work_struct *data)
++{
++ struct xen_pcibk_device *pdev =
++ container_of(data, struct xen_pcibk_device, op_work);
++
++ do {
++ xen_pcibk_do_one_op(pdev);
++ } while (xen_pcibk_test_op_pending(pdev));
++
++ xen_pcibk_lateeoi(pdev, 0);
+ }
+
+ irqreturn_t xen_pcibk_handle_event(int irq, void *dev_id)
+ {
+ struct xen_pcibk_device *pdev = dev_id;
++ bool eoi;
++
++ /* IRQs might come in before pdev->evtchn_irq is written. */
++ if (unlikely(pdev->evtchn_irq != irq))
++ pdev->evtchn_irq = irq;
++
++ eoi = test_and_set_bit(_EOI_pending, &pdev->flags);
++ WARN(eoi, "IRQ while EOI pending\n");
+
+ xen_pcibk_test_and_schedule_op(pdev);
+
+diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
+index b500466a6c371..4b99ec3dec58a 100644
+--- a/drivers/xen/xen-pciback/xenbus.c
++++ b/drivers/xen/xen-pciback/xenbus.c
+@@ -123,7 +123,7 @@ static int xen_pcibk_do_attach(struct xen_pcibk_device *pdev, int gnt_ref,
+
+ pdev->sh_info = vaddr;
+
+- err = bind_interdomain_evtchn_to_irqhandler(
++ err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
+ pdev->xdev->otherend_id, remote_evtchn, xen_pcibk_handle_event,
+ 0, DRV_NAME, pdev);
+ if (err < 0) {
+diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
+index 1e8cfd80a4e6b..4acc4e899600c 100644
+--- a/drivers/xen/xen-scsiback.c
++++ b/drivers/xen/xen-scsiback.c
+@@ -91,7 +91,6 @@ struct vscsibk_info {
+ unsigned int irq;
+
+ struct vscsiif_back_ring ring;
+- int ring_error;
+
+ spinlock_t ring_lock;
+ atomic_t nr_unreplied_reqs;
+@@ -722,7 +721,8 @@ static struct vscsibk_pend *prepare_pending_reqs(struct vscsibk_info *info,
+ return pending_req;
+ }
+
+-static int scsiback_do_cmd_fn(struct vscsibk_info *info)
++static int scsiback_do_cmd_fn(struct vscsibk_info *info,
++ unsigned int *eoi_flags)
+ {
+ struct vscsiif_back_ring *ring = &info->ring;
+ struct vscsiif_request ring_req;
+@@ -739,11 +739,12 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *info)
+ rc = ring->rsp_prod_pvt;
+ pr_warn("Dom%d provided bogus ring requests (%#x - %#x = %u). Halting ring processing\n",
+ info->domid, rp, rc, rp - rc);
+- info->ring_error = 1;
+- return 0;
++ return -EINVAL;
+ }
+
+ while ((rc != rp)) {
++ *eoi_flags &= ~XEN_EOI_FLAG_SPURIOUS;
++
+ if (RING_REQUEST_CONS_OVERFLOW(ring, rc))
+ break;
+
+@@ -802,13 +803,16 @@ static int scsiback_do_cmd_fn(struct vscsibk_info *info)
+ static irqreturn_t scsiback_irq_fn(int irq, void *dev_id)
+ {
+ struct vscsibk_info *info = dev_id;
++ int rc;
++ unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
+
+- if (info->ring_error)
+- return IRQ_HANDLED;
+-
+- while (scsiback_do_cmd_fn(info))
++ while ((rc = scsiback_do_cmd_fn(info, &eoi_flags)) > 0)
+ cond_resched();
+
++ /* In case of a ring error we keep the event channel masked. */
++ if (!rc)
++ xen_irq_lateeoi(irq, eoi_flags);
++
+ return IRQ_HANDLED;
+ }
+
+@@ -829,7 +833,7 @@ static int scsiback_init_sring(struct vscsibk_info *info, grant_ref_t ring_ref,
+ sring = (struct vscsiif_sring *)area;
+ BACK_RING_INIT(&info->ring, sring, PAGE_SIZE);
+
+- err = bind_interdomain_evtchn_to_irq(info->domid, evtchn);
++ err = bind_interdomain_evtchn_to_irq_lateeoi(info->domid, evtchn);
+ if (err < 0)
+ goto unmap_page;
+
+@@ -1253,7 +1257,6 @@ static int scsiback_probe(struct xenbus_device *dev,
+
+ info->domid = dev->otherend_id;
+ spin_lock_init(&info->ring_lock);
+- info->ring_error = 0;
+ atomic_set(&info->nr_unreplied_reqs, 0);
+ init_waitqueue_head(&info->waiting_to_free);
+ info->dev = dev;
+diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
+index 3576123d82990..6d97b6b4d34b6 100644
+--- a/fs/9p/vfs_file.c
++++ b/fs/9p/vfs_file.c
+@@ -612,9 +612,9 @@ static void v9fs_mmap_vm_close(struct vm_area_struct *vma)
+ struct writeback_control wbc = {
+ .nr_to_write = LONG_MAX,
+ .sync_mode = WB_SYNC_ALL,
+- .range_start = vma->vm_pgoff * PAGE_SIZE,
++ .range_start = (loff_t)vma->vm_pgoff * PAGE_SIZE,
+ /* absolute end, byte at end included */
+- .range_end = vma->vm_pgoff * PAGE_SIZE +
++ .range_end = (loff_t)vma->vm_pgoff * PAGE_SIZE +
+ (vma->vm_end - vma->vm_start - 1),
+ };
+
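The added (loff_t) casts matter because vm_pgoff is an unsigned long: on a 32-bit build, vm_pgoff * PAGE_SIZE is evaluated in 32 bits and wraps for mappings of file regions at or beyond 4GiB, and only the truncated product is widened to loff_t. A small demonstration of the failure mode, modeling the 32-bit arithmetic with fixed-width types:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t vm_pgoff = 0x100001;   /* page offset just past 4GiB */
            uint32_t page_size = 4096;

            /* cast after the multiply: wraps to 0x1000 */
            int64_t wrong = (int64_t)(vm_pgoff * page_size);
            /* cast before the multiply: full 0x100001000 */
            int64_t right = (int64_t)vm_pgoff * page_size;

            printf("wrong=%#llx right=%#llx\n",
                   (unsigned long long)wrong, (unsigned long long)right);
            return 0;
    }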
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 1d2e61e0ab047..1bb5b9d7f0a2c 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -281,8 +281,7 @@ retry:
+ if (ret < 0)
+ goto error;
+
+- set_page_private(req->pages[i], 1);
+- SetPagePrivate(req->pages[i]);
++ attach_page_private(req->pages[i], (void *)1);
+ unlock_page(req->pages[i]);
+ i++;
+ } else {
+@@ -1975,8 +1974,7 @@ static int afs_dir_releasepage(struct page *page, gfp_t gfp_flags)
+
+ _enter("{{%llx:%llu}[%lu]}", dvnode->fid.vid, dvnode->fid.vnode, page->index);
+
+- set_page_private(page, 0);
+- ClearPagePrivate(page);
++ detach_page_private(page);
+
+ /* The directory will need reloading. */
+ if (test_and_clear_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
+@@ -2003,8 +2001,6 @@ static void afs_dir_invalidatepage(struct page *page, unsigned int offset,
+ afs_stat_v(dvnode, n_inval);
+
+ /* we clean up only if the entire page is being invalidated */
+- if (offset == 0 && length == PAGE_SIZE) {
+- set_page_private(page, 0);
+- ClearPagePrivate(page);
+- }
++ if (offset == 0 && length == PAGE_SIZE)
++ detach_page_private(page);
+ }
+diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
+index b108528bf010d..2ffe09abae7fc 100644
+--- a/fs/afs/dir_edit.c
++++ b/fs/afs/dir_edit.c
+@@ -243,10 +243,8 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
+ index, gfp);
+ if (!page)
+ goto error;
+- if (!PagePrivate(page)) {
+- set_page_private(page, 1);
+- SetPagePrivate(page);
+- }
++ if (!PagePrivate(page))
++ attach_page_private(page, (void *)1);
+ dir_page = kmap(page);
+ }
+
+diff --git a/fs/afs/file.c b/fs/afs/file.c
+index 371d1488cc549..5015f2b107824 100644
+--- a/fs/afs/file.c
++++ b/fs/afs/file.c
+@@ -600,6 +600,63 @@ static int afs_readpages(struct file *file, struct address_space *mapping,
+ return ret;
+ }
+
++/*
++ * Adjust the dirty region of the page on truncation or full invalidation,
++ * getting rid of the markers altogether if the region is entirely invalidated.
++ */
++static void afs_invalidate_dirty(struct page *page, unsigned int offset,
++ unsigned int length)
++{
++ struct afs_vnode *vnode = AFS_FS_I(page->mapping->host);
++ unsigned long priv;
++ unsigned int f, t, end = offset + length;
++
++ priv = page_private(page);
++
++ /* we clean up only if the entire page is being invalidated */
++ if (offset == 0 && length == thp_size(page))
++ goto full_invalidate;
++
++ /* If the page was dirtied by page_mkwrite(), the PTE stays writable
++ * and we don't get another notification to tell us to expand it
++ * again.
++ */
++ if (afs_is_page_dirty_mmapped(priv))
++ return;
++
++ /* We may need to shorten the dirty region */
++ f = afs_page_dirty_from(priv);
++ t = afs_page_dirty_to(priv);
++
++ if (t <= offset || f >= end)
++ return; /* Doesn't overlap */
++
++ if (f < offset && t > end)
++ return; /* Splits the dirty region - just absorb it */
++
++ if (f >= offset && t <= end)
++ goto undirty;
++
++ if (f < offset)
++ t = offset;
++ else
++ f = end;
++ if (f == t)
++ goto undirty;
++
++ priv = afs_page_dirty(f, t);
++ set_page_private(page, priv);
++ trace_afs_page_dirty(vnode, tracepoint_string("trunc"), page->index, priv);
++ return;
++
++undirty:
++ trace_afs_page_dirty(vnode, tracepoint_string("undirty"), page->index, priv);
++ clear_page_dirty_for_io(page);
++full_invalidate:
++ priv = (unsigned long)detach_page_private(page);
++ trace_afs_page_dirty(vnode, tracepoint_string("inval"), page->index, priv);
++}
++
+ /*
+ * invalidate part or all of a page
+ * - release a page and clean up its private data if offset is 0 (indicating
+@@ -608,31 +665,23 @@ static int afs_readpages(struct file *file, struct address_space *mapping,
+ static void afs_invalidatepage(struct page *page, unsigned int offset,
+ unsigned int length)
+ {
+- struct afs_vnode *vnode = AFS_FS_I(page->mapping->host);
+- unsigned long priv;
+-
+ _enter("{%lu},%u,%u", page->index, offset, length);
+
+ BUG_ON(!PageLocked(page));
+
++#ifdef CONFIG_AFS_FSCACHE
+ /* we clean up only if the entire page is being invalidated */
+ if (offset == 0 && length == PAGE_SIZE) {
+-#ifdef CONFIG_AFS_FSCACHE
+ if (PageFsCache(page)) {
+ struct afs_vnode *vnode = AFS_FS_I(page->mapping->host);
+ fscache_wait_on_page_write(vnode->cache, page);
+ fscache_uncache_page(vnode->cache, page);
+ }
++ }
+ #endif
+
+- if (PagePrivate(page)) {
+- priv = page_private(page);
+- trace_afs_page_dirty(vnode, tracepoint_string("inval"),
+- page->index, priv);
+- set_page_private(page, 0);
+- ClearPagePrivate(page);
+- }
+- }
++ if (PagePrivate(page))
++ afs_invalidate_dirty(page, offset, length);
+
+ _leave("");
+ }
+@@ -660,11 +709,9 @@ static int afs_releasepage(struct page *page, gfp_t gfp_flags)
+ #endif
+
+ if (PagePrivate(page)) {
+- priv = page_private(page);
++ priv = (unsigned long)detach_page_private(page);
+ trace_afs_page_dirty(vnode, tracepoint_string("rel"),
+ page->index, priv);
+- set_page_private(page, 0);
+- ClearPagePrivate(page);
+ }
+
+ /* indicate that the page can be released */
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 06e617ee4cd1e..17336cbb8419f 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -811,6 +811,7 @@ struct afs_operation {
+ pgoff_t last; /* last page in mapping to deal with */
+ unsigned first_offset; /* offset into mapping[first] */
+ unsigned last_to; /* amount of mapping[last] */
++ bool laundering; /* Laundering page, PG_writeback not set */
+ } store;
+ struct {
+ struct iattr *attr;
+@@ -856,6 +857,62 @@ struct afs_vnode_cache_aux {
+ u64 data_version;
+ } __packed;
+
++/*
++ * We use page->private to hold the amount of the page that we've written to,
++ * splitting the field into two parts. However, we need to represent a range
++ * 0...PAGE_SIZE, so we reduce the resolution if the size of the page
++ * exceeds what we can encode.
++ */
++#ifdef CONFIG_64BIT
++#define __AFS_PAGE_PRIV_MASK 0x7fffffffUL
++#define __AFS_PAGE_PRIV_SHIFT 32
++#define __AFS_PAGE_PRIV_MMAPPED 0x80000000UL
++#else
++#define __AFS_PAGE_PRIV_MASK 0x7fffUL
++#define __AFS_PAGE_PRIV_SHIFT 16
++#define __AFS_PAGE_PRIV_MMAPPED 0x8000UL
++#endif
++
++static inline unsigned int afs_page_dirty_resolution(void)
++{
++ int shift = PAGE_SHIFT - (__AFS_PAGE_PRIV_SHIFT - 1);
++ return (shift > 0) ? shift : 0;
++}
++
++static inline size_t afs_page_dirty_from(unsigned long priv)
++{
++ unsigned long x = priv & __AFS_PAGE_PRIV_MASK;
++
++ /* The lower bound is inclusive */
++ return x << afs_page_dirty_resolution();
++}
++
++static inline size_t afs_page_dirty_to(unsigned long priv)
++{
++ unsigned long x = (priv >> __AFS_PAGE_PRIV_SHIFT) & __AFS_PAGE_PRIV_MASK;
++
++ /* The upper bound is immediately beyond the region */
++ return (x + 1) << afs_page_dirty_resolution();
++}
++
++static inline unsigned long afs_page_dirty(size_t from, size_t to)
++{
++ unsigned int res = afs_page_dirty_resolution();
++ from >>= res;
++ to = (to - 1) >> res;
++ return (to << __AFS_PAGE_PRIV_SHIFT) | from;
++}
++
++static inline unsigned long afs_page_dirty_mmapped(unsigned long priv)
++{
++ return priv | __AFS_PAGE_PRIV_MMAPPED;
++}
++
++static inline bool afs_is_page_dirty_mmapped(unsigned long priv)
++{
++ return priv & __AFS_PAGE_PRIV_MMAPPED;
++}
++
+ #include <trace/events/afs.h>
+
+ /*****************************************************************************/
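To see the encoding round-trip, take the 32-bit constants above with 64KiB pages: the resolution shift becomes 16 - (16 - 1) = 1, so each stored unit covers two bytes, the inclusive lower bound rounds down, the exclusive upper bound rounds up, and a recorded dirty region can therefore only grow. A user-space sketch of the pack/unpack pair, with the constants copied from the #else branch and PAGE_SHIFT forced to 16 for illustration:

    #include <assert.h>
    #include <stdio.h>

    #define PRIV_MASK  0x7fffUL
    #define PRIV_SHIFT 16
    #define PAGE_SHIFT 16           /* 64KiB pages: resolution is 2 bytes */

    static unsigned int resolution(void)
    {
            int shift = PAGE_SHIFT - (PRIV_SHIFT - 1);

            return shift > 0 ? shift : 0;
    }

    static unsigned long pack(unsigned long from, unsigned long to)
    {
            unsigned int res = resolution();

            return (((to - 1) >> res) << PRIV_SHIFT) | (from >> res);
    }

    int main(void)
    {
            unsigned long priv = pack(5, 100);
            unsigned long from = (priv & PRIV_MASK) << resolution();
            unsigned long to =
                    (((priv >> PRIV_SHIFT) & PRIV_MASK) + 1) << resolution();

            /* 5..100 decodes to the covering region 4..100 */
            printf("decoded range: %lu..%lu\n", from, to);
            assert(from <= 5 && to >= 100);
            return 0;
    }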
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index e82e452e26124..684a2b02b9ff7 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -550,7 +550,12 @@ void afs_manage_servers(struct work_struct *work)
+
+ _debug("manage %pU %u", &server->uuid, active);
+
+- ASSERTIFCMP(purging, active, ==, 0);
++ if (purging) {
++ trace_afs_server(server, atomic_read(&server->ref),
++ active, afs_server_trace_purging);
++ if (active != 0)
++ pr_notice("Can't purge s=%08x\n", server->debug_id);
++ }
+
+ if (active == 0) {
+ time64_t expire_at = server->unuse_time;
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index da12abd6db213..50371207f3273 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -76,7 +76,7 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key,
+ */
+ int afs_write_begin(struct file *file, struct address_space *mapping,
+ loff_t pos, unsigned len, unsigned flags,
+- struct page **pagep, void **fsdata)
++ struct page **_page, void **fsdata)
+ {
+ struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
+ struct page *page;
+@@ -90,11 +90,6 @@ int afs_write_begin(struct file *file, struct address_space *mapping,
+ _enter("{%llx:%llu},{%lx},%u,%u",
+ vnode->fid.vid, vnode->fid.vnode, index, from, to);
+
+- /* We want to store information about how much of a page is altered in
+- * page->private.
+- */
+- BUILD_BUG_ON(PAGE_SIZE > 32768 && sizeof(page->private) < 8);
+-
+ page = grab_cache_page_write_begin(mapping, index, flags);
+ if (!page)
+ return -ENOMEM;
+@@ -110,9 +105,6 @@ int afs_write_begin(struct file *file, struct address_space *mapping,
+ SetPageUptodate(page);
+ }
+
+- /* page won't leak in error case: it eventually gets cleaned off LRU */
+- *pagep = page;
+-
+ try_again:
+ /* See if this page is already partially written in a way that we can
+ * merge the new write with.
+@@ -120,8 +112,8 @@ try_again:
+ t = f = 0;
+ if (PagePrivate(page)) {
+ priv = page_private(page);
+- f = priv & AFS_PRIV_MAX;
+- t = priv >> AFS_PRIV_SHIFT;
++ f = afs_page_dirty_from(priv);
++ t = afs_page_dirty_to(priv);
+ ASSERTCMP(f, <=, t);
+ }
+
+@@ -138,21 +130,9 @@ try_again:
+ if (!test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags) &&
+ (to < f || from > t))
+ goto flush_conflicting_write;
+- if (from < f)
+- f = from;
+- if (to > t)
+- t = to;
+- } else {
+- f = from;
+- t = to;
+ }
+
+- priv = (unsigned long)t << AFS_PRIV_SHIFT;
+- priv |= f;
+- trace_afs_page_dirty(vnode, tracepoint_string("begin"),
+- page->index, priv);
+- SetPagePrivate(page);
+- set_page_private(page, priv);
++ *_page = page;
+ _leave(" = 0");
+ return 0;
+
+@@ -162,17 +142,18 @@ try_again:
+ flush_conflicting_write:
+ _debug("flush conflict");
+ ret = write_one_page(page);
+- if (ret < 0) {
+- _leave(" = %d", ret);
+- return ret;
+- }
++ if (ret < 0)
++ goto error;
+
+ ret = lock_page_killable(page);
+- if (ret < 0) {
+- _leave(" = %d", ret);
+- return ret;
+- }
++ if (ret < 0)
++ goto error;
+ goto try_again;
++
++error:
++ put_page(page);
++ _leave(" = %d", ret);
++ return ret;
+ }
+
+ /*
+@@ -184,6 +165,9 @@ int afs_write_end(struct file *file, struct address_space *mapping,
+ {
+ struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
+ struct key *key = afs_file_key(file);
++ unsigned long priv;
++ unsigned int f, from = pos & (PAGE_SIZE - 1);
++ unsigned int t, to = from + copied;
+ loff_t i_size, maybe_i_size;
+ int ret;
+
+@@ -215,6 +199,25 @@ int afs_write_end(struct file *file, struct address_space *mapping,
+ SetPageUptodate(page);
+ }
+
++ if (PagePrivate(page)) {
++ priv = page_private(page);
++ f = afs_page_dirty_from(priv);
++ t = afs_page_dirty_to(priv);
++ if (from < f)
++ f = from;
++ if (to > t)
++ t = to;
++ priv = afs_page_dirty(f, t);
++ set_page_private(page, priv);
++ trace_afs_page_dirty(vnode, tracepoint_string("dirty+"),
++ page->index, priv);
++ } else {
++ priv = afs_page_dirty(from, to);
++ attach_page_private(page, (void *)priv);
++ trace_afs_page_dirty(vnode, tracepoint_string("dirty"),
++ page->index, priv);
++ }
++
+ set_page_dirty(page);
+ if (PageDirty(page))
+ _debug("dirtied");
+@@ -334,10 +337,9 @@ static void afs_pages_written_back(struct afs_vnode *vnode,
+ ASSERTCMP(pv.nr, ==, count);
+
+ for (loop = 0; loop < count; loop++) {
+- priv = page_private(pv.pages[loop]);
++ priv = (unsigned long)detach_page_private(pv.pages[loop]);
+ trace_afs_page_dirty(vnode, tracepoint_string("clear"),
+ pv.pages[loop]->index, priv);
+- set_page_private(pv.pages[loop], 0);
+ end_page_writeback(pv.pages[loop]);
+ }
+ first += count;
+@@ -396,7 +398,8 @@ static void afs_store_data_success(struct afs_operation *op)
+ op->ctime = op->file[0].scb.status.mtime_client;
+ afs_vnode_commit_status(op, &op->file[0]);
+ if (op->error == 0) {
+- afs_pages_written_back(vnode, op->store.first, op->store.last);
++ if (!op->store.laundering)
++ afs_pages_written_back(vnode, op->store.first, op->store.last);
+ afs_stat_v(vnode, n_stores);
+ atomic_long_add((op->store.last * PAGE_SIZE + op->store.last_to) -
+ (op->store.first * PAGE_SIZE + op->store.first_offset),
+@@ -415,7 +418,7 @@ static const struct afs_operation_ops afs_store_data_operation = {
+ */
+ static int afs_store_data(struct address_space *mapping,
+ pgoff_t first, pgoff_t last,
+- unsigned offset, unsigned to)
++ unsigned offset, unsigned to, bool laundering)
+ {
+ struct afs_vnode *vnode = AFS_FS_I(mapping->host);
+ struct afs_operation *op;
+@@ -448,6 +451,7 @@ static int afs_store_data(struct address_space *mapping,
+ op->store.last = last;
+ op->store.first_offset = offset;
+ op->store.last_to = to;
++ op->store.laundering = laundering;
+ op->mtime = vnode->vfs_inode.i_mtime;
+ op->flags |= AFS_OPERATION_UNINTR;
+ op->ops = &afs_store_data_operation;
+@@ -509,8 +513,8 @@ static int afs_write_back_from_locked_page(struct address_space *mapping,
+ */
+ start = primary_page->index;
+ priv = page_private(primary_page);
+- offset = priv & AFS_PRIV_MAX;
+- to = priv >> AFS_PRIV_SHIFT;
++ offset = afs_page_dirty_from(priv);
++ to = afs_page_dirty_to(priv);
+ trace_afs_page_dirty(vnode, tracepoint_string("store"),
+ primary_page->index, priv);
+
+@@ -555,8 +559,8 @@ static int afs_write_back_from_locked_page(struct address_space *mapping,
+ }
+
+ priv = page_private(page);
+- f = priv & AFS_PRIV_MAX;
+- t = priv >> AFS_PRIV_SHIFT;
++ f = afs_page_dirty_from(priv);
++ t = afs_page_dirty_to(priv);
+ if (f != 0 &&
+ !test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags)) {
+ unlock_page(page);
+@@ -601,7 +605,7 @@ no_more:
+ if (end > i_size)
+ to = i_size & ~PAGE_MASK;
+
+- ret = afs_store_data(mapping, first, last, offset, to);
++ ret = afs_store_data(mapping, first, last, offset, to, false);
+ switch (ret) {
+ case 0:
+ ret = count;
+@@ -857,12 +861,14 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
+ */
+ wait_on_page_writeback(vmf->page);
+
+- priv = (unsigned long)PAGE_SIZE << AFS_PRIV_SHIFT; /* To */
+- priv |= 0; /* From */
++ priv = afs_page_dirty(0, PAGE_SIZE);
++ priv = afs_page_dirty_mmapped(priv);
+ trace_afs_page_dirty(vnode, tracepoint_string("mkwrite"),
+ vmf->page->index, priv);
+- SetPagePrivate(vmf->page);
+- set_page_private(vmf->page, priv);
++ if (PagePrivate(vmf->page))
++ set_page_private(vmf->page, priv);
++ else
++ attach_page_private(vmf->page, (void *)priv);
+ file_update_time(file);
+
+ sb_end_pagefault(inode->i_sb);
+@@ -915,19 +921,18 @@ int afs_launder_page(struct page *page)
+ f = 0;
+ t = PAGE_SIZE;
+ if (PagePrivate(page)) {
+- f = priv & AFS_PRIV_MAX;
+- t = priv >> AFS_PRIV_SHIFT;
++ f = afs_page_dirty_from(priv);
++ t = afs_page_dirty_to(priv);
+ }
+
+ trace_afs_page_dirty(vnode, tracepoint_string("launder"),
+ page->index, priv);
+- ret = afs_store_data(mapping, page->index, page->index, t, f);
++ ret = afs_store_data(mapping, page->index, page->index, t, f, true);
+ }
+
++ priv = (unsigned long)detach_page_private(page);
+ trace_afs_page_dirty(vnode, tracepoint_string("laundered"),
+ page->index, priv);
+- set_page_private(page, 0);
+- ClearPagePrivate(page);
+
+ #ifdef CONFIG_AFS_FSCACHE
+ if (PageFsCache(page)) {
+diff --git a/fs/afs/xattr.c b/fs/afs/xattr.c
+index 84f3c4f575318..38884d6c57cdc 100644
+--- a/fs/afs/xattr.c
++++ b/fs/afs/xattr.c
+@@ -85,7 +85,7 @@ static int afs_xattr_get_acl(const struct xattr_handler *handler,
+ if (acl->size <= size)
+ memcpy(buffer, acl->data, acl->size);
+ else
+- op->error = -ERANGE;
++ ret = -ERANGE;
+ }
+ }
+
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index ea8aaf36647ee..a5347d8dcd76b 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -2034,6 +2034,7 @@ int btrfs_read_block_groups(struct btrfs_fs_info *info)
+ key.offset = 0;
+ btrfs_release_path(path);
+ }
++ btrfs_release_path(path);
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(space_info, &info->space_info, list) {
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index cd392da69b819..376827b04b0a3 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -1061,6 +1061,8 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
+
+ ret = update_ref_for_cow(trans, root, buf, cow, &last_ref);
+ if (ret) {
++ btrfs_tree_unlock(cow);
++ free_extent_buffer(cow);
+ btrfs_abort_transaction(trans, ret);
+ return ret;
+ }
+@@ -1068,6 +1070,8 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
+ if (test_bit(BTRFS_ROOT_SHAREABLE, &root->state)) {
+ ret = btrfs_reloc_cow_block(trans, root, buf, cow);
+ if (ret) {
++ btrfs_tree_unlock(cow);
++ free_extent_buffer(cow);
+ btrfs_abort_transaction(trans, ret);
+ return ret;
+ }
+@@ -1100,6 +1104,8 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
+ if (last_ref) {
+ ret = tree_mod_log_free_eb(buf);
+ if (ret) {
++ btrfs_tree_unlock(cow);
++ free_extent_buffer(cow);
+ btrfs_abort_transaction(trans, ret);
+ return ret;
+ }
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 9a72896bed2ee..2f5ab8c47f506 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -2619,7 +2619,7 @@ enum btrfs_flush_state {
+ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+ struct btrfs_block_rsv *rsv,
+ int nitems, bool use_global_rsv);
+-void btrfs_subvolume_release_metadata(struct btrfs_fs_info *fs_info,
++void btrfs_subvolume_release_metadata(struct btrfs_root *root,
+ struct btrfs_block_rsv *rsv);
+ void btrfs_delalloc_release_extents(struct btrfs_inode *inode, u64 num_bytes);
+
+@@ -3517,6 +3517,8 @@ struct reada_control *btrfs_reada_add(struct btrfs_root *root,
+ int btrfs_reada_wait(void *handle);
+ void btrfs_reada_detach(void *handle);
+ int btree_readahead_hook(struct extent_buffer *eb, int err);
++void btrfs_reada_remove_dev(struct btrfs_device *dev);
++void btrfs_reada_undo_remove_dev(struct btrfs_device *dev);
+
+ static inline int is_fstree(u64 rootid)
+ {
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index bf1595a42a988..0727b10a9a897 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -627,8 +627,7 @@ static int btrfs_delayed_inode_reserve_metadata(
+ */
+ if (!src_rsv || (!trans->bytes_reserved &&
+ src_rsv->type != BTRFS_BLOCK_RSV_DELALLOC)) {
+- ret = btrfs_qgroup_reserve_meta_prealloc(root,
+- fs_info->nodesize, true);
++ ret = btrfs_qgroup_reserve_meta_prealloc(root, num_bytes, true);
+ if (ret < 0)
+ return ret;
+ ret = btrfs_block_rsv_add(root, dst_rsv, num_bytes,
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index e4a1c6afe35dc..b58b33051a89d 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -230,7 +230,7 @@ static int btrfs_init_dev_replace_tgtdev(struct btrfs_fs_info *fs_info,
+ int ret = 0;
+
+ *device_out = NULL;
+- if (fs_info->fs_devices->seeding) {
++ if (srcdev->fs_devices->seeding) {
+ btrfs_err(fs_info, "the filesystem is a seed filesystem!");
+ return -EINVAL;
+ }
+@@ -668,6 +668,9 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ }
+ btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);
+
++ if (!scrub_ret)
++ btrfs_reada_remove_dev(src_device);
++
+ /*
+ * We have to use this loop approach because at this point src_device
+ * has to be available for transaction commit to complete, yet new
+@@ -676,6 +679,7 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ while (1) {
+ trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
++ btrfs_reada_undo_remove_dev(src_device);
+ mutex_unlock(&dev_replace->lock_finishing_cancel_unmount);
+ return PTR_ERR(trans);
+ }
+@@ -726,6 +730,7 @@ error:
+ up_write(&dev_replace->rwsem);
+ mutex_unlock(&fs_info->chunk_mutex);
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
++ btrfs_reada_undo_remove_dev(src_device);
+ btrfs_rm_dev_replace_blocked(fs_info);
+ if (tgt_device)
+ btrfs_destroy_dev_replace_tgtdev(tgt_device);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 9f72b092bc228..7882c07645014 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3482,8 +3482,12 @@ struct btrfs_super_block *btrfs_read_dev_one_super(struct block_device *bdev,
+ return ERR_CAST(page);
+
+ super = page_address(page);
+- if (btrfs_super_bytenr(super) != bytenr ||
+- btrfs_super_magic(super) != BTRFS_MAGIC) {
++ if (btrfs_super_magic(super) != BTRFS_MAGIC) {
++ btrfs_release_disk_super(super);
++ return ERR_PTR(-ENODATA);
++ }
++
++ if (btrfs_super_bytenr(super) != bytenr) {
+ btrfs_release_disk_super(super);
+ return ERR_PTR(-EINVAL);
+ }
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 780b9c9a98fe3..dbff61d36cab4 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -3918,11 +3918,12 @@ static int prepare_allocation(struct btrfs_fs_info *fs_info,
+ * |- Push harder to find free extents
+ * |- If not found, re-iterate all block groups
+ */
+-static noinline int find_free_extent(struct btrfs_fs_info *fs_info,
++static noinline int find_free_extent(struct btrfs_root *root,
+ u64 ram_bytes, u64 num_bytes, u64 empty_size,
+ u64 hint_byte_orig, struct btrfs_key *ins,
+ u64 flags, int delalloc)
+ {
++ struct btrfs_fs_info *fs_info = root->fs_info;
+ int ret = 0;
+ int cache_block_group_error = 0;
+ struct btrfs_block_group *block_group = NULL;
+@@ -3954,7 +3955,7 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info,
+ ins->objectid = 0;
+ ins->offset = 0;
+
+- trace_find_free_extent(fs_info, num_bytes, empty_size, flags);
++ trace_find_free_extent(root, num_bytes, empty_size, flags);
+
+ space_info = btrfs_find_space_info(fs_info, flags);
+ if (!space_info) {
+@@ -4203,7 +4204,7 @@ int btrfs_reserve_extent(struct btrfs_root *root, u64 ram_bytes,
+ flags = get_alloc_profile_by_root(root, is_data);
+ again:
+ WARN_ON(num_bytes < fs_info->sectorsize);
+- ret = find_free_extent(fs_info, ram_bytes, num_bytes, empty_size,
++ ret = find_free_extent(root, ram_bytes, num_bytes, empty_size,
+ hint_byte, ins, flags, delalloc);
+ if (!ret && !is_data) {
+ btrfs_dec_block_group_reservations(fs_info, ins->objectid);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 9570458aa8471..11d132bc2679c 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4051,7 +4051,7 @@ out_end_trans:
+ err = ret;
+ inode->i_flags |= S_DEAD;
+ out_release:
+- btrfs_subvolume_release_metadata(fs_info, &block_rsv);
++ btrfs_subvolume_release_metadata(root, &block_rsv);
+ out_up_write:
+ up_write(&fs_info->subvol_sem);
+ if (err) {
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 2d9109d9e98f9..2a5dc42f07505 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -618,7 +618,7 @@ static noinline int create_subvol(struct inode *dir,
+ trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+ ret = PTR_ERR(trans);
+- btrfs_subvolume_release_metadata(fs_info, &block_rsv);
++ btrfs_subvolume_release_metadata(root, &block_rsv);
+ goto fail_free;
+ }
+ trans->block_rsv = &block_rsv;
+@@ -742,7 +742,7 @@ fail:
+ kfree(root_item);
+ trans->block_rsv = NULL;
+ trans->bytes_reserved = 0;
+- btrfs_subvolume_release_metadata(fs_info, &block_rsv);
++ btrfs_subvolume_release_metadata(root, &block_rsv);
+
+ err = btrfs_commit_transaction(trans);
+ if (err && !ret)
+@@ -856,7 +856,7 @@ fail:
+ if (ret && pending_snapshot->snap)
+ pending_snapshot->snap->anon_dev = 0;
+ btrfs_put_root(pending_snapshot->snap);
+- btrfs_subvolume_release_metadata(fs_info, &pending_snapshot->block_rsv);
++ btrfs_subvolume_release_metadata(root, &pending_snapshot->block_rsv);
+ free_pending:
+ if (pending_snapshot->anon_dev)
+ free_anon_bdev(pending_snapshot->anon_dev);
+diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
+index 243a2e44526ef..49ff7162d3d0a 100644
+--- a/fs/btrfs/reada.c
++++ b/fs/btrfs/reada.c
+@@ -421,6 +421,9 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
+ if (!dev->bdev)
+ continue;
+
++ if (test_bit(BTRFS_DEV_STATE_NO_READA, &dev->dev_state))
++ continue;
++
+ if (dev_replace_is_ongoing &&
+ dev == fs_info->dev_replace.tgtdev) {
+ /*
+@@ -445,6 +448,8 @@ static struct reada_extent *reada_find_extent(struct btrfs_fs_info *fs_info,
+ }
+ have_zone = 1;
+ }
++ if (!have_zone)
++ radix_tree_delete(&fs_info->reada_tree, index);
+ spin_unlock(&fs_info->reada_lock);
+ up_read(&fs_info->dev_replace.rwsem);
+
+@@ -1012,3 +1017,45 @@ void btrfs_reada_detach(void *handle)
+
+ kref_put(&rc->refcnt, reada_control_release);
+ }
++
++/*
++ * Before removing a device (device replace or device remove ioctls), call this
++ * function to wait for all existing readahead requests on the device and to
++ * make sure no one queues more readahead requests for the device.
++ *
++ * Must be called while holding neither the device list mutex nor the device
++ * replace semaphore, otherwise it will deadlock.
++ */
++void btrfs_reada_remove_dev(struct btrfs_device *dev)
++{
++ struct btrfs_fs_info *fs_info = dev->fs_info;
++
++ /* Serialize with readahead extent creation at reada_find_extent(). */
++ spin_lock(&fs_info->reada_lock);
++ set_bit(BTRFS_DEV_STATE_NO_READA, &dev->dev_state);
++ spin_unlock(&fs_info->reada_lock);
++
++ /*
++ * There might be readahead requests added to the radix trees which
++ * were not yet added to the readahead work queue. We need to start
++ * them and wait for their completion, otherwise we can end up with
++ * use-after-free problems when dropping the last reference on the
++ * readahead extents and their zones, as they need to access the
++ * device structure.
++ */
++ reada_start_machine(fs_info);
++ btrfs_flush_workqueue(fs_info->readahead_workers);
++}
++
++/*
++ * If an error happens while removing a device (device replace or device
++ * remove ioctls) after calling btrfs_reada_remove_dev(), call this to undo
++ * what that function did. This is safe to call even if
++ * btrfs_reada_remove_dev() was not called before.
++ */
++void btrfs_reada_undo_remove_dev(struct btrfs_device *dev)
++{
++ spin_lock(&dev->fs_info->reada_lock);
++ clear_bit(BTRFS_DEV_STATE_NO_READA, &dev->dev_state);
++ spin_unlock(&dev->fs_info->reada_lock);
++}
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index 5cd02514cf4d4..bcb785d1867cf 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -520,6 +520,8 @@ process_slot:
+ ret = -EINTR;
+ goto out;
+ }
++
++ cond_resched();
+ }
+ ret = 0;
+
+diff --git a/fs/btrfs/root-tree.c b/fs/btrfs/root-tree.c
+index c89697486366a..702dc5441f039 100644
+--- a/fs/btrfs/root-tree.c
++++ b/fs/btrfs/root-tree.c
+@@ -512,11 +512,20 @@ int btrfs_subvolume_reserve_metadata(struct btrfs_root *root,
+ if (ret && qgroup_num_bytes)
+ btrfs_qgroup_free_meta_prealloc(root, qgroup_num_bytes);
+
++ if (!ret) {
++ spin_lock(&rsv->lock);
++ rsv->qgroup_rsv_reserved += qgroup_num_bytes;
++ spin_unlock(&rsv->lock);
++ }
+ return ret;
+ }
+
+-void btrfs_subvolume_release_metadata(struct btrfs_fs_info *fs_info,
++void btrfs_subvolume_release_metadata(struct btrfs_root *root,
+ struct btrfs_block_rsv *rsv)
+ {
+- btrfs_block_rsv_release(fs_info, rsv, (u64)-1, NULL);
++ struct btrfs_fs_info *fs_info = root->fs_info;
++ u64 qgroup_to_release;
++
++ btrfs_block_rsv_release(fs_info, rsv, (u64)-1, &qgroup_to_release);
++ btrfs_qgroup_convert_reserved_meta(root, qgroup_to_release);
+ }
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index d9813a5b075ac..e357f23fb54ad 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -3812,6 +3812,72 @@ static int update_ref_path(struct send_ctx *sctx, struct recorded_ref *ref)
+ return 0;
+ }
+
++/*
++ * When processing the new references for an inode we may orphanize an existing
++ * directory inode because its old name conflicts with one of the new references
++ * of the current inode. Later, when processing another new reference of our
++ * inode, we might need to orphanize another inode, but the path we have in the
++ * reference reflects the pre-orphanization name of the directory we previously
++ * orphanized. For example:
++ *
++ * parent snapshot looks like:
++ *
++ * . (ino 256)
++ * |----- f1 (ino 257)
++ * |----- f2 (ino 258)
++ * |----- d1/ (ino 259)
++ *        |----- d2/ (ino 260)
++ *
++ * send snapshot looks like:
++ *
++ * . (ino 256)
++ * |----- d1 (ino 258)
++ * |----- f2/ (ino 259)
++ *        |----- f2_link/ (ino 260)
++ *        |      |----- f1 (ino 257)
++ *        |
++ *        |----- d2 (ino 258)
++ *
++ * When processing inode 257 we compute the name for inode 259 as "d1", and we
++ * cache it in the name cache. Later, when we start processing inode 258 and
++ * collect all its new references, we set a full path of "d1/d2" for its new
++ * reference with name "d2". When we start processing the new references we
++ * start by processing the new reference with name "d1", and this results in
++ * orphanizing inode 259, since its old reference causes a conflict. Then we
++ * move on to the next new reference, with name "d2", and we find out we must
++ * orphanize inode 260, as its old reference conflicts with ours - but for the
++ * orphanization we use a source path corresponding to the path we stored in the
++ * new reference, which is "d1/d2" and not "o259-6-0/d2" - this makes the
++ * receiver fail since the path component "d1/" no longer exists, it was renamed
++ * to "o259-6-0/" when processing the previous new reference. So in this case we
++ * must recompute the path in the new reference and use it for the new
++ * orphanization operation.
++ */
++static int refresh_ref_path(struct send_ctx *sctx, struct recorded_ref *ref)
++{
++ char *name;
++ int ret;
++
++ name = kmemdup(ref->name, ref->name_len, GFP_KERNEL);
++ if (!name)
++ return -ENOMEM;
++
++ fs_path_reset(ref->full_path);
++ ret = get_cur_path(sctx, ref->dir, ref->dir_gen, ref->full_path);
++ if (ret < 0)
++ goto out;
++
++ ret = fs_path_add(ref->full_path, name, ref->name_len);
++ if (ret < 0)
++ goto out;
++
++ /* Update the reference's base name pointer. */
++ set_ref_path(ref, ref->full_path);
++out:
++ kfree(name);
++ return ret;
++}
++
+ /*
+ * This does all the move/link/unlink/rmdir magic.
+ */
+@@ -3880,52 +3946,56 @@ static int process_recorded_refs(struct send_ctx *sctx, int *pending_move)
+ goto out;
+ }
+
++ /*
++ * Before doing any rename and link operations, do a first pass on the
++ * new references to orphanize any unprocessed inodes that may have a
++ * reference that conflicts with one of the new references of the current
++ * inode. This needs to happen first because a new reference may conflict
++ * with the old reference of a parent directory, so we must make sure
++ * that the path used for link and rename commands doesn't use an
++ * orphanized name when an ancestor was not yet orphanized.
++ *
++ * Example:
++ *
++ * Parent snapshot:
++ *
++ * . (ino 256)
++ * |----- testdir/ (ino 259)
++ * | |----- a (ino 257)
++ * |
++ * |----- b (ino 258)
++ *
++ * Send snapshot:
++ *
++ * . (ino 256)
++ * |----- testdir_2/ (ino 259)
++ * | |----- a (ino 260)
++ * |
++ * |----- testdir (ino 257)
++ * |----- b (ino 257)
++ * |----- b2 (ino 258)
++ *
++ * Processing the new reference for inode 257 with name "b" may happen
++ * before processing the new reference with name "testdir". If so, we
++ * must make sure that by the time we send a link command to create the
++ * hard link "b", inode 259 was already orphanized, since the generated
++ * path in "valid_path" already contains the orphanized name for 259.
++ * We are processing inode 257, so the rename that changes 259's temporary
++ * (orphanized) name to "testdir_2" only happens later, when inode 259
++ * itself is processed.
++ */
+ list_for_each_entry(cur, &sctx->new_refs, list) {
+- /*
+- * We may have refs where the parent directory does not exist
+- * yet. This happens if the parent directories inum is higher
+- * than the current inum. To handle this case, we create the
+- * parent directory out of order. But we need to check if this
+- * did already happen before due to other refs in the same dir.
+- */
+ ret = get_cur_inode_state(sctx, cur->dir, cur->dir_gen);
+ if (ret < 0)
+ goto out;
+- if (ret == inode_state_will_create) {
+- ret = 0;
+- /*
+- * First check if any of the current inodes refs did
+- * already create the dir.
+- */
+- list_for_each_entry(cur2, &sctx->new_refs, list) {
+- if (cur == cur2)
+- break;
+- if (cur2->dir == cur->dir) {
+- ret = 1;
+- break;
+- }
+- }
+-
+- /*
+- * If that did not happen, check if a previous inode
+- * did already create the dir.
+- */
+- if (!ret)
+- ret = did_create_dir(sctx, cur->dir);
+- if (ret < 0)
+- goto out;
+- if (!ret) {
+- ret = send_create_inode(sctx, cur->dir);
+- if (ret < 0)
+- goto out;
+- }
+- }
++ if (ret == inode_state_will_create)
++ continue;
+
+ /*
+- * Check if this new ref would overwrite the first ref of
+- * another unprocessed inode. If yes, orphanize the
+- * overwritten inode. If we find an overwritten ref that is
+- * not the first ref, simply unlink it.
++ * Check if this new ref would overwrite the first ref of another
++ * unprocessed inode. If yes, orphanize the overwritten inode.
++ * If we find an overwritten ref that is not the first ref,
++ * simply unlink it.
+ */
+ ret = will_overwrite_ref(sctx, cur->dir, cur->dir_gen,
+ cur->name, cur->name_len,
+@@ -3942,6 +4012,12 @@ static int process_recorded_refs(struct send_ctx *sctx, int *pending_move)
+ struct name_cache_entry *nce;
+ struct waiting_dir_move *wdm;
+
++ if (orphanized_dir) {
++ ret = refresh_ref_path(sctx, cur);
++ if (ret < 0)
++ goto out;
++ }
++
+ ret = orphanize_inode(sctx, ow_inode, ow_gen,
+ cur->full_path);
+ if (ret < 0)
+@@ -4004,6 +4080,49 @@ static int process_recorded_refs(struct send_ctx *sctx, int *pending_move)
+ }
+ }
+
++ }
++
++ list_for_each_entry(cur, &sctx->new_refs, list) {
++ /*
++ * We may have refs where the parent directory does not exist
++ * yet. This happens if the parent directory's inum is higher
++ * than the current inum. To handle this case, we create the
++ * parent directory out of order. But we need to check if this
++ * did already happen before due to other refs in the same dir.
++ */
++ ret = get_cur_inode_state(sctx, cur->dir, cur->dir_gen);
++ if (ret < 0)
++ goto out;
++ if (ret == inode_state_will_create) {
++ ret = 0;
++ /*
++ * First check if any of the current inodes refs did
++ * already create the dir.
++ */
++ list_for_each_entry(cur2, &sctx->new_refs, list) {
++ if (cur == cur2)
++ break;
++ if (cur2->dir == cur->dir) {
++ ret = 1;
++ break;
++ }
++ }
++
++ /*
++ * If that did not happen, check if a previous inode
++ * did already create the dir.
++ */
++ if (!ret)
++ ret = did_create_dir(sctx, cur->dir);
++ if (ret < 0)
++ goto out;
++ if (!ret) {
++ ret = send_create_inode(sctx, cur->dir);
++ if (ret < 0)
++ goto out;
++ }
++ }
++
+ if (S_ISDIR(sctx->cur_inode_mode) && sctx->parent_root) {
+ ret = wait_for_dest_dir_move(sctx, cur, is_orphan);
+ if (ret < 0)
+@@ -7181,7 +7300,7 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+
+ alloc_size = sizeof(struct clone_root) * (arg->clone_sources_count + 1);
+
+- sctx->clone_roots = kzalloc(alloc_size, GFP_KERNEL);
++ sctx->clone_roots = kvzalloc(alloc_size, GFP_KERNEL);
+ if (!sctx->clone_roots) {
+ ret = -ENOMEM;
+ goto out;
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 7b1fee630f978..8784b74f5232e 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -760,18 +760,36 @@ int btrfs_check_chunk_valid(struct extent_buffer *leaf,
+ u64 type;
+ u64 features;
+ bool mixed = false;
++ int raid_index;
++ int nparity;
++ int ncopies;
+
+ length = btrfs_chunk_length(leaf, chunk);
+ stripe_len = btrfs_chunk_stripe_len(leaf, chunk);
+ num_stripes = btrfs_chunk_num_stripes(leaf, chunk);
+ sub_stripes = btrfs_chunk_sub_stripes(leaf, chunk);
+ type = btrfs_chunk_type(leaf, chunk);
++ raid_index = btrfs_bg_flags_to_raid_index(type);
++ ncopies = btrfs_raid_array[raid_index].ncopies;
++ nparity = btrfs_raid_array[raid_index].nparity;
+
+ if (!num_stripes) {
+ chunk_err(leaf, chunk, logical,
+ "invalid chunk num_stripes, have %u", num_stripes);
+ return -EUCLEAN;
+ }
++ if (num_stripes < ncopies) {
++ chunk_err(leaf, chunk, logical,
++ "invalid chunk num_stripes < ncopies, have %u < %d",
++ num_stripes, ncopies);
++ return -EUCLEAN;
++ }
++ if (nparity && num_stripes == nparity) {
++ chunk_err(leaf, chunk, logical,
++ "invalid chunk num_stripes == nparity, have %u == %d",
++ num_stripes, nparity);
++ return -EUCLEAN;
++ }
+ if (!IS_ALIGNED(logical, fs_info->sectorsize)) {
+ chunk_err(leaf, chunk, logical,
+ "invalid chunk logical, have %llu should aligned to %u",
+@@ -1035,7 +1053,7 @@ static int check_root_item(struct extent_buffer *leaf, struct btrfs_key *key,
+ int slot)
+ {
+ struct btrfs_fs_info *fs_info = leaf->fs_info;
+- struct btrfs_root_item ri;
++ struct btrfs_root_item ri = { 0 };
+ const u64 valid_root_flags = BTRFS_ROOT_SUBVOL_RDONLY |
+ BTRFS_ROOT_SUBVOL_DEAD;
+ int ret;
+@@ -1044,14 +1062,21 @@ static int check_root_item(struct extent_buffer *leaf, struct btrfs_key *key,
+ if (ret < 0)
+ return ret;
+
+- if (btrfs_item_size_nr(leaf, slot) != sizeof(ri)) {
++ if (btrfs_item_size_nr(leaf, slot) != sizeof(ri) &&
++ btrfs_item_size_nr(leaf, slot) != btrfs_legacy_root_item_size()) {
+ generic_err(leaf, slot,
+- "invalid root item size, have %u expect %zu",
+- btrfs_item_size_nr(leaf, slot), sizeof(ri));
++ "invalid root item size, have %u expect %zu or %u",
++ btrfs_item_size_nr(leaf, slot), sizeof(ri),
++ btrfs_legacy_root_item_size());
+ }
+
++ /*
++ * For a legacy root item, the members starting at generation_v2 will
++ * all be filled with 0.
++ * And since we allow generation_v2 to be 0, it will still pass the check.
++ */
+ read_extent_buffer(leaf, &ri, btrfs_item_ptr_offset(leaf, slot),
+- sizeof(ri));
++ btrfs_item_size_nr(leaf, slot));
+
+ /* Generation related */
+ if (btrfs_root_generation(&ri) >
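+
For context on the relaxed size check above: btrfs_legacy_root_item_size() is declared in the btrfs headers changed elsewhere in this patch. In mainline it reduces to the offset of the generation_v2 member, i.e. the size of a root item written before the v2 fields were appended. A sketch, not the patch's literal hunk:

/* Assumed shape of the helper used by check_root_item() above. */
static inline u32 btrfs_legacy_root_item_size(void)
{
	/* Everything up to, but not including, generation_v2. */
	return offsetof(struct btrfs_root_item, generation_v2);
}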
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 39da9db352786..a6f061fcd3001 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3615,6 +3615,7 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,
+ * search and this search we'll not find the key again and can just
+ * bail.
+ */
++search:
+ ret = btrfs_search_slot(NULL, root, &min_key, path, 0, 0);
+ if (ret != 0)
+ goto done;
+@@ -3634,6 +3635,13 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,
+
+ if (min_key.objectid != ino || min_key.type != key_type)
+ goto done;
++
++ if (need_resched()) {
++ btrfs_release_path(path);
++ cond_resched();
++ goto search;
++ }
++
+ ret = overwrite_item(trans, log, dst_path, src, i,
+ &min_key);
+ if (ret) {
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index e61c298ce2b42..309734fdd1580 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -942,16 +942,18 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ bdput(path_bdev);
+ mutex_unlock(&fs_devices->device_list_mutex);
+ btrfs_warn_in_rcu(device->fs_info,
+- "duplicate device fsid:devid for %pU:%llu old:%s new:%s",
+- disk_super->fsid, devid,
+- rcu_str_deref(device->name), path);
++ "duplicate device %s devid %llu generation %llu scanned by %s (%d)",
++ path, devid, found_transid,
++ current->comm,
++ task_pid_nr(current));
+ return ERR_PTR(-EEXIST);
+ }
+ bdput(path_bdev);
+ btrfs_info_in_rcu(device->fs_info,
+- "device fsid %pU devid %llu moved old:%s new:%s",
+- disk_super->fsid, devid,
+- rcu_str_deref(device->name), path);
++ "devid %llu device path %s changed to %s scanned by %s (%d)",
++ devid, rcu_str_deref(device->name),
++ path, current->comm,
++ task_pid_nr(current));
+ }
+
+ name = rcu_string_strdup(path, GFP_NOFS);
+@@ -1198,17 +1200,23 @@ static int open_fs_devices(struct btrfs_fs_devices *fs_devices,
+ {
+ struct btrfs_device *device;
+ struct btrfs_device *latest_dev = NULL;
++ struct btrfs_device *tmp_device;
+
+ flags |= FMODE_EXCL;
+
+- list_for_each_entry(device, &fs_devices->devices, dev_list) {
+- /* Just open everything we can; ignore failures here */
+- if (btrfs_open_one_device(fs_devices, device, flags, holder))
+- continue;
++ list_for_each_entry_safe(device, tmp_device, &fs_devices->devices,
++ dev_list) {
++ int ret;
+
+- if (!latest_dev ||
+- device->generation > latest_dev->generation)
++ ret = btrfs_open_one_device(fs_devices, device, flags, holder);
++ if (ret == 0 &&
++ (!latest_dev || device->generation > latest_dev->generation)) {
+ latest_dev = device;
++ } else if (ret == -ENODATA) {
++ fs_devices->num_devices--;
++ list_del(&device->dev_list);
++ btrfs_free_device(device);
++ }
+ }
+ if (fs_devices->open_devices == 0)
+ return -EINVAL;
+@@ -2096,6 +2104,8 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info, const char *device_path,
+
+ mutex_unlock(&uuid_mutex);
+ ret = btrfs_shrink_device(device, 0);
++ if (!ret)
++ btrfs_reada_remove_dev(device);
+ mutex_lock(&uuid_mutex);
+ if (ret)
+ goto error_undo;
+@@ -2183,6 +2193,7 @@ out:
+ return ret;
+
+ error_undo:
++ btrfs_reada_undo_remove_dev(device);
+ if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) {
+ mutex_lock(&fs_info->chunk_mutex);
+ list_add(&device->dev_alloc_list,
+@@ -2611,9 +2622,6 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ btrfs_set_super_num_devices(fs_info->super_copy,
+ orig_super_num_devices + 1);
+
+- /* add sysfs device entry */
+- btrfs_sysfs_add_devices_dir(fs_devices, device);
+-
+ /*
+ * we've got more storage, clear any full flags on the space
+ * infos
+@@ -2621,6 +2629,10 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ btrfs_clear_space_info_full(fs_info);
+
+ mutex_unlock(&fs_info->chunk_mutex);
++
++ /* Add sysfs device entry */
++ btrfs_sysfs_add_devices_dir(fs_devices, device);
++
+ mutex_unlock(&fs_devices->device_list_mutex);
+
+ if (seeding_dev) {
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index 302c9234f7d08..2a33a6af289b9 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -50,6 +50,7 @@ struct btrfs_io_geometry {
+ #define BTRFS_DEV_STATE_MISSING (2)
+ #define BTRFS_DEV_STATE_REPLACE_TGT (3)
+ #define BTRFS_DEV_STATE_FLUSH_SENT (4)
++#define BTRFS_DEV_STATE_NO_READA (5)
+
+ struct btrfs_device {
+ struct list_head dev_list; /* device_list_mutex */
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 50bbc99e3d960..5a28a6aa7f16b 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -2771,16 +2771,6 @@ int nobh_writepage(struct page *page, get_block_t *get_block,
+ /* Is the page fully outside i_size? (truncate in progress) */
+ offset = i_size & (PAGE_SIZE-1);
+ if (page->index >= end_index+1 || !offset) {
+- /*
+- * The page may have dirty, unmapped buffers. For example,
+- * they may have been added in ext3_writepage(). Make them
+- * freeable here, so the page does not leak.
+- */
+-#if 0
+- /* Not really sure about this - do we need this ? */
+- if (page->mapping->a_ops->invalidatepage)
+- page->mapping->a_ops->invalidatepage(page, offset);
+-#endif
+ unlock_page(page);
+ return 0; /* don't care */
+ }
+@@ -2975,12 +2965,6 @@ int block_write_full_page(struct page *page, get_block_t *get_block,
+ /* Is the page fully outside i_size? (truncate in progress) */
+ offset = i_size & (PAGE_SIZE-1);
+ if (page->index >= end_index+1 || !offset) {
+- /*
+- * The page may have dirty, unmapped buffers. For example,
+- * they may have been added in ext3_writepage(). Make them
+- * freeable here, so the page does not leak.
+- */
+- do_invalidatepage(page, 0, PAGE_SIZE);
+ unlock_page(page);
+ return 0; /* don't care */
+ }
+diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
+index 3080cda9e8245..8bda092e60c5a 100644
+--- a/fs/cachefiles/rdwr.c
++++ b/fs/cachefiles/rdwr.c
+@@ -121,7 +121,7 @@ static int cachefiles_read_reissue(struct cachefiles_object *object,
+ _debug("reissue read");
+ ret = bmapping->a_ops->readpage(NULL, backpage);
+ if (ret < 0)
+- goto unlock_discard;
++ goto discard;
+ }
+
+ /* but the page may have been read before the monitor was installed, so
+@@ -138,6 +138,7 @@ static int cachefiles_read_reissue(struct cachefiles_object *object,
+
+ unlock_discard:
+ unlock_page(backpage);
++discard:
+ spin_lock_irq(&object->work_lock);
+ list_del(&monitor->op_link);
+ spin_unlock_irq(&object->work_lock);
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 6ea761c84494f..970e5a0940350 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -1522,7 +1522,7 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
+ struct ceph_inode_info *ci = ceph_inode(inode);
+ struct ceph_file_info *fi = vma->vm_file->private_data;
+ struct page *pinned_page = NULL;
+- loff_t off = vmf->pgoff << PAGE_SHIFT;
++ loff_t off = (loff_t)vmf->pgoff << PAGE_SHIFT;
+ int want, got, err;
+ sigset_t oldset;
+ vm_fault_t ret = VM_FAULT_SIGBUS;
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 4a26862d7667e..76d8d9495d1d4 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3612,6 +3612,39 @@ fail_msg:
+ return err;
+ }
+
++static struct dentry* d_find_primary(struct inode *inode)
++{
++ struct dentry *alias, *dn = NULL;
++
++ if (hlist_empty(&inode->i_dentry))
++ return NULL;
++
++ spin_lock(&inode->i_lock);
++ if (hlist_empty(&inode->i_dentry))
++ goto out_unlock;
++
++ if (S_ISDIR(inode->i_mode)) {
++ alias = hlist_entry(inode->i_dentry.first, struct dentry, d_u.d_alias);
++ if (!IS_ROOT(alias))
++ dn = dget(alias);
++ goto out_unlock;
++ }
++
++ hlist_for_each_entry(alias, &inode->i_dentry, d_u.d_alias) {
++ spin_lock(&alias->d_lock);
++ if (!d_unhashed(alias) &&
++ (ceph_dentry(alias)->flags & CEPH_DENTRY_PRIMARY_LINK)) {
++ dn = dget_dlock(alias);
++ }
++ spin_unlock(&alias->d_lock);
++ if (dn)
++ break;
++ }
++out_unlock:
++ spin_unlock(&inode->i_lock);
++ return dn;
++}
++
+ /*
+ * Encode information about a cap for a reconnect with the MDS.
+ */
+@@ -3625,13 +3658,32 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ struct ceph_inode_info *ci = cap->ci;
+ struct ceph_reconnect_state *recon_state = arg;
+ struct ceph_pagelist *pagelist = recon_state->pagelist;
+- int err;
++ struct dentry *dentry;
++ char *path;
++ int pathlen, err;
++ u64 pathbase;
+ u64 snap_follows;
+
+ dout(" adding %p ino %llx.%llx cap %p %lld %s\n",
+ inode, ceph_vinop(inode), cap, cap->cap_id,
+ ceph_cap_string(cap->issued));
+
++ dentry = d_find_primary(inode);
++ if (dentry) {
++ /* set pathbase to parent dir when msg_version >= 2 */
++ path = ceph_mdsc_build_path(dentry, &pathlen, &pathbase,
++ recon_state->msg_version >= 2);
++ dput(dentry);
++ if (IS_ERR(path)) {
++ err = PTR_ERR(path);
++ goto out_err;
++ }
++ } else {
++ path = NULL;
++ pathlen = 0;
++ pathbase = 0;
++ }
++
+ spin_lock(&ci->i_ceph_lock);
+ cap->seq = 0; /* reset cap seq */
+ cap->issue_seq = 0; /* and issue_seq */
+@@ -3652,7 +3704,7 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ rec.v2.wanted = cpu_to_le32(__ceph_caps_wanted(ci));
+ rec.v2.issued = cpu_to_le32(cap->issued);
+ rec.v2.snaprealm = cpu_to_le64(ci->i_snap_realm->ino);
+- rec.v2.pathbase = 0;
++ rec.v2.pathbase = cpu_to_le64(pathbase);
+ rec.v2.flock_len = (__force __le32)
+ ((ci->i_ceph_flags & CEPH_I_ERROR_FILELOCK) ? 0 : 1);
+ } else {
+@@ -3663,7 +3715,7 @@ static int reconnect_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ ceph_encode_timespec64(&rec.v1.mtime, &inode->i_mtime);
+ ceph_encode_timespec64(&rec.v1.atime, &inode->i_atime);
+ rec.v1.snaprealm = cpu_to_le64(ci->i_snap_realm->ino);
+- rec.v1.pathbase = 0;
++ rec.v1.pathbase = cpu_to_le64(pathbase);
+ }
+
+ if (list_empty(&ci->i_cap_snaps)) {
+@@ -3725,7 +3777,7 @@ encode_again:
+ sizeof(struct ceph_filelock);
+ rec.v2.flock_len = cpu_to_le32(struct_len);
+
+- struct_len += sizeof(u32) + sizeof(rec.v2);
++ struct_len += sizeof(u32) + pathlen + sizeof(rec.v2);
+
+ if (struct_v >= 2)
+ struct_len += sizeof(u64); /* snap_follows */
+@@ -3749,7 +3801,7 @@ encode_again:
+ ceph_pagelist_encode_8(pagelist, 1);
+ ceph_pagelist_encode_32(pagelist, struct_len);
+ }
+- ceph_pagelist_encode_string(pagelist, NULL, 0);
++ ceph_pagelist_encode_string(pagelist, path, pathlen);
+ ceph_pagelist_append(pagelist, &rec, sizeof(rec.v2));
+ ceph_locks_to_pagelist(flocks, pagelist,
+ num_fcntl_locks, num_flock_locks);
+@@ -3758,39 +3810,20 @@ encode_again:
+ out_freeflocks:
+ kfree(flocks);
+ } else {
+- u64 pathbase = 0;
+- int pathlen = 0;
+- char *path = NULL;
+- struct dentry *dentry;
+-
+- dentry = d_find_alias(inode);
+- if (dentry) {
+- path = ceph_mdsc_build_path(dentry,
+- &pathlen, &pathbase, 0);
+- dput(dentry);
+- if (IS_ERR(path)) {
+- err = PTR_ERR(path);
+- goto out_err;
+- }
+- rec.v1.pathbase = cpu_to_le64(pathbase);
+- }
+-
+ err = ceph_pagelist_reserve(pagelist,
+ sizeof(u64) + sizeof(u32) +
+ pathlen + sizeof(rec.v1));
+- if (err) {
+- goto out_freepath;
+- }
++ if (err)
++ goto out_err;
+
+ ceph_pagelist_encode_64(pagelist, ceph_ino(inode));
+ ceph_pagelist_encode_string(pagelist, path, pathlen);
+ ceph_pagelist_append(pagelist, &rec, sizeof(rec.v1));
+-out_freepath:
+- ceph_mdsc_free_path(path, pathlen);
+ }
+
+ out_err:
+- if (err >= 0)
++ ceph_mdsc_free_path(path, pathlen);
++ if (!err)
+ recon_state->nr_caps++;
+ return err;
+ }
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index b565d83ba89ed..5a491afafacc7 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -510,6 +510,8 @@ struct smb_version_operations {
+ struct fiemap_extent_info *, u64, u64);
+ /* version specific llseek implementation */
+ loff_t (*llseek)(struct file *, struct cifs_tcon *, loff_t, int);
++ /* Check for STATUS_IO_TIMEOUT */
++ bool (*is_status_io_timeout)(char *buf);
+ };
+
+ struct smb_version_values {
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 9817a31a39db6..b8780a79a42a2 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -69,6 +69,9 @@ extern bool disable_legacy_dialects;
+ #define TLINK_ERROR_EXPIRE (1 * HZ)
+ #define TLINK_IDLE_EXPIRE (600 * HZ)
+
++/* Drop the connection to not overload the server */
++#define NUM_STATUS_IO_TIMEOUT 5
++
+ enum {
+ /* Mount options that take no arguments */
+ Opt_user_xattr, Opt_nouser_xattr,
+@@ -1117,7 +1120,7 @@ cifs_demultiplex_thread(void *p)
+ struct task_struct *task_to_wake = NULL;
+ struct mid_q_entry *mids[MAX_COMPOUND];
+ char *bufs[MAX_COMPOUND];
+- unsigned int noreclaim_flag;
++ unsigned int noreclaim_flag, num_io_timeout = 0;
+
+ noreclaim_flag = memalloc_noreclaim_save();
+ cifs_dbg(FYI, "Demultiplex PID: %d\n", task_pid_nr(current));
+@@ -1213,6 +1216,16 @@ next_pdu:
+ continue;
+ }
+
++ if (server->ops->is_status_io_timeout &&
++ server->ops->is_status_io_timeout(buf)) {
++ num_io_timeout++;
++ if (num_io_timeout > NUM_STATUS_IO_TIMEOUT) {
++ cifs_reconnect(server);
++ num_io_timeout = 0;
++ continue;
++ }
++ }
++
+ server->lstrp = jiffies;
+
+ for (i = 0; i < num_mids; i++) {
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 1f75b25e559a7..daec31be85718 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2883,13 +2883,18 @@ cifs_setattr(struct dentry *direntry, struct iattr *attrs)
+ {
+ struct cifs_sb_info *cifs_sb = CIFS_SB(direntry->d_sb);
+ struct cifs_tcon *pTcon = cifs_sb_master_tcon(cifs_sb);
++ int rc, retries = 0;
+
+- if (pTcon->unix_ext)
+- return cifs_setattr_unix(direntry, attrs);
+-
+- return cifs_setattr_nounix(direntry, attrs);
++ do {
++ if (pTcon->unix_ext)
++ rc = cifs_setattr_unix(direntry, attrs);
++ else
++ rc = cifs_setattr_nounix(direntry, attrs);
++ retries++;
++ } while (is_retryable_error(rc) && retries < 2);
+
+ /* BB: add cifs_setattr_legacy for really old servers */
++ return rc;
+ }
+
+ #if 0
+diff --git a/fs/cifs/smb2maperror.c b/fs/cifs/smb2maperror.c
+index 7fde3775cb574..b004cf87692a7 100644
+--- a/fs/cifs/smb2maperror.c
++++ b/fs/cifs/smb2maperror.c
+@@ -488,7 +488,7 @@ static const struct status_to_posix_error smb2_error_map_table[] = {
+ {STATUS_PIPE_CONNECTED, -EIO, "STATUS_PIPE_CONNECTED"},
+ {STATUS_PIPE_LISTENING, -EIO, "STATUS_PIPE_LISTENING"},
+ {STATUS_INVALID_READ_MODE, -EIO, "STATUS_INVALID_READ_MODE"},
+- {STATUS_IO_TIMEOUT, -ETIMEDOUT, "STATUS_IO_TIMEOUT"},
++ {STATUS_IO_TIMEOUT, -EAGAIN, "STATUS_IO_TIMEOUT"},
+ {STATUS_FILE_FORCED_CLOSED, -EIO, "STATUS_FILE_FORCED_CLOSED"},
+ {STATUS_PROFILING_NOT_STARTED, -EIO, "STATUS_PROFILING_NOT_STARTED"},
+ {STATUS_PROFILING_NOT_STOPPED, -EIO, "STATUS_PROFILING_NOT_STOPPED"},
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 09e1cd320ee56..e2e53652193e6 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -2346,6 +2346,17 @@ smb2_is_session_expired(char *buf)
+ return true;
+ }
+
++static bool
++smb2_is_status_io_timeout(char *buf)
++{
++ struct smb2_sync_hdr *shdr = (struct smb2_sync_hdr *)buf;
++
++ if (shdr->Status == STATUS_IO_TIMEOUT)
++ return true;
++ else
++ return false;
++}
++
+ static int
+ smb2_oplock_response(struct cifs_tcon *tcon, struct cifs_fid *fid,
+ struct cifsInodeInfo *cinode)
+@@ -4816,6 +4827,7 @@ struct smb_version_operations smb20_operations = {
+ .make_node = smb2_make_node,
+ .fiemap = smb3_fiemap,
+ .llseek = smb3_llseek,
++ .is_status_io_timeout = smb2_is_status_io_timeout,
+ };
+
+ struct smb_version_operations smb21_operations = {
+@@ -4916,6 +4928,7 @@ struct smb_version_operations smb21_operations = {
+ .make_node = smb2_make_node,
+ .fiemap = smb3_fiemap,
+ .llseek = smb3_llseek,
++ .is_status_io_timeout = smb2_is_status_io_timeout,
+ };
+
+ struct smb_version_operations smb30_operations = {
+@@ -5026,6 +5039,7 @@ struct smb_version_operations smb30_operations = {
+ .make_node = smb2_make_node,
+ .fiemap = smb3_fiemap,
+ .llseek = smb3_llseek,
++ .is_status_io_timeout = smb2_is_status_io_timeout,
+ };
+
+ struct smb_version_operations smb311_operations = {
+@@ -5137,6 +5151,7 @@ struct smb_version_operations smb311_operations = {
+ .make_node = smb2_make_node,
+ .fiemap = smb3_fiemap,
+ .llseek = smb3_llseek,
++ .is_status_io_timeout = smb2_is_status_io_timeout,
+ };
+
+ struct smb_version_values smb20_values = {
+diff --git a/fs/exec.c b/fs/exec.c
+index 07910f5032e74..529c3bcefb650 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -955,6 +955,7 @@ int kernel_read_file(struct file *file, void **buf, loff_t *size,
+ {
+ loff_t i_size, pos;
+ ssize_t bytes = 0;
++ void *allocated = NULL;
+ int ret;
+
+ if (!S_ISREG(file_inode(file)->i_mode) || max_size < 0)
+@@ -978,8 +979,8 @@ int kernel_read_file(struct file *file, void **buf, loff_t *size,
+ goto out;
+ }
+
+- if (id != READING_FIRMWARE_PREALLOC_BUFFER)
+- *buf = vmalloc(i_size);
++ if (!*buf)
++ *buf = allocated = vmalloc(i_size);
+ if (!*buf) {
+ ret = -ENOMEM;
+ goto out;
+@@ -1008,7 +1009,7 @@ int kernel_read_file(struct file *file, void **buf, loff_t *size,
+
+ out_free:
+ if (ret < 0) {
+- if (id != READING_FIRMWARE_PREALLOC_BUFFER) {
++ if (allocated) {
+ vfree(*buf);
+ *buf = NULL;
+ }
+@@ -1131,11 +1132,24 @@ static int exec_mmap(struct mm_struct *mm)
+ }
+
+ task_lock(tsk);
+- active_mm = tsk->active_mm;
+ membarrier_exec_mmap(mm);
+- tsk->mm = mm;
++
++ local_irq_disable();
++ active_mm = tsk->active_mm;
+ tsk->active_mm = mm;
++ tsk->mm = mm;
++ /*
++ * This prevents preemption while active_mm is being loaded and
++ * it and mm are being updated, which could cause problems for
++ * lazy tlb mm refcounting when these are updated by context
++ * switches. Not all architectures can handle irqs off over
++ * activate_mm yet.
++ */
++ if (!IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
++ local_irq_enable();
+ activate_mm(active_mm, mm);
++ if (IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
++ local_irq_enable();
+ tsk->mm->vmacache_seqnum = 0;
+ vmacache_flush(tsk);
+ task_unlock(tsk);
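+
With the kernel_read_file() change above, the function remembers whether it vmalloc()ed *buf itself (the "allocated" local) and frees only its own allocation on error, leaving caller-preallocated buffers untouched. A rough caller sketch under that convention; load_blob() and consume_blob() are hypothetical names used only for illustration:

static int load_blob(struct file *file)
{
	void *buf = NULL;	/* NULL => kernel_read_file() allocates */
	loff_t size;
	int ret;

	ret = kernel_read_file(file, &buf, &size, INT_MAX,
			       READING_FIRMWARE);
	if (ret < 0)
		return ret;	/* on error, buf was freed and reset to NULL */

	consume_blob(buf, size);	/* hypothetical consumer */
	vfree(buf);
	return 0;
}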
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 48c3df47748db..8e7e9715cde9c 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -494,6 +494,7 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group,
+ * submit the buffer_head for reading
+ */
+ set_buffer_new(bh);
++ clear_buffer_verified(bh);
+ trace_ext4_read_block_bitmap_load(sb, block_group, ignore_locked);
+ bh->b_end_io = ext4_end_bitmap_read;
+ get_bh(bh);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index a0481582187a3..9e506129ea367 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -501,6 +501,7 @@ __read_extent_tree_block(const char *function, unsigned int line,
+
+ if (!bh_uptodate_or_lock(bh)) {
+ trace_ext4_ext_load_extent(inode, pblk, _RET_IP_);
++ clear_buffer_verified(bh);
+ err = bh_submit_read(bh);
+ if (err < 0)
+ goto errout;
+@@ -1471,16 +1472,16 @@ static int ext4_ext_search_left(struct inode *inode,
+ }
+
+ /*
+- * search the closest allocated block to the right for *logical
+- * and returns it at @logical + it's physical address at @phys
+- * if *logical is the largest allocated block, the function
+- * returns 0 at @phys
+- * return value contains 0 (success) or error code
++ * Search the closest allocated block to the right for *logical
++ * and return it at @logical + its physical address at @phys.
++ * If no allocated block exists to the right, return 0 and set
++ * @phys to 0. Return 1 when an allocated block was found, in
++ * which case ret_ex is valid. Or return a (< 0) error code.
+ */
+ static int ext4_ext_search_right(struct inode *inode,
+ struct ext4_ext_path *path,
+ ext4_lblk_t *logical, ext4_fsblk_t *phys,
+- struct ext4_extent **ret_ex)
++ struct ext4_extent *ret_ex)
+ {
+ struct buffer_head *bh = NULL;
+ struct ext4_extent_header *eh;
+@@ -1574,10 +1575,11 @@ got_index:
+ found_extent:
+ *logical = le32_to_cpu(ex->ee_block);
+ *phys = ext4_ext_pblock(ex);
+- *ret_ex = ex;
++ if (ret_ex)
++ *ret_ex = *ex;
+ if (bh)
+ put_bh(bh);
+- return 0;
++ return 1;
+ }
+
+ /*
+@@ -2868,8 +2870,8 @@ again:
+ */
+ lblk = ex_end + 1;
+ err = ext4_ext_search_right(inode, path, &lblk, &pblk,
+- &ex);
+- if (err)
++ NULL);
++ if (err < 0)
+ goto out;
+ if (pblk) {
+ partial.pclu = EXT4_B2C(sbi, pblk);
+@@ -4037,7 +4039,7 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
+ struct ext4_map_blocks *map, int flags)
+ {
+ struct ext4_ext_path *path = NULL;
+- struct ext4_extent newex, *ex, *ex2;
++ struct ext4_extent newex, *ex, ex2;
+ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ ext4_fsblk_t newblock = 0, pblk;
+ int err = 0, depth, ret;
+@@ -4173,15 +4175,14 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
+ if (err)
+ goto out;
+ ar.lright = map->m_lblk;
+- ex2 = NULL;
+ err = ext4_ext_search_right(inode, path, &ar.lright, &ar.pright, &ex2);
+- if (err)
++ if (err < 0)
+ goto out;
+
+ /* Check if the extent after searching to the right implies a
+ * cluster we can use. */
+- if ((sbi->s_cluster_ratio > 1) && ex2 &&
+- get_implied_cluster_alloc(inode->i_sb, map, ex2, path)) {
++ if ((sbi->s_cluster_ratio > 1) && err &&
++ get_implied_cluster_alloc(inode->i_sb, map, &ex2, path)) {
+ ar.len = allocated = map->m_len;
+ newblock = map->m_pblk;
+ goto got_allocated_blocks;
+@@ -4769,7 +4770,7 @@ int ext4_convert_unwritten_extents(handle_t *handle, struct inode *inode,
+
+ int ext4_convert_unwritten_io_end_vec(handle_t *handle, ext4_io_end_t *io_end)
+ {
+- int ret, err = 0;
++ int ret = 0, err = 0;
+ struct ext4_io_end_vec *io_end_vec;
+
+ /*
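+
A note on the ext4_ext_search_right() hunks earlier in this file diff: the function now returns 1 when a right-neighbour extent is found, 0 when none exists, or a negative errno, and it copies the extent into caller-owned storage instead of handing back a pointer into the path's buffer. A sketch of the resulting calling pattern; use_right_extent() is a hypothetical consumer:

struct ext4_extent ex2;
int err = ext4_ext_search_right(inode, path, &lblk, &pblk, &ex2);

if (err < 0)
	goto out;			/* real error */
if (err == 1)
	use_right_extent(&ex2);		/* ex2 is a stable by-value copy */
/* err == 0: no allocated block to the right; pblk was set to 0 */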
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index df25d38d65393..20cda952c6219 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -188,6 +188,7 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
+ /*
+ * submit the buffer_head for reading
+ */
++ clear_buffer_verified(bh);
+ trace_ext4_load_inode_bitmap(sb, block_group);
+ bh->b_end_io = ext4_end_bitmap_read;
+ get_bh(bh);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index bf596467c234c..16a8c29256cd9 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -884,6 +884,7 @@ struct buffer_head *ext4_bread(handle_t *handle, struct inode *inode,
+ return bh;
+ if (!bh || ext4_buffer_uptodate(bh))
+ return bh;
++ clear_buffer_verified(bh);
+ ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1, &bh);
+ wait_on_buffer(bh);
+ if (buffer_uptodate(bh))
+@@ -909,9 +910,11 @@ int ext4_bread_batch(struct inode *inode, ext4_lblk_t block, int bh_count,
+
+ for (i = 0; i < bh_count; i++)
+ /* Note that NULL bhs[i] is valid because of holes. */
+- if (bhs[i] && !ext4_buffer_uptodate(bhs[i]))
++ if (bhs[i] && !ext4_buffer_uptodate(bhs[i])) {
++ clear_buffer_verified(bhs[i]);
+ ll_rw_block(REQ_OP_READ, REQ_META | REQ_PRIO, 1,
+ &bhs[i]);
++ }
+
+ if (!wait)
+ return 0;
+@@ -2254,7 +2257,7 @@ static int mpage_process_page(struct mpage_da_data *mpd, struct page *page,
+ err = PTR_ERR(io_end_vec);
+ goto out;
+ }
+- io_end_vec->offset = mpd->map.m_lblk << blkbits;
++ io_end_vec->offset = (loff_t)mpd->map.m_lblk << blkbits;
+ }
+ *map_bh = true;
+ goto out;
+@@ -3601,6 +3604,13 @@ static int ext4_set_page_dirty(struct page *page)
+ return __set_page_dirty_buffers(page);
+ }
+
++static int ext4_iomap_swap_activate(struct swap_info_struct *sis,
++ struct file *file, sector_t *span)
++{
++ return iomap_swapfile_activate(sis, file, span,
++ &ext4_iomap_report_ops);
++}
++
+ static const struct address_space_operations ext4_aops = {
+ .readpage = ext4_readpage,
+ .readahead = ext4_readahead,
+@@ -3616,6 +3626,7 @@ static const struct address_space_operations ext4_aops = {
+ .migratepage = buffer_migrate_page,
+ .is_partially_uptodate = block_is_partially_uptodate,
+ .error_remove_page = generic_error_remove_page,
++ .swap_activate = ext4_iomap_swap_activate,
+ };
+
+ static const struct address_space_operations ext4_journalled_aops = {
+@@ -3632,6 +3643,7 @@ static const struct address_space_operations ext4_journalled_aops = {
+ .direct_IO = noop_direct_IO,
+ .is_partially_uptodate = block_is_partially_uptodate,
+ .error_remove_page = generic_error_remove_page,
++ .swap_activate = ext4_iomap_swap_activate,
+ };
+
+ static const struct address_space_operations ext4_da_aops = {
+@@ -3649,6 +3661,7 @@ static const struct address_space_operations ext4_da_aops = {
+ .migratepage = buffer_migrate_page,
+ .is_partially_uptodate = block_is_partially_uptodate,
+ .error_remove_page = generic_error_remove_page,
++ .swap_activate = ext4_iomap_swap_activate,
+ };
+
+ static const struct address_space_operations ext4_dax_aops = {
+@@ -3657,6 +3670,7 @@ static const struct address_space_operations ext4_dax_aops = {
+ .set_page_dirty = noop_set_page_dirty,
+ .bmap = ext4_bmap,
+ .invalidatepage = noop_invalidatepage,
++ .swap_activate = ext4_iomap_swap_activate,
+ };
+
+ void ext4_set_aops(struct inode *inode)
+@@ -4971,6 +4985,12 @@ static int ext4_do_update_inode(handle_t *handle,
+ if (ext4_test_inode_state(inode, EXT4_STATE_NEW))
+ memset(raw_inode, 0, EXT4_SB(inode->i_sb)->s_inode_size);
+
++ err = ext4_inode_blocks_set(handle, raw_inode, ei);
++ if (err) {
++ spin_unlock(&ei->i_raw_lock);
++ goto out_brelse;
++ }
++
+ raw_inode->i_mode = cpu_to_le16(inode->i_mode);
+ i_uid = i_uid_read(inode);
+ i_gid = i_gid_read(inode);
+@@ -5004,11 +5024,6 @@ static int ext4_do_update_inode(handle_t *handle,
+ EXT4_INODE_SET_XTIME(i_atime, inode, raw_inode);
+ EXT4_EINODE_SET_XTIME(i_crtime, ei, raw_inode);
+
+- err = ext4_inode_blocks_set(handle, raw_inode, ei);
+- if (err) {
+- spin_unlock(&ei->i_raw_lock);
+- goto out_brelse;
+- }
+ raw_inode->i_dtime = cpu_to_le32(ei->i_dtime);
+ raw_inode->i_flags = cpu_to_le32(ei->i_flags & 0xFFFFFFFF);
+ if (likely(!test_opt2(inode->i_sb, HURD_COMPAT)))
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index a50b51270ea9a..71bf600e5b42c 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -843,8 +843,10 @@ static int add_new_gdb(handle_t *handle, struct inode *inode,
+
+ BUFFER_TRACE(dind, "get_write_access");
+ err = ext4_journal_get_write_access(handle, dind);
+- if (unlikely(err))
++ if (unlikely(err)) {
+ ext4_std_error(sb, err);
++ goto errout;
++ }
+
+ /* ext4_reserve_inode_write() gets a reference on the iloc */
+ err = ext4_reserve_inode_write(handle, inode, &iloc);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index ea425b49b3456..20378050df09c 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -156,6 +156,7 @@ ext4_sb_bread(struct super_block *sb, sector_t block, int op_flags)
+ return ERR_PTR(-ENOMEM);
+ if (ext4_buffer_uptodate(bh))
+ return bh;
++ clear_buffer_verified(bh);
+ ll_rw_block(REQ_OP_READ, REQ_META | op_flags, 1, &bh);
+ wait_on_buffer(bh);
+ if (buffer_uptodate(bh))
+@@ -4814,9 +4815,8 @@ no_journal:
+ * used to detect the metadata async write error.
+ */
+ spin_lock_init(&sbi->s_bdev_wb_lock);
+- if (!sb_rdonly(sb))
+- errseq_check_and_advance(&sb->s_bdev->bd_inode->i_mapping->wb_err,
+- &sbi->s_bdev_wb_err);
++ errseq_check_and_advance(&sb->s_bdev->bd_inode->i_mapping->wb_err,
++ &sbi->s_bdev_wb_err);
+ sb->s_bdev->bd_super = sb;
+ EXT4_SB(sb)->s_mount_state |= EXT4_ORPHAN_FS;
+ ext4_orphan_cleanup(sb, es);
+@@ -4872,6 +4872,7 @@ cantfind_ext4:
+
+ failed_mount8:
+ ext4_unregister_sysfs(sb);
++ kobject_put(&sbi->s_kobj);
+ failed_mount7:
+ ext4_unregister_li_request(sb);
+ failed_mount6:
+@@ -5707,14 +5708,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ goto restore_opts;
+ }
+
+- /*
+- * Update the original bdev mapping's wb_err value
+- * which could be used to detect the metadata async
+- * write error.
+- */
+- errseq_check_and_advance(&sb->s_bdev->bd_inode->i_mapping->wb_err,
+- &sbi->s_bdev_wb_err);
+-
+ /*
+ * Mounting a RDONLY partition read-write, so reread
+ * and store the current valid flag. (It may have
+@@ -6042,6 +6035,11 @@ static int ext4_quota_on(struct super_block *sb, int type, int format_id,
+ /* Quotafile not on the same filesystem? */
+ if (path->dentry->d_sb != sb)
+ return -EXDEV;
++
++ /* Quota already enabled for this file? */
++ if (IS_NOQUOTA(d_inode(path->dentry)))
++ return -EBUSY;
++
+ /* Journaling quota? */
+ if (EXT4_SB(sb)->s_qf_names[type]) {
+ /* Quotafile not in fs root? */
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index ff807e14c8911..4a97fe4ddf789 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -107,7 +107,7 @@ struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
+ return __get_meta_page(sbi, index, true);
+ }
+
+-struct page *f2fs_get_meta_page_nofail(struct f2fs_sb_info *sbi, pgoff_t index)
++struct page *f2fs_get_meta_page_retry(struct f2fs_sb_info *sbi, pgoff_t index)
+ {
+ struct page *page;
+ int count = 0;
+@@ -243,6 +243,8 @@ int f2fs_ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
+ blkno * NAT_ENTRY_PER_BLOCK);
+ break;
+ case META_SIT:
++ if (unlikely(blkno >= TOTAL_SEGS(sbi)))
++ goto out;
+ /* get sit block addr */
+ fio.new_blkaddr = current_sit_addr(sbi,
+ blkno * SIT_ENTRY_PER_BLOCK);
+@@ -1047,8 +1049,12 @@ int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type)
+ get_pages(sbi, is_dir ?
+ F2FS_DIRTY_DENTS : F2FS_DIRTY_DATA));
+ retry:
+- if (unlikely(f2fs_cp_error(sbi)))
++ if (unlikely(f2fs_cp_error(sbi))) {
++ trace_f2fs_sync_dirty_inodes_exit(sbi->sb, is_dir,
++ get_pages(sbi, is_dir ?
++ F2FS_DIRTY_DENTS : F2FS_DIRTY_DATA));
+ return -EIO;
++ }
+
+ spin_lock(&sbi->inode_lock[type]);
+
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 1dfb126a0cb20..1cd4b3f9c9f8c 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -382,16 +382,17 @@ static int zstd_init_decompress_ctx(struct decompress_io_ctx *dic)
+ ZSTD_DStream *stream;
+ void *workspace;
+ unsigned int workspace_size;
++ unsigned int max_window_size =
++ MAX_COMPRESS_WINDOW_SIZE(dic->log_cluster_size);
+
+- workspace_size = ZSTD_DStreamWorkspaceBound(MAX_COMPRESS_WINDOW_SIZE);
++ workspace_size = ZSTD_DStreamWorkspaceBound(max_window_size);
+
+ workspace = f2fs_kvmalloc(F2FS_I_SB(dic->inode),
+ workspace_size, GFP_NOFS);
+ if (!workspace)
+ return -ENOMEM;
+
+- stream = ZSTD_initDStream(MAX_COMPRESS_WINDOW_SIZE,
+- workspace, workspace_size);
++ stream = ZSTD_initDStream(max_window_size, workspace, workspace_size);
+ if (!stream) {
+ printk_ratelimited("%sF2FS-fs (%s): %s ZSTD_initDStream failed\n",
+ KERN_ERR, F2FS_I_SB(dic->inode)->sb->s_id,
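+
The zstd fix above sizes the decompression window from the inode's own log_cluster_size instead of the filesystem-wide maximum, shrinking the vmalloc()ed workspace for small clusters. A worked example of the reworked macro, assuming 4 KiB pages and the MIN/MAX_COMPRESS_LOG_SIZE bounds from f2fs.h:

/* MAX_COMPRESS_WINDOW_SIZE(log_size) == PAGE_SIZE << log_size */
MAX_COMPRESS_WINDOW_SIZE(2)	/* 4 KiB << 2 = 16 KiB, minimum cluster */
MAX_COMPRESS_WINDOW_SIZE(8)	/* 4 KiB << 8 = 1 MiB, maximum cluster */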
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 069f498af1e38..ceb4431b56690 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -357,16 +357,15 @@ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
+ unsigned int max_depth;
+ unsigned int level;
+
++ *res_page = NULL;
++
+ if (f2fs_has_inline_dentry(dir)) {
+- *res_page = NULL;
+ de = f2fs_find_in_inline_dir(dir, fname, res_page);
+ goto out;
+ }
+
+- if (npages == 0) {
+- *res_page = NULL;
++ if (npages == 0)
+ goto out;
+- }
+
+ max_depth = F2FS_I(dir)->i_current_depth;
+ if (unlikely(max_depth > MAX_DIR_HASH_DEPTH)) {
+@@ -377,7 +376,6 @@ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
+ }
+
+ for (level = 0; level < max_depth; level++) {
+- *res_page = NULL;
+ de = find_in_level(dir, level, fname, res_page);
+ if (de || IS_ERR(*res_page))
+ break;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index d9e52a7f3702f..d44c6c36de678 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1394,7 +1394,7 @@ struct decompress_io_ctx {
+ #define NULL_CLUSTER ((unsigned int)(~0))
+ #define MIN_COMPRESS_LOG_SIZE 2
+ #define MAX_COMPRESS_LOG_SIZE 8
+-#define MAX_COMPRESS_WINDOW_SIZE ((PAGE_SIZE) << MAX_COMPRESS_LOG_SIZE)
++#define MAX_COMPRESS_WINDOW_SIZE(log_size) ((PAGE_SIZE) << (log_size))
+
+ struct f2fs_sb_info {
+ struct super_block *sb; /* pointer to VFS super block */
+@@ -3385,7 +3385,7 @@ enum rw_hint f2fs_io_type_to_rw_hint(struct f2fs_sb_info *sbi,
+ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io);
+ struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
+ struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index);
+-struct page *f2fs_get_meta_page_nofail(struct f2fs_sb_info *sbi, pgoff_t index);
++struct page *f2fs_get_meta_page_retry(struct f2fs_sb_info *sbi, pgoff_t index);
+ struct page *f2fs_get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index);
+ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
+ block_t blkaddr, int type);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 8a422400e824d..4ec10256dc67f 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1836,6 +1836,8 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
+ if (iflags & F2FS_COMPR_FL) {
+ if (!f2fs_may_compress(inode))
+ return -EINVAL;
++ if (S_ISREG(inode->i_mode) && inode->i_size)
++ return -EINVAL;
+
+ set_compress_context(inode);
+ }
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 5195e083fc1e6..12c7fa1631935 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -299,6 +299,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ F2FS_FITS_IN_INODE(ri, fi->i_extra_isize,
+ i_log_cluster_size)) {
+ if (ri->i_compress_algorithm >= COMPRESS_MAX) {
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
+ f2fs_warn(sbi, "%s: inode (ino=%lx) has unsupported "
+ "compress algorithm: %u, run fsck to fix",
+ __func__, inode->i_ino,
+@@ -307,6 +308,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ }
+ if (le64_to_cpu(ri->i_compr_blocks) >
+ SECTOR_TO_BLOCK(inode->i_blocks)) {
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
+ f2fs_warn(sbi, "%s: inode (ino=%lx) has inconsistent "
+ "i_compr_blocks:%llu, i_blocks:%llu, run fsck to fix",
+ __func__, inode->i_ino,
+@@ -316,6 +318,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ }
+ if (ri->i_log_cluster_size < MIN_COMPRESS_LOG_SIZE ||
+ ri->i_log_cluster_size > MAX_COMPRESS_LOG_SIZE) {
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
+ f2fs_warn(sbi, "%s: inode (ino=%lx) has unsupported "
+ "log cluster size: %u, run fsck to fix",
+ __func__, inode->i_ino,
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index cb1b5b61a1dab..cc4700f6240db 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -109,7 +109,7 @@ static void clear_node_page_dirty(struct page *page)
+
+ static struct page *get_current_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
+ {
+- return f2fs_get_meta_page_nofail(sbi, current_nat_addr(sbi, nid));
++ return f2fs_get_meta_page(sbi, current_nat_addr(sbi, nid));
+ }
+
+ static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index e247a5ef3713f..2628406f43f64 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -2344,7 +2344,9 @@ int f2fs_npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra)
+ */
+ struct page *f2fs_get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno)
+ {
+- return f2fs_get_meta_page_nofail(sbi, GET_SUM_BLOCK(sbi, segno));
++ if (unlikely(f2fs_cp_error(sbi)))
++ return ERR_PTR(-EIO);
++ return f2fs_get_meta_page_retry(sbi, GET_SUM_BLOCK(sbi, segno));
+ }
+
+ void f2fs_update_meta_page(struct f2fs_sb_info *sbi,
+@@ -2616,7 +2618,11 @@ static void change_curseg(struct f2fs_sb_info *sbi, int type)
+ __next_free_blkoff(sbi, curseg, 0);
+
+ sum_page = f2fs_get_sum_page(sbi, new_segno);
+- f2fs_bug_on(sbi, IS_ERR(sum_page));
++ if (IS_ERR(sum_page)) {
++ /* GC won't be able to use stale summary pages due to cp_error */
++ memset(curseg->sum_blk, 0, SUM_ENTRY_SIZE);
++ return;
++ }
+ sum_node = (struct f2fs_summary_block *)page_address(sum_page);
+ memcpy(curseg->sum_blk, sum_node, SUM_ENTRY_SIZE);
+ f2fs_put_page(sum_page, 1);
+@@ -3781,7 +3787,7 @@ int f2fs_lookup_journal_in_cursum(struct f2fs_journal *journal, int type,
+ static struct page *get_current_sit_page(struct f2fs_sb_info *sbi,
+ unsigned int segno)
+ {
+- return f2fs_get_meta_page_nofail(sbi, current_sit_addr(sbi, segno));
++ return f2fs_get_meta_page(sbi, current_sit_addr(sbi, segno));
+ }
+
+ static struct page *get_next_sit_page(struct f2fs_sb_info *sbi,
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index dfa072fa80815..be5050292caa5 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2832,6 +2832,12 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
+ segment_count, dev_seg_count);
+ return -EFSCORRUPTED;
+ }
++ } else {
++ if (__F2FS_HAS_FEATURE(raw_super, F2FS_FEATURE_BLKZONED) &&
++ !bdev_is_zoned(sbi->sb->s_bdev)) {
++ f2fs_info(sbi, "Zoned block device path is missing");
++ return -EFSCORRUPTED;
++ }
+ }
+
+ if (secs_per_zone > total_sections || !secs_per_zone) {
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index f13b136654cae..1192fcd8ee41c 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -270,7 +270,12 @@ static void __gfs2_glock_put(struct gfs2_glock *gl)
+ gfs2_glock_remove_from_lru(gl);
+ spin_unlock(&gl->gl_lockref.lock);
+ GLOCK_BUG_ON(gl, !list_empty(&gl->gl_holders));
+- GLOCK_BUG_ON(gl, mapping && mapping->nrpages && !gfs2_withdrawn(sdp));
++ if (mapping) {
++ truncate_inode_pages_final(mapping);
++ if (!gfs2_withdrawn(sdp))
++ GLOCK_BUG_ON(gl, mapping->nrpages ||
++ mapping->nrexceptional);
++ }
+ trace_gfs2_glock_put(gl);
+ sdp->sd_lockstruct.ls_ops->lm_put_lock(gl);
+ }
+@@ -1049,7 +1054,8 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
+ gl->gl_object = NULL;
+ gl->gl_hold_time = GL_GLOCK_DFT_HOLD;
+ INIT_DELAYED_WORK(&gl->gl_work, glock_work_func);
+- INIT_DELAYED_WORK(&gl->gl_delete, delete_work_func);
++ if (gl->gl_name.ln_type == LM_TYPE_IOPEN)
++ INIT_DELAYED_WORK(&gl->gl_delete, delete_work_func);
+
+ mapping = gfs2_glock2aspace(gl);
+ if (mapping) {
+@@ -1901,9 +1907,11 @@ bool gfs2_delete_work_queued(const struct gfs2_glock *gl)
+
+ static void flush_delete_work(struct gfs2_glock *gl)
+ {
+- if (cancel_delayed_work(&gl->gl_delete)) {
+- queue_delayed_work(gfs2_delete_workqueue,
+- &gl->gl_delete, 0);
++ if (gl->gl_name.ln_type == LM_TYPE_IOPEN) {
++ if (cancel_delayed_work(&gl->gl_delete)) {
++ queue_delayed_work(gfs2_delete_workqueue,
++ &gl->gl_delete, 0);
++ }
+ }
+ gfs2_glock_queue_work(gl, 0);
+ }
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index de1d5f1d9ff85..c2c90747d79b5 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -227,6 +227,15 @@ static void rgrp_go_inval(struct gfs2_glock *gl, int flags)
+ rgd->rd_flags &= ~GFS2_RDF_UPTODATE;
+ }
+
++static void gfs2_rgrp_go_dump(struct seq_file *seq, struct gfs2_glock *gl,
++ const char *fs_id_buf)
++{
++ struct gfs2_rgrpd *rgd = gfs2_glock2rgrp(gl);
++
++ if (rgd)
++ gfs2_rgrp_dump(seq, rgd, fs_id_buf);
++}
++
+ static struct gfs2_inode *gfs2_glock2inode(struct gfs2_glock *gl)
+ {
+ struct gfs2_inode *ip;
+@@ -712,7 +721,7 @@ const struct gfs2_glock_operations gfs2_rgrp_glops = {
+ .go_sync = rgrp_go_sync,
+ .go_inval = rgrp_go_inval,
+ .go_lock = gfs2_rgrp_go_lock,
+- .go_dump = gfs2_rgrp_dump,
++ .go_dump = gfs2_rgrp_go_dump,
+ .go_type = LM_TYPE_RGRP,
+ .go_flags = GLOF_LVB,
+ };
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index ca2ec02436ec7..387e99d6eda9e 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -705,6 +705,7 @@ struct gfs2_sbd {
+ struct super_block *sd_vfs;
+ struct gfs2_pcpu_lkstats __percpu *sd_lkstats;
+ struct kobject sd_kobj;
++ struct completion sd_kobj_unregister;
+ unsigned long sd_flags; /* SDF_... */
+ struct gfs2_sb_host sd_sb;
+
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index 3763c9ff1406b..93032feb51599 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -954,10 +954,8 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
+ goto out;
+
+ /* Log might have been flushed while we waited for the flush lock */
+- if (gl && !test_bit(GLF_LFLUSH, &gl->gl_flags)) {
+- up_write(&sdp->sd_log_flush_lock);
+- return;
+- }
++ if (gl && !test_bit(GLF_LFLUSH, &gl->gl_flags))
++ goto out;
+ trace_gfs2_log_flush(sdp, 1, flags);
+
+ if (flags & GFS2_LOG_HEAD_FLUSH_SHUTDOWN)
+@@ -971,25 +969,25 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
+ if (unlikely (state == SFS_FROZEN))
+ if (gfs2_assert_withdraw_delayed(sdp,
+ !tr->tr_num_buf_new && !tr->tr_num_databuf_new))
+- goto out;
++ goto out_withdraw;
+ }
+
+ if (unlikely(state == SFS_FROZEN))
+ if (gfs2_assert_withdraw_delayed(sdp, !sdp->sd_log_num_revoke))
+- goto out;
++ goto out_withdraw;
+ if (gfs2_assert_withdraw_delayed(sdp,
+ sdp->sd_log_num_revoke == sdp->sd_log_committed_revoke))
+- goto out;
++ goto out_withdraw;
+
+ gfs2_ordered_write(sdp);
+ if (gfs2_withdrawn(sdp))
+- goto out;
++ goto out_withdraw;
+ lops_before_commit(sdp, tr);
+ if (gfs2_withdrawn(sdp))
+- goto out;
++ goto out_withdraw;
+ gfs2_log_submit_bio(&sdp->sd_log_bio, REQ_OP_WRITE);
+ if (gfs2_withdrawn(sdp))
+- goto out;
++ goto out_withdraw;
+
+ if (sdp->sd_log_head != sdp->sd_log_flush_head) {
+ log_flush_wait(sdp);
+@@ -1000,7 +998,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
+ log_write_header(sdp, flags);
+ }
+ if (gfs2_withdrawn(sdp))
+- goto out;
++ goto out_withdraw;
+ lops_after_commit(sdp, tr);
+
+ gfs2_log_lock(sdp);
+@@ -1020,7 +1018,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
+ if (!sdp->sd_log_idle) {
+ empty_ail1_list(sdp);
+ if (gfs2_withdrawn(sdp))
+- goto out;
++ goto out_withdraw;
+ atomic_dec(&sdp->sd_log_blks_free); /* Adjust for unreserved buffer */
+ trace_gfs2_log_blocks(sdp, -1);
+ log_write_header(sdp, flags);
+@@ -1033,27 +1031,30 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
+ atomic_set(&sdp->sd_freeze_state, SFS_FROZEN);
+ }
+
+-out:
+- if (gfs2_withdrawn(sdp)) {
+- trans_drain(tr);
+- /**
+- * If the tr_list is empty, we're withdrawing during a log
+- * flush that targets a transaction, but the transaction was
+- * never queued onto any of the ail lists. Here we add it to
+- * ail1 just so that ail_drain() will find and free it.
+- */
+- spin_lock(&sdp->sd_ail_lock);
+- if (tr && list_empty(&tr->tr_list))
+- list_add(&tr->tr_list, &sdp->sd_ail1_list);
+- spin_unlock(&sdp->sd_ail_lock);
+- ail_drain(sdp); /* frees all transactions */
+- tr = NULL;
+- }
+-
++out_end:
+ trace_gfs2_log_flush(sdp, 0, flags);
++out:
+ up_write(&sdp->sd_log_flush_lock);
+-
+ gfs2_trans_free(sdp, tr);
++ if (gfs2_withdrawing(sdp))
++ gfs2_withdraw(sdp);
++ return;
++
++out_withdraw:
++ trans_drain(tr);
++ /**
++ * If the tr_list is empty, we're withdrawing during a log
++ * flush that targets a transaction, but the transaction was
++ * never queued onto any of the ail lists. Here we add it to
++ * ail1 just so that ail_drain() will find and free it.
++ */
++ spin_lock(&sdp->sd_ail_lock);
++ if (tr && list_empty(&tr->tr_list))
++ list_add(&tr->tr_list, &sdp->sd_ail1_list);
++ spin_unlock(&sdp->sd_ail_lock);
++ ail_drain(sdp); /* frees all transactions */
++ tr = NULL;
++ goto out_end;
+ }
+
+ /**
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index 6d18d2c91add2..03c33fc03c055 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -169,15 +169,19 @@ static int gfs2_check_sb(struct gfs2_sbd *sdp, int silent)
+ return -EINVAL;
+ }
+
+- /* If format numbers match exactly, we're done. */
+-
+- if (sb->sb_fs_format == GFS2_FORMAT_FS &&
+- sb->sb_multihost_format == GFS2_FORMAT_MULTI)
+- return 0;
++ if (sb->sb_fs_format != GFS2_FORMAT_FS ||
++ sb->sb_multihost_format != GFS2_FORMAT_MULTI) {
++ fs_warn(sdp, "Unknown on-disk format, unable to mount\n");
++ return -EINVAL;
++ }
+
+- fs_warn(sdp, "Unknown on-disk format, unable to mount\n");
++ if (sb->sb_bsize < 512 || sb->sb_bsize > PAGE_SIZE ||
++ (sb->sb_bsize & (sb->sb_bsize - 1))) {
++ pr_warn("Invalid superblock size\n");
++ return -EINVAL;
++ }
+
+- return -EINVAL;
++ return 0;
+ }
+
+ static void end_bio_io_page(struct bio *bio)
+@@ -1062,26 +1066,14 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
+ }
+
+ error = init_names(sdp, silent);
+- if (error) {
+- /* In this case, we haven't initialized sysfs, so we have to
+- manually free the sdp. */
+- free_sbd(sdp);
+- sb->s_fs_info = NULL;
+- return error;
+- }
++ if (error)
++ goto fail_free;
+
+ snprintf(sdp->sd_fsname, sizeof(sdp->sd_fsname), "%s", sdp->sd_table_name);
+
+ error = gfs2_sys_fs_add(sdp);
+- /*
+- * If we hit an error here, gfs2_sys_fs_add will have called function
+- * kobject_put which causes the sysfs usage count to go to zero, which
+- * causes sysfs to call function gfs2_sbd_release, which frees sdp.
+- * Subsequent error paths here will call gfs2_sys_fs_del, which also
+- * kobject_put to free sdp.
+- */
+ if (error)
+- return error;
++ goto fail_free;
+
+ gfs2_create_debugfs_file(sdp);
+
+@@ -1179,9 +1171,9 @@ fail_lm:
+ gfs2_lm_unmount(sdp);
+ fail_debug:
+ gfs2_delete_debugfs_file(sdp);
+- /* gfs2_sys_fs_del must be the last thing we do, since it causes
+- * sysfs to call function gfs2_sbd_release, which frees sdp. */
+ gfs2_sys_fs_del(sdp);
++fail_free:
++ free_sbd(sdp);
+ sb->s_fs_info = NULL;
+ return error;
+ }
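
The gfs2_check_sb() hunk above also rejects superblocks whose block size is out of range or not a power of two; the expression sb_bsize & (sb_bsize - 1) clears the lowest set bit, so it is zero exactly for powers of two (zero itself is already excluded by the range check). A standalone userspace sketch of the same validation, with illustrative names:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool valid_bsize(uint32_t bsize, uint32_t page_size)
{
        return bsize >= 512 && bsize <= page_size &&
               (bsize & (bsize - 1)) == 0;
}

int main(void)
{
        assert(valid_bsize(4096, 4096));
        assert(!valid_bsize(4095, 4096));       /* not a power of two */
        assert(!valid_bsize(256, 4096));        /* below the minimum */
        return 0;
}
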
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 074f228ea8390..1bba5a9d45fa3 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -2209,20 +2209,17 @@ static void rgblk_free(struct gfs2_sbd *sdp, struct gfs2_rgrpd *rgd,
+ /**
+ * gfs2_rgrp_dump - print out an rgrp
+ * @seq: The iterator
+- * @gl: The glock in question
++ * @rgd: The rgrp in question
+ * @fs_id_buf: pointer to file system id (if requested)
+ *
+ */
+
+-void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_glock *gl,
++void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_rgrpd *rgd,
+ const char *fs_id_buf)
+ {
+- struct gfs2_rgrpd *rgd = gl->gl_object;
+ struct gfs2_blkreserv *trs;
+ const struct rb_node *n;
+
+- if (rgd == NULL)
+- return;
+ gfs2_print_dbg(seq, "%s R: n:%llu f:%02x b:%u/%u i:%u r:%u e:%u\n",
+ fs_id_buf,
+ (unsigned long long)rgd->rd_addr, rgd->rd_flags,
+@@ -2253,7 +2250,7 @@ static void gfs2_rgrp_error(struct gfs2_rgrpd *rgd)
+ (unsigned long long)rgd->rd_addr);
+ fs_warn(sdp, "umount on all nodes and run fsck.gfs2 to fix the error\n");
+ sprintf(fs_id_buf, "fsid=%s: ", sdp->sd_fsname);
+- gfs2_rgrp_dump(NULL, rgd->rd_gl, fs_id_buf);
++ gfs2_rgrp_dump(NULL, rgd, fs_id_buf);
+ rgd->rd_flags |= GFS2_RDF_ERROR;
+ }
+
+diff --git a/fs/gfs2/rgrp.h b/fs/gfs2/rgrp.h
+index a1d7e14fc55b9..9a587ada51eda 100644
+--- a/fs/gfs2/rgrp.h
++++ b/fs/gfs2/rgrp.h
+@@ -67,7 +67,7 @@ extern void gfs2_rlist_add(struct gfs2_inode *ip, struct gfs2_rgrp_list *rlist,
+ extern void gfs2_rlist_alloc(struct gfs2_rgrp_list *rlist);
+ extern void gfs2_rlist_free(struct gfs2_rgrp_list *rlist);
+ extern u64 gfs2_ri_total(struct gfs2_sbd *sdp);
+-extern void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_glock *gl,
++extern void gfs2_rgrp_dump(struct seq_file *seq, struct gfs2_rgrpd *rgd,
+ const char *fs_id_buf);
+ extern int gfs2_rgrp_send_discards(struct gfs2_sbd *sdp, u64 offset,
+ struct buffer_head *bh,
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 9f4d9e7be8397..32ae1a7cdaed8 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -702,6 +702,8 @@ restart:
+ if (error)
+ gfs2_io_error(sdp);
+ }
++ WARN_ON(gfs2_withdrawing(sdp));
++
+ /* At this point, we're through modifying the disk */
+
+ /* Release stuff */
+@@ -736,6 +738,7 @@ restart:
+
+ /* At this point, we're through participating in the lockspace */
+ gfs2_sys_fs_del(sdp);
++ free_sbd(sdp);
+ }
+
+ /**
+diff --git a/fs/gfs2/sys.c b/fs/gfs2/sys.c
+index d28c41bd69b05..c3e72dba7418a 100644
+--- a/fs/gfs2/sys.c
++++ b/fs/gfs2/sys.c
+@@ -303,7 +303,7 @@ static void gfs2_sbd_release(struct kobject *kobj)
+ {
+ struct gfs2_sbd *sdp = container_of(kobj, struct gfs2_sbd, sd_kobj);
+
+- free_sbd(sdp);
++ complete(&sdp->sd_kobj_unregister);
+ }
+
+ static struct kobj_type gfs2_ktype = {
+@@ -655,6 +655,7 @@ int gfs2_sys_fs_add(struct gfs2_sbd *sdp)
+ sprintf(ro, "RDONLY=%d", sb_rdonly(sb));
+ sprintf(spectator, "SPECTATOR=%d", sdp->sd_args.ar_spectator ? 1 : 0);
+
++ init_completion(&sdp->sd_kobj_unregister);
+ sdp->sd_kobj.kset = gfs2_kset;
+ error = kobject_init_and_add(&sdp->sd_kobj, &gfs2_ktype, NULL,
+ "%s", sdp->sd_table_name);
+@@ -685,6 +686,7 @@ fail_tune:
+ fail_reg:
+ fs_err(sdp, "error %d adding sysfs files\n", error);
+ kobject_put(&sdp->sd_kobj);
++ wait_for_completion(&sdp->sd_kobj_unregister);
+ sb->s_fs_info = NULL;
+ return error;
+ }
+@@ -695,6 +697,7 @@ void gfs2_sys_fs_del(struct gfs2_sbd *sdp)
+ sysfs_remove_group(&sdp->sd_kobj, &tune_group);
+ sysfs_remove_group(&sdp->sd_kobj, &lock_module_group);
+ kobject_put(&sdp->sd_kobj);
++ wait_for_completion(&sdp->sd_kobj_unregister);
+ }
+
+ static int gfs2_uevent(struct kset *kset, struct kobject *kobj,
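
The sysfs changes above move gfs2 from free-on-release to a completion handshake: gfs2_sbd_release() only signals sd_kobj_unregister, and the callers of kobject_put() wait for that completion before freeing the sdp themselves, so the object cannot be freed while sysfs still holds a reference. A minimal sketch of the same pattern in kernel-style C; my_obj and the function names are illustrative, not from the patch:

#include <linux/completion.h>
#include <linux/kobject.h>
#include <linux/slab.h>

struct my_obj {
        struct kobject kobj;
        struct completion unregistered;
};

/* ->release runs when the last kobject reference is dropped.
 * Do not free here; just signal whoever is tearing down. */
static void my_obj_release(struct kobject *kobj)
{
        struct my_obj *o = container_of(kobj, struct my_obj, kobj);

        complete(&o->unregistered);
}

static struct kobj_type my_ktype = {
        .release = my_obj_release,
};

static int my_obj_setup(struct my_obj *o)
{
        init_completion(&o->unregistered);
        return kobject_init_and_add(&o->kobj, &my_ktype, NULL, "my_obj");
}

static void my_obj_teardown(struct my_obj *o)
{
        kobject_put(&o->kobj);                  /* drop our reference */
        wait_for_completion(&o->unregistered);  /* ->release has run */
        kfree(o);                               /* now safe to free */
}

On a kobject_init_and_add() failure the same put-and-wait sequence applies, which is exactly what the fail_reg hunk above does.
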
+diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
+index 1cd0328cae20a..0fba3bf641890 100644
+--- a/fs/gfs2/util.c
++++ b/fs/gfs2/util.c
+@@ -419,7 +419,7 @@ void gfs2_consist_rgrpd_i(struct gfs2_rgrpd *rgd,
+ char fs_id_buf[sizeof(sdp->sd_fsname) + 7];
+
+ sprintf(fs_id_buf, "fsid=%s: ", sdp->sd_fsname);
+- gfs2_rgrp_dump(NULL, rgd->rd_gl, fs_id_buf);
++ gfs2_rgrp_dump(NULL, rgd, fs_id_buf);
+ gfs2_lm(sdp,
+ "fatal: filesystem consistency error\n"
+ " RG = %llu\n"
+diff --git a/fs/gfs2/util.h b/fs/gfs2/util.h
+index 6d9157efe16c3..d7562981b3a09 100644
+--- a/fs/gfs2/util.h
++++ b/fs/gfs2/util.h
+@@ -205,6 +205,16 @@ static inline bool gfs2_withdrawn(struct gfs2_sbd *sdp)
+ test_bit(SDF_WITHDRAWING, &sdp->sd_flags);
+ }
+
++/**
++ * gfs2_withdrawing - check if a withdraw is pending
++ * @sdp: the superblock
++ */
++static inline bool gfs2_withdrawing(struct gfs2_sbd *sdp)
++{
++ return test_bit(SDF_WITHDRAWING, &sdp->sd_flags) &&
++ !test_bit(SDF_WITHDRAWN, &sdp->sd_flags);
++}
++
+ #define gfs2_tune_get(sdp, field) \
+ gfs2_tune_get_i(&(sdp)->sd_tune, &(sdp)->sd_tune.field)
+
+diff --git a/fs/io-wq.c b/fs/io-wq.c
+index 19db17e99cf96..5ad65b3059367 100644
+--- a/fs/io-wq.c
++++ b/fs/io-wq.c
+@@ -654,6 +654,7 @@ static bool create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index)
+ kfree(worker);
+ return false;
+ }
++ kthread_bind_mask(worker->task, cpumask_of_node(wqe->node));
+
+ raw_spin_lock_irq(&wqe->lock);
+ hlist_nulls_add_head_rcu(&worker->nulls_node, &wqe->free_list);
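
The io-wq fix above pins each new worker to the CPUs of its NUMA node. kthread_bind_mask() may only be called on a thread that has never run, which is why it has to sit between thread creation and the first wake_up_process(). A hedged kernel-style sketch of that ordering, with illustrative names:

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/topology.h>

static int worker_fn(void *data)
{
        /* runs only on the CPUs selected before the first wakeup */
        while (!kthread_should_stop()) {
                set_current_state(TASK_INTERRUPTIBLE);
                schedule();
        }
        return 0;
}

static struct task_struct *start_bound_worker(int node)
{
        struct task_struct *t;

        t = kthread_create_on_node(worker_fn, NULL, node, "worker/%d", node);
        if (IS_ERR(t))
                return t;

        /* bind while the new task has not been woken yet */
        kthread_bind_mask(t, cpumask_of_node(node));
        wake_up_process(t);
        return t;
}
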
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 59ab8c5c2aaaa..64f214a3dc9dd 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -1650,6 +1650,7 @@ static bool io_link_cancel_timeout(struct io_kiocb *req)
+
+ ret = hrtimer_try_to_cancel(&req->io->timeout.timer);
+ if (ret != -1) {
++ req->flags |= REQ_F_COMP_LOCKED;
+ io_cqring_fill_event(req, -ECANCELED);
+ io_commit_cqring(ctx);
+ req->flags &= ~REQ_F_LINK_HEAD;
+@@ -1672,7 +1673,6 @@ static bool __io_kill_linked_timeout(struct io_kiocb *req)
+ return false;
+
+ list_del_init(&link->link_list);
+- link->flags |= REQ_F_COMP_LOCKED;
+ wake_ev = io_link_cancel_timeout(link);
+ req->flags &= ~REQ_F_LINK_TIMEOUT;
+ return wake_ev;
+@@ -4786,8 +4786,10 @@ static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
+ /* make sure double remove sees this as being gone */
+ wait->private = NULL;
+ spin_unlock(&poll->head->lock);
+- if (!done)
+- __io_async_wake(req, poll, mask, io_poll_task_func);
++ if (!done) {
++ /* use wait func handler, so it matches the rq type */
++ poll->wait.func(&poll->wait, mode, sync, key);
++ }
+ }
+ refcount_dec(&req->refs);
+ return 1;
+diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c
+index faa97d748474d..fb134c7a12c89 100644
+--- a/fs/jbd2/recovery.c
++++ b/fs/jbd2/recovery.c
+@@ -428,6 +428,8 @@ static int do_one_pass(journal_t *journal,
+ __u32 crc32_sum = ~0; /* Transactional Checksums */
+ int descr_csum_size = 0;
+ int block_error = 0;
++ bool need_check_commit_time = false;
++ __u64 last_trans_commit_time = 0, commit_time;
+
+ /*
+ * First thing is to establish what we expect to find in the log
+@@ -520,12 +522,21 @@ static int do_one_pass(journal_t *journal,
+ if (descr_csum_size > 0 &&
+ !jbd2_descriptor_block_csum_verify(journal,
+ bh->b_data)) {
+- printk(KERN_ERR "JBD2: Invalid checksum "
+- "recovering block %lu in log\n",
+- next_log_block);
+- err = -EFSBADCRC;
+- brelse(bh);
+- goto failed;
++ /*
++ * PASS_SCAN can see stale blocks due to lazy
++ * journal init. Don't error out on those yet.
++ */
++ if (pass != PASS_SCAN) {
++ pr_err("JBD2: Invalid checksum recovering block %lu in log\n",
++ next_log_block);
++ err = -EFSBADCRC;
++ brelse(bh);
++ goto failed;
++ }
++ need_check_commit_time = true;
++ jbd_debug(1,
++ "invalid descriptor block found in %lu\n",
++ next_log_block);
+ }
+
+ /* If it is a valid descriptor block, replay it
+@@ -535,6 +546,7 @@ static int do_one_pass(journal_t *journal,
+ if (pass != PASS_REPLAY) {
+ if (pass == PASS_SCAN &&
+ jbd2_has_feature_checksum(journal) &&
++ !need_check_commit_time &&
+ !info->end_transaction) {
+ if (calc_chksums(journal, bh,
+ &next_log_block,
+@@ -683,11 +695,41 @@ static int do_one_pass(journal_t *journal,
+ * mentioned conditions. Hence assume
+ * "Interrupted Commit".)
+ */
++ commit_time = be64_to_cpu(
++ ((struct commit_header *)bh->b_data)->h_commit_sec);
++ /*
++ * If need_check_commit_time is set, it means we are in
++ * PASS_SCAN and csum verify failed before. If
++ * commit_time is increasing, it's the same journal,
++ * otherwise it is a stale journal block, so just end this
++ * recovery.
++ */
++ if (need_check_commit_time) {
++ if (commit_time >= last_trans_commit_time) {
++ pr_err("JBD2: Invalid checksum found in transaction %u\n",
++ next_commit_ID);
++ err = -EFSBADCRC;
++ brelse(bh);
++ goto failed;
++ }
++ ignore_crc_mismatch:
++ /*
++ * It likely does not belong to the same journal,
++ * just end this recovery with success.
++ */
++ jbd_debug(1, "JBD2: Invalid checksum ignored in transaction %u, likely stale data\n",
++ next_commit_ID);
++ err = 0;
++ brelse(bh);
++ goto done;
++ }
+
+- /* Found an expected commit block: if checksums
+- * are present verify them in PASS_SCAN; else not
++ /*
++ * Found an expected commit block: if checksums
++ * are present, verify them in PASS_SCAN; else not
+ * much to do other than move on to the next sequence
+- * number. */
++ * number.
++ */
+ if (pass == PASS_SCAN &&
+ jbd2_has_feature_checksum(journal)) {
+ struct commit_header *cbh =
+@@ -719,6 +761,8 @@ static int do_one_pass(journal_t *journal,
+ !jbd2_commit_block_csum_verify(journal,
+ bh->b_data)) {
+ chksum_error:
++ if (commit_time < last_trans_commit_time)
++ goto ignore_crc_mismatch;
+ info->end_transaction = next_commit_ID;
+
+ if (!jbd2_has_feature_async_commit(journal)) {
+@@ -728,11 +772,24 @@ static int do_one_pass(journal_t *journal,
+ break;
+ }
+ }
++ if (pass == PASS_SCAN)
++ last_trans_commit_time = commit_time;
+ brelse(bh);
+ next_commit_ID++;
+ continue;
+
+ case JBD2_REVOKE_BLOCK:
++ /*
++ * Check revoke block crc in pass_scan, if csum verify
++ * failed, check commit block time later.
++ */
++ if (pass == PASS_SCAN &&
++ !jbd2_descriptor_block_csum_verify(journal,
++ bh->b_data)) {
++ jbd_debug(1, "JBD2: invalid revoke block found in %lu\n",
++ next_log_block);
++ need_check_commit_time = true;
++ }
+ /* If we aren't in the REVOKE pass, then we can
+ * just skip over this block. */
+ if (pass != PASS_REVOKE) {
+@@ -800,9 +857,6 @@ static int scan_revoke_records(journal_t *journal, struct buffer_head *bh,
+ offset = sizeof(jbd2_journal_revoke_header_t);
+ rcount = be32_to_cpu(header->r_count);
+
+- if (!jbd2_descriptor_block_csum_verify(journal, header))
+- return -EFSBADCRC;
+-
+ if (jbd2_journal_has_csum_v2or3(journal))
+ csum_size = sizeof(struct jbd2_journal_block_tail);
+ if (rcount > journal->j_blocksize - csum_size)
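
The recovery rework above stops treating a bad descriptor or revoke checksum in PASS_SCAN as immediate corruption: a lazily initialized journal can contain stale blocks from a previous journal instance, and those are distinguished by the commit block's h_commit_sec moving backwards. A userspace sketch of just that decision, with be64toh standing in for be64_to_cpu and illustrative names:

#include <endian.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* h_commit_sec is stored big-endian on disk (struct commit_header). */
static bool looks_stale(uint64_t h_commit_sec_be, uint64_t last_commit_time)
{
        uint64_t commit_time = be64toh(h_commit_sec_be);

        /*
         * Checksum failed in PASS_SCAN: a commit time that moved
         * backwards means a leftover block from an older journal
         * (end recovery cleanly); a non-decreasing time means real
         * corruption (fail with -EFSBADCRC).
         */
        return commit_time < last_commit_time;
}

int main(void)
{
        printf("%d\n", looks_stale(htobe64(100), 200)); /* 1: stale, stop */
        printf("%d\n", looks_stale(htobe64(300), 200)); /* 0: corruption */
        return 0;
}
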
+diff --git a/fs/nfs/namespace.c b/fs/nfs/namespace.c
+index 6b063227e34e9..2bcbe38afe2e7 100644
+--- a/fs/nfs/namespace.c
++++ b/fs/nfs/namespace.c
+@@ -32,9 +32,9 @@ int nfs_mountpoint_expiry_timeout = 500 * HZ;
+ /*
+ * nfs_path - reconstruct the path given an arbitrary dentry
+ * @base - used to return pointer to the end of devname part of path
+- * @dentry - pointer to dentry
++ * @dentry_in - pointer to dentry
+ * @buffer - result buffer
+- * @buflen - length of buffer
++ * @buflen_in - length of buffer
+ * @flags - options (see below)
+ *
+ * Helper function for constructing the server pathname
+@@ -49,15 +49,19 @@ int nfs_mountpoint_expiry_timeout = 500 * HZ;
+ * the original device (export) name
+ * (if unset, the original name is returned verbatim)
+ */
+-char *nfs_path(char **p, struct dentry *dentry, char *buffer, ssize_t buflen,
+- unsigned flags)
++char *nfs_path(char **p, struct dentry *dentry_in, char *buffer,
++ ssize_t buflen_in, unsigned flags)
+ {
+ char *end;
+ int namelen;
+ unsigned seq;
+ const char *base;
++ struct dentry *dentry;
++ ssize_t buflen;
+
+ rename_retry:
++ buflen = buflen_in;
++ dentry = dentry_in;
+ end = buffer+buflen;
+ *--end = '\0';
+ buflen--;
+diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
+index 0c9505dc852cd..065cb04222a1b 100644
+--- a/fs/nfs/nfs4_fs.h
++++ b/fs/nfs/nfs4_fs.h
+@@ -599,6 +599,14 @@ static inline bool nfs4_stateid_is_newer(const nfs4_stateid *s1, const nfs4_stat
+ return (s32)(be32_to_cpu(s1->seqid) - be32_to_cpu(s2->seqid)) > 0;
+ }
+
++static inline bool nfs4_stateid_is_next(const nfs4_stateid *s1, const nfs4_stateid *s2)
++{
++ u32 seq1 = be32_to_cpu(s1->seqid);
++ u32 seq2 = be32_to_cpu(s2->seqid);
++
++ return seq2 == seq1 + 1U || (seq2 == 1U && seq1 == 0xffffffffU);
++}
++
+ static inline bool nfs4_stateid_match_or_older(const nfs4_stateid *dst, const nfs4_stateid *src)
+ {
+ return nfs4_stateid_match_other(dst, src) &&
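
The new nfs4_stateid_is_next() helper encodes the seqid rules relied on by the nfs_stateid_is_sequential() change in nfs4proc.c below: seqids start at 1, increment on every state transition, and wrap from 0xffffffff back to 1 (0 is never issued). The arithmetic in isolation, as a runnable userspace check:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool seqid_is_next(uint32_t seq1, uint32_t seq2)
{
        /* seq1 + 1U wraps to 0, so 0xffffffff -> 1 needs its own case */
        return seq2 == seq1 + 1U || (seq2 == 1U && seq1 == 0xffffffffU);
}

int main(void)
{
        assert(seqid_is_next(1, 2));
        assert(seqid_is_next(0xffffffffU, 1));  /* wraparound skips 0 */
        assert(!seqid_is_next(5, 7));           /* an update was missed */
        return 0;
}
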
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 984938024011b..9d354de613dae 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -146,7 +146,8 @@ static ssize_t __nfs4_copy_file_range(struct file *file_in, loff_t pos_in,
+ /* Only offload copy if superblock is the same */
+ if (file_in->f_op != &nfs4_file_operations)
+ return -EXDEV;
+- if (!nfs_server_capable(file_inode(file_out), NFS_CAP_COPY))
++ if (!nfs_server_capable(file_inode(file_out), NFS_CAP_COPY) ||
++ !nfs_server_capable(file_inode(file_in), NFS_CAP_COPY))
+ return -EOPNOTSUPP;
+ if (file_inode(file_in) == file_inode(file_out))
+ return -EOPNOTSUPP;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 6e95c85fe395a..3375f0a096390 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1547,19 +1547,6 @@ static void nfs_state_log_update_open_stateid(struct nfs4_state *state)
+ wake_up_all(&state->waitq);
+ }
+
+-static void nfs_state_log_out_of_order_open_stateid(struct nfs4_state *state,
+- const nfs4_stateid *stateid)
+-{
+- u32 state_seqid = be32_to_cpu(state->open_stateid.seqid);
+- u32 stateid_seqid = be32_to_cpu(stateid->seqid);
+-
+- if (stateid_seqid == state_seqid + 1U ||
+- (stateid_seqid == 1U && state_seqid == 0xffffffffU))
+- nfs_state_log_update_open_stateid(state);
+- else
+- set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);
+-}
+-
+ static void nfs_test_and_clear_all_open_stateid(struct nfs4_state *state)
+ {
+ struct nfs_client *clp = state->owner->so_server->nfs_client;
+@@ -1585,21 +1572,19 @@ static void nfs_test_and_clear_all_open_stateid(struct nfs4_state *state)
+ * i.e. The stateid seqids have to be initialised to 1, and
+ * are then incremented on every state transition.
+ */
+-static bool nfs_need_update_open_stateid(struct nfs4_state *state,
++static bool nfs_stateid_is_sequential(struct nfs4_state *state,
+ const nfs4_stateid *stateid)
+ {
+- if (test_bit(NFS_OPEN_STATE, &state->flags) == 0 ||
+- !nfs4_stateid_match_other(stateid, &state->open_stateid)) {
++ if (test_bit(NFS_OPEN_STATE, &state->flags)) {
++ /* The common case - we're updating to a new sequence number */
++ if (nfs4_stateid_match_other(stateid, &state->open_stateid) &&
++ nfs4_stateid_is_next(&state->open_stateid, stateid)) {
++ return true;
++ }
++ } else {
++ /* This is the first OPEN in this generation */
+ if (stateid->seqid == cpu_to_be32(1))
+- nfs_state_log_update_open_stateid(state);
+- else
+- set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);
+- return true;
+- }
+-
+- if (nfs4_stateid_is_newer(stateid, &state->open_stateid)) {
+- nfs_state_log_out_of_order_open_stateid(state, stateid);
+- return true;
++ return true;
+ }
+ return false;
+ }
+@@ -1673,16 +1658,16 @@ static void nfs_set_open_stateid_locked(struct nfs4_state *state,
+ int status = 0;
+ for (;;) {
+
+- if (!nfs_need_update_open_stateid(state, stateid))
+- return;
+- if (!test_bit(NFS_STATE_CHANGE_WAIT, &state->flags))
++ if (nfs_stateid_is_sequential(state, stateid))
+ break;
++
+ if (status)
+ break;
+ /* Rely on seqids for serialisation with NFSv4.0 */
+ if (!nfs4_has_session(NFS_SERVER(state->inode)->nfs_client))
+ break;
+
++ set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);
+ prepare_to_wait(&state->waitq, &wait, TASK_KILLABLE);
+ /*
+ * Ensure we process the state changes in the same order
+@@ -1693,6 +1678,7 @@ static void nfs_set_open_stateid_locked(struct nfs4_state *state,
+ spin_unlock(&state->owner->so_lock);
+ rcu_read_unlock();
+ trace_nfs4_open_stateid_update_wait(state->inode, stateid, 0);
++
+ if (!signal_pending(current)) {
+ if (schedule_timeout(5*HZ) == 0)
+ status = -EAGAIN;
+@@ -3435,7 +3421,8 @@ static bool nfs4_refresh_open_old_stateid(nfs4_stateid *dst,
+ __be32 seqid_open;
+ u32 dst_seqid;
+ bool ret;
+- int seq;
++ int seq, status = -EAGAIN;
++ DEFINE_WAIT(wait);
+
+ for (;;) {
+ ret = false;
+@@ -3447,15 +3434,41 @@ static bool nfs4_refresh_open_old_stateid(nfs4_stateid *dst,
+ continue;
+ break;
+ }
++
++ write_seqlock(&state->seqlock);
+ seqid_open = state->open_stateid.seqid;
+- if (read_seqretry(&state->seqlock, seq))
+- continue;
+
+ dst_seqid = be32_to_cpu(dst->seqid);
+- if ((s32)(dst_seqid - be32_to_cpu(seqid_open)) >= 0)
+- dst->seqid = cpu_to_be32(dst_seqid + 1);
+- else
++
++ /* Did another OPEN bump the state's seqid? try again: */
++ if ((s32)(be32_to_cpu(seqid_open) - dst_seqid) > 0) {
+ dst->seqid = seqid_open;
++ write_sequnlock(&state->seqlock);
++ ret = true;
++ break;
++ }
++
++ /* server says we're behind but we haven't seen the update yet */
++ set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);
++ prepare_to_wait(&state->waitq, &wait, TASK_KILLABLE);
++ write_sequnlock(&state->seqlock);
++ trace_nfs4_close_stateid_update_wait(state->inode, dst, 0);
++
++ if (signal_pending(current))
++ status = -EINTR;
++ else
++ if (schedule_timeout(5*HZ) != 0)
++ status = 0;
++
++ finish_wait(&state->waitq, &wait);
++
++ if (!status)
++ continue;
++ if (status == -EINTR)
++ break;
++
++ /* we slept the whole 5 seconds, we must have lost a seqid */
++ dst->seqid = cpu_to_be32(dst_seqid + 1);
+ ret = true;
+ break;
+ }
+@@ -8039,9 +8052,11 @@ int nfs4_proc_secinfo(struct inode *dir, const struct qstr *name,
+ * both PNFS and NON_PNFS flags set, and not having one of NON_PNFS, PNFS, or
+ * DS flags set.
+ */
+-static int nfs4_check_cl_exchange_flags(u32 flags)
++static int nfs4_check_cl_exchange_flags(u32 flags, u32 version)
+ {
+- if (flags & ~EXCHGID4_FLAG_MASK_R)
++ if (version >= 2 && (flags & ~EXCHGID4_2_FLAG_MASK_R))
++ goto out_inval;
++ else if (version < 2 && (flags & ~EXCHGID4_FLAG_MASK_R))
+ goto out_inval;
+ if ((flags & EXCHGID4_FLAG_USE_PNFS_MDS) &&
+ (flags & EXCHGID4_FLAG_USE_NON_PNFS))
+@@ -8454,7 +8469,8 @@ static int _nfs4_proc_exchange_id(struct nfs_client *clp, const struct cred *cre
+ if (status != 0)
+ goto out;
+
+- status = nfs4_check_cl_exchange_flags(resp->flags);
++ status = nfs4_check_cl_exchange_flags(resp->flags,
++ clp->cl_mvops->minor_version);
+ if (status != 0)
+ goto out;
+
+diff --git a/fs/nfs/nfs4trace.h b/fs/nfs/nfs4trace.h
+index b4f852d4d0994..484c1da96dea2 100644
+--- a/fs/nfs/nfs4trace.h
++++ b/fs/nfs/nfs4trace.h
+@@ -1511,6 +1511,7 @@ DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_setattr);
+ DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_delegreturn);
+ DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_open_stateid_update);
+ DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_open_stateid_update_wait);
++DEFINE_NFS4_INODE_STATEID_EVENT(nfs4_close_stateid_update_wait);
+
+ DECLARE_EVENT_CLASS(nfs4_getattr_event,
+ TP_PROTO(
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index c09a2a4281ec9..1f646a27481fb 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4954,7 +4954,6 @@ static int nfsd4_check_conflicting_opens(struct nfs4_client *clp,
+ writes--;
+ if (fp->fi_fds[O_RDWR])
+ writes--;
+- WARN_ON_ONCE(writes < 0);
+ if (writes > 0)
+ return -EAGAIN;
+ spin_lock(&fp->fi_lock);
+@@ -5126,7 +5125,7 @@ nfs4_open_delegation(struct svc_fh *fh, struct nfsd4_open *open,
+
+ memcpy(&open->op_delegate_stateid, &dp->dl_stid.sc_stateid, sizeof(dp->dl_stid.sc_stateid));
+
+- trace_nfsd_deleg_open(&dp->dl_stid.sc_stateid);
++ trace_nfsd_deleg_read(&dp->dl_stid.sc_stateid);
+ open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
+ nfs4_put_stid(&dp->dl_stid);
+ return;
+@@ -5243,7 +5242,7 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+ nfs4_open_delegation(current_fh, open, stp);
+ nodeleg:
+ status = nfs_ok;
+- trace_nfsd_deleg_none(&stp->st_stid.sc_stateid);
++ trace_nfsd_open(&stp->st_stid.sc_stateid);
+ out:
+ /* 4.1 client trying to upgrade/downgrade delegation? */
+ if (open->op_delegate_type == NFS4_OPEN_DELEGATE_NONE && dp &&
+diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
+index 6e0b066480c50..6d1b3af40a4f5 100644
+--- a/fs/nfsd/nfsproc.c
++++ b/fs/nfsd/nfsproc.c
+@@ -118,6 +118,13 @@ done:
+ return nfsd_return_attrs(nfserr, resp);
+ }
+
++/* Obsolete, replaced by MNTPROC_MNT. */
++static __be32
++nfsd_proc_root(struct svc_rqst *rqstp)
++{
++ return nfs_ok;
++}
++
+ /*
+ * Look up a path name component
+ * Note: the dentry in the resp->fh may be negative if the file
+@@ -203,6 +210,13 @@ nfsd_proc_read(struct svc_rqst *rqstp)
+ return fh_getattr(&resp->fh, &resp->stat);
+ }
+
++/* Reserved */
++static __be32
++nfsd_proc_writecache(struct svc_rqst *rqstp)
++{
++ return nfs_ok;
++}
++
+ /*
+ * Write data to a file
+ * N.B. After this call resp->fh needs an fh_put
+@@ -617,6 +631,7 @@ static const struct svc_procedure nfsd_procedures2[18] = {
+ .pc_xdrressize = ST+AT,
+ },
+ [NFSPROC_ROOT] = {
++ .pc_func = nfsd_proc_root,
+ .pc_decode = nfssvc_decode_void,
+ .pc_encode = nfssvc_encode_void,
+ .pc_argsize = sizeof(struct nfsd_void),
+@@ -654,6 +669,7 @@ static const struct svc_procedure nfsd_procedures2[18] = {
+ .pc_xdrressize = ST+AT+1+NFSSVC_MAXBLKSIZE_V2/4,
+ },
+ [NFSPROC_WRITECACHE] = {
++ .pc_func = nfsd_proc_writecache,
+ .pc_decode = nfssvc_decode_void,
+ .pc_encode = nfssvc_encode_void,
+ .pc_argsize = sizeof(struct nfsd_void),
+diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
+index 1861db1bdc670..99bf07800cd09 100644
+--- a/fs/nfsd/trace.h
++++ b/fs/nfsd/trace.h
+@@ -289,8 +289,8 @@ DEFINE_STATEID_EVENT(layout_recall_done);
+ DEFINE_STATEID_EVENT(layout_recall_fail);
+ DEFINE_STATEID_EVENT(layout_recall_release);
+
+-DEFINE_STATEID_EVENT(deleg_open);
+-DEFINE_STATEID_EVENT(deleg_none);
++DEFINE_STATEID_EVENT(open);
++DEFINE_STATEID_EVENT(deleg_read);
+ DEFINE_STATEID_EVENT(deleg_break);
+ DEFINE_STATEID_EVENT(deleg_recall);
+
+diff --git a/fs/ubifs/debug.c b/fs/ubifs/debug.c
+index 31288d8fa2ce9..ebff43f8009c2 100644
+--- a/fs/ubifs/debug.c
++++ b/fs/ubifs/debug.c
+@@ -1123,6 +1123,7 @@ int dbg_check_dir(struct ubifs_info *c, const struct inode *dir)
+ err = PTR_ERR(dent);
+ if (err == -ENOENT)
+ break;
++ kfree(pdent);
+ return err;
+ }
+
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index 4a5b06f8d8129..9a3b6e92270db 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -894,6 +894,7 @@ int ubifs_jnl_write_inode(struct ubifs_info *c, const struct inode *inode)
+ if (err == -ENOENT)
+ break;
+
++ kfree(pxent);
+ goto out_release;
+ }
+
+@@ -906,6 +907,7 @@ int ubifs_jnl_write_inode(struct ubifs_info *c, const struct inode *inode)
+ ubifs_err(c, "dead directory entry '%s', error %d",
+ xent->name, err);
+ ubifs_ro_mode(c, err);
++ kfree(pxent);
+ kfree(xent);
+ goto out_release;
+ }
+@@ -936,8 +938,6 @@ int ubifs_jnl_write_inode(struct ubifs_info *c, const struct inode *inode)
+ inode->i_ino);
+ release_head(c, BASEHD);
+
+- ubifs_add_auth_dirt(c, lnum);
+-
+ if (last_reference) {
+ err = ubifs_tnc_remove_ino(c, inode->i_ino);
+ if (err)
+@@ -947,6 +947,8 @@ int ubifs_jnl_write_inode(struct ubifs_info *c, const struct inode *inode)
+ } else {
+ union ubifs_key key;
+
++ ubifs_add_auth_dirt(c, lnum);
++
+ ino_key_init(c, &key, inode->i_ino);
+ err = ubifs_tnc_add(c, &key, lnum, offs, ilen, hash);
+ }
+diff --git a/fs/ubifs/orphan.c b/fs/ubifs/orphan.c
+index 2c294085ffedc..0fb61956146da 100644
+--- a/fs/ubifs/orphan.c
++++ b/fs/ubifs/orphan.c
+@@ -173,6 +173,7 @@ int ubifs_add_orphan(struct ubifs_info *c, ino_t inum)
+ err = PTR_ERR(xent);
+ if (err == -ENOENT)
+ break;
++ kfree(pxent);
+ return err;
+ }
+
+@@ -182,6 +183,7 @@ int ubifs_add_orphan(struct ubifs_info *c, ino_t inum)
+
+ xattr_orphan = orphan_add(c, xattr_inum, orphan);
+ if (IS_ERR(xattr_orphan)) {
++ kfree(pxent);
+ kfree(xent);
+ return PTR_ERR(xattr_orphan);
+ }
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index a2420c900275a..732218ef66567 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -1110,14 +1110,20 @@ static int ubifs_parse_options(struct ubifs_info *c, char *options,
+ break;
+ }
+ case Opt_auth_key:
+- c->auth_key_name = kstrdup(args[0].from, GFP_KERNEL);
+- if (!c->auth_key_name)
+- return -ENOMEM;
++ if (!is_remount) {
++ c->auth_key_name = kstrdup(args[0].from,
++ GFP_KERNEL);
++ if (!c->auth_key_name)
++ return -ENOMEM;
++ }
+ break;
+ case Opt_auth_hash_name:
+- c->auth_hash_name = kstrdup(args[0].from, GFP_KERNEL);
+- if (!c->auth_hash_name)
+- return -ENOMEM;
++ if (!is_remount) {
++ c->auth_hash_name = kstrdup(args[0].from,
++ GFP_KERNEL);
++ if (!c->auth_hash_name)
++ return -ENOMEM;
++ }
+ break;
+ case Opt_ignore:
+ break;
+@@ -1141,6 +1147,18 @@ static int ubifs_parse_options(struct ubifs_info *c, char *options,
+ return 0;
+ }
+
++/*
++ * ubifs_release_options - release mount parameters which have been duplicated.
++ * @c: UBIFS file-system description object
++ */
++static void ubifs_release_options(struct ubifs_info *c)
++{
++ kfree(c->auth_key_name);
++ c->auth_key_name = NULL;
++ kfree(c->auth_hash_name);
++ c->auth_hash_name = NULL;
++}
++
+ /**
+ * destroy_journal - destroy journal data structures.
+ * @c: UBIFS file-system description object
+@@ -1313,7 +1331,7 @@ static int mount_ubifs(struct ubifs_info *c)
+
+ err = ubifs_read_superblock(c);
+ if (err)
+- goto out_free;
++ goto out_auth;
+
+ c->probing = 0;
+
+@@ -1325,18 +1343,18 @@ static int mount_ubifs(struct ubifs_info *c)
+ ubifs_err(c, "'compressor \"%s\" is not compiled in",
+ ubifs_compr_name(c, c->default_compr));
+ err = -ENOTSUPP;
+- goto out_free;
++ goto out_auth;
+ }
+
+ err = init_constants_sb(c);
+ if (err)
+- goto out_free;
++ goto out_auth;
+
+ sz = ALIGN(c->max_idx_node_sz, c->min_io_size) * 2;
+ c->cbuf = kmalloc(sz, GFP_NOFS);
+ if (!c->cbuf) {
+ err = -ENOMEM;
+- goto out_free;
++ goto out_auth;
+ }
+
+ err = alloc_wbufs(c);
+@@ -1611,6 +1629,8 @@ out_wbufs:
+ free_wbufs(c);
+ out_cbuf:
+ kfree(c->cbuf);
++out_auth:
++ ubifs_exit_authentication(c);
+ out_free:
+ kfree(c->write_reserve_buf);
+ kfree(c->bu.buf);
+@@ -1650,8 +1670,7 @@ static void ubifs_umount(struct ubifs_info *c)
+ ubifs_lpt_free(c, 0);
+ ubifs_exit_authentication(c);
+
+- kfree(c->auth_key_name);
+- kfree(c->auth_hash_name);
++ ubifs_release_options(c);
+ kfree(c->cbuf);
+ kfree(c->rcvrd_mst_node);
+ kfree(c->mst_node);
+@@ -2219,6 +2238,7 @@ out_umount:
+ out_unlock:
+ mutex_unlock(&c->umount_mutex);
+ out_close:
++ ubifs_release_options(c);
+ ubi_close_volume(c->ubi);
+ out:
+ return err;
+diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
+index f609f6cdde700..b120a00773f81 100644
+--- a/fs/ubifs/tnc.c
++++ b/fs/ubifs/tnc.c
+@@ -2885,6 +2885,7 @@ int ubifs_tnc_remove_ino(struct ubifs_info *c, ino_t inum)
+ err = PTR_ERR(xent);
+ if (err == -ENOENT)
+ break;
++ kfree(pxent);
+ return err;
+ }
+
+@@ -2898,6 +2899,7 @@ int ubifs_tnc_remove_ino(struct ubifs_info *c, ino_t inum)
+ fname_len(&nm) = le16_to_cpu(xent->nlen);
+ err = ubifs_tnc_remove_nm(c, &key1, &nm);
+ if (err) {
++ kfree(pxent);
+ kfree(xent);
+ return err;
+ }
+@@ -2906,6 +2908,7 @@ int ubifs_tnc_remove_ino(struct ubifs_info *c, ino_t inum)
+ highest_ino_key(c, &key2, xattr_inum);
+ err = ubifs_tnc_remove_range(c, &key1, &key2);
+ if (err) {
++ kfree(pxent);
+ kfree(xent);
+ return err;
+ }
+diff --git a/fs/ubifs/xattr.c b/fs/ubifs/xattr.c
+index 9aefbb60074ff..a0b9b349efe65 100644
+--- a/fs/ubifs/xattr.c
++++ b/fs/ubifs/xattr.c
+@@ -522,6 +522,7 @@ int ubifs_purge_xattrs(struct inode *host)
+ xent->name, err);
+ ubifs_ro_mode(c, err);
+ kfree(pxent);
++ kfree(xent);
+ return err;
+ }
+
+@@ -531,6 +532,7 @@ int ubifs_purge_xattrs(struct inode *host)
+ err = remove_xattr(c, host, xino, &nm);
+ if (err) {
+ kfree(pxent);
++ kfree(xent);
+ iput(xino);
+ ubifs_err(c, "cannot remove xattr, error %d", err);
+ return err;
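
The UBIFS hunks above (tnc.c, journal.c, orphan.c, xattr.c, plus dbg_check_dir) all plug the same leak: loops that walk xattr entries keep the previous entry in pxent as the iteration cursor, and the early-return error paths freed xent but not pxent. A runnable userspace analog of the corrected loop shape; the iterator, entry type, and names are made up for illustration:

#include <stdlib.h>

struct entry { int key; };

/* Hypothetical cursor-style iterator: returns a freshly allocated
 * entry following @prev, or NULL at the end of a fixed range. */
static struct entry *next_entry(const struct entry *prev)
{
        int next = prev ? prev->key + 1 : 0;
        struct entry *e;

        if (next >= 4)
                return NULL;
        e = malloc(sizeof(*e));
        if (e)
                e->key = next;
        return e;
}

static int process(const struct entry *e)
{
        return e->key == 2 ? -1 : 0;    /* simulate a mid-walk failure */
}

static int walk_entries(void)
{
        struct entry *xent, *pxent = NULL;
        int err;

        while ((xent = next_entry(pxent)) != NULL) {
                err = process(xent);
                if (err) {
                        free(pxent);    /* the fix: free the cursor too */
                        free(xent);
                        return err;
                }
                free(pxent);
                pxent = xent;           /* current entry becomes cursor */
        }
        free(pxent);
        return 0;
}

int main(void)
{
        return walk_entries() ? 1 : 0;
}
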
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index a03b8ce5ef0fd..fca3f5b590782 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -1704,7 +1704,8 @@ static noinline int udf_process_sequence(
+ "Pointers (max %u supported)\n",
+ UDF_MAX_TD_NESTING);
+ brelse(bh);
+- return -EIO;
++ ret = -EIO;
++ goto out;
+ }
+
+ vdp = (struct volDescPtr *)bh->b_data;
+@@ -1724,7 +1725,8 @@ static noinline int udf_process_sequence(
+ curr = get_volume_descriptor_record(ident, bh, &data);
+ if (IS_ERR(curr)) {
+ brelse(bh);
+- return PTR_ERR(curr);
++ ret = PTR_ERR(curr);
++ goto out;
+ }
+ /* Descriptor we don't care about? */
+ if (!curr)
+@@ -1746,28 +1748,31 @@ static noinline int udf_process_sequence(
+ */
+ if (!data.vds[VDS_POS_PRIMARY_VOL_DESC].block) {
+ udf_err(sb, "Primary Volume Descriptor not found!\n");
+- return -EAGAIN;
++ ret = -EAGAIN;
++ goto out;
+ }
+ ret = udf_load_pvoldesc(sb, data.vds[VDS_POS_PRIMARY_VOL_DESC].block);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ if (data.vds[VDS_POS_LOGICAL_VOL_DESC].block) {
+ ret = udf_load_logicalvol(sb,
+ data.vds[VDS_POS_LOGICAL_VOL_DESC].block,
+ fileset);
+ if (ret < 0)
+- return ret;
++ goto out;
+ }
+
+ /* Now handle prevailing Partition Descriptors */
+ for (i = 0; i < data.num_part_descs; i++) {
+ ret = udf_load_partdesc(sb, data.part_descs_loc[i].rec.block);
+ if (ret < 0)
+- return ret;
++ goto out;
+ }
+-
+- return 0;
++ ret = 0;
++out:
++ kfree(data.part_descs_loc);
++ return ret;
+ }
+
+ /*
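
The udf_process_sequence() changes funnel every early return through a single out label so that data.part_descs_loc, which was leaked on the error paths, is freed exactly once. The idiom in isolation, as a runnable userspace sketch with placeholder steps:

#include <stdlib.h>

static int step_one(char *buf) { buf[0] = 1; return 0; }
static int step_two(char *buf) { return buf[0] ? 0 : -1; }

static int do_work(void)
{
        char *buf = malloc(64);
        int ret;

        if (!buf)
                return -1;              /* nothing to clean up yet */

        if (step_one(buf)) {
                ret = -2;
                goto out;               /* a bare return would leak buf */
        }
        if (step_two(buf)) {
                ret = -3;
                goto out;
        }
        ret = 0;
out:
        free(buf);                      /* the single exit owns cleanup */
        return ret;
}

int main(void)
{
        return do_work() ? 1 : 0;
}
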
+diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
+index 1b0a01b06a05d..d9a692484eaed 100644
+--- a/fs/xfs/libxfs/xfs_bmap.c
++++ b/fs/xfs/libxfs/xfs_bmap.c
+@@ -5046,20 +5046,25 @@ xfs_bmap_del_extent_real(
+
+ flags = XFS_ILOG_CORE;
+ if (whichfork == XFS_DATA_FORK && XFS_IS_REALTIME_INODE(ip)) {
+- xfs_fsblock_t bno;
+ xfs_filblks_t len;
+ xfs_extlen_t mod;
+
+- bno = div_u64_rem(del->br_startblock, mp->m_sb.sb_rextsize,
+- &mod);
+- ASSERT(mod == 0);
+ len = div_u64_rem(del->br_blockcount, mp->m_sb.sb_rextsize,
+ &mod);
+ ASSERT(mod == 0);
+
+- error = xfs_rtfree_extent(tp, bno, (xfs_extlen_t)len);
+- if (error)
+- goto done;
++ if (!(bflags & XFS_BMAPI_REMAP)) {
++ xfs_fsblock_t bno;
++
++ bno = div_u64_rem(del->br_startblock,
++ mp->m_sb.sb_rextsize, &mod);
++ ASSERT(mod == 0);
++
++ error = xfs_rtfree_extent(tp, bno, (xfs_extlen_t)len);
++ if (error)
++ goto done;
++ }
++
+ do_fx = 0;
+ nblks = len * mp->m_sb.sb_rextsize;
+ qfield = XFS_TRANS_DQ_RTBCOUNT;
+diff --git a/fs/xfs/libxfs/xfs_defer.c b/fs/xfs/libxfs/xfs_defer.c
+index d8f586256add7..4959d8a32b606 100644
+--- a/fs/xfs/libxfs/xfs_defer.c
++++ b/fs/xfs/libxfs/xfs_defer.c
+@@ -186,8 +186,9 @@ xfs_defer_create_intent(
+ {
+ const struct xfs_defer_op_type *ops = defer_op_types[dfp->dfp_type];
+
+- dfp->dfp_intent = ops->create_intent(tp, &dfp->dfp_work,
+- dfp->dfp_count, sort);
++ if (!dfp->dfp_intent)
++ dfp->dfp_intent = ops->create_intent(tp, &dfp->dfp_work,
++ dfp->dfp_count, sort);
+ }
+
+ /*
+@@ -390,6 +391,7 @@ xfs_defer_finish_one(
+ list_add(li, &dfp->dfp_work);
+ dfp->dfp_count++;
+ dfp->dfp_done = NULL;
++ dfp->dfp_intent = NULL;
+ xfs_defer_create_intent(tp, dfp, false);
+ }
+
+@@ -428,8 +430,17 @@ xfs_defer_finish_noroll(
+
+ /* Until we run out of pending work to finish... */
+ while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
++ /*
++ * Deferred items that are created in the process of finishing
++ * other deferred work items should be queued at the head of
++ * the pending list, which puts them ahead of the deferred work
++ * that was created by the caller. This keeps the number of
++ * pending work items to a minimum, which decreases the amount
++ * of time that any one intent item can stick around in memory,
++ * pinning the log tail.
++ */
+ xfs_defer_create_intents(*tp);
+- list_splice_tail_init(&(*tp)->t_dfops, &dop_pending);
++ list_splice_init(&(*tp)->t_dfops, &dop_pending);
+
+ error = xfs_defer_trans_roll(tp);
+ if (error)
+@@ -552,3 +563,23 @@ xfs_defer_move(
+
+ xfs_defer_reset(stp);
+ }
++
++/*
++ * Prepare a chain of fresh deferred ops work items to be completed later. Log
++ * recovery requires the ability to put off until later the actual finishing
++ * work so that it can process unfinished items recovered from the log in
++ * correct order.
++ *
++ * Create and log intent items for all the work that we're capturing so that we
++ * can be assured that the items will get replayed if the system goes down
++ * before log recovery gets a chance to finish the work it put off. Then we
++ * move the chain from stp to dtp.
++ */
++void
++xfs_defer_capture(
++ struct xfs_trans *dtp,
++ struct xfs_trans *stp)
++{
++ xfs_defer_create_intents(stp);
++ xfs_defer_move(dtp, stp);
++}
+diff --git a/fs/xfs/libxfs/xfs_defer.h b/fs/xfs/libxfs/xfs_defer.h
+index 6b2ca580f2b06..3164199162b61 100644
+--- a/fs/xfs/libxfs/xfs_defer.h
++++ b/fs/xfs/libxfs/xfs_defer.h
+@@ -63,4 +63,10 @@ extern const struct xfs_defer_op_type xfs_rmap_update_defer_type;
+ extern const struct xfs_defer_op_type xfs_extent_free_defer_type;
+ extern const struct xfs_defer_op_type xfs_agfl_free_defer_type;
+
++/*
++ * Functions to capture a chain of deferred operations and continue them later.
++ * This doesn't normally happen except log recovery.
++ */
++void xfs_defer_capture(struct xfs_trans *dtp, struct xfs_trans *stp);
++
+ #endif /* __XFS_DEFER_H__ */
+diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
+index ec3691372e7c0..815a0563288f4 100644
+--- a/fs/xfs/xfs_bmap_item.c
++++ b/fs/xfs/xfs_bmap_item.c
+@@ -534,7 +534,7 @@ xfs_bui_item_recover(
+ xfs_bmap_unmap_extent(tp, ip, &irec);
+ }
+
+- xfs_defer_move(parent_tp, tp);
++ xfs_defer_capture(parent_tp, tp);
+ error = xfs_trans_commit(tp);
+ xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ xfs_irele(ip);
+diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
+index e2ec91b2d0f46..9ceb67d0f2565 100644
+--- a/fs/xfs/xfs_log_recover.c
++++ b/fs/xfs/xfs_log_recover.c
+@@ -2904,7 +2904,8 @@ STATIC int
+ xlog_valid_rec_header(
+ struct xlog *log,
+ struct xlog_rec_header *rhead,
+- xfs_daddr_t blkno)
++ xfs_daddr_t blkno,
++ int bufsize)
+ {
+ int hlen;
+
+@@ -2920,10 +2921,14 @@ xlog_valid_rec_header(
+ return -EFSCORRUPTED;
+ }
+
+- /* LR body must have data or it wouldn't have been written */
++ /*
++ * LR body must have data (or it wouldn't have been written)
++ * and h_len must not be greater than LR buffer size.
++ */
+ hlen = be32_to_cpu(rhead->h_len);
+- if (XFS_IS_CORRUPT(log->l_mp, hlen <= 0 || hlen > INT_MAX))
++ if (XFS_IS_CORRUPT(log->l_mp, hlen <= 0 || hlen > bufsize))
+ return -EFSCORRUPTED;
++
+ if (XFS_IS_CORRUPT(log->l_mp,
+ blkno > log->l_logBBsize || blkno > INT_MAX))
+ return -EFSCORRUPTED;
+@@ -2984,9 +2989,6 @@ xlog_do_recovery_pass(
+ goto bread_err1;
+
+ rhead = (xlog_rec_header_t *)offset;
+- error = xlog_valid_rec_header(log, rhead, tail_blk);
+- if (error)
+- goto bread_err1;
+
+ /*
+ * xfsprogs has a bug where record length is based on lsunit but
+@@ -3001,21 +3003,18 @@ xlog_do_recovery_pass(
+ */
+ h_size = be32_to_cpu(rhead->h_size);
+ h_len = be32_to_cpu(rhead->h_len);
+- if (h_len > h_size) {
+- if (h_len <= log->l_mp->m_logbsize &&
+- be32_to_cpu(rhead->h_num_logops) == 1) {
+- xfs_warn(log->l_mp,
++ if (h_len > h_size && h_len <= log->l_mp->m_logbsize &&
++ rhead->h_num_logops == cpu_to_be32(1)) {
++ xfs_warn(log->l_mp,
+ "invalid iclog size (%d bytes), using lsunit (%d bytes)",
+- h_size, log->l_mp->m_logbsize);
+- h_size = log->l_mp->m_logbsize;
+- } else {
+- XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW,
+- log->l_mp);
+- error = -EFSCORRUPTED;
+- goto bread_err1;
+- }
++ h_size, log->l_mp->m_logbsize);
++ h_size = log->l_mp->m_logbsize;
+ }
+
++ error = xlog_valid_rec_header(log, rhead, tail_blk, h_size);
++ if (error)
++ goto bread_err1;
++
+ if ((be32_to_cpu(rhead->h_version) & XLOG_VERSION_2) &&
+ (h_size > XLOG_HEADER_CYCLE_SIZE)) {
+ hblks = h_size / XLOG_HEADER_CYCLE_SIZE;
+@@ -3096,7 +3095,7 @@ xlog_do_recovery_pass(
+ }
+ rhead = (xlog_rec_header_t *)offset;
+ error = xlog_valid_rec_header(log, rhead,
+- split_hblks ? blk_no : 0);
++ split_hblks ? blk_no : 0, h_size);
+ if (error)
+ goto bread_err2;
+
+@@ -3177,7 +3176,7 @@ xlog_do_recovery_pass(
+ goto bread_err2;
+
+ rhead = (xlog_rec_header_t *)offset;
+- error = xlog_valid_rec_header(log, rhead, blk_no);
++ error = xlog_valid_rec_header(log, rhead, blk_no, h_size);
+ if (error)
+ goto bread_err2;
+
+diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
+index ca93b64883774..492d80a0b4060 100644
+--- a/fs/xfs/xfs_refcount_item.c
++++ b/fs/xfs/xfs_refcount_item.c
+@@ -555,7 +555,7 @@ xfs_cui_item_recover(
+ }
+
+ xfs_refcount_finish_one_cleanup(tp, rcur, error);
+- xfs_defer_move(parent_tp, tp);
++ xfs_defer_capture(parent_tp, tp);
+ error = xfs_trans_commit(tp);
+ return error;
+
+diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c
+index 86994d7f7cba3..be01bfbc3ad93 100644
+--- a/fs/xfs/xfs_rtalloc.c
++++ b/fs/xfs/xfs_rtalloc.c
+@@ -778,8 +778,14 @@ xfs_growfs_rt_alloc(
+ struct xfs_bmbt_irec map; /* block map output */
+ int nmap; /* number of block maps */
+ int resblks; /* space reservation */
++ enum xfs_blft buf_type;
+ struct xfs_trans *tp;
+
++ if (ip == mp->m_rsumip)
++ buf_type = XFS_BLFT_RTSUMMARY_BUF;
++ else
++ buf_type = XFS_BLFT_RTBITMAP_BUF;
++
+ /*
+ * Allocate space to the file, as necessary.
+ */
+@@ -841,6 +847,9 @@ xfs_growfs_rt_alloc(
+ mp->m_bsize, 0, &bp);
+ if (error)
+ goto out_trans_cancel;
++
++ xfs_trans_buf_set_type(tp, bp, buf_type);
++ bp->b_ops = &xfs_rtbuf_ops;
+ memset(bp->b_addr, 0, mp->m_sb.sb_blocksize);
+ xfs_trans_log_buf(tp, bp, 0, mp->m_sb.sb_blocksize - 1);
+ /*
+@@ -1018,10 +1027,13 @@ xfs_growfs_rt(
+ xfs_ilock(mp->m_rbmip, XFS_ILOCK_EXCL);
+ xfs_trans_ijoin(tp, mp->m_rbmip, XFS_ILOCK_EXCL);
+ /*
+- * Update the bitmap inode's size.
++ * Update the bitmap inode's size ondisk and incore. We need
++ * to update the incore size so that inode inactivation won't
++ * punch what it thinks are "posteof" blocks.
+ */
+ mp->m_rbmip->i_d.di_size =
+ nsbp->sb_rbmblocks * nsbp->sb_blocksize;
++ i_size_write(VFS_I(mp->m_rbmip), mp->m_rbmip->i_d.di_size);
+ xfs_trans_log_inode(tp, mp->m_rbmip, XFS_ILOG_CORE);
+ /*
+ * Get the summary inode into the transaction.
+@@ -1029,9 +1041,12 @@ xfs_growfs_rt(
+ xfs_ilock(mp->m_rsumip, XFS_ILOCK_EXCL);
+ xfs_trans_ijoin(tp, mp->m_rsumip, XFS_ILOCK_EXCL);
+ /*
+- * Update the summary inode's size.
++ * Update the summary inode's size. We need to update the
++ * incore size so that inode inactivation won't punch what it
++ * thinks are "posteof" blocks.
+ */
+ mp->m_rsumip->i_d.di_size = nmp->m_rsumsize;
++ i_size_write(VFS_I(mp->m_rsumip), mp->m_rsumip->i_d.di_size);
+ xfs_trans_log_inode(tp, mp->m_rsumip, XFS_ILOG_CORE);
+ /*
+ * Copy summary data from old to new sizes.
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 7636bc71c71f9..2b34e6de3e8a2 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -581,7 +581,10 @@
+ */
+ #define TEXT_TEXT \
+ ALIGN_FUNCTION(); \
+- *(.text.hot TEXT_MAIN .text.fixup .text.unlikely) \
++ *(.text.hot .text.hot.*) \
++ *(TEXT_MAIN .text.fixup) \
++ *(.text.unlikely .text.unlikely.*) \
++ *(.text.unknown .text.unknown.*) \
+ NOINSTR_TEXT \
+ *(.text..refcount) \
+ *(.ref.text) \
+diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
+index b9780ae9dd26c..72dc3a95fbaad 100644
+--- a/include/drm/gpu_scheduler.h
++++ b/include/drm/gpu_scheduler.h
+@@ -33,14 +33,16 @@
+ struct drm_gpu_scheduler;
+ struct drm_sched_rq;
+
++/* These are often used as an (initial) index
++ * to an array, and as such should start at 0.
++ */
+ enum drm_sched_priority {
+ DRM_SCHED_PRIORITY_MIN,
+- DRM_SCHED_PRIORITY_LOW = DRM_SCHED_PRIORITY_MIN,
+ DRM_SCHED_PRIORITY_NORMAL,
+- DRM_SCHED_PRIORITY_HIGH_SW,
+- DRM_SCHED_PRIORITY_HIGH_HW,
++ DRM_SCHED_PRIORITY_HIGH,
+ DRM_SCHED_PRIORITY_KERNEL,
+- DRM_SCHED_PRIORITY_MAX,
++
++ DRM_SCHED_PRIORITY_COUNT,
+ DRM_SCHED_PRIORITY_INVALID = -1,
+ DRM_SCHED_PRIORITY_UNSET = -2
+ };
+@@ -274,7 +276,7 @@ struct drm_gpu_scheduler {
+ uint32_t hw_submission_limit;
+ long timeout;
+ const char *name;
+- struct drm_sched_rq sched_rq[DRM_SCHED_PRIORITY_MAX];
++ struct drm_sched_rq sched_rq[DRM_SCHED_PRIORITY_COUNT];
+ wait_queue_head_t wake_up_worker;
+ wait_queue_head_t job_scheduled;
+ atomic_t hw_rq_count;
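
The scheduler priority rework above renames the DRM_SCHED_PRIORITY_MAX sentinel to DRM_SCHED_PRIORITY_COUNT and keeps the values a dense 0-based range, so the sentinel both sizes arrays like sched_rq[] and bounds iteration over them. The general pattern, as a runnable userspace sketch with illustrative names:

#include <stdio.h>

enum prio {
        PRIO_MIN,       /* 0: valid as an array index */
        PRIO_NORMAL,
        PRIO_HIGH,
        PRIO_KERNEL,
        PRIO_COUNT,     /* sentinel: number of values, not a value */
};

static const char *names[PRIO_COUNT] = {
        [PRIO_MIN]    = "min",
        [PRIO_NORMAL] = "normal",
        [PRIO_HIGH]   = "high",
        [PRIO_KERNEL] = "kernel",
};

int main(void)
{
        for (int p = PRIO_MIN; p < PRIO_COUNT; p++)
                printf("%d: %s\n", p, names[p]);
        return 0;
}
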
+diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
+index a911e5d068454..2e900fd461f2e 100644
+--- a/include/linux/cpufreq.h
++++ b/include/linux/cpufreq.h
+@@ -293,7 +293,7 @@ __ATTR(_name, 0644, show_##_name, store_##_name)
+
+ struct cpufreq_driver {
+ char name[CPUFREQ_NAME_LEN];
+- u8 flags;
++ u16 flags;
+ void *driver_data;
+
+ /* needed by all drivers */
+@@ -417,9 +417,18 @@ struct cpufreq_driver {
+ */
+ #define CPUFREQ_IS_COOLING_DEV BIT(7)
+
++/*
++ * Set by drivers that need to update internal upper and lower boundaries along
++ * with the target frequency, so the core and governors should also invoke
++ * the driver if the target frequency does not change, but the policy min or
++ * max may have changed.
++ */
++#define CPUFREQ_NEED_UPDATE_LIMITS BIT(8)
++
+ int cpufreq_register_driver(struct cpufreq_driver *driver_data);
+ int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
+
++bool cpufreq_driver_test_flags(u16 flags);
+ const char *cpufreq_get_current_driver(void);
+ void *cpufreq_get_driver_data(void);
+
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 7d4d04c9d3e64..dbbeb52ce5f31 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2861,7 +2861,6 @@ extern int do_pipe_flags(int *, int);
+ #define __kernel_read_file_id(id) \
+ id(UNKNOWN, unknown) \
+ id(FIRMWARE, firmware) \
+- id(FIRMWARE_PREALLOC_BUFFER, firmware) \
+ id(MODULE, kernel-module) \
+ id(KEXEC_IMAGE, kexec-image) \
+ id(KEXEC_INITRAMFS, kexec-initramfs) \
+diff --git a/include/linux/hil_mlc.h b/include/linux/hil_mlc.h
+index 774f7d3b8f6af..369221fd55187 100644
+--- a/include/linux/hil_mlc.h
++++ b/include/linux/hil_mlc.h
+@@ -103,7 +103,7 @@ struct hilse_node {
+
+ /* Methods for back-end drivers, e.g. hp_sdc_mlc */
+ typedef int (hil_mlc_cts) (hil_mlc *mlc);
+-typedef void (hil_mlc_out) (hil_mlc *mlc);
++typedef int (hil_mlc_out) (hil_mlc *mlc);
+ typedef int (hil_mlc_in) (hil_mlc *mlc, suseconds_t timeout);
+
+ struct hil_mlc_devinfo {
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 372100c755e7f..e30be3dd5be0e 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1212,4 +1212,22 @@ static inline bool mlx5_is_roce_enabled(struct mlx5_core_dev *dev)
+ return val.vbool;
+ }
+
++/**
++ * mlx5_core_net - Provide net namespace of the mlx5_core_dev
++ * @dev: mlx5 core device
++ *
++ * mlx5_core_net() returns the net namespace of mlx5 core device.
++ * This can be called only in below described limited context.
++ * (a) When a devlink instance for mlx5_core is registered and
++ * when devlink reload operation is disabled.
++ * or
++ * (b) during the devlink reload callbacks reload_down() and reload_up(),
++ * where it is ensured that the devlink instance's net namespace is
++ * stable.
++ */
++static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev)
++{
++ return devlink_net(priv_to_devlink(dev));
++}
++
+ #endif /* MLX5_DRIVER_H */
+diff --git a/include/linux/pci-ecam.h b/include/linux/pci-ecam.h
+index 1af5cb02ef7f9..033ce74f02e81 100644
+--- a/include/linux/pci-ecam.h
++++ b/include/linux/pci-ecam.h
+@@ -51,6 +51,7 @@ extern const struct pci_ecam_ops pci_generic_ecam_ops;
+
+ #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
+ extern const struct pci_ecam_ops pci_32b_ops; /* 32-bit accesses only */
++extern const struct pci_ecam_ops pci_32b_read_ops; /* 32-bit read only */
+ extern const struct pci_ecam_ops hisi_pcie_ops; /* HiSilicon */
+ extern const struct pci_ecam_ops thunder_pem_ecam_ops; /* Cavium ThunderX 1.x & 2.x */
+ extern const struct pci_ecam_ops pci_thunder_ecam_ops; /* Cavium ThunderX 1.x */
+diff --git a/include/linux/rcupdate_trace.h b/include/linux/rcupdate_trace.h
+index d9015aac78c63..a6a6a3acab5a8 100644
+--- a/include/linux/rcupdate_trace.h
++++ b/include/linux/rcupdate_trace.h
+@@ -50,6 +50,7 @@ static inline void rcu_read_lock_trace(void)
+ struct task_struct *t = current;
+
+ WRITE_ONCE(t->trc_reader_nesting, READ_ONCE(t->trc_reader_nesting) + 1);
++ barrier();
+ if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) &&
+ t->trc_reader_special.b.need_mb)
+ smp_mb(); // Pairs with update-side barriers
+@@ -72,6 +73,9 @@ static inline void rcu_read_unlock_trace(void)
+
+ rcu_lock_release(&rcu_trace_lock_map);
+ nesting = READ_ONCE(t->trc_reader_nesting) - 1;
++ barrier(); // Critical section before disabling.
++ // Disable IPI-based setting of .need_qs.
++ WRITE_ONCE(t->trc_reader_nesting, INT_MIN);
+ if (likely(!READ_ONCE(t->trc_reader_special.s)) || nesting) {
+ WRITE_ONCE(t->trc_reader_nesting, nesting);
+ return; // We assume shallow reader nesting.
+diff --git a/include/linux/time64.h b/include/linux/time64.h
+index c9dcb3e5781f8..5117cb5b56561 100644
+--- a/include/linux/time64.h
++++ b/include/linux/time64.h
+@@ -124,6 +124,10 @@ static inline bool timespec64_valid_settod(const struct timespec64 *ts)
+ */
+ static inline s64 timespec64_to_ns(const struct timespec64 *ts)
+ {
++ /* Prevent multiplication overflow */
++ if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX)
++ return KTIME_MAX;
++
+ return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
+ }
+
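The time64.h hunk clamps timespec64_to_ns() instead of letting the multiplication overflow: for tv_sec >= KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC, roughly 292 years), tv_sec * NSEC_PER_SEC no longer fits in s64, and the unsigned cast routes negative tv_sec into the same clamp. A runnable userspace version of the guarded helper:

#include <inttypes.h>
#include <stdio.h>

#define NSEC_PER_SEC  1000000000LL
#define KTIME_MAX     INT64_MAX
#define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC)

static int64_t ts_to_ns(int64_t tv_sec, long tv_nsec)
{
        /* anything >= KTIME_SEC_MAX (or negative, via the unsigned
         * cast) would overflow the multiplication below */
        if ((uint64_t)tv_sec >= KTIME_SEC_MAX)
                return KTIME_MAX;
        return tv_sec * NSEC_PER_SEC + tv_nsec;
}

int main(void)
{
        printf("%" PRId64 "\n", ts_to_ns(1, 5));                 /* 1000000005 */
        printf("%" PRId64 "\n", ts_to_ns(KTIME_SEC_MAX + 1, 0)); /* clamped */
        return 0;
}
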
+diff --git a/include/linux/usb/pd.h b/include/linux/usb/pd.h
+index b6c233e79bd45..1df895e4680b2 100644
+--- a/include/linux/usb/pd.h
++++ b/include/linux/usb/pd.h
+@@ -473,6 +473,7 @@ static inline unsigned int rdo_max_power(u32 rdo)
+ #define PD_T_ERROR_RECOVERY 100 /* minimum 25 is insufficient */
+ #define PD_T_SRCSWAPSTDBY 625 /* Maximum of 650ms */
+ #define PD_T_NEWSRC 250 /* Maximum of 275ms */
++#define PD_T_SWAP_SRC_START 20 /* Minimum of 20ms */
+
+ #define PD_T_DRP_TRY 100 /* 75 - 150 ms */
+ #define PD_T_DRP_TRYWAIT 600 /* 400 - 800 ms */
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 5b4f0efc4241f..ef7b786b8675c 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -1463,11 +1463,6 @@ enum rdma_remove_reason {
+ RDMA_REMOVE_DRIVER_REMOVE,
+ /* uobj is being cleaned-up before being committed */
+ RDMA_REMOVE_ABORT,
+- /*
+- * uobj has been fully created, with the uobj->object set, but is being
+- * cleaned up before being comitted
+- */
+- RDMA_REMOVE_ABORT_HWOBJ,
+ };
+
+ struct ib_rdmacg_object {
+diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
+index e76bac4d14c51..69ade4fb71aab 100644
+--- a/include/scsi/scsi_cmnd.h
++++ b/include/scsi/scsi_cmnd.h
+@@ -165,7 +165,8 @@ extern void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count,
+ size_t *offset, size_t *len);
+ extern void scsi_kunmap_atomic_sg(void *virt);
+
+-extern blk_status_t scsi_init_io(struct scsi_cmnd *cmd);
++blk_status_t scsi_alloc_sgtables(struct scsi_cmnd *cmd);
++void scsi_free_sgtables(struct scsi_cmnd *cmd);
+
+ #ifdef CONFIG_SCSI_DMA
+ extern int scsi_dma_map(struct scsi_cmnd *cmd);
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index 5f0c1cf1ea130..342b35fc33c59 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -40,6 +40,7 @@ enum afs_server_trace {
+ afs_server_trace_get_new_cbi,
+ afs_server_trace_get_probe,
+ afs_server_trace_give_up_cb,
++ afs_server_trace_purging,
+ afs_server_trace_put_call,
+ afs_server_trace_put_cbi,
+ afs_server_trace_put_find_rsq,
+@@ -270,6 +271,7 @@ enum afs_cb_break_reason {
+ EM(afs_server_trace_get_new_cbi, "GET cbi ") \
+ EM(afs_server_trace_get_probe, "GET probe") \
+ EM(afs_server_trace_give_up_cb, "giveup-cb") \
++ EM(afs_server_trace_purging, "PURGE ") \
+ EM(afs_server_trace_put_call, "PUT call ") \
+ EM(afs_server_trace_put_cbi, "PUT cbi ") \
+ EM(afs_server_trace_put_find_rsq, "PUT f-rsq") \
+@@ -884,19 +886,6 @@ TRACE_EVENT(afs_dir_check_failed,
+ __entry->vnode, __entry->off, __entry->i_size)
+ );
+
+-/*
+- * We use page->private to hold the amount of the page that we've written to,
+- * splitting the field into two parts. However, we need to represent a range
+- * 0...PAGE_SIZE inclusive, so we can't support 64K pages on a 32-bit system.
+- */
+-#if PAGE_SIZE > 32768
+-#define AFS_PRIV_MAX 0xffffffff
+-#define AFS_PRIV_SHIFT 32
+-#else
+-#define AFS_PRIV_MAX 0xffff
+-#define AFS_PRIV_SHIFT 16
+-#endif
+-
+ TRACE_EVENT(afs_page_dirty,
+ TP_PROTO(struct afs_vnode *vnode, const char *where,
+ pgoff_t page, unsigned long priv),
+@@ -917,10 +906,11 @@ TRACE_EVENT(afs_page_dirty,
+ __entry->priv = priv;
+ ),
+
+- TP_printk("vn=%p %lx %s %lu-%lu",
++ TP_printk("vn=%p %lx %s %zx-%zx%s",
+ __entry->vnode, __entry->page, __entry->where,
+- __entry->priv & AFS_PRIV_MAX,
+- __entry->priv >> AFS_PRIV_SHIFT)
++ afs_page_dirty_from(__entry->priv),
++ afs_page_dirty_to(__entry->priv),
++ afs_is_page_dirty_mmapped(__entry->priv) ? " M" : "")
+ );
+
+ TRACE_EVENT(afs_call_state,
+diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
+index 863335ecb7e8a..b9241836d4f73 100644
+--- a/include/trace/events/btrfs.h
++++ b/include/trace/events/btrfs.h
+@@ -1176,25 +1176,27 @@ DEFINE_EVENT(btrfs__reserved_extent, btrfs_reserved_extent_free,
+
+ TRACE_EVENT(find_free_extent,
+
+- TP_PROTO(const struct btrfs_fs_info *fs_info, u64 num_bytes,
++ TP_PROTO(const struct btrfs_root *root, u64 num_bytes,
+ u64 empty_size, u64 data),
+
+- TP_ARGS(fs_info, num_bytes, empty_size, data),
++ TP_ARGS(root, num_bytes, empty_size, data),
+
+ TP_STRUCT__entry_btrfs(
++ __field( u64, root_objectid )
+ __field( u64, num_bytes )
+ __field( u64, empty_size )
+ __field( u64, data )
+ ),
+
+- TP_fast_assign_btrfs(fs_info,
++ TP_fast_assign_btrfs(root->fs_info,
++ __entry->root_objectid = root->root_key.objectid;
+ __entry->num_bytes = num_bytes;
+ __entry->empty_size = empty_size;
+ __entry->data = data;
+ ),
+
+ TP_printk_btrfs("root=%llu(%s) len=%llu empty_size=%llu flags=%llu(%s)",
+- show_root_type(BTRFS_EXTENT_TREE_OBJECTID),
++ show_root_type(__entry->root_objectid),
+ __entry->num_bytes, __entry->empty_size, __entry->data,
+ __print_flags((unsigned long)__entry->data, "|",
+ BTRFS_GROUP_FLAGS))
+diff --git a/include/uapi/linux/btrfs_tree.h b/include/uapi/linux/btrfs_tree.h
+index 9ba64ca6b4ac9..6b885982ece68 100644
+--- a/include/uapi/linux/btrfs_tree.h
++++ b/include/uapi/linux/btrfs_tree.h
+@@ -4,6 +4,11 @@
+
+ #include <linux/btrfs.h>
+ #include <linux/types.h>
++#ifdef __KERNEL__
++#include <linux/stddef.h>
++#else
++#include <stddef.h>
++#endif
+
+ /*
+ * This header contains the structure definitions and constants used
+@@ -644,6 +649,15 @@ struct btrfs_root_item {
+ __le64 reserved[8]; /* for future */
+ } __attribute__ ((__packed__));
+
++/*
++ * Btrfs root item used to be smaller than current size. The old format ends
++ * at where member generation_v2 is.
++ */
++static inline __u32 btrfs_legacy_root_item_size(void)
++{
++ return offsetof(struct btrfs_root_item, generation_v2);
++}
++
+ /*
+ * this is used for both forward and backward root refs
+ */
+diff --git a/include/uapi/linux/nfs4.h b/include/uapi/linux/nfs4.h
+index bf197e99b98fc..ed5415e0f1c19 100644
+--- a/include/uapi/linux/nfs4.h
++++ b/include/uapi/linux/nfs4.h
+@@ -139,6 +139,8 @@
+
+ #define EXCHGID4_FLAG_UPD_CONFIRMED_REC_A 0x40000000
+ #define EXCHGID4_FLAG_CONFIRMED_R 0x80000000
++
++#define EXCHGID4_FLAG_SUPP_FENCE_OPS 0x00000004
+ /*
+ * Since the validity of these bits depends on whether
+ * they're set in the argument or response, have separate
+@@ -146,6 +148,7 @@
+ */
+ #define EXCHGID4_FLAG_MASK_A 0x40070103
+ #define EXCHGID4_FLAG_MASK_R 0x80070103
++#define EXCHGID4_2_FLAG_MASK_R 0x80070107
+
+ #define SEQ4_STATUS_CB_PATH_DOWN 0x00000001
+ #define SEQ4_STATUS_CB_GSS_CONTEXTS_EXPIRING 0x00000002
+diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
+index 235db7754606d..f717826d5d7c0 100644
+--- a/include/uapi/linux/videodev2.h
++++ b/include/uapi/linux/videodev2.h
+@@ -373,9 +373,9 @@ enum v4l2_hsv_encoding {
+
+ enum v4l2_quantization {
+ /*
+- * The default for R'G'B' quantization is always full range, except
+- * for the BT2020 colorspace. For Y'CbCr the quantization is always
+- * limited range, except for COLORSPACE_JPEG: this is full range.
++ * The default for R'G'B' quantization is always full range.
++ * For Y'CbCr the quantization is always limited range, except
++ * for COLORSPACE_JPEG: this is full range.
+ */
+ V4L2_QUANTIZATION_DEFAULT = 0,
+ V4L2_QUANTIZATION_FULL_RANGE = 1,
+@@ -384,14 +384,13 @@ enum v4l2_quantization {
+
+ /*
+ * Determine how QUANTIZATION_DEFAULT should map to a proper quantization.
+- * This depends on whether the image is RGB or not, the colorspace and the
+- * Y'CbCr encoding.
++ * This depends on whether the image is RGB or not, the colorspace.
++ * The Y'CbCr encoding is not used anymore, but is still there for backwards
++ * compatibility.
+ */
+ #define V4L2_MAP_QUANTIZATION_DEFAULT(is_rgb_or_hsv, colsp, ycbcr_enc) \
+- (((is_rgb_or_hsv) && (colsp) == V4L2_COLORSPACE_BT2020) ? \
+- V4L2_QUANTIZATION_LIM_RANGE : \
+- (((is_rgb_or_hsv) || (colsp) == V4L2_COLORSPACE_JPEG) ? \
+- V4L2_QUANTIZATION_FULL_RANGE : V4L2_QUANTIZATION_LIM_RANGE))
++ (((is_rgb_or_hsv) || (colsp) == V4L2_COLORSPACE_JPEG) ? \
++ V4L2_QUANTIZATION_FULL_RANGE : V4L2_QUANTIZATION_LIM_RANGE)
+
+ /*
+ * Deprecated names for opRGB colorspace (IEC 61966-2-5)
+diff --git a/include/xen/events.h b/include/xen/events.h
+index df1e6391f63ff..3b8155c2ea034 100644
+--- a/include/xen/events.h
++++ b/include/xen/events.h
+@@ -15,10 +15,15 @@
+ unsigned xen_evtchn_nr_channels(void);
+
+ int bind_evtchn_to_irq(evtchn_port_t evtchn);
++int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn);
+ int bind_evtchn_to_irqhandler(evtchn_port_t evtchn,
+ irq_handler_t handler,
+ unsigned long irqflags, const char *devname,
+ void *dev_id);
++int bind_evtchn_to_irqhandler_lateeoi(evtchn_port_t evtchn,
++ irq_handler_t handler,
++ unsigned long irqflags, const char *devname,
++ void *dev_id);
+ int bind_virq_to_irq(unsigned int virq, unsigned int cpu, bool percpu);
+ int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu,
+ irq_handler_t handler,
+@@ -32,12 +37,20 @@ int bind_ipi_to_irqhandler(enum ipi_vector ipi,
+ void *dev_id);
+ int bind_interdomain_evtchn_to_irq(unsigned int remote_domain,
+ evtchn_port_t remote_port);
++int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
++ evtchn_port_t remote_port);
+ int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
+ evtchn_port_t remote_port,
+ irq_handler_t handler,
+ unsigned long irqflags,
+ const char *devname,
+ void *dev_id);
++int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
++ evtchn_port_t remote_port,
++ irq_handler_t handler,
++ unsigned long irqflags,
++ const char *devname,
++ void *dev_id);
+
+ /*
+ * Common unbind function for all event sources. Takes IRQ to unbind from.
+@@ -46,6 +59,14 @@ int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
+ */
+ void unbind_from_irqhandler(unsigned int irq, void *dev_id);
+
++/*
++ * Send late EOI for an IRQ bound to an event channel via one of the *_lateeoi
++ * functions above.
++ */
++void xen_irq_lateeoi(unsigned int irq, unsigned int eoi_flags);
++/* Signal an event was spurious, i.e. there was no action resulting from it. */
++#define XEN_EOI_FLAG_SPURIOUS 0x00000001
++
+ #define XEN_IRQ_PRIORITY_MAX EVTCHN_FIFO_PRIORITY_MAX
+ #define XEN_IRQ_PRIORITY_DEFAULT EVTCHN_FIFO_PRIORITY_DEFAULT
+ #define XEN_IRQ_PRIORITY_MIN EVTCHN_FIFO_PRIORITY_MIN
+diff --git a/init/Kconfig b/init/Kconfig
+index d6a0b31b13dc9..2a5df1cf838c6 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -682,7 +682,8 @@ config IKHEADERS
+
+ config LOG_BUF_SHIFT
+ int "Kernel log buffer size (16 => 64KB, 17 => 128KB)"
+- range 12 25
++ range 12 25 if !H8300
++ range 12 19 if H8300
+ default 17
+ depends on PRINTK
+ help
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 43cd175c66a55..718bbdc8b3c66 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5246,6 +5246,10 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ dst, reg_type_str[ptr_reg->type]);
+ return -EACCES;
+ case CONST_PTR_TO_MAP:
++ /* smin_val represents the known value */
++ if (known && smin_val == 0 && opcode == BPF_ADD)
++ break;
++ /* fall-through */
+ case PTR_TO_PACKET_END:
+ case PTR_TO_SOCKET:
+ case PTR_TO_SOCKET_OR_NULL:
+diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
+index b16dbc1bf0567..404d6d47a11da 100644
+--- a/kernel/debug/debug_core.c
++++ b/kernel/debug/debug_core.c
+@@ -94,14 +94,6 @@ int dbg_switch_cpu;
+ /* Use kdb or gdbserver mode */
+ int dbg_kdb_mode = 1;
+
+-static int __init opt_kgdb_con(char *str)
+-{
+- kgdb_use_con = 1;
+- return 0;
+-}
+-
+-early_param("kgdbcon", opt_kgdb_con);
+-
+ module_param(kgdb_use_con, int, 0644);
+ module_param(kgdbreboot, int, 0644);
+
+@@ -920,6 +912,20 @@ static struct console kgdbcons = {
+ .index = -1,
+ };
+
++static int __init opt_kgdb_con(char *str)
++{
++ kgdb_use_con = 1;
++
++ if (kgdb_io_module_registered && !kgdb_con_registered) {
++ register_console(&kgdbcons);
++ kgdb_con_registered = 1;
++ }
++
++ return 0;
++}
++
++early_param("kgdbcon", opt_kgdb_con);
++
+ #ifdef CONFIG_MAGIC_SYSRQ
+ static void sysrq_handle_dbg(int key)
+ {
+diff --git a/kernel/futex.c b/kernel/futex.c
+index a5876694a60eb..044c1a4fbece0 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -39,6 +39,7 @@
+ #include <linux/freezer.h>
+ #include <linux/memblock.h>
+ #include <linux/fault-inject.h>
++#include <linux/time_namespace.h>
+
+ #include <asm/futex.h>
+
+@@ -1502,8 +1503,10 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_
+ */
+ newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
+
+- if (unlikely(should_fail_futex(true)))
++ if (unlikely(should_fail_futex(true))) {
+ ret = -EFAULT;
++ goto out_unlock;
++ }
+
+ ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
+ if (!ret && (curval != uval)) {
+@@ -3797,6 +3800,8 @@ SYSCALL_DEFINE6(futex, u32 __user *, uaddr, int, op, u32, val,
+ t = timespec64_to_ktime(ts);
+ if (cmd == FUTEX_WAIT)
+ t = ktime_add_safe(ktime_get(), t);
++ else if (!(op & FUTEX_CLOCK_REALTIME))
++ t = timens_ktime_to_host(CLOCK_MONOTONIC, t);
+ tp = &t;
+ }
+ /*
+@@ -3989,6 +3994,8 @@ SYSCALL_DEFINE6(futex_time32, u32 __user *, uaddr, int, op, u32, val,
+ t = timespec64_to_ktime(ts);
+ if (cmd == FUTEX_WAIT)
+ t = ktime_add_safe(ktime_get(), t);
++ else if (!(op & FUTEX_CLOCK_REALTIME))
++ t = timens_ktime_to_host(CLOCK_MONOTONIC, t);
+ tp = &t;
+ }
+ if (cmd == FUTEX_REQUEUE || cmd == FUTEX_CMP_REQUEUE ||
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 85d15f0362dc5..3eb35ad1b5241 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -3681,7 +3681,7 @@ void lockdep_hardirqs_on_prepare(unsigned long ip)
+ if (unlikely(in_nmi()))
+ return;
+
+- if (unlikely(__this_cpu_read(lockdep_recursion)))
++ if (unlikely(this_cpu_read(lockdep_recursion)))
+ return;
+
+ if (unlikely(lockdep_hardirqs_enabled())) {
+@@ -3750,7 +3750,7 @@ void noinstr lockdep_hardirqs_on(unsigned long ip)
+ goto skip_checks;
+ }
+
+- if (unlikely(__this_cpu_read(lockdep_recursion)))
++ if (unlikely(this_cpu_read(lockdep_recursion)))
+ return;
+
+ if (lockdep_hardirqs_enabled()) {
+diff --git a/kernel/module.c b/kernel/module.c
+index 8486123ffd7af..cc9281398f698 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -4028,7 +4028,7 @@ SYSCALL_DEFINE3(finit_module, int, fd, const char __user *, uargs, int, flags)
+ {
+ struct load_info info = { };
+ loff_t size;
+- void *hdr;
++ void *hdr = NULL;
+ int err;
+
+ err = may_init_module();
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 05d3e1375e4ca..a443b25e12ed5 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -821,6 +821,12 @@ static void trc_read_check_handler(void *t_in)
+ WRITE_ONCE(t->trc_reader_checked, true);
+ goto reset_ipi;
+ }
++ // If we are racing with an rcu_read_unlock_trace(), try again later.
++ if (unlikely(t->trc_reader_nesting < 0)) {
++ if (WARN_ON_ONCE(atomic_dec_and_test(&trc_n_readers_need_end)))
++ wake_up(&trc_wait);
++ goto reset_ipi;
++ }
+ WRITE_ONCE(t->trc_reader_checked, true);
+
+ // Get here if the task is in a read-side critical section. Set
+@@ -1072,15 +1078,17 @@ static void rcu_tasks_trace_postgp(struct rcu_tasks *rtp)
+ if (ret)
+ break; // Count reached zero.
+ // Stall warning time, so make a list of the offenders.
++ rcu_read_lock();
+ for_each_process_thread(g, t)
+ if (READ_ONCE(t->trc_reader_special.b.need_qs))
+ trc_add_holdout(t, &holdouts);
++ rcu_read_unlock();
+ firstreport = true;
+- list_for_each_entry_safe(t, g, &holdouts, trc_holdout_list)
+- if (READ_ONCE(t->trc_reader_special.b.need_qs)) {
++ list_for_each_entry_safe(t, g, &holdouts, trc_holdout_list) {
++ if (READ_ONCE(t->trc_reader_special.b.need_qs))
+ show_stalled_task_trace(t, &firstreport);
+- trc_del_holdout(t);
+- }
++ trc_del_holdout(t); // Release task_struct reference.
++ }
+ if (firstreport)
+ pr_err("INFO: rcu_tasks_trace detected stalls? (Counter/taskslist mismatch?)\n");
+ show_stalled_ipi_trace();
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 388a2ad292bf4..c8f62e2d02761 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -416,7 +416,7 @@ bool rcu_eqs_special_set(int cpu)
+ *
+ * The caller must have disabled interrupts and must not be idle.
+ */
+-void rcu_momentary_dyntick_idle(void)
++notrace void rcu_momentary_dyntick_idle(void)
+ {
+ int special;
+
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index e39008242cf4d..59d511e326730 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -102,7 +102,8 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
+ static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
+ unsigned int next_freq)
+ {
+- if (sg_policy->next_freq == next_freq)
++ if (sg_policy->next_freq == next_freq &&
++ !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS))
+ return false;
+
+ sg_policy->next_freq = next_freq;
+@@ -175,7 +176,8 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
+
+ freq = map_util_freq(util, freq, max);
+
+- if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
++ if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update &&
++ !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS))
+ return sg_policy->next_freq;
+
+ sg_policy->need_freq_update = false;
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 676d4af621038..c359ef4380ad8 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -1472,13 +1472,7 @@ static const struct file_operations seccomp_notify_ops = {
+
+ static struct file *init_listener(struct seccomp_filter *filter)
+ {
+- struct file *ret = ERR_PTR(-EBUSY);
+- struct seccomp_filter *cur;
+-
+- for (cur = current->seccomp.filter; cur; cur = cur->prev) {
+- if (cur->notif)
+- goto out;
+- }
++ struct file *ret;
+
+ ret = ERR_PTR(-ENOMEM);
+ filter->notif = kzalloc(sizeof(*(filter->notif)), GFP_KERNEL);
+@@ -1504,6 +1498,31 @@ out:
+ return ret;
+ }
+
++/*
++ * Does @new_child have a listener while an ancestor also has a listener?
++ * If so, we'll want to reject this filter.
++ * This only has to be tested for the current process, even in the TSYNC case,
++ * because TSYNC installs @child with the same parent on all threads.
++ * Note that @new_child is not hooked up to its parent at this point yet, so
++ * we use current->seccomp.filter.
++ */
++static bool has_duplicate_listener(struct seccomp_filter *new_child)
++{
++ struct seccomp_filter *cur;
++
++ /* must be protected against concurrent TSYNC */
++ lockdep_assert_held(&current->sighand->siglock);
++
++ if (!new_child->notif)
++ return false;
++ for (cur = current->seccomp.filter; cur; cur = cur->prev) {
++ if (cur->notif)
++ return true;
++ }
++
++ return false;
++}
++
+ /**
+ * seccomp_set_mode_filter: internal function for setting seccomp filter
+ * @flags: flags to change filter behavior
+@@ -1575,6 +1594,11 @@ static long seccomp_set_mode_filter(unsigned int flags,
+ if (!seccomp_may_assign_mode(seccomp_mode))
+ goto out;
+
++ if (has_duplicate_listener(prepared)) {
++ ret = -EBUSY;
++ goto out;
++ }
++
+ ret = seccomp_attach_filter(flags, prepared);
+ if (ret)
+ goto out;
+diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
+index 865bb0228ab66..890b79cf0e7c3 100644
+--- a/kernel/stop_machine.c
++++ b/kernel/stop_machine.c
+@@ -178,7 +178,7 @@ static void ack_state(struct multi_stop_data *msdata)
+ set_state(msdata, msdata->state + 1);
+ }
+
+-void __weak stop_machine_yield(const struct cpumask *cpumask)
++notrace void __weak stop_machine_yield(const struct cpumask *cpumask)
+ {
+ cpu_relax();
+ }
+diff --git a/kernel/time/itimer.c b/kernel/time/itimer.c
+index ca4e6d57d68b9..00629e658ca19 100644
+--- a/kernel/time/itimer.c
++++ b/kernel/time/itimer.c
+@@ -172,10 +172,6 @@ static void set_cpu_itimer(struct task_struct *tsk, unsigned int clock_id,
+ u64 oval, nval, ointerval, ninterval;
+ struct cpu_itimer *it = &tsk->signal->it[clock_id];
+
+- /*
+- * Use the to_ktime conversion because that clamps the maximum
+- * value to KTIME_MAX and avoid multiplication overflows.
+- */
+ nval = timespec64_to_ns(&value->it_value);
+ ninterval = timespec64_to_ns(&value->it_interval);
+
+diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
+index 1c03eec6ca9b9..4b7bdd7a5f27c 100644
+--- a/kernel/time/sched_clock.c
++++ b/kernel/time/sched_clock.c
+@@ -68,13 +68,13 @@ static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
+ return (cyc * mult) >> shift;
+ }
+
+-struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
++notrace struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
+ {
+ *seq = raw_read_seqcount_latch(&cd.seq);
+ return cd.read_data + (*seq & 1);
+ }
+
+-int sched_clock_read_retry(unsigned int seq)
++notrace int sched_clock_read_retry(unsigned int seq)
+ {
+ return read_seqcount_retry(&cd.seq, seq);
+ }
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 93ef0ab6ea201..5c6a9c6a058fa 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1952,18 +1952,18 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ {
+ struct ring_buffer_per_cpu *cpu_buffer;
+ unsigned long nr_pages;
+- int cpu, err = 0;
++ int cpu, err;
+
+ /*
+ * Always succeed at resizing a non-existent buffer:
+ */
+ if (!buffer)
+- return size;
++ return 0;
+
+ /* Make sure the requested buffer exists */
+ if (cpu_id != RING_BUFFER_ALL_CPUS &&
+ !cpumask_test_cpu(cpu_id, buffer->cpumask))
+- return size;
++ return 0;
+
+ nr_pages = DIV_ROUND_UP(size, BUF_PAGE_SIZE);
+
+@@ -2119,7 +2119,7 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ }
+
+ mutex_unlock(&buffer->mutex);
+- return size;
++ return 0;
+
+ out_err:
+ for_each_buffer_cpu(buffer, cpu) {
+@@ -4866,6 +4866,9 @@ void ring_buffer_reset_cpu(struct trace_buffer *buffer, int cpu)
+ if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ return;
+
++ /* prevent another thread from changing buffer sizes */
++ mutex_lock(&buffer->mutex);
++
+ atomic_inc(&cpu_buffer->resize_disabled);
+ atomic_inc(&cpu_buffer->record_disabled);
+
+@@ -4876,6 +4879,8 @@ void ring_buffer_reset_cpu(struct trace_buffer *buffer, int cpu)
+
+ atomic_dec(&cpu_buffer->record_disabled);
+ atomic_dec(&cpu_buffer->resize_disabled);
++
++ mutex_unlock(&buffer->mutex);
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
+
+@@ -4889,6 +4894,9 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
+ struct ring_buffer_per_cpu *cpu_buffer;
+ int cpu;
+
++ /* prevent another thread from changing buffer sizes */
++ mutex_lock(&buffer->mutex);
++
+ for_each_online_buffer_cpu(buffer, cpu) {
+ cpu_buffer = buffer->buffers[cpu];
+
+@@ -4907,6 +4915,8 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
+ atomic_dec(&cpu_buffer->record_disabled);
+ atomic_dec(&cpu_buffer->resize_disabled);
+ }
++
++ mutex_unlock(&buffer->mutex);
+ }
+
+ /**
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index c8892156db341..65e8c27141c02 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -465,6 +465,7 @@ static struct synth_field *parse_synth_field(int argc, const char **argv,
+ struct synth_field *field;
+ const char *prefix = NULL, *field_type = argv[0], *field_name, *array;
+ int len, ret = 0;
++ struct seq_buf s;
+ ssize_t size;
+
+ if (field_type[0] == ';')
+@@ -503,13 +504,9 @@ static struct synth_field *parse_synth_field(int argc, const char **argv,
+ field_type++;
+ len = strlen(field_type) + 1;
+
+- if (array) {
+- int l = strlen(array);
++ if (array)
++ len += strlen(array);
+
+- if (l && array[l - 1] == ';')
+- l--;
+- len += l;
+- }
+ if (prefix)
+ len += strlen(prefix);
+
+@@ -518,14 +515,18 @@ static struct synth_field *parse_synth_field(int argc, const char **argv,
+ ret = -ENOMEM;
+ goto free;
+ }
++ seq_buf_init(&s, field->type, len);
+ if (prefix)
+- strcat(field->type, prefix);
+- strcat(field->type, field_type);
++ seq_buf_puts(&s, prefix);
++ seq_buf_puts(&s, field_type);
+ if (array) {
+- strcat(field->type, array);
+- if (field->type[len - 1] == ';')
+- field->type[len - 1] = '\0';
++ seq_buf_puts(&s, array);
++ if (s.buffer[s.len - 1] == ';')
++ s.len--;
+ }
++ if (WARN_ON_ONCE(!seq_buf_buffer_left(&s)))
++ goto free;
++ s.buffer[s.len] = '\0';
+
+ size = synth_field_size(field->type);
+ if (size <= 0) {
+diff --git a/lib/scatterlist.c b/lib/scatterlist.c
+index 5d63a8857f361..c448642e0f786 100644
+--- a/lib/scatterlist.c
++++ b/lib/scatterlist.c
+@@ -514,7 +514,7 @@ struct scatterlist *sgl_alloc_order(unsigned long long length,
+ elem_len = min_t(u64, length, PAGE_SIZE << order);
+ page = alloc_pages(gfp, order);
+ if (!page) {
+- sgl_free(sgl);
++ sgl_free_order(sgl, order);
+ return NULL;
+ }
+
+diff --git a/mm/slab.c b/mm/slab.c
+index f658e86ec8cee..5c70600d8b1cc 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -3440,7 +3440,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
+ memset(objp, 0, cachep->object_size);
+ kmemleak_free_recursive(objp, cachep->flags);
+ objp = cache_free_debugcheck(cachep, objp, caller);
+- memcg_slab_free_hook(cachep, virt_to_head_page(objp), objp);
++ memcg_slab_free_hook(cachep, &objp, 1);
+
+ /*
+ * Skip calling cache_free_alien() when the platform is not numa.
+diff --git a/mm/slab.h b/mm/slab.h
+index 6cc323f1313af..6dd4b702888a7 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -345,30 +345,42 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
+ obj_cgroup_put(objcg);
+ }
+
+-static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page,
+- void *p)
++static inline void memcg_slab_free_hook(struct kmem_cache *s_orig,
++ void **p, int objects)
+ {
++ struct kmem_cache *s;
+ struct obj_cgroup *objcg;
++ struct page *page;
+ unsigned int off;
++ int i;
+
+ if (!memcg_kmem_enabled())
+ return;
+
+- if (!page_has_obj_cgroups(page))
+- return;
++ for (i = 0; i < objects; i++) {
++ if (unlikely(!p[i]))
++ continue;
+
+- off = obj_to_index(s, page, p);
+- objcg = page_obj_cgroups(page)[off];
+- page_obj_cgroups(page)[off] = NULL;
++ page = virt_to_head_page(p[i]);
++ if (!page_has_obj_cgroups(page))
++ continue;
+
+- if (!objcg)
+- return;
++ if (!s_orig)
++ s = page->slab_cache;
++ else
++ s = s_orig;
+
+- obj_cgroup_uncharge(objcg, obj_full_size(s));
+- mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s),
+- -obj_full_size(s));
++ off = obj_to_index(s, page, p[i]);
++ objcg = page_obj_cgroups(page)[off];
++ if (!objcg)
++ continue;
+
+- obj_cgroup_put(objcg);
++ page_obj_cgroups(page)[off] = NULL;
++ obj_cgroup_uncharge(objcg, obj_full_size(s));
++ mod_objcg_state(objcg, page_pgdat(page), cache_vmstat_idx(s),
++ -obj_full_size(s));
++ obj_cgroup_put(objcg);
++ }
+ }
+
+ #else /* CONFIG_MEMCG_KMEM */
+@@ -406,8 +418,8 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
+ {
+ }
+
+-static inline void memcg_slab_free_hook(struct kmem_cache *s, struct page *page,
+- void *p)
++static inline void memcg_slab_free_hook(struct kmem_cache *s,
++ void **p, int objects)
+ {
+ }
+ #endif /* CONFIG_MEMCG_KMEM */
+diff --git a/mm/slub.c b/mm/slub.c
+index 6d3574013b2f8..0cbe67f13946e 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -3091,7 +3091,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
+ struct kmem_cache_cpu *c;
+ unsigned long tid;
+
+- memcg_slab_free_hook(s, page, head);
++ memcg_slab_free_hook(s, &head, 1);
+ redo:
+ /*
+ * Determine the currently cpus per cpu slab.
+@@ -3253,6 +3253,7 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
+ if (WARN_ON(!size))
+ return;
+
++ memcg_slab_free_hook(s, p, size);
+ do {
+ struct detached_freelist df;
+
+diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
+index c0762a302162c..8f528e783a6c5 100644
+--- a/net/9p/trans_fd.c
++++ b/net/9p/trans_fd.c
+@@ -1023,7 +1023,7 @@ p9_fd_create_unix(struct p9_client *client, const char *addr, char *args)
+
+ csocket = NULL;
+
+- if (addr == NULL)
++ if (!addr || !strlen(addr))
+ return -EINVAL;
+
+ if (strlen(addr) >= UNIX_PATH_MAX) {
+diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
+index d4d7a0e524910..9d4a21b819c19 100644
+--- a/net/ceph/messenger.c
++++ b/net/ceph/messenger.c
+@@ -2998,6 +2998,11 @@ static void con_fault(struct ceph_connection *con)
+ ceph_msg_put(con->in_msg);
+ con->in_msg = NULL;
+ }
++ if (con->out_msg) {
++ BUG_ON(con->out_msg->con != con);
++ ceph_msg_put(con->out_msg);
++ con->out_msg = NULL;
++ }
+
+ /* Requeue anything that hasn't been acked */
+ list_splice_init(&con->out_sent, &con->out_queue);
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index dca01d7e6e3e0..282b0bc201eeb 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -4209,6 +4209,12 @@ static void ieee80211_8023_xmit(struct ieee80211_sub_if_data *sdata,
+ if (is_zero_ether_addr(ra))
+ goto out_free;
+
++ if (local->ops->wake_tx_queue) {
++ u16 queue = __ieee80211_select_queue(sdata, sta, skb);
++ skb_set_queue_mapping(skb, queue);
++ skb_get_hash(skb);
++ }
++
+ multicast = is_multicast_ether_addr(ra);
+
+ if (sta)
+diff --git a/net/sunrpc/sysctl.c b/net/sunrpc/sysctl.c
+index 999eee1ed61c9..e81a28f30f1d2 100644
+--- a/net/sunrpc/sysctl.c
++++ b/net/sunrpc/sysctl.c
+@@ -70,7 +70,13 @@ static int proc_do_xprt(struct ctl_table *table, int write,
+ return 0;
+ }
+ len = svc_print_xprts(tmpbuf, sizeof(tmpbuf));
+- return memory_read_from_buffer(buffer, *lenp, ppos, tmpbuf, len);
++ *lenp = memory_read_from_buffer(buffer, *lenp, ppos, tmpbuf, len);
++
++ if (*lenp < 0) {
++ *lenp = 0;
++ return -EINVAL;
++ }
++ return 0;
+ }
+
+ static int
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 5a8e47bbfb9f4..13fbc2dd4196a 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -1520,10 +1520,13 @@ xprt_transmit(struct rpc_task *task)
+ {
+ struct rpc_rqst *next, *req = task->tk_rqstp;
+ struct rpc_xprt *xprt = req->rq_xprt;
+- int status;
++ int counter, status;
+
+ spin_lock(&xprt->queue_lock);
++ counter = 0;
+ while (!list_empty(&xprt->xmit_queue)) {
++ if (++counter == 20)
++ break;
+ next = list_first_entry(&xprt->xmit_queue,
+ struct rpc_rqst, rq_xmit);
+ xprt_pin_rqst(next);
+@@ -1531,7 +1534,6 @@ xprt_transmit(struct rpc_task *task)
+ status = xprt_request_transmit(next, task);
+ if (status == -EBADMSG && next != req)
+ status = 0;
+- cond_resched();
+ spin_lock(&xprt->queue_lock);
+ xprt_unpin_rqst(next);
+ if (status == 0) {
+diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
+index c821e98671393..63a9a2a39da7b 100644
+--- a/samples/bpf/xdpsock_user.c
++++ b/samples/bpf/xdpsock_user.c
+@@ -1111,6 +1111,7 @@ static void l2fwd(struct xsk_socket_info *xsk, struct pollfd *fds)
+ while (ret != rcvd) {
+ if (ret < 0)
+ exit_with_error(-ret);
++ complete_tx_l2fwd(xsk, fds);
+ if (xsk_ring_prod__needs_wakeup(&xsk->tx))
+ kick_tx(xsk);
+ ret = xsk_ring_prod__reserve(&xsk->tx, rcvd, &idx_tx);
+diff --git a/security/integrity/digsig.c b/security/integrity/digsig.c
+index e9cbadade74bd..ac02b7632353e 100644
+--- a/security/integrity/digsig.c
++++ b/security/integrity/digsig.c
+@@ -169,7 +169,7 @@ int __init integrity_add_key(const unsigned int id, const void *data,
+
+ int __init integrity_load_x509(const unsigned int id, const char *path)
+ {
+- void *data;
++ void *data = NULL;
+ loff_t size;
+ int rc;
+ key_perm_t perm;
+diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
+index e3fcad871861a..15a44c5022f77 100644
+--- a/security/integrity/ima/ima_fs.c
++++ b/security/integrity/ima/ima_fs.c
+@@ -272,7 +272,7 @@ static const struct file_operations ima_ascii_measurements_ops = {
+
+ static ssize_t ima_read_policy(char *path)
+ {
+- void *data;
++ void *data = NULL;
+ char *datap;
+ loff_t size;
+ int rc, pathlen = strlen(path);
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 4c86cd4eece0c..e22caa833b7d9 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -621,19 +621,17 @@ void ima_post_path_mknod(struct dentry *dentry)
+ int ima_read_file(struct file *file, enum kernel_read_file_id read_id)
+ {
+ /*
+- * READING_FIRMWARE_PREALLOC_BUFFER
+- *
+ * Do devices using pre-allocated memory run the risk of the
+ * firmware being accessible to the device prior to the completion
+ * of IMA's signature verification any more than when using two
+- * buffers?
++ * buffers? It may be desirable to include the buffer address
++ * in this API and walk all the dma_map_single() mappings to check.
+ */
+ return 0;
+ }
+
+ const int read_idmap[READING_MAX_ID] = {
+ [READING_FIRMWARE] = FIRMWARE_CHECK,
+- [READING_FIRMWARE_PREALLOC_BUFFER] = FIRMWARE_CHECK,
+ [READING_MODULE] = MODULE_CHECK,
+ [READING_KEXEC_IMAGE] = KEXEC_KERNEL_CHECK,
+ [READING_KEXEC_INITRAMFS] = KEXEC_INITRAMFS_CHECK,
+diff --git a/security/selinux/include/security.h b/security/selinux/include/security.h
+index b0e02cfe3ce14..8a432f646967e 100644
+--- a/security/selinux/include/security.h
++++ b/security/selinux/include/security.h
+@@ -177,49 +177,49 @@ static inline bool selinux_policycap_netpeer(void)
+ {
+ struct selinux_state *state = &selinux_state;
+
+- return state->policycap[POLICYDB_CAPABILITY_NETPEER];
++ return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_NETPEER]);
+ }
+
+ static inline bool selinux_policycap_openperm(void)
+ {
+ struct selinux_state *state = &selinux_state;
+
+- return state->policycap[POLICYDB_CAPABILITY_OPENPERM];
++ return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_OPENPERM]);
+ }
+
+ static inline bool selinux_policycap_extsockclass(void)
+ {
+ struct selinux_state *state = &selinux_state;
+
+- return state->policycap[POLICYDB_CAPABILITY_EXTSOCKCLASS];
++ return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_EXTSOCKCLASS]);
+ }
+
+ static inline bool selinux_policycap_alwaysnetwork(void)
+ {
+ struct selinux_state *state = &selinux_state;
+
+- return state->policycap[POLICYDB_CAPABILITY_ALWAYSNETWORK];
++ return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_ALWAYSNETWORK]);
+ }
+
+ static inline bool selinux_policycap_cgroupseclabel(void)
+ {
+ struct selinux_state *state = &selinux_state;
+
+- return state->policycap[POLICYDB_CAPABILITY_CGROUPSECLABEL];
++ return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_CGROUPSECLABEL]);
+ }
+
+ static inline bool selinux_policycap_nnp_nosuid_transition(void)
+ {
+ struct selinux_state *state = &selinux_state;
+
+- return state->policycap[POLICYDB_CAPABILITY_NNP_NOSUID_TRANSITION];
++ return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_NNP_NOSUID_TRANSITION]);
+ }
+
+ static inline bool selinux_policycap_genfs_seclabel_symlinks(void)
+ {
+ struct selinux_state *state = &selinux_state;
+
+- return state->policycap[POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS];
++ return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS]);
+ }
+
+ int security_mls_enabled(struct selinux_state *state);
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index 1caf4e6033096..c55b3063753ab 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -2103,7 +2103,8 @@ static void security_load_policycaps(struct selinux_state *state)
+ struct ebitmap_node *node;
+
+ for (i = 0; i < ARRAY_SIZE(state->policycap); i++)
+- state->policycap[i] = ebitmap_get_bit(&p->policycaps, i);
++ WRITE_ONCE(state->policycap[i],
++ ebitmap_get_bit(&p->policycaps, i));
+
+ for (i = 0; i < ARRAY_SIZE(selinux_policycap_names); i++)
+ pr_info("SELinux: policy capability %s=%d\n",
+diff --git a/sound/soc/amd/acp3x-rt5682-max9836.c b/sound/soc/amd/acp3x-rt5682-max9836.c
+index 406526e79af34..1a4e8ca0f99c2 100644
+--- a/sound/soc/amd/acp3x-rt5682-max9836.c
++++ b/sound/soc/amd/acp3x-rt5682-max9836.c
+@@ -472,12 +472,17 @@ static int acp3x_probe(struct platform_device *pdev)
+
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
+ if (ret) {
+- dev_err(&pdev->dev,
++ if (ret != -EPROBE_DEFER)
++ dev_err(&pdev->dev,
+ "devm_snd_soc_register_card(%s) failed: %d\n",
+ card->name, ret);
+- return ret;
++ else
++ dev_dbg(&pdev->dev,
++ "devm_snd_soc_register_card(%s) probe deferred: %d\n",
++ card->name, ret);
+ }
+- return 0;
++
++ return ret;
+ }
+
+ static const struct acpi_device_id acp3x_audio_acpi_match[] = {
+diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
+index 2c5c451fa19d7..c475955c6eeba 100644
+--- a/sound/soc/sof/intel/hda-codec.c
++++ b/sound/soc/sof/intel/hda-codec.c
+@@ -151,7 +151,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ if (!hdev->bus->audio_component) {
+ dev_dbg(sdev->dev,
+ "iDisp hw present but no driver\n");
+- return -ENOENT;
++ goto error;
+ }
+ hda_priv->need_display_power = true;
+ }
+@@ -174,7 +174,7 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ * other return codes without modification
+ */
+ if (ret == 0)
+- ret = -ENOENT;
++ goto error;
+ }
+
+ return ret;
+diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
+index 404d4c569c01e..695ed3ffa3a6d 100644
+--- a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
++++ b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
+@@ -249,6 +249,24 @@
+ "BriefDescription": "Cycles with fill pending from L2. Total cycles spent with one or more fill requests in flight from L2.",
+ "UMask": "0x1"
+ },
++ {
++ "EventName": "l2_pf_hit_l2",
++ "EventCode": "0x70",
++ "BriefDescription": "L2 prefetch hit in L2.",
++ "UMask": "0xff"
++ },
++ {
++ "EventName": "l2_pf_miss_l2_hit_l3",
++ "EventCode": "0x71",
++ "BriefDescription": "L2 prefetcher hits in L3. Counts all L2 prefetches accepted by the L2 pipeline which miss the L2 cache and hit the L3.",
++ "UMask": "0xff"
++ },
++ {
++ "EventName": "l2_pf_miss_l2_l3",
++ "EventCode": "0x72",
++ "BriefDescription": "L2 prefetcher misses in L3. All L2 prefetches accepted by the L2 pipeline which miss the L2 and the L3 caches.",
++ "UMask": "0xff"
++ },
+ {
+ "EventName": "l3_request_g1.caching_l3_cache_accesses",
+ "EventCode": "0x01",
+diff --git a/tools/perf/util/print_binary.c b/tools/perf/util/print_binary.c
+index 599a1543871de..13fdc51c61d96 100644
+--- a/tools/perf/util/print_binary.c
++++ b/tools/perf/util/print_binary.c
+@@ -50,7 +50,7 @@ int is_printable_array(char *p, unsigned int len)
+
+ len--;
+
+- for (i = 0; i < len; i++) {
++ for (i = 0; i < len && p[i]; i++) {
+ if (!isprint(p[i]) && !isspace(p[i]))
+ return 0;
+ }
+diff --git a/tools/testing/selftests/bpf/progs/test_sysctl_prog.c b/tools/testing/selftests/bpf/progs/test_sysctl_prog.c
+index 50525235380e8..5489823c83fc2 100644
+--- a/tools/testing/selftests/bpf/progs/test_sysctl_prog.c
++++ b/tools/testing/selftests/bpf/progs/test_sysctl_prog.c
+@@ -19,11 +19,11 @@
+ #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+ #endif
+
++const char tcp_mem_name[] = "net/ipv4/tcp_mem";
+ static __always_inline int is_tcp_mem(struct bpf_sysctl *ctx)
+ {
+- char tcp_mem_name[] = "net/ipv4/tcp_mem";
+ unsigned char i;
+- char name[64];
++ char name[sizeof(tcp_mem_name)];
+ int ret;
+
+ memset(name, 0, sizeof(name));
+diff --git a/tools/testing/selftests/powerpc/utils.c b/tools/testing/selftests/powerpc/utils.c
+index 18b6a773d5c73..638ffacc90aa1 100644
+--- a/tools/testing/selftests/powerpc/utils.c
++++ b/tools/testing/selftests/powerpc/utils.c
+@@ -318,7 +318,9 @@ int using_hash_mmu(bool *using_hash)
+
+ rc = 0;
+ while (fgets(line, sizeof(line), f) != NULL) {
+- if (strcmp(line, "MMU : Hash\n") == 0) {
++ if (!strcmp(line, "MMU : Hash\n") ||
++ !strcmp(line, "platform : Cell\n") ||
++ !strcmp(line, "platform : PowerMac\n")) {
+ *using_hash = true;
+ goto out;
+ }
+diff --git a/tools/testing/selftests/x86/fsgsbase.c b/tools/testing/selftests/x86/fsgsbase.c
+index 9983195535237..7161cfc2e60b4 100644
+--- a/tools/testing/selftests/x86/fsgsbase.c
++++ b/tools/testing/selftests/x86/fsgsbase.c
+@@ -443,6 +443,68 @@ static void test_unexpected_base(void)
+
+ #define USER_REGS_OFFSET(r) offsetof(struct user_regs_struct, r)
+
++static void test_ptrace_write_gs_read_base(void)
++{
++ int status;
++ pid_t child = fork();
++
++ if (child < 0)
++ err(1, "fork");
++
++ if (child == 0) {
++ printf("[RUN]\tPTRACE_POKE GS, read GSBASE back\n");
++
++ printf("[RUN]\tARCH_SET_GS to 1\n");
++ if (syscall(SYS_arch_prctl, ARCH_SET_GS, 1) != 0)
++ err(1, "ARCH_SET_GS");
++
++ if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) != 0)
++ err(1, "PTRACE_TRACEME");
++
++ raise(SIGTRAP);
++ _exit(0);
++ }
++
++ wait(&status);
++
++ if (WSTOPSIG(status) == SIGTRAP) {
++ unsigned long base;
++ unsigned long gs_offset = USER_REGS_OFFSET(gs);
++ unsigned long base_offset = USER_REGS_OFFSET(gs_base);
++
++ /* Read the initial base. It should be 1. */
++ base = ptrace(PTRACE_PEEKUSER, child, base_offset, NULL);
++ if (base == 1) {
++ printf("[OK]\tGSBASE started at 1\n");
++ } else {
++ nerrs++;
++ printf("[FAIL]\tGSBASE started at 0x%lx\n", base);
++ }
++
++ printf("[RUN]\tSet GS = 0x7, read GSBASE\n");
++
++ /* Poke an LDT selector into GS. */
++ if (ptrace(PTRACE_POKEUSER, child, gs_offset, 0x7) != 0)
++ err(1, "PTRACE_POKEUSER");
++
++ /* And read the base. */
++ base = ptrace(PTRACE_PEEKUSER, child, base_offset, NULL);
++
++ if (base == 0 || base == 1) {
++ printf("[OK]\tGSBASE reads as 0x%lx with invalid GS\n", base);
++ } else {
++ nerrs++;
++ printf("[FAIL]\tGSBASE=0x%lx (should be 0 or 1)\n", base);
++ }
++ }
++
++ ptrace(PTRACE_CONT, child, NULL, NULL);
++
++ wait(&status);
++ if (!WIFEXITED(status))
++ printf("[WARN]\tChild didn't exit cleanly.\n");
++}
++
+ static void test_ptrace_write_gsbase(void)
+ {
+ int status;
+@@ -517,6 +579,9 @@ static void test_ptrace_write_gsbase(void)
+
+ END:
+ ptrace(PTRACE_CONT, child, NULL, NULL);
++ wait(&status);
++ if (!WIFEXITED(status))
++ printf("[WARN]\tChild didn't exit cleanly.\n");
+ }
+
+ int main()
+@@ -526,6 +591,9 @@ int main()
+ shared_scratch = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
+ MAP_ANONYMOUS | MAP_SHARED, -1, 0);
+
++ /* Do these tests before we have an LDT. */
++ test_ptrace_write_gs_read_base();
++
+ /* Probe FSGSBASE */
+ sethandler(SIGILL, sigill, 0);
+ if (sigsetjmp(jmpbuf, 1) == 0) {
diff --git a/1005_linux-5.9.6.patch b/1005_linux-5.9.6.patch
new file mode 100644
index 0000000..9cd9bb2
--- /dev/null
+++ b/1005_linux-5.9.6.patch
@@ -0,0 +1,29 @@
+diff --git a/Makefile b/Makefile
+index 27d4fe12da24c..2fed32cac74e2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 9
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
+index c475955c6eeba..9500572c0e312 100644
+--- a/sound/soc/sof/intel/hda-codec.c
++++ b/sound/soc/sof/intel/hda-codec.c
+@@ -178,6 +178,11 @@ static int hda_codec_probe(struct snd_sof_dev *sdev, int address,
+ }
+
+ return ret;
++
++error:
++ snd_hdac_ext_bus_device_exit(hdev);
++ return -ENOENT;
++
+ #else
+ hdev = devm_kzalloc(sdev->dev, sizeof(*hdev), GFP_KERNEL);
+ if (!hdev)