From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.19 commit in: /
Date: Thu, 25 Aug 2022 10:31:55 +0000 (UTC)
Message-ID: <1661423503.fbe75912097fd2d987df503261ed461b94514678.mpagano@gentoo>
commit: fbe75912097fd2d987df503261ed461b94514678
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 25 10:31:43 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Aug 25 10:31:43 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=fbe75912
Linux patch 5.19.4
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1003_linux-5.19.4.patch | 17636 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 17640 insertions(+)
diff --git a/0000_README b/0000_README
index 7a9bbb26..920f6ada 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-5.19.3.patch
From: http://www.kernel.org
Desc: Linux 5.19.3
+Patch: 1003_linux-5.19.4.patch
+From: http://www.kernel.org
+Desc: Linux 5.19.4
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1003_linux-5.19.4.patch b/1003_linux-5.19.4.patch
new file mode 100644
index 00000000..0df49d2c
--- /dev/null
+++ b/1003_linux-5.19.4.patch
@@ -0,0 +1,17636 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index ddccd10774623..9b7fa1baf225b 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -592,6 +592,18 @@ to the guest kernel command line (see
+ Documentation/admin-guide/kernel-parameters.rst).
+
+
++nmi_wd_lpm_factor (PPC only)
++============================
++
++Factor to apply to the NMI watchdog timeout (only when ``nmi_watchdog`` is
++set to 1). This factor represents the percentage added to
++``watchdog_thresh`` when calculating the NMI watchdog timeout during an
++LPM. The soft lockup timeout is not impacted.
++
++A value of 0 means no change. The default value is 200 meaning the NMI
++watchdog is set to 30s (based on ``watchdog_thresh`` equal to 10).
++
++
+ numa_balancing
+ ==============
+
+diff --git a/Documentation/atomic_bitops.txt b/Documentation/atomic_bitops.txt
+index 093cdaefdb373..d8b101c97031b 100644
+--- a/Documentation/atomic_bitops.txt
++++ b/Documentation/atomic_bitops.txt
+@@ -59,7 +59,7 @@ Like with atomic_t, the rule of thumb is:
+ - RMW operations that have a return value are fully ordered.
+
+ - RMW operations that are conditional are unordered on FAILURE,
+- otherwise the above rules apply. In the case of test_and_{}_bit() operations,
++ otherwise the above rules apply. In the case of test_and_set_bit_lock(),
+ if the bit in memory is unchanged by the operation then it is deemed to have
+ failed.
+
+diff --git a/Documentation/devicetree/bindings/arm/qcom.yaml b/Documentation/devicetree/bindings/arm/qcom.yaml
+index 5c06d1bfc046c..88759f8a3049f 100644
+--- a/Documentation/devicetree/bindings/arm/qcom.yaml
++++ b/Documentation/devicetree/bindings/arm/qcom.yaml
+@@ -153,28 +153,34 @@ properties:
+ - const: qcom,msm8974
+
+ - items:
+- - enum:
+- - alcatel,idol347
+- - const: qcom,msm8916-mtp/1
+ - const: qcom,msm8916-mtp
++ - const: qcom,msm8916-mtp/1
+ - const: qcom,msm8916
+
+ - items:
+ - enum:
+- - longcheer,l8150
++ - alcatel,idol347
+ - samsung,a3u-eur
+ - samsung,a5u-eur
+ - const: qcom,msm8916
+
++ - items:
++ - const: longcheer,l8150
++ - const: qcom,msm8916-v1-qrd/9-v1
++ - const: qcom,msm8916
++
+ - items:
+ - enum:
+ - sony,karin_windy
++ - const: qcom,apq8094
++
++ - items:
++ - enum:
+ - sony,karin-row
+ - sony,satsuki-row
+ - sony,sumire-row
+ - sony,suzuran-row
+- - qcom,msm8994
+- - const: qcom,apq8094
++ - const: qcom,msm8994
+
+ - items:
+ - enum:
+diff --git a/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml b/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml
+index 5a5b2214f0cae..005e0edd4609a 100644
+--- a/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml
++++ b/Documentation/devicetree/bindings/clock/qcom,gcc-msm8996.yaml
+@@ -22,16 +22,32 @@ properties:
+ const: qcom,gcc-msm8996
+
+ clocks:
++ minItems: 3
+ items:
+ - description: XO source
+ - description: Second XO source
+ - description: Sleep clock source
++ - description: PCIe 0 PIPE clock (optional)
++ - description: PCIe 1 PIPE clock (optional)
++ - description: PCIe 2 PIPE clock (optional)
++ - description: USB3 PIPE clock (optional)
++ - description: UFS RX symbol 0 clock (optional)
++ - description: UFS RX symbol 1 clock (optional)
++ - description: UFS TX symbol 0 clock (optional)
+
+ clock-names:
++ minItems: 3
+ items:
+ - const: cxo
+ - const: cxo2
+ - const: sleep_clk
++ - const: pcie_0_pipe_clk_src
++ - const: pcie_1_pipe_clk_src
++ - const: pcie_2_pipe_clk_src
++ - const: usb3_phy_pipe_clk_src
++ - const: ufs_rx_symbol_0_clk_src
++ - const: ufs_rx_symbol_1_clk_src
++ - const: ufs_tx_symbol_0_clk_src
+
+ '#clock-cells':
+ const: 1
+diff --git a/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml b/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml
+index 4a92a4c7dcd70..f8168986a0a9e 100644
+--- a/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml
++++ b/Documentation/devicetree/bindings/display/allwinner,sun4i-a10-tcon.yaml
+@@ -233,6 +233,7 @@ allOf:
+ - allwinner,sun8i-a83t-tcon-lcd
+ - allwinner,sun8i-v3s-tcon
+ - allwinner,sun9i-a80-tcon-lcd
++ - allwinner,sun20i-d1-tcon-lcd
+
+ then:
+ properties:
+@@ -252,6 +253,7 @@ allOf:
+ - allwinner,sun8i-a83t-tcon-tv
+ - allwinner,sun8i-r40-tcon-tv
+ - allwinner,sun9i-a80-tcon-tv
++ - allwinner,sun20i-d1-tcon-tv
+
+ then:
+ properties:
+@@ -278,6 +280,7 @@ allOf:
+ - allwinner,sun9i-a80-tcon-lcd
+ - allwinner,sun4i-a10-tcon
+ - allwinner,sun8i-a83t-tcon-lcd
++ - allwinner,sun20i-d1-tcon-lcd
+
+ then:
+ required:
+@@ -294,6 +297,7 @@ allOf:
+ - allwinner,sun8i-a23-tcon
+ - allwinner,sun8i-a33-tcon
+ - allwinner,sun8i-a83t-tcon-lcd
++ - allwinner,sun20i-d1-tcon-lcd
+
+ then:
+ properties:
+diff --git a/Documentation/devicetree/bindings/gpio/gpio-zynq.yaml b/Documentation/devicetree/bindings/gpio/gpio-zynq.yaml
+index 378da2649e668..980f92ad9eba2 100644
+--- a/Documentation/devicetree/bindings/gpio/gpio-zynq.yaml
++++ b/Documentation/devicetree/bindings/gpio/gpio-zynq.yaml
+@@ -11,7 +11,11 @@ maintainers:
+
+ properties:
+ compatible:
+- const: xlnx,zynq-gpio-1.0
++ enum:
++ - xlnx,zynq-gpio-1.0
++ - xlnx,zynqmp-gpio-1.0
++ - xlnx,versal-gpio-1.0
++ - xlnx,pmc-gpio-1.0
+
+ reg:
+ maxItems: 1
+diff --git a/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml b/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml
+index a3a1e5a65306b..32d0d51903342 100644
+--- a/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml
++++ b/Documentation/devicetree/bindings/input/azoteq,iqs7222.yaml
+@@ -37,10 +37,6 @@ properties:
+ device is temporarily held in hardware reset prior to initialization if
+ this property is present.
+
+- azoteq,rf-filt-enable:
+- type: boolean
+- description: Enables the device's internal RF filter.
+-
+ azoteq,max-counts:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ enum: [0, 1, 2, 3]
+@@ -537,9 +533,8 @@ patternProperties:
+
+ azoteq,bottom-speed:
+ $ref: /schemas/types.yaml#/definitions/uint32
+- multipleOf: 4
+ minimum: 0
+- maximum: 1020
++ maximum: 255
+ description:
+ Specifies the speed of movement after which coordinate filtering is
+ linearly reduced.
+@@ -616,16 +611,15 @@ patternProperties:
+ azoteq,gpio-select:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 1
+- maxItems: 1
++ maxItems: 3
+ items:
+ minimum: 0
+- maximum: 0
++ maximum: 2
+ description: |
+- Specifies an individual GPIO mapped to a tap, swipe or flick
+- gesture as follows:
++ Specifies one or more GPIO mapped to the event as follows:
+ 0: GPIO0
+- 1: GPIO3 (reserved)
+- 2: GPIO4 (reserved)
++ 1: GPIO3 (IQS7222C only)
++ 2: GPIO4 (IQS7222C only)
+
+ Note that although multiple events can be mapped to a single
+ GPIO, they must all be of the same type (proximity, touch or
+@@ -710,6 +704,14 @@ allOf:
+ multipleOf: 4
+ maximum: 1020
+
++ patternProperties:
++ "^event-(press|tap|(swipe|flick)-(pos|neg))$":
++ properties:
++ azoteq,gpio-select:
++ maxItems: 1
++ items:
++ maximum: 0
++
+ else:
+ patternProperties:
+ "^channel-([0-9]|1[0-9])$":
+@@ -726,8 +728,6 @@ allOf:
+
+ azoteq,gesture-dist: false
+
+- azoteq,gpio-select: false
+-
+ required:
+ - compatible
+ - reg
+diff --git a/Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml b/Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml
+index 30f7b596d609b..59663e897dae9 100644
+--- a/Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml
++++ b/Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml
+@@ -98,6 +98,8 @@ examples:
+ capacity-dmips-mhz = <1024>;
+ clocks = <&kryocc 0>;
+ operating-points-v2 = <&cluster0_opp>;
++ power-domains = <&cpr>;
++ power-domain-names = "cpr";
+ #cooling-cells = <2>;
+ next-level-cache = <&L2_0>;
+ L2_0: l2-cache {
+@@ -115,6 +117,8 @@ examples:
+ capacity-dmips-mhz = <1024>;
+ clocks = <&kryocc 0>;
+ operating-points-v2 = <&cluster0_opp>;
++ power-domains = <&cpr>;
++ power-domain-names = "cpr";
+ #cooling-cells = <2>;
+ next-level-cache = <&L2_0>;
+ };
+@@ -128,6 +132,8 @@ examples:
+ capacity-dmips-mhz = <1024>;
+ clocks = <&kryocc 1>;
+ operating-points-v2 = <&cluster1_opp>;
++ power-domains = <&cpr>;
++ power-domain-names = "cpr";
+ #cooling-cells = <2>;
+ next-level-cache = <&L2_1>;
+ L2_1: l2-cache {
+@@ -145,6 +151,8 @@ examples:
+ capacity-dmips-mhz = <1024>;
+ clocks = <&kryocc 1>;
+ operating-points-v2 = <&cluster1_opp>;
++ power-domains = <&cpr>;
++ power-domain-names = "cpr";
+ #cooling-cells = <2>;
+ next-level-cache = <&L2_1>;
+ };
+@@ -182,18 +190,21 @@ examples:
+ opp-microvolt = <905000 905000 1140000>;
+ opp-supported-hw = <0x7>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp1>;
+ };
+ opp-1401600000 {
+ opp-hz = /bits/ 64 <1401600000>;
+ opp-microvolt = <1140000 905000 1140000>;
+ opp-supported-hw = <0x5>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp2>;
+ };
+ opp-1593600000 {
+ opp-hz = /bits/ 64 <1593600000>;
+ opp-microvolt = <1140000 905000 1140000>;
+ opp-supported-hw = <0x1>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp3>;
+ };
+ };
+
+@@ -207,24 +218,28 @@ examples:
+ opp-microvolt = <905000 905000 1140000>;
+ opp-supported-hw = <0x7>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp1>;
+ };
+ opp-1804800000 {
+ opp-hz = /bits/ 64 <1804800000>;
+ opp-microvolt = <1140000 905000 1140000>;
+ opp-supported-hw = <0x6>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp4>;
+ };
+ opp-1900800000 {
+ opp-hz = /bits/ 64 <1900800000>;
+ opp-microvolt = <1140000 905000 1140000>;
+ opp-supported-hw = <0x4>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp5>;
+ };
+ opp-2150400000 {
+ opp-hz = /bits/ 64 <2150400000>;
+ opp-microvolt = <1140000 905000 1140000>;
+ opp-supported-hw = <0x1>;
+ clock-latency-ns = <200000>;
++ required-opps = <&cpr_opp6>;
+ };
+ };
+
+diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
+index 0b69b12b849ee..9b3ebee938e88 100644
+--- a/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
++++ b/Documentation/devicetree/bindings/pci/qcom,pcie.yaml
+@@ -614,7 +614,7 @@ allOf:
+ - if:
+ not:
+ properties:
+- compatibles:
++ compatible:
+ contains:
+ enum:
+ - qcom,pcie-msm8996
+diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8186.yaml b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8186.yaml
+index 8a2bb86082910..dbc41d1c4c7dc 100644
+--- a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8186.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8186.yaml
+@@ -105,31 +105,8 @@ patternProperties:
+ drive-strength:
+ enum: [2, 4, 6, 8, 10, 12, 14, 16]
+
+- mediatek,drive-strength-adv:
+- description: |
+- Describe the specific driving setup property.
+- For I2C pins, the existing generic driving setup can only support
+- 2/4/6/8/10/12/14/16mA driving. But in specific driving setup, they
+- can support 0.125/0.25/0.5/1mA adjustment. If we enable specific
+- driving setup, the existing generic setup will be disabled.
+- The specific driving setup is controlled by E1E0EN.
+- When E1=0/E0=0, the strength is 0.125mA.
+- When E1=0/E0=1, the strength is 0.25mA.
+- When E1=1/E0=0, the strength is 0.5mA.
+- When E1=1/E0=1, the strength is 1mA.
+- EN is used to enable or disable the specific driving setup.
+- Valid arguments are described as below:
+- 0: (E1, E0, EN) = (0, 0, 0)
+- 1: (E1, E0, EN) = (0, 0, 1)
+- 2: (E1, E0, EN) = (0, 1, 0)
+- 3: (E1, E0, EN) = (0, 1, 1)
+- 4: (E1, E0, EN) = (1, 0, 0)
+- 5: (E1, E0, EN) = (1, 0, 1)
+- 6: (E1, E0, EN) = (1, 1, 0)
+- 7: (E1, E0, EN) = (1, 1, 1)
+- So the valid arguments are from 0 to 7.
+- $ref: /schemas/types.yaml#/definitions/uint32
+- enum: [0, 1, 2, 3, 4, 5, 6, 7]
++ drive-strength-microamp:
++ enum: [125, 250, 500, 1000]
+
+ bias-pull-down:
+ oneOf:
+@@ -291,7 +268,7 @@ examples:
+ pinmux = <PINMUX_GPIO127__FUNC_SCL0>,
+ <PINMUX_GPIO128__FUNC_SDA0>;
+ bias-pull-up = <MTK_PULL_SET_RSEL_001>;
+- mediatek,drive-strength-adv = <7>;
++ drive-strength-microamp = <1000>;
+ };
+ };
+ };
+diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8192.yaml b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8192.yaml
+index c90a132fbc790..e39f5893bf16d 100644
+--- a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8192.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8192.yaml
+@@ -80,46 +80,24 @@ patternProperties:
+ dt-bindings/pinctrl/mt65xx.h. It can only support 2/4/6/8/10/12/14/16mA in mt8192.
+ enum: [2, 4, 6, 8, 10, 12, 14, 16]
+
+- mediatek,drive-strength-adv:
+- description: |
+- Describe the specific driving setup property.
+- For I2C pins, the existing generic driving setup can only support
+- 2/4/6/8/10/12/14/16mA driving. But in specific driving setup, they
+- can support 0.125/0.25/0.5/1mA adjustment. If we enable specific
+- driving setup, the existing generic setup will be disabled.
+- The specific driving setup is controlled by E1E0EN.
+- When E1=0/E0=0, the strength is 0.125mA.
+- When E1=0/E0=1, the strength is 0.25mA.
+- When E1=1/E0=0, the strength is 0.5mA.
+- When E1=1/E0=1, the strength is 1mA.
+- EN is used to enable or disable the specific driving setup.
+- Valid arguments are described as below:
+- 0: (E1, E0, EN) = (0, 0, 0)
+- 1: (E1, E0, EN) = (0, 0, 1)
+- 2: (E1, E0, EN) = (0, 1, 0)
+- 3: (E1, E0, EN) = (0, 1, 1)
+- 4: (E1, E0, EN) = (1, 0, 0)
+- 5: (E1, E0, EN) = (1, 0, 1)
+- 6: (E1, E0, EN) = (1, 1, 0)
+- 7: (E1, E0, EN) = (1, 1, 1)
+- So the valid arguments are from 0 to 7.
+- $ref: /schemas/types.yaml#/definitions/uint32
+- enum: [0, 1, 2, 3, 4, 5, 6, 7]
+-
+- mediatek,pull-up-adv:
+- description: |
+- Pull up settings for 2 pull resistors, R0 and R1. User can
+- configure those special pins. Valid arguments are described as below:
+- 0: (R1, R0) = (0, 0) which means R1 disabled and R0 disabled.
+- 1: (R1, R0) = (0, 1) which means R1 disabled and R0 enabled.
+- 2: (R1, R0) = (1, 0) which means R1 enabled and R0 disabled.
+- 3: (R1, R0) = (1, 1) which means R1 enabled and R0 enabled.
+- $ref: /schemas/types.yaml#/definitions/uint32
+- enum: [0, 1, 2, 3]
+-
+- bias-pull-down: true
+-
+- bias-pull-up: true
++ drive-strength-microamp:
++ enum: [125, 250, 500, 1000]
++
++ bias-pull-down:
++ oneOf:
++ - type: boolean
++ description: normal pull down.
++ - enum: [100, 101, 102, 103]
++ description: PUPD/R1/R0 pull down type. See MTK_PUPD_SET_R1R0_
++ defines in dt-bindings/pinctrl/mt65xx.h.
++
++ bias-pull-up:
++ oneOf:
++ - type: boolean
++ description: normal pull up.
++ - enum: [100, 101, 102, 103]
++ description: PUPD/R1/R0 pull up type. See MTK_PUPD_SET_R1R0_
++ defines in dt-bindings/pinctrl/mt65xx.h.
+
+ bias-disable: true
+
+diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml
+index c5b755514c46f..3d8afb3d5695b 100644
+--- a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml
+@@ -49,7 +49,7 @@ properties:
+ description: The interrupt outputs to sysirq.
+ maxItems: 1
+
+- mediatek,rsel_resistance_in_si_unit:
++ mediatek,rsel-resistance-in-si-unit:
+ type: boolean
+ description: |
+ Identifying i2c pins pull up/down type which is RSEL. It can support
+@@ -98,31 +98,8 @@ patternProperties:
+ drive-strength:
+ enum: [2, 4, 6, 8, 10, 12, 14, 16]
+
+- mediatek,drive-strength-adv:
+- description: |
+- Describe the specific driving setup property.
+- For I2C pins, the existing generic driving setup can only support
+- 2/4/6/8/10/12/14/16mA driving. But in specific driving setup, they
+- can support 0.125/0.25/0.5/1mA adjustment. If we enable specific
+- driving setup, the existing generic setup will be disabled.
+- The specific driving setup is controlled by E1E0EN.
+- When E1=0/E0=0, the strength is 0.125mA.
+- When E1=0/E0=1, the strength is 0.25mA.
+- When E1=1/E0=0, the strength is 0.5mA.
+- When E1=1/E0=1, the strength is 1mA.
+- EN is used to enable or disable the specific driving setup.
+- Valid arguments are described as below:
+- 0: (E1, E0, EN) = (0, 0, 0)
+- 1: (E1, E0, EN) = (0, 0, 1)
+- 2: (E1, E0, EN) = (0, 1, 0)
+- 3: (E1, E0, EN) = (0, 1, 1)
+- 4: (E1, E0, EN) = (1, 0, 0)
+- 5: (E1, E0, EN) = (1, 0, 1)
+- 6: (E1, E0, EN) = (1, 1, 0)
+- 7: (E1, E0, EN) = (1, 1, 1)
+- So the valid arguments are from 0 to 7.
+- $ref: /schemas/types.yaml#/definitions/uint32
+- enum: [0, 1, 2, 3, 4, 5, 6, 7]
++ drive-strength-microamp:
++ enum: [125, 250, 500, 1000]
+
+ bias-pull-down:
+ oneOf:
+@@ -142,7 +119,7 @@ patternProperties:
+ "MTK_PUPD_SET_R1R0_11" define in mt8195.
+ For pull down type is RSEL, it can add RSEL define & resistance
+ value(ohm) to set different resistance by identifying property
+- "mediatek,rsel_resistance_in_si_unit".
++ "mediatek,rsel-resistance-in-si-unit".
+ It can support "MTK_PULL_SET_RSEL_000" & "MTK_PULL_SET_RSEL_001"
+ & "MTK_PULL_SET_RSEL_010" & "MTK_PULL_SET_RSEL_011"
+ & "MTK_PULL_SET_RSEL_100" & "MTK_PULL_SET_RSEL_101"
+@@ -161,7 +138,7 @@ patternProperties:
+ };
+ An example of using si unit resistance value(ohm):
+ &pio {
+- mediatek,rsel_resistance_in_si_unit;
++ mediatek,rsel-resistance-in-si-unit;
+ }
+ pincontroller {
+ i2c0_pin {
+@@ -190,7 +167,7 @@ patternProperties:
+ "MTK_PUPD_SET_R1R0_11" define in mt8195.
+ For pull up type is RSEL, it can add RSEL define & resistance
+ value(ohm) to set different resistance by identifying property
+- "mediatek,rsel_resistance_in_si_unit".
++ "mediatek,rsel-resistance-in-si-unit".
+ It can support "MTK_PULL_SET_RSEL_000" & "MTK_PULL_SET_RSEL_001"
+ & "MTK_PULL_SET_RSEL_010" & "MTK_PULL_SET_RSEL_011"
+ & "MTK_PULL_SET_RSEL_100" & "MTK_PULL_SET_RSEL_101"
+@@ -209,7 +186,7 @@ patternProperties:
+ };
+ An example of using si unit resistance value(ohm):
+ &pio {
+- mediatek,rsel_resistance_in_si_unit;
++ mediatek,rsel-resistance-in-si-unit;
+ }
+ pincontroller {
+ i2c0-pins {
+diff --git a/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml b/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml
+index b539781e39aa4..835b53302db80 100644
+--- a/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/nxp,pca9450-regulator.yaml
+@@ -47,12 +47,6 @@ properties:
+ description:
+ Properties for single LDO regulator.
+
+- properties:
+- regulator-name:
+- pattern: "^LDO[1-5]$"
+- description:
+- should be "LDO1", ..., "LDO5"
+-
+ unevaluatedProperties: false
+
+ "^BUCK[1-6]$":
+@@ -62,11 +56,6 @@ properties:
+ Properties for single BUCK regulator.
+
+ properties:
+- regulator-name:
+- pattern: "^BUCK[1-6]$"
+- description:
+- should be "BUCK1", ..., "BUCK6"
+-
+ nxp,dvs-run-voltage:
+ $ref: "/schemas/types.yaml#/definitions/uint32"
+ minimum: 600000
+diff --git a/Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml b/Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml
+index 78ceb9d67754f..2e20ca313ec1d 100644
+--- a/Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml
++++ b/Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml
+@@ -45,12 +45,15 @@ properties:
+ - const: rx
+
+ interconnects:
+- maxItems: 2
++ minItems: 2
++ maxItems: 3
+
+ interconnect-names:
++ minItems: 2
+ items:
+ - const: qup-core
+ - const: qup-config
++ - const: qup-memory
+
+ interrupts:
+ maxItems: 1
+diff --git a/Documentation/devicetree/bindings/spi/spi-cadence.yaml b/Documentation/devicetree/bindings/spi/spi-cadence.yaml
+index 9787be21318e6..82d0ca5c00f3b 100644
+--- a/Documentation/devicetree/bindings/spi/spi-cadence.yaml
++++ b/Documentation/devicetree/bindings/spi/spi-cadence.yaml
+@@ -49,6 +49,13 @@ properties:
+ enum: [ 0, 1 ]
+ default: 0
+
++required:
++ - compatible
++ - reg
++ - interrupts
++ - clock-names
++ - clocks
++
+ unevaluatedProperties: false
+
+ examples:
+diff --git a/Documentation/devicetree/bindings/spi/spi-zynqmp-qspi.yaml b/Documentation/devicetree/bindings/spi/spi-zynqmp-qspi.yaml
+index ea72c8001256f..fafde1c06be67 100644
+--- a/Documentation/devicetree/bindings/spi/spi-zynqmp-qspi.yaml
++++ b/Documentation/devicetree/bindings/spi/spi-zynqmp-qspi.yaml
+@@ -30,6 +30,13 @@ properties:
+ clocks:
+ maxItems: 2
+
++required:
++ - compatible
++ - reg
++ - interrupts
++ - clock-names
++ - clocks
++
+ unevaluatedProperties: false
+
+ examples:
+diff --git a/Documentation/devicetree/bindings/usb/mediatek,mtk-xhci.yaml b/Documentation/devicetree/bindings/usb/mediatek,mtk-xhci.yaml
+index 084d7135b2d9f..5603359e7b29e 100644
+--- a/Documentation/devicetree/bindings/usb/mediatek,mtk-xhci.yaml
++++ b/Documentation/devicetree/bindings/usb/mediatek,mtk-xhci.yaml
+@@ -57,6 +57,7 @@ properties:
+ - description: optional, wakeup interrupt used to support runtime PM
+
+ interrupt-names:
++ minItems: 1
+ items:
+ - const: host
+ - const: wakeup
+diff --git a/Documentation/firmware-guide/acpi/apei/einj.rst b/Documentation/firmware-guide/acpi/apei/einj.rst
+index 55e2331a64380..d6b61d22f5258 100644
+--- a/Documentation/firmware-guide/acpi/apei/einj.rst
++++ b/Documentation/firmware-guide/acpi/apei/einj.rst
+@@ -168,7 +168,7 @@ An error injection example::
+ 0x00000008 Memory Correctable
+ 0x00000010 Memory Uncorrectable non-fatal
+ # echo 0x12345000 > param1 # Set memory address for injection
+- # echo $((-1 << 12)) > param2 # Mask 0xfffffffffffff000 - anywhere in this page
++ # echo 0xfffffffffffff000 > param2 # Mask - anywhere in this page
+ # echo 0x8 > error_type # Choose correctable memory error
+ # echo 1 > error_inject # Inject now
+
+diff --git a/Makefile b/Makefile
+index 8595916561f3f..65dc4f93ffdbf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 19
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+@@ -1112,13 +1112,11 @@ vmlinux-alldirs := $(sort $(vmlinux-dirs) Documentation \
+ $(patsubst %/,%,$(filter %/, $(core-) \
+ $(drivers-) $(libs-))))
+
+-subdir-modorder := $(addsuffix modules.order,$(filter %/, \
+- $(core-y) $(core-m) $(libs-y) $(libs-m) \
+- $(drivers-y) $(drivers-m)))
+-
+ build-dirs := $(vmlinux-dirs)
+ clean-dirs := $(vmlinux-alldirs)
+
++subdir-modorder := $(addsuffix /modules.order, $(build-dirs))
++
+ # Externally visible symbols (used by link-vmlinux.sh)
+ KBUILD_VMLINUX_OBJS := $(head-y) $(patsubst %/,%/built-in.a, $(core-y))
+ KBUILD_VMLINUX_OBJS += $(addsuffix built-in.a, $(filter %/, $(libs-y)))
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index de32152cea048..7aaf755770962 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -838,6 +838,10 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
+ (system_supports_mte() && \
+ test_bit(KVM_ARCH_FLAG_MTE_ENABLED, &(kvm)->arch.flags))
+
++#define kvm_supports_32bit_el0() \
++ (system_supports_32bit_el0() && \
++ !static_branch_unlikely(&arm64_mismatched_32bit_el0))
++
+ int kvm_trng_call(struct kvm_vcpu *vcpu);
+ #ifdef CONFIG_KVM
+ extern phys_addr_t hyp_mem_base;
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 83a7f61354d3f..e21b245741182 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -751,8 +751,7 @@ static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
+ if (likely(!vcpu_mode_is_32bit(vcpu)))
+ return false;
+
+- return !system_supports_32bit_el0() ||
+- static_branch_unlikely(&arm64_mismatched_32bit_el0);
++ return !kvm_supports_32bit_el0();
+ }
+
+ /**
+diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
+index 8c607199cad14..f802a3b3f8dbc 100644
+--- a/arch/arm64/kvm/guest.c
++++ b/arch/arm64/kvm/guest.c
+@@ -242,7 +242,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ u64 mode = (*(u64 *)valp) & PSR_AA32_MODE_MASK;
+ switch (mode) {
+ case PSR_AA32_MODE_USR:
+- if (!system_supports_32bit_el0())
++ if (!kvm_supports_32bit_el0())
+ return -EINVAL;
+ break;
+ case PSR_AA32_MODE_FIQ:
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index c06c0477fab52..be7edd21537f9 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -692,7 +692,7 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+ */
+ val = ((pmcr & ~ARMV8_PMU_PMCR_MASK)
+ | (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E);
+- if (!system_supports_32bit_el0())
++ if (!kvm_supports_32bit_el0())
+ val |= ARMV8_PMU_PMCR_LC;
+ __vcpu_sys_reg(vcpu, r->reg) = val;
+ }
+@@ -741,7 +741,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ val = __vcpu_sys_reg(vcpu, PMCR_EL0);
+ val &= ~ARMV8_PMU_PMCR_MASK;
+ val |= p->regval & ARMV8_PMU_PMCR_MASK;
+- if (!system_supports_32bit_el0())
++ if (!kvm_supports_32bit_el0())
+ val |= ARMV8_PMU_PMCR_LC;
+ __vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+ kvm_pmu_handle_pmcr(vcpu, val);
+diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
+index 34ba684d5962b..3c6e5c725d814 100644
+--- a/arch/csky/kernel/probes/kprobes.c
++++ b/arch/csky/kernel/probes/kprobes.c
+@@ -124,6 +124,10 @@ void __kprobes arch_disarm_kprobe(struct kprobe *p)
+
+ void __kprobes arch_remove_kprobe(struct kprobe *p)
+ {
++ if (p->ainsn.api.insn) {
++ free_insn_slot(p->ainsn.api.insn, 0);
++ p->ainsn.api.insn = NULL;
++ }
+ }
+
+ static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
+diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
+index 4218750414bbf..7dab46728aeda 100644
+--- a/arch/m68k/coldfire/device.c
++++ b/arch/m68k/coldfire/device.c
+@@ -581,7 +581,7 @@ static struct platform_device mcf_esdhc = {
+ };
+ #endif /* MCFSDHC_BASE */
+
+-#if IS_ENABLED(CONFIG_CAN_FLEXCAN)
++#ifdef MCFFLEXCAN_SIZE
+
+ #include <linux/can/platform/flexcan.h>
+
+@@ -620,7 +620,7 @@ static struct platform_device mcf_flexcan0 = {
+ .resource = mcf5441x_flexcan0_resource,
+ .dev.platform_data = &mcf5441x_flexcan_info,
+ };
+-#endif /* IS_ENABLED(CONFIG_CAN_FLEXCAN) */
++#endif /* MCFFLEXCAN_SIZE */
+
+ static struct platform_device *mcf_devices[] __initdata = {
+ &mcf_uart,
+@@ -657,7 +657,7 @@ static struct platform_device *mcf_devices[] __initdata = {
+ #ifdef MCFSDHC_BASE
+ &mcf_esdhc,
+ #endif
+-#if IS_ENABLED(CONFIG_CAN_FLEXCAN)
++#ifdef MCFFLEXCAN_SIZE
+ &mcf_flexcan0,
+ #endif
+ };
+diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c
+index a994022e32c9f..ce05c0dd3acd7 100644
+--- a/arch/mips/cavium-octeon/octeon-platform.c
++++ b/arch/mips/cavium-octeon/octeon-platform.c
+@@ -86,11 +86,12 @@ static void octeon2_usb_clocks_start(struct device *dev)
+ "refclk-frequency", &clock_rate);
+ if (i) {
+ dev_err(dev, "No UCTL \"refclk-frequency\"\n");
++ of_node_put(uctl_node);
+ goto exit;
+ }
+ i = of_property_read_string(uctl_node,
+ "refclk-type", &clock_type);
+-
++ of_node_put(uctl_node);
+ if (!i && strcmp("crystal", clock_type) == 0)
+ is_crystal_clock = true;
+ }
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index 8dbbd99fc7e85..be4d4670d6499 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -626,7 +626,7 @@ static __maybe_unused void build_convert_pte_to_entrylo(u32 **p,
+ return;
+ }
+
+- if (cpu_has_rixi && !!_PAGE_NO_EXEC) {
++ if (cpu_has_rixi && _PAGE_NO_EXEC != 0) {
+ if (fill_includes_sw_bits) {
+ UASM_i_ROTR(p, reg, reg, ilog2(_PAGE_GLOBAL));
+ } else {
+@@ -2565,7 +2565,7 @@ static void check_pabits(void)
+ unsigned long entry;
+ unsigned pabits, fillbits;
+
+- if (!cpu_has_rixi || !_PAGE_NO_EXEC) {
++ if (!cpu_has_rixi || _PAGE_NO_EXEC == 0) {
+ /*
+ * We'll only be making use of the fact that we can rotate bits
+ * into the fill if the CPU supports RIXI, so don't bother
+diff --git a/arch/nios2/include/asm/entry.h b/arch/nios2/include/asm/entry.h
+index cf37f55efbc22..bafb7b2ca59fc 100644
+--- a/arch/nios2/include/asm/entry.h
++++ b/arch/nios2/include/asm/entry.h
+@@ -50,7 +50,8 @@
+ stw r13, PT_R13(sp)
+ stw r14, PT_R14(sp)
+ stw r15, PT_R15(sp)
+- stw r2, PT_ORIG_R2(sp)
++ movi r24, -1
++ stw r24, PT_ORIG_R2(sp)
+ stw r7, PT_ORIG_R7(sp)
+
+ stw ra, PT_RA(sp)
+diff --git a/arch/nios2/include/asm/ptrace.h b/arch/nios2/include/asm/ptrace.h
+index 6424621448728..9da34c3022a27 100644
+--- a/arch/nios2/include/asm/ptrace.h
++++ b/arch/nios2/include/asm/ptrace.h
+@@ -74,6 +74,8 @@ extern void show_regs(struct pt_regs *);
+ ((struct pt_regs *)((unsigned long)current_thread_info() + THREAD_SIZE)\
+ - 1)
+
++#define force_successful_syscall_return() (current_pt_regs()->orig_r2 = -1)
++
+ int do_syscall_trace_enter(void);
+ void do_syscall_trace_exit(void);
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/nios2/kernel/entry.S b/arch/nios2/kernel/entry.S
+index 0794cd7803dfe..99f0a65e62347 100644
+--- a/arch/nios2/kernel/entry.S
++++ b/arch/nios2/kernel/entry.S
+@@ -185,6 +185,7 @@ ENTRY(handle_system_call)
+ ldw r5, PT_R5(sp)
+
+ local_restart:
++ stw r2, PT_ORIG_R2(sp)
+ /* Check that the requested system call is within limits */
+ movui r1, __NR_syscalls
+ bgeu r2, r1, ret_invsyscall
+@@ -192,7 +193,6 @@ local_restart:
+ movhi r11, %hiadj(sys_call_table)
+ add r1, r1, r11
+ ldw r1, %lo(sys_call_table)(r1)
+- beq r1, r0, ret_invsyscall
+
+ /* Check if we are being traced */
+ GET_THREAD_INFO r11
+@@ -213,6 +213,9 @@ local_restart:
+ translate_rc_and_ret:
+ movi r1, 0
+ bge r2, zero, 3f
++ ldw r1, PT_ORIG_R2(sp)
++ addi r1, r1, 1
++ beq r1, zero, 3f
+ sub r2, zero, r2
+ movi r1, 1
+ 3:
+@@ -255,9 +258,9 @@ traced_system_call:
+ ldw r6, PT_R6(sp)
+ ldw r7, PT_R7(sp)
+
+- /* Fetch the syscall function, we don't need to check the boundaries
+- * since this is already done.
+- */
++ /* Fetch the syscall function. */
++ movui r1, __NR_syscalls
++ bgeu r2, r1, traced_invsyscall
+ slli r1, r2, 2
+ movhi r11,%hiadj(sys_call_table)
+ add r1, r1, r11
+@@ -276,6 +279,9 @@ traced_system_call:
+ translate_rc_and_ret2:
+ movi r1, 0
+ bge r2, zero, 4f
++ ldw r1, PT_ORIG_R2(sp)
++ addi r1, r1, 1
++ beq r1, zero, 4f
+ sub r2, zero, r2
+ movi r1, 1
+ 4:
+@@ -287,6 +293,11 @@ end_translate_rc_and_ret2:
+ RESTORE_SWITCH_STACK
+ br ret_from_exception
+
++ /* If the syscall number was invalid return ENOSYS */
++traced_invsyscall:
++ movi r2, -ENOSYS
++ br translate_rc_and_ret2
++
+ Luser_return:
+ GET_THREAD_INFO r11 /* get thread_info pointer */
+ ldw r10, TI_FLAGS(r11) /* get thread_info->flags */
+@@ -336,9 +347,6 @@ external_interrupt:
+ /* skip if no interrupt is pending */
+ beq r12, r0, ret_from_interrupt
+
+- movi r24, -1
+- stw r24, PT_ORIG_R2(sp)
+-
+ /*
+ * Process an external hardware interrupt.
+ */
+diff --git a/arch/nios2/kernel/signal.c b/arch/nios2/kernel/signal.c
+index cb0b91589cf20..a5b93a30c6eb2 100644
+--- a/arch/nios2/kernel/signal.c
++++ b/arch/nios2/kernel/signal.c
+@@ -242,7 +242,7 @@ static int do_signal(struct pt_regs *regs)
+ /*
+ * If we were from a system call, check for system call restarting...
+ */
+- if (regs->orig_r2 >= 0) {
++ if (regs->orig_r2 >= 0 && regs->r1) {
+ continue_addr = regs->ea;
+ restart_addr = continue_addr - 4;
+ retval = regs->r2;
+@@ -264,6 +264,7 @@ static int do_signal(struct pt_regs *regs)
+ regs->ea = restart_addr;
+ break;
+ }
++ regs->orig_r2 = -1;
+ }
+
+ if (get_signal(&ksig)) {
+diff --git a/arch/nios2/kernel/syscall_table.c b/arch/nios2/kernel/syscall_table.c
+index 6176d63023c1d..c2875a6dd5a4a 100644
+--- a/arch/nios2/kernel/syscall_table.c
++++ b/arch/nios2/kernel/syscall_table.c
+@@ -13,5 +13,6 @@
+ #define __SYSCALL(nr, call) [nr] = (call),
+
+ void *sys_call_table[__NR_syscalls] = {
++ [0 ... __NR_syscalls-1] = sys_ni_syscall,
+ #include <asm/unistd.h>
+ };
+diff --git a/arch/openrisc/include/asm/io.h b/arch/openrisc/include/asm/io.h
+index c298061c70a7e..8aa3e78181e9a 100644
+--- a/arch/openrisc/include/asm/io.h
++++ b/arch/openrisc/include/asm/io.h
+@@ -31,7 +31,7 @@
+ void __iomem *ioremap(phys_addr_t offset, unsigned long size);
+
+ #define iounmap iounmap
+-extern void iounmap(void __iomem *addr);
++extern void iounmap(volatile void __iomem *addr);
+
+ #include <asm-generic/io.h>
+
+diff --git a/arch/openrisc/mm/ioremap.c b/arch/openrisc/mm/ioremap.c
+index daae13a76743b..8ec0dafecf257 100644
+--- a/arch/openrisc/mm/ioremap.c
++++ b/arch/openrisc/mm/ioremap.c
+@@ -77,7 +77,7 @@ void __iomem *__ref ioremap(phys_addr_t addr, unsigned long size)
+ }
+ EXPORT_SYMBOL(ioremap);
+
+-void iounmap(void __iomem *addr)
++void iounmap(volatile void __iomem *addr)
+ {
+ /* If the page is from the fixmap pool then we just clear out
+ * the fixmap mapping.
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index a0cd707120619..d54e1fe035517 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -15,23 +15,6 @@ HAS_BIARCH := $(call cc-option-yn, -m32)
+ # Set default 32 bits cross compilers for vdso and boot wrapper
+ CROSS32_COMPILE ?=
+
+-ifeq ($(HAS_BIARCH),y)
+-ifeq ($(CROSS32_COMPILE),)
+-ifdef CONFIG_PPC32
+-# These options will be overridden by any -mcpu option that the CPU
+-# or platform code sets later on the command line, but they are needed
+-# to set a sane 32-bit cpu target for the 64-bit cross compiler which
+-# may default to the wrong ISA.
+-KBUILD_CFLAGS += -mcpu=powerpc
+-KBUILD_AFLAGS += -mcpu=powerpc
+-endif
+-endif
+-endif
+-
+-ifdef CONFIG_PPC_BOOK3S_32
+-KBUILD_CFLAGS += -mcpu=powerpc
+-endif
+-
+ # If we're on a ppc/ppc64/ppc64le machine use that defconfig, otherwise just use
+ # ppc64_defconfig because we have nothing better to go on.
+ uname := $(shell uname -m)
+@@ -183,6 +166,7 @@ endif
+ endif
+
+ CFLAGS-$(CONFIG_TARGET_CPU_BOOL) += $(call cc-option,-mcpu=$(CONFIG_TARGET_CPU))
++AFLAGS-$(CONFIG_TARGET_CPU_BOOL) += $(call cc-option,-mcpu=$(CONFIG_TARGET_CPU))
+
+ # Altivec option not allowed with e500mc64 in GCC.
+ ifdef CONFIG_ALTIVEC
+@@ -193,14 +177,6 @@ endif
+ CFLAGS-$(CONFIG_E5500_CPU) += $(E5500_CPU)
+ CFLAGS-$(CONFIG_E6500_CPU) += $(call cc-option,-mcpu=e6500,$(E5500_CPU))
+
+-ifdef CONFIG_PPC32
+-ifdef CONFIG_PPC_E500MC
+-CFLAGS-y += $(call cc-option,-mcpu=e500mc,-mcpu=powerpc)
+-else
+-CFLAGS-$(CONFIG_E500) += $(call cc-option,-mcpu=8540 -msoft-float,-mcpu=powerpc)
+-endif
+-endif
+-
+ asinstr := $(call as-instr,lis 9$(comma)foo@high,-DHAVE_AS_ATHIGH=1)
+
+ KBUILD_CPPFLAGS += -I $(srctree)/arch/$(ARCH) $(asinstr)
+diff --git a/arch/powerpc/include/asm/nmi.h b/arch/powerpc/include/asm/nmi.h
+index ea0e487f87b1f..c3c7adef74de0 100644
+--- a/arch/powerpc/include/asm/nmi.h
++++ b/arch/powerpc/include/asm/nmi.h
+@@ -5,8 +5,10 @@
+ #ifdef CONFIG_PPC_WATCHDOG
+ extern void arch_touch_nmi_watchdog(void);
+ long soft_nmi_interrupt(struct pt_regs *regs);
++void watchdog_nmi_set_timeout_pct(u64 pct);
+ #else
+ static inline void arch_touch_nmi_watchdog(void) {}
++static inline void watchdog_nmi_set_timeout_pct(u64 pct) {}
+ #endif
+
+ #ifdef CONFIG_NMI_IPI
+diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
+index 6c739beb938c9..519b606951675 100644
+--- a/arch/powerpc/kernel/head_book3s_32.S
++++ b/arch/powerpc/kernel/head_book3s_32.S
+@@ -418,14 +418,14 @@ InstructionTLBMiss:
+ */
+ /* Get PTE (linux-style) and check access */
+ mfspr r3,SPRN_IMISS
+-#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
++#ifdef CONFIG_MODULES
+ lis r1, TASK_SIZE@h /* check if kernel address */
+ cmplw 0,r1,r3
+ #endif
+ mfspr r2, SPRN_SDR1
+ li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC | _PAGE_USER
+ rlwinm r2, r2, 28, 0xfffff000
+-#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
++#ifdef CONFIG_MODULES
+ bgt- 112f
+ lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
+ li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
+diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
+index c787df126ada2..2df8e1350297e 100644
+--- a/arch/powerpc/kernel/pci-common.c
++++ b/arch/powerpc/kernel/pci-common.c
+@@ -67,10 +67,6 @@ void __init set_pci_dma_ops(const struct dma_map_ops *dma_ops)
+ pci_dma_ops = dma_ops;
+ }
+
+-/*
+- * This function should run under locking protection, specifically
+- * hose_spinlock.
+- */
+ static int get_phb_number(struct device_node *dn)
+ {
+ int ret, phb_id = -1;
+@@ -107,15 +103,20 @@ static int get_phb_number(struct device_node *dn)
+ if (!ret)
+ phb_id = (int)(prop & (MAX_PHBS - 1));
+
++ spin_lock(&hose_spinlock);
++
+ /* We need to be sure to not use the same PHB number twice. */
+ if ((phb_id >= 0) && !test_and_set_bit(phb_id, phb_bitmap))
+- return phb_id;
++ goto out_unlock;
+
+ /* If everything fails then fallback to dynamic PHB numbering. */
+ phb_id = find_first_zero_bit(phb_bitmap, MAX_PHBS);
+ BUG_ON(phb_id >= MAX_PHBS);
+ set_bit(phb_id, phb_bitmap);
+
++out_unlock:
++ spin_unlock(&hose_spinlock);
++
+ return phb_id;
+ }
+
+@@ -126,10 +127,13 @@ struct pci_controller *pcibios_alloc_controller(struct device_node *dev)
+ phb = zalloc_maybe_bootmem(sizeof(struct pci_controller), GFP_KERNEL);
+ if (phb == NULL)
+ return NULL;
+- spin_lock(&hose_spinlock);
++
+ phb->global_number = get_phb_number(dev);
++
++ spin_lock(&hose_spinlock);
+ list_add_tail(&phb->list_node, &hose_list);
+ spin_unlock(&hose_spinlock);
++
+ phb->dn = dev;
+ phb->is_dynamic = slab_is_available();
+ #ifdef CONFIG_PPC64
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index feae8509b59c9..b64c3f06c069e 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -751,6 +751,13 @@ void __init early_init_devtree(void *params)
+ early_init_dt_scan_root();
+ early_init_dt_scan_memory_ppc();
+
++ /*
++ * As generic code authors expect to be able to use static keys
++ * in early_param() handlers, we initialize the static keys just
++ * before parsing early params (it's fine to call jump_label_init()
++ * more than once).
++ */
++ jump_label_init();
+ parse_early_param();
+
+ /* make sure we've parsed cmdline for mem= before this */
+diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
+index 7d28b95536540..5d903e63f932a 100644
+--- a/arch/powerpc/kernel/watchdog.c
++++ b/arch/powerpc/kernel/watchdog.c
+@@ -91,6 +91,10 @@ static cpumask_t wd_smp_cpus_pending;
+ static cpumask_t wd_smp_cpus_stuck;
+ static u64 wd_smp_last_reset_tb;
+
++#ifdef CONFIG_PPC_PSERIES
++static u64 wd_timeout_pct;
++#endif
++
+ /*
+ * Try to take the exclusive watchdog action / NMI IPI / printing lock.
+ * wd_smp_lock must be held. If this fails, we should return and wait
+@@ -527,7 +531,13 @@ static int stop_watchdog_on_cpu(unsigned int cpu)
+
+ static void watchdog_calc_timeouts(void)
+ {
+- wd_panic_timeout_tb = watchdog_thresh * ppc_tb_freq;
++ u64 threshold = watchdog_thresh;
++
++#ifdef CONFIG_PPC_PSERIES
++ threshold += (READ_ONCE(wd_timeout_pct) * threshold) / 100;
++#endif
++
++ wd_panic_timeout_tb = threshold * ppc_tb_freq;
+
+ /* Have the SMP detector trigger a bit later */
+ wd_smp_panic_timeout_tb = wd_panic_timeout_tb * 3 / 2;
+@@ -570,3 +580,12 @@ int __init watchdog_nmi_probe(void)
+ }
+ return 0;
+ }
++
++#ifdef CONFIG_PPC_PSERIES
++void watchdog_nmi_set_timeout_pct(u64 pct)
++{
++ pr_info("Set the NMI watchdog timeout factor to %llu%%\n", pct);
++ WRITE_ONCE(wd_timeout_pct, pct);
++ lockup_detector_reconfigure();
++}
++#endif
+diff --git a/arch/powerpc/kvm/book3s_hv_p9_entry.c b/arch/powerpc/kvm/book3s_hv_p9_entry.c
+index 112a09b333288..7f88be386b27a 100644
+--- a/arch/powerpc/kvm/book3s_hv_p9_entry.c
++++ b/arch/powerpc/kvm/book3s_hv_p9_entry.c
+@@ -438,15 +438,6 @@ void restore_p9_host_os_sprs(struct kvm_vcpu *vcpu,
+ EXPORT_SYMBOL_GPL(restore_p9_host_os_sprs);
+
+ #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING
+-static void __start_timing(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next)
+-{
+- struct kvmppc_vcore *vc = vcpu->arch.vcore;
+- u64 tb = mftb() - vc->tb_offset_applied;
+-
+- vcpu->arch.cur_activity = next;
+- vcpu->arch.cur_tb_start = tb;
+-}
+-
+ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator *next)
+ {
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+@@ -478,8 +469,8 @@ static void __accumulate_time(struct kvm_vcpu *vcpu, struct kvmhv_tb_accumulator
+ curr->seqcount = seq + 2;
+ }
+
+-#define start_timing(vcpu, next) __start_timing(vcpu, next)
+-#define end_timing(vcpu) __start_timing(vcpu, NULL)
++#define start_timing(vcpu, next) __accumulate_time(vcpu, next)
++#define end_timing(vcpu) __accumulate_time(vcpu, NULL)
+ #define accumulate_time(vcpu, next) __accumulate_time(vcpu, next)
+ #else
+ #define start_timing(vcpu, next) do {} while (0)
+diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
+index 49a737fbbd189..40029280c3206 100644
+--- a/arch/powerpc/mm/book3s32/mmu.c
++++ b/arch/powerpc/mm/book3s32/mmu.c
+@@ -159,7 +159,10 @@ unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
+ {
+ unsigned long done;
+ unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
++ unsigned long size;
+
++ size = roundup_pow_of_two((unsigned long)_einittext - PAGE_OFFSET);
++ setibat(0, PAGE_OFFSET, 0, size, PAGE_KERNEL_X);
+
+ if (debug_pagealloc_enabled_or_kfence() || __map_without_bats) {
+ pr_debug_once("Read-Write memory mapped without BATs\n");
+@@ -245,10 +248,9 @@ void mmu_mark_rodata_ro(void)
+ }
+
+ /*
+- * Set up one of the I/D BAT (block address translation) register pairs.
++ * Set up one of the D BAT (block address translation) register pairs.
+ * The parameters are not checked; in particular size must be a power
+ * of 2 between 128k and 256M.
+- * On 603+, only set IBAT when _PAGE_EXEC is set
+ */
+ void __init setbat(int index, unsigned long virt, phys_addr_t phys,
+ unsigned int size, pgprot_t prot)
+@@ -284,10 +286,6 @@ void __init setbat(int index, unsigned long virt, phys_addr_t phys,
+ /* G bit must be zero in IBATs */
+ flags &= ~_PAGE_EXEC;
+ }
+- if (flags & _PAGE_EXEC)
+- bat[0] = bat[1];
+- else
+- bat[0].batu = bat[0].batl = 0;
+
+ bat_addrs[index].start = virt;
+ bat_addrs[index].limit = virt + ((bl + 1) << 17) - 1;
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index 3629fd73083e2..18115cabaedf9 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -136,9 +136,9 @@ config GENERIC_CPU
+ select ARCH_HAS_FAST_MULTIPLIER
+ select PPC_64S_HASH_MMU
+
+-config GENERIC_CPU
++config POWERPC_CPU
+ bool "Generic 32 bits powerpc"
+- depends on PPC32 && !PPC_8xx
++ depends on PPC32 && !PPC_8xx && !PPC_85xx
+
+ config CELL_CPU
+ bool "Cell Broadband Engine"
+@@ -197,11 +197,23 @@ config G4_CPU
+ depends on PPC_BOOK3S_32
+ select ALTIVEC
+
++config E500_CPU
++ bool "e500 (8540)"
++ depends on PPC_85xx && !PPC_E500MC
++
++config E500MC_CPU
++ bool "e500mc"
++ depends on PPC_85xx && PPC_E500MC
++
++config TOOLCHAIN_DEFAULT_CPU
++ bool "Rely on the toolchain's implicit default CPU"
++ depends on PPC32
++
+ endchoice
+
+ config TARGET_CPU_BOOL
+ bool
+- default !GENERIC_CPU
++ default !GENERIC_CPU && !TOOLCHAIN_DEFAULT_CPU
+
+ config TARGET_CPU
+ string
+@@ -216,6 +228,9 @@ config TARGET_CPU
+ default "e300c2" if E300C2_CPU
+ default "e300c3" if E300C3_CPU
+ default "G4" if G4_CPU
++ default "8540" if E500_CPU
++ default "e500mc" if E500MC_CPU
++ default "powerpc" if POWERPC_CPU
+
+ config PPC_BOOK3S
+ def_bool y
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index c8cf2728031a2..9de9b2fb163d3 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -1609,6 +1609,7 @@ found:
+ tbl->it_ops = &pnv_ioda1_iommu_ops;
+ pe->table_group.tce32_start = tbl->it_offset << tbl->it_page_shift;
+ pe->table_group.tce32_size = tbl->it_size << tbl->it_page_shift;
++ tbl->it_index = (phb->hose->global_number << 16) | pe->pe_number;
+ if (!iommu_init_table(tbl, phb->hose->node, 0, 0))
+ panic("Failed to initialize iommu table");
+
+@@ -1779,6 +1780,7 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe)
+ res_end = min(window_size, SZ_4G) >> tbl->it_page_shift;
+ }
+
++ tbl->it_index = (pe->phb->hose->global_number << 16) | pe->pe_number;
+ if (iommu_init_table(tbl, pe->phb->hose->node, res_start, res_end))
+ rc = pnv_pci_ioda2_set_window(&pe->table_group, 0, tbl);
+ else
+diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
+index 78f3f74c7056b..cbe0989239bf2 100644
+--- a/arch/powerpc/platforms/pseries/mobility.c
++++ b/arch/powerpc/platforms/pseries/mobility.c
+@@ -48,6 +48,39 @@ struct update_props_workarea {
+ #define MIGRATION_SCOPE (1)
+ #define PRRN_SCOPE -2
+
++#ifdef CONFIG_PPC_WATCHDOG
++static unsigned int nmi_wd_lpm_factor = 200;
++
++#ifdef CONFIG_SYSCTL
++static struct ctl_table nmi_wd_lpm_factor_ctl_table[] = {
++ {
++ .procname = "nmi_wd_lpm_factor",
++ .data = &nmi_wd_lpm_factor,
++ .maxlen = sizeof(int),
++ .mode = 0644,
++ .proc_handler = proc_douintvec_minmax,
++ },
++ {}
++};
++static struct ctl_table nmi_wd_lpm_factor_sysctl_root[] = {
++ {
++ .procname = "kernel",
++ .mode = 0555,
++ .child = nmi_wd_lpm_factor_ctl_table,
++ },
++ {}
++};
++
++static int __init register_nmi_wd_lpm_factor_sysctl(void)
++{
++ register_sysctl_table(nmi_wd_lpm_factor_sysctl_root);
++
++ return 0;
++}
++device_initcall(register_nmi_wd_lpm_factor_sysctl);
++#endif /* CONFIG_SYSCTL */
++#endif /* CONFIG_PPC_WATCHDOG */
++
+ static int mobility_rtas_call(int token, char *buf, s32 scope)
+ {
+ int rc;
+@@ -665,19 +698,29 @@ static int pseries_suspend(u64 handle)
+ static int pseries_migrate_partition(u64 handle)
+ {
+ int ret;
++ unsigned int factor = 0;
+
++#ifdef CONFIG_PPC_WATCHDOG
++ factor = nmi_wd_lpm_factor;
++#endif
+ ret = wait_for_vasi_session_suspending(handle);
+ if (ret)
+ return ret;
+
+ vas_migration_handler(VAS_SUSPEND);
+
++ if (factor)
++ watchdog_nmi_set_timeout_pct(factor);
++
+ ret = pseries_suspend(handle);
+ if (ret == 0)
+ post_mobility_fixup();
+ else
+ pseries_cancel_migration(handle, ret);
+
++ if (factor)
++ watchdog_nmi_set_timeout_pct(0);
++
+ vas_migration_handler(VAS_RESUME);
+
+ return ret;
+diff --git a/arch/riscv/boot/dts/canaan/k210.dtsi b/arch/riscv/boot/dts/canaan/k210.dtsi
+index 44d3385147612..ec944d1537dc4 100644
+--- a/arch/riscv/boot/dts/canaan/k210.dtsi
++++ b/arch/riscv/boot/dts/canaan/k210.dtsi
+@@ -65,6 +65,18 @@
+ compatible = "riscv,cpu-intc";
+ };
+ };
++
++ cpu-map {
++ cluster0 {
++ core0 {
++ cpu = <&cpu0>;
++ };
++
++ core1 {
++ cpu = <&cpu1>;
++ };
++ };
++ };
+ };
+
+ sram: memory@80000000 {
+diff --git a/arch/riscv/boot/dts/sifive/fu740-c000.dtsi b/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
+index 7b77c13496d83..43bed6c0a84fe 100644
+--- a/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
++++ b/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
+@@ -134,6 +134,30 @@
+ interrupt-controller;
+ };
+ };
++
++ cpu-map {
++ cluster0 {
++ core0 {
++ cpu = <&cpu0>;
++ };
++
++ core1 {
++ cpu = <&cpu1>;
++ };
++
++ core2 {
++ cpu = <&cpu2>;
++ };
++
++ core3 {
++ cpu = <&cpu3>;
++ };
++
++ core4 {
++ cpu = <&cpu4>;
++ };
++ };
++ };
+ };
+ soc {
+ #address-cells = <2>;
+diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
+index 9c0194f176fc0..571556bb9261a 100644
+--- a/arch/riscv/kernel/sys_riscv.c
++++ b/arch/riscv/kernel/sys_riscv.c
+@@ -18,9 +18,8 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
+ if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
+ return -EINVAL;
+
+- if ((prot & PROT_WRITE) && (prot & PROT_EXEC))
+- if (unlikely(!(prot & PROT_READ)))
+- return -EINVAL;
++ if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
++ return -EINVAL;
+
+ return ksys_mmap_pgoff(addr, len, prot, flags, fd,
+ offset >> (PAGE_SHIFT - page_shift_offset));
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index b404265092447..39d0f8bba4b40 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -16,6 +16,7 @@
+ #include <linux/mm.h>
+ #include <linux/module.h>
+ #include <linux/irq.h>
++#include <linux/kexec.h>
+
+ #include <asm/asm-prototypes.h>
+ #include <asm/bug.h>
+@@ -44,6 +45,9 @@ void die(struct pt_regs *regs, const char *str)
+
+ ret = notify_die(DIE_OOPS, str, regs, 0, regs->cause, SIGSEGV);
+
++ if (regs && kexec_should_crash(current))
++ crash_kexec(regs);
++
+ bust_spinlocks(0);
+ add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+ spin_unlock_irq(&die_lock);
+diff --git a/arch/um/os-Linux/skas/process.c b/arch/um/os-Linux/skas/process.c
+index c316c993a9491..b24db6017ded6 100644
+--- a/arch/um/os-Linux/skas/process.c
++++ b/arch/um/os-Linux/skas/process.c
+@@ -5,6 +5,7 @@
+ */
+
+ #include <stdlib.h>
++#include <stdbool.h>
+ #include <unistd.h>
+ #include <sched.h>
+ #include <errno.h>
+@@ -707,10 +708,24 @@ void halt_skas(void)
+ UML_LONGJMP(&initial_jmpbuf, INIT_JMP_HALT);
+ }
+
++static bool noreboot;
++
++static int __init noreboot_cmd_param(char *str, int *add)
++{
++ noreboot = true;
++ return 0;
++}
++
++__uml_setup("noreboot", noreboot_cmd_param,
++"noreboot\n"
++" Rather than rebooting, exit always, akin to QEMU's -no-reboot option.\n"
++" This is useful if you're using CONFIG_PANIC_TIMEOUT in order to catch\n"
++" crashes in CI\n");
++
+ void reboot_skas(void)
+ {
+ block_signals_trace();
+- UML_LONGJMP(&initial_jmpbuf, INIT_JMP_REBOOT);
++ UML_LONGJMP(&initial_jmpbuf, noreboot ? INIT_JMP_HALT : INIT_JMP_REBOOT);
+ }
+
+ void __switch_mm(struct mm_id *mm_idp)
+diff --git a/arch/x86/include/asm/ibt.h b/arch/x86/include/asm/ibt.h
+index 689880eca9bab..9b08082a5d9f5 100644
+--- a/arch/x86/include/asm/ibt.h
++++ b/arch/x86/include/asm/ibt.h
+@@ -31,6 +31,16 @@
+
+ #define __noendbr __attribute__((nocf_check))
+
++/*
++ * Create a dummy function pointer reference to prevent objtool from marking
++ * the function as needing to be "sealed" (i.e. ENDBR converted to NOP by
++ * apply_ibt_endbr()).
++ */
++#define IBT_NOSEAL(fname) \
++ ".pushsection .discard.ibt_endbr_noseal\n\t" \
++ _ASM_PTR fname "\n\t" \
++ ".popsection\n\t"
++
+ static inline __attribute_const__ u32 gen_endbr(void)
+ {
+ u32 endbr;
+@@ -84,6 +94,7 @@ extern __noendbr void ibt_restore(u64 save);
+ #ifndef __ASSEMBLY__
+
+ #define ASM_ENDBR
++#define IBT_NOSEAL(name)
+
+ #define __noendbr
+
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 74167dc5f55ec..4c3c27b6aea3b 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -505,7 +505,7 @@ static void kprobe_emulate_jcc(struct kprobe *p, struct pt_regs *regs)
+ match = ((regs->flags & X86_EFLAGS_SF) >> X86_EFLAGS_SF_BIT) ^
+ ((regs->flags & X86_EFLAGS_OF) >> X86_EFLAGS_OF_BIT);
+ if (p->ainsn.jcc.type >= 0xe)
+- match = match && (regs->flags & X86_EFLAGS_ZF);
++ match = match || (regs->flags & X86_EFLAGS_ZF);
+ }
+ __kprobe_emulate_jmp(p, regs, (match && !invert) || (!match && invert));
+ }
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index aa907cec09187..09fa8a94807bf 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -316,7 +316,8 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
+ ".align " __stringify(FASTOP_SIZE) " \n\t" \
+ ".type " name ", @function \n\t" \
+ name ":\n\t" \
+- ASM_ENDBR
++ ASM_ENDBR \
++ IBT_NOSEAL(name)
+
+ #define FOP_FUNC(name) \
+ __FOP_FUNC(#name)
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 39c5246964a91..0fe690ebc269b 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -645,7 +645,7 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
+ pages++;
+ spin_lock(&init_mm.page_table_lock);
+
+- prot = __pgprot(pgprot_val(prot) | __PAGE_KERNEL_LARGE);
++ prot = __pgprot(pgprot_val(prot) | _PAGE_PSE);
+
+ set_pte_init((pte_t *)pud,
+ pfn_pte((paddr & PUD_MASK) >> PAGE_SHIFT,
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 93d9d60980fb5..1eb13d57a946f 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2568,7 +2568,7 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
+ break;
+ case BLK_STS_RESOURCE:
+ case BLK_STS_DEV_RESOURCE:
+- blk_mq_request_bypass_insert(rq, false, last);
++ blk_mq_request_bypass_insert(rq, false, true);
+ blk_mq_commit_rqs(hctx, &queued, from_schedule);
+ return;
+ default:
+diff --git a/drivers/acpi/pci_mcfg.c b/drivers/acpi/pci_mcfg.c
+index 53cab975f612c..63b98eae5e75e 100644
+--- a/drivers/acpi/pci_mcfg.c
++++ b/drivers/acpi/pci_mcfg.c
+@@ -41,6 +41,8 @@ struct mcfg_fixup {
+ static struct mcfg_fixup mcfg_quirks[] = {
+ /* { OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */
+
++#ifdef CONFIG_ARM64
++
+ #define AL_ECAM(table_id, rev, seg, ops) \
+ { "AMAZON", table_id, rev, seg, MCFG_BUS_ANY, ops }
+
+@@ -169,6 +171,7 @@ static struct mcfg_fixup mcfg_quirks[] = {
+ ALTRA_ECAM_QUIRK(1, 13),
+ ALTRA_ECAM_QUIRK(1, 14),
+ ALTRA_ECAM_QUIRK(1, 15),
++#endif /* ARM64 */
+ };
+
+ static char mcfg_oem_id[ACPI_OEM_ID_SIZE];
+diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
+index 701f61c01359f..3ad2823eb6f84 100644
+--- a/drivers/acpi/pptt.c
++++ b/drivers/acpi/pptt.c
+@@ -532,21 +532,37 @@ static int topology_get_acpi_cpu_tag(struct acpi_table_header *table,
+ return -ENOENT;
+ }
+
++
++static struct acpi_table_header *acpi_get_pptt(void)
++{
++ static struct acpi_table_header *pptt;
++ acpi_status status;
++
++ /*
++ * PPTT will be used at runtime on every CPU hotplug in path, so we
++ * don't need to call acpi_put_table() to release the table mapping.
++ */
++ if (!pptt) {
++ status = acpi_get_table(ACPI_SIG_PPTT, 0, &pptt);
++ if (ACPI_FAILURE(status))
++ acpi_pptt_warn_missing();
++ }
++
++ return pptt;
++}
++
+ static int find_acpi_cpu_topology_tag(unsigned int cpu, int level, int flag)
+ {
+ struct acpi_table_header *table;
+- acpi_status status;
+ int retval;
+
+- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+- if (ACPI_FAILURE(status)) {
+- acpi_pptt_warn_missing();
++ table = acpi_get_pptt();
++ if (!table)
+ return -ENOENT;
+- }
++
+ retval = topology_get_acpi_cpu_tag(table, cpu, level, flag);
+ pr_debug("Topology Setup ACPI CPU %d, level %d ret = %d\n",
+ cpu, level, retval);
+- acpi_put_table(table);
+
+ return retval;
+ }
+@@ -567,16 +583,13 @@ static int find_acpi_cpu_topology_tag(unsigned int cpu, int level, int flag)
+ static int check_acpi_cpu_flag(unsigned int cpu, int rev, u32 flag)
+ {
+ struct acpi_table_header *table;
+- acpi_status status;
+ u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+ struct acpi_pptt_processor *cpu_node = NULL;
+ int ret = -ENOENT;
+
+- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+- if (ACPI_FAILURE(status)) {
+- acpi_pptt_warn_missing();
+- return ret;
+- }
++ table = acpi_get_pptt();
++ if (!table)
++ return -ENOENT;
+
+ if (table->revision >= rev)
+ cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+@@ -584,8 +597,6 @@ static int check_acpi_cpu_flag(unsigned int cpu, int rev, u32 flag)
+ if (cpu_node)
+ ret = (cpu_node->flags & flag) != 0;
+
+- acpi_put_table(table);
+-
+ return ret;
+ }
+
+@@ -604,18 +615,15 @@ int acpi_find_last_cache_level(unsigned int cpu)
+ u32 acpi_cpu_id;
+ struct acpi_table_header *table;
+ int number_of_levels = 0;
+- acpi_status status;
++
++ table = acpi_get_pptt();
++ if (!table)
++ return -ENOENT;
+
+ pr_debug("Cache Setup find last level CPU=%d\n", cpu);
+
+ acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+- if (ACPI_FAILURE(status)) {
+- acpi_pptt_warn_missing();
+- } else {
+- number_of_levels = acpi_find_cache_levels(table, acpi_cpu_id);
+- acpi_put_table(table);
+- }
++ number_of_levels = acpi_find_cache_levels(table, acpi_cpu_id);
+ pr_debug("Cache Setup find last level level=%d\n", number_of_levels);
+
+ return number_of_levels;
+@@ -637,20 +645,16 @@ int acpi_find_last_cache_level(unsigned int cpu)
+ int cache_setup_acpi(unsigned int cpu)
+ {
+ struct acpi_table_header *table;
+- acpi_status status;
+
+- pr_debug("Cache Setup ACPI CPU %d\n", cpu);
+-
+- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+- if (ACPI_FAILURE(status)) {
+- acpi_pptt_warn_missing();
++ table = acpi_get_pptt();
++ if (!table)
+ return -ENOENT;
+- }
++
++ pr_debug("Cache Setup ACPI CPU %d\n", cpu);
+
+ cache_setup_acpi_cpu(table, cpu);
+- acpi_put_table(table);
+
+- return status;
++ return 0;
+ }
+
+ /**
+@@ -766,50 +770,38 @@ int find_acpi_cpu_topology_package(unsigned int cpu)
+ int find_acpi_cpu_topology_cluster(unsigned int cpu)
+ {
+ struct acpi_table_header *table;
+- acpi_status status;
+ struct acpi_pptt_processor *cpu_node, *cluster_node;
+ u32 acpi_cpu_id;
+ int retval;
+ int is_thread;
+
+- status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
+- if (ACPI_FAILURE(status)) {
+- acpi_pptt_warn_missing();
++ table = acpi_get_pptt();
++ if (!table)
+ return -ENOENT;
+- }
+
+ acpi_cpu_id = get_acpi_id_for_cpu(cpu);
+ cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
+- if (cpu_node == NULL || !cpu_node->parent) {
+- retval = -ENOENT;
+- goto put_table;
+- }
++ if (!cpu_node || !cpu_node->parent)
++ return -ENOENT;
+
+ is_thread = cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_IS_THREAD;
+ cluster_node = fetch_pptt_node(table, cpu_node->parent);
+- if (cluster_node == NULL) {
+- retval = -ENOENT;
+- goto put_table;
+- }
++ if (!cluster_node)
++ return -ENOENT;
++
+ if (is_thread) {
+- if (!cluster_node->parent) {
+- retval = -ENOENT;
+- goto put_table;
+- }
++ if (!cluster_node->parent)
++ return -ENOENT;
++
+ cluster_node = fetch_pptt_node(table, cluster_node->parent);
+- if (cluster_node == NULL) {
+- retval = -ENOENT;
+- goto put_table;
+- }
++ if (!cluster_node)
++ return -ENOENT;
+ }
+ if (cluster_node->flags & ACPI_PPTT_ACPI_PROCESSOR_ID_VALID)
+ retval = cluster_node->acpi_processor_id;
+ else
+ retval = ACPI_PTR_DIFF(cluster_node, table);
+
+-put_table:
+- acpi_put_table(table);
+-
+ return retval;
+ }
+
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index d3173811614ec..bc9a645f8bb77 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -155,10 +155,10 @@ static bool acpi_nondev_subnode_ok(acpi_handle scope,
+ return acpi_nondev_subnode_data_ok(handle, link, list, parent);
+ }
+
+-static int acpi_add_nondev_subnodes(acpi_handle scope,
+- const union acpi_object *links,
+- struct list_head *list,
+- struct fwnode_handle *parent)
++static bool acpi_add_nondev_subnodes(acpi_handle scope,
++ const union acpi_object *links,
++ struct list_head *list,
++ struct fwnode_handle *parent)
+ {
+ bool ret = false;
+ int i;
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 3307ed45fe4d0..ceb0c64cb670a 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -2122,6 +2122,7 @@ const char *ata_get_cmd_name(u8 command)
+ { ATA_CMD_WRITE_QUEUED_FUA_EXT, "WRITE DMA QUEUED FUA EXT" },
+ { ATA_CMD_FPDMA_READ, "READ FPDMA QUEUED" },
+ { ATA_CMD_FPDMA_WRITE, "WRITE FPDMA QUEUED" },
++ { ATA_CMD_NCQ_NON_DATA, "NCQ NON-DATA" },
+ { ATA_CMD_FPDMA_SEND, "SEND FPDMA QUEUED" },
+ { ATA_CMD_FPDMA_RECV, "RECEIVE FPDMA QUEUED" },
+ { ATA_CMD_PIO_READ, "READ SECTOR(S)" },
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index 81ce81a75fc67..681cb3786794d 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -3752,6 +3752,7 @@ static void __exit idt77252_exit(void)
+ card = idt77252_chain;
+ dev = card->atmdev;
+ idt77252_chain = card->next;
++ del_timer_sync(&card->tst_timer);
+
+ if (dev->phy->stop)
+ dev->phy->stop(dev);
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 6fc7850c2b0a0..d756423e0059a 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -101,6 +101,14 @@ static inline blk_status_t virtblk_result(struct virtblk_req *vbr)
+ }
+ }
+
++static inline struct virtio_blk_vq *get_virtio_blk_vq(struct blk_mq_hw_ctx *hctx)
++{
++ struct virtio_blk *vblk = hctx->queue->queuedata;
++ struct virtio_blk_vq *vq = &vblk->vqs[hctx->queue_num];
++
++ return vq;
++}
++
+ static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr)
+ {
+ struct scatterlist hdr, status, *sgs[3];
+@@ -416,7 +424,7 @@ static void virtio_queue_rqs(struct request **rqlist)
+ struct request *requeue_list = NULL;
+
+ rq_list_for_each_safe(rqlist, req, next) {
+- struct virtio_blk_vq *vq = req->mq_hctx->driver_data;
++ struct virtio_blk_vq *vq = get_virtio_blk_vq(req->mq_hctx);
+ bool kick;
+
+ if (!virtblk_prep_rq_batch(req)) {
+@@ -837,7 +845,7 @@ static void virtblk_complete_batch(struct io_comp_batch *iob)
+ static int virtblk_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
+ {
+ struct virtio_blk *vblk = hctx->queue->queuedata;
+- struct virtio_blk_vq *vq = hctx->driver_data;
++ struct virtio_blk_vq *vq = get_virtio_blk_vq(hctx);
+ struct virtblk_req *vbr;
+ unsigned long flags;
+ unsigned int len;
+@@ -862,22 +870,10 @@ static int virtblk_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
+ return found;
+ }
+
+-static int virtblk_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
+- unsigned int hctx_idx)
+-{
+- struct virtio_blk *vblk = data;
+- struct virtio_blk_vq *vq = &vblk->vqs[hctx_idx];
+-
+- WARN_ON(vblk->tag_set.tags[hctx_idx] != hctx->tags);
+- hctx->driver_data = vq;
+- return 0;
+-}
+-
+ static const struct blk_mq_ops virtio_mq_ops = {
+ .queue_rq = virtio_queue_rq,
+ .queue_rqs = virtio_queue_rqs,
+ .commit_rqs = virtio_commit_rqs,
+- .init_hctx = virtblk_init_hctx,
+ .complete = virtblk_request_done,
+ .map_queues = virtblk_map_queues,
+ .poll = virtblk_poll,
+diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
+index 052aa3f65514e..0916de952e091 100644
+--- a/drivers/block/zram/zcomp.c
++++ b/drivers/block/zram/zcomp.c
+@@ -63,12 +63,6 @@ static int zcomp_strm_init(struct zcomp_strm *zstrm, struct zcomp *comp)
+
+ bool zcomp_available_algorithm(const char *comp)
+ {
+- int i;
+-
+- i = sysfs_match_string(backends, comp);
+- if (i >= 0)
+- return true;
+-
+ /*
+ * Crypto does not ignore a trailing new line symbol,
+ * so make sure you don't supply a string containing
+@@ -217,6 +211,11 @@ struct zcomp *zcomp_create(const char *compress)
+ struct zcomp *comp;
+ int error;
+
++ /*
++ * Crypto API will execute /sbin/modprobe if the compression module
++ * is not loaded yet. We must do it here, otherwise we are about to
++ * call /sbin/modprobe under CPU hot-plug lock.
++ */
+ if (!zcomp_available_algorithm(compress))
+ return ERR_PTR(-EINVAL);
+
+diff --git a/drivers/clk/imx/clk-imx93.c b/drivers/clk/imx/clk-imx93.c
+index 26885bd3971c4..f5c9fa40491c5 100644
+--- a/drivers/clk/imx/clk-imx93.c
++++ b/drivers/clk/imx/clk-imx93.c
+@@ -160,7 +160,7 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_SEMA2_GATE, "sema2", "bus_wakeup_root", 0x8480, },
+ { IMX93_CLK_MU_A_GATE, "mu_a", "bus_aon_root", 0x84c0, },
+ { IMX93_CLK_MU_B_GATE, "mu_b", "bus_aon_root", 0x8500, },
+- { IMX93_CLK_EDMA1_GATE, "edma1", "wakeup_axi_root", 0x8540, },
++ { IMX93_CLK_EDMA1_GATE, "edma1", "m33_root", 0x8540, },
+ { IMX93_CLK_EDMA2_GATE, "edma2", "wakeup_axi_root", 0x8580, },
+ { IMX93_CLK_FLEXSPI1_GATE, "flexspi", "flexspi_root", 0x8640, },
+ { IMX93_CLK_GPIO1_GATE, "gpio1", "m33_root", 0x8880, },
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 4406cf609aae7..288692f0ea390 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -1439,7 +1439,7 @@ const struct clk_ops clk_alpha_pll_postdiv_fabia_ops = {
+ EXPORT_SYMBOL_GPL(clk_alpha_pll_postdiv_fabia_ops);
+
+ /**
+- * clk_lucid_pll_configure - configure the lucid pll
++ * clk_trion_pll_configure - configure the trion pll
+ *
+ * @pll: clk alpha pll
+ * @regmap: register map
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 2c2ecfc5e61f5..d6d5defb82c9f 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -662,6 +662,7 @@ static struct clk_branch gcc_sleep_clk_src = {
+ },
+ .num_parents = 1,
+ .ops = &clk_branch2_ops,
++ .flags = CLK_IS_CRITICAL,
+ },
+ },
+ };
+diff --git a/drivers/clk/ti/clk-44xx.c b/drivers/clk/ti/clk-44xx.c
+index d078e5d73ed94..868bc7af21b0b 100644
+--- a/drivers/clk/ti/clk-44xx.c
++++ b/drivers/clk/ti/clk-44xx.c
+@@ -56,7 +56,7 @@ static const struct omap_clkctrl_bit_data omap4_aess_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_func_dmic_abe_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0018:26",
++ "abe-clkctrl:0018:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -76,7 +76,7 @@ static const struct omap_clkctrl_bit_data omap4_dmic_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_func_mcasp_abe_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0020:26",
++ "abe-clkctrl:0020:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -89,7 +89,7 @@ static const struct omap_clkctrl_bit_data omap4_mcasp_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_func_mcbsp1_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0028:26",
++ "abe-clkctrl:0028:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -102,7 +102,7 @@ static const struct omap_clkctrl_bit_data omap4_mcbsp1_bit_data[] __initconst =
+ };
+
+ static const char * const omap4_func_mcbsp2_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0030:26",
++ "abe-clkctrl:0030:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -115,7 +115,7 @@ static const struct omap_clkctrl_bit_data omap4_mcbsp2_bit_data[] __initconst =
+ };
+
+ static const char * const omap4_func_mcbsp3_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0038:26",
++ "abe-clkctrl:0038:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -183,18 +183,18 @@ static const struct omap_clkctrl_bit_data omap4_timer8_bit_data[] __initconst =
+
+ static const struct omap_clkctrl_reg_data omap4_abe_clkctrl_regs[] __initconst = {
+ { OMAP4_L4_ABE_CLKCTRL, NULL, 0, "ocp_abe_iclk" },
+- { OMAP4_AESS_CLKCTRL, omap4_aess_bit_data, CLKF_SW_SUP, "abe_cm:clk:0008:24" },
++ { OMAP4_AESS_CLKCTRL, omap4_aess_bit_data, CLKF_SW_SUP, "abe-clkctrl:0008:24" },
+ { OMAP4_MCPDM_CLKCTRL, NULL, CLKF_SW_SUP, "pad_clks_ck" },
+- { OMAP4_DMIC_CLKCTRL, omap4_dmic_bit_data, CLKF_SW_SUP, "abe_cm:clk:0018:24" },
+- { OMAP4_MCASP_CLKCTRL, omap4_mcasp_bit_data, CLKF_SW_SUP, "abe_cm:clk:0020:24" },
+- { OMAP4_MCBSP1_CLKCTRL, omap4_mcbsp1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0028:24" },
+- { OMAP4_MCBSP2_CLKCTRL, omap4_mcbsp2_bit_data, CLKF_SW_SUP, "abe_cm:clk:0030:24" },
+- { OMAP4_MCBSP3_CLKCTRL, omap4_mcbsp3_bit_data, CLKF_SW_SUP, "abe_cm:clk:0038:24" },
+- { OMAP4_SLIMBUS1_CLKCTRL, omap4_slimbus1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0040:8" },
+- { OMAP4_TIMER5_CLKCTRL, omap4_timer5_bit_data, CLKF_SW_SUP, "abe_cm:clk:0048:24" },
+- { OMAP4_TIMER6_CLKCTRL, omap4_timer6_bit_data, CLKF_SW_SUP, "abe_cm:clk:0050:24" },
+- { OMAP4_TIMER7_CLKCTRL, omap4_timer7_bit_data, CLKF_SW_SUP, "abe_cm:clk:0058:24" },
+- { OMAP4_TIMER8_CLKCTRL, omap4_timer8_bit_data, CLKF_SW_SUP, "abe_cm:clk:0060:24" },
++ { OMAP4_DMIC_CLKCTRL, omap4_dmic_bit_data, CLKF_SW_SUP, "abe-clkctrl:0018:24" },
++ { OMAP4_MCASP_CLKCTRL, omap4_mcasp_bit_data, CLKF_SW_SUP, "abe-clkctrl:0020:24" },
++ { OMAP4_MCBSP1_CLKCTRL, omap4_mcbsp1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0028:24" },
++ { OMAP4_MCBSP2_CLKCTRL, omap4_mcbsp2_bit_data, CLKF_SW_SUP, "abe-clkctrl:0030:24" },
++ { OMAP4_MCBSP3_CLKCTRL, omap4_mcbsp3_bit_data, CLKF_SW_SUP, "abe-clkctrl:0038:24" },
++ { OMAP4_SLIMBUS1_CLKCTRL, omap4_slimbus1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0040:8" },
++ { OMAP4_TIMER5_CLKCTRL, omap4_timer5_bit_data, CLKF_SW_SUP, "abe-clkctrl:0048:24" },
++ { OMAP4_TIMER6_CLKCTRL, omap4_timer6_bit_data, CLKF_SW_SUP, "abe-clkctrl:0050:24" },
++ { OMAP4_TIMER7_CLKCTRL, omap4_timer7_bit_data, CLKF_SW_SUP, "abe-clkctrl:0058:24" },
++ { OMAP4_TIMER8_CLKCTRL, omap4_timer8_bit_data, CLKF_SW_SUP, "abe-clkctrl:0060:24" },
+ { OMAP4_WD_TIMER3_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { 0 },
+ };
+@@ -287,7 +287,7 @@ static const struct omap_clkctrl_bit_data omap4_fdif_bit_data[] __initconst = {
+
+ static const struct omap_clkctrl_reg_data omap4_iss_clkctrl_regs[] __initconst = {
+ { OMAP4_ISS_CLKCTRL, omap4_iss_bit_data, CLKF_SW_SUP, "ducati_clk_mux_ck" },
+- { OMAP4_FDIF_CLKCTRL, omap4_fdif_bit_data, CLKF_SW_SUP, "iss_cm:clk:0008:24" },
++ { OMAP4_FDIF_CLKCTRL, omap4_fdif_bit_data, CLKF_SW_SUP, "iss-clkctrl:0008:24" },
+ { 0 },
+ };
+
+@@ -320,7 +320,7 @@ static const struct omap_clkctrl_bit_data omap4_dss_core_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l3_dss_clkctrl_regs[] __initconst = {
+- { OMAP4_DSS_CORE_CLKCTRL, omap4_dss_core_bit_data, CLKF_SW_SUP, "l3_dss_cm:clk:0000:8" },
++ { OMAP4_DSS_CORE_CLKCTRL, omap4_dss_core_bit_data, CLKF_SW_SUP, "l3-dss-clkctrl:0000:8" },
+ { 0 },
+ };
+
+@@ -336,7 +336,7 @@ static const struct omap_clkctrl_bit_data omap4_gpu_bit_data[] __initconst = {
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l3_gfx_clkctrl_regs[] __initconst = {
+- { OMAP4_GPU_CLKCTRL, omap4_gpu_bit_data, CLKF_SW_SUP, "l3_gfx_cm:clk:0000:24" },
++ { OMAP4_GPU_CLKCTRL, omap4_gpu_bit_data, CLKF_SW_SUP, "l3-gfx-clkctrl:0000:24" },
+ { 0 },
+ };
+
+@@ -372,12 +372,12 @@ static const struct omap_clkctrl_bit_data omap4_hsi_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_usb_host_hs_utmi_p1_clk_parents[] __initconst = {
+- "l3_init_cm:clk:0038:24",
++ "l3-init-clkctrl:0038:24",
+ NULL,
+ };
+
+ static const char * const omap4_usb_host_hs_utmi_p2_clk_parents[] __initconst = {
+- "l3_init_cm:clk:0038:25",
++ "l3-init-clkctrl:0038:25",
+ NULL,
+ };
+
+@@ -418,7 +418,7 @@ static const struct omap_clkctrl_bit_data omap4_usb_host_hs_bit_data[] __initcon
+ };
+
+ static const char * const omap4_usb_otg_hs_xclk_parents[] __initconst = {
+- "l3_init_cm:clk:0040:24",
++ "l3-init-clkctrl:0040:24",
+ NULL,
+ };
+
+@@ -452,14 +452,14 @@ static const struct omap_clkctrl_bit_data omap4_ocp2scp_usb_phy_bit_data[] __ini
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l3_init_clkctrl_regs[] __initconst = {
+- { OMAP4_MMC1_CLKCTRL, omap4_mmc1_bit_data, CLKF_SW_SUP, "l3_init_cm:clk:0008:24" },
+- { OMAP4_MMC2_CLKCTRL, omap4_mmc2_bit_data, CLKF_SW_SUP, "l3_init_cm:clk:0010:24" },
+- { OMAP4_HSI_CLKCTRL, omap4_hsi_bit_data, CLKF_HW_SUP, "l3_init_cm:clk:0018:24" },
++ { OMAP4_MMC1_CLKCTRL, omap4_mmc1_bit_data, CLKF_SW_SUP, "l3-init-clkctrl:0008:24" },
++ { OMAP4_MMC2_CLKCTRL, omap4_mmc2_bit_data, CLKF_SW_SUP, "l3-init-clkctrl:0010:24" },
++ { OMAP4_HSI_CLKCTRL, omap4_hsi_bit_data, CLKF_HW_SUP, "l3-init-clkctrl:0018:24" },
+ { OMAP4_USB_HOST_HS_CLKCTRL, omap4_usb_host_hs_bit_data, CLKF_SW_SUP, "init_60m_fclk" },
+ { OMAP4_USB_OTG_HS_CLKCTRL, omap4_usb_otg_hs_bit_data, CLKF_HW_SUP, "l3_div_ck" },
+ { OMAP4_USB_TLL_HS_CLKCTRL, omap4_usb_tll_hs_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+ { OMAP4_USB_HOST_FS_CLKCTRL, NULL, CLKF_SW_SUP, "func_48mc_fclk" },
+- { OMAP4_OCP2SCP_USB_PHY_CLKCTRL, omap4_ocp2scp_usb_phy_bit_data, CLKF_HW_SUP, "l3_init_cm:clk:00c0:8" },
++ { OMAP4_OCP2SCP_USB_PHY_CLKCTRL, omap4_ocp2scp_usb_phy_bit_data, CLKF_HW_SUP, "l3-init-clkctrl:00c0:8" },
+ { 0 },
+ };
+
+@@ -530,7 +530,7 @@ static const struct omap_clkctrl_bit_data omap4_gpio6_bit_data[] __initconst = {
+ };
+
+ static const char * const omap4_per_mcbsp4_gfclk_parents[] __initconst = {
+- "l4_per_cm:clk:00c0:26",
++ "l4-per-clkctrl:00c0:26",
+ "pad_clks_ck",
+ NULL,
+ };
+@@ -570,12 +570,12 @@ static const struct omap_clkctrl_bit_data omap4_slimbus2_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap4_l4_per_clkctrl_regs[] __initconst = {
+- { OMAP4_TIMER10_CLKCTRL, omap4_timer10_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0008:24" },
+- { OMAP4_TIMER11_CLKCTRL, omap4_timer11_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0010:24" },
+- { OMAP4_TIMER2_CLKCTRL, omap4_timer2_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0018:24" },
+- { OMAP4_TIMER3_CLKCTRL, omap4_timer3_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0020:24" },
+- { OMAP4_TIMER4_CLKCTRL, omap4_timer4_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0028:24" },
+- { OMAP4_TIMER9_CLKCTRL, omap4_timer9_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0030:24" },
++ { OMAP4_TIMER10_CLKCTRL, omap4_timer10_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0008:24" },
++ { OMAP4_TIMER11_CLKCTRL, omap4_timer11_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0010:24" },
++ { OMAP4_TIMER2_CLKCTRL, omap4_timer2_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0018:24" },
++ { OMAP4_TIMER3_CLKCTRL, omap4_timer3_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0020:24" },
++ { OMAP4_TIMER4_CLKCTRL, omap4_timer4_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0028:24" },
++ { OMAP4_TIMER9_CLKCTRL, omap4_timer9_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0030:24" },
+ { OMAP4_ELM_CLKCTRL, NULL, 0, "l4_div_ck" },
+ { OMAP4_GPIO2_CLKCTRL, omap4_gpio2_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+ { OMAP4_GPIO3_CLKCTRL, omap4_gpio3_bit_data, CLKF_HW_SUP, "l4_div_ck" },
+@@ -588,14 +588,14 @@ static const struct omap_clkctrl_reg_data omap4_l4_per_clkctrl_regs[] __initcons
+ { OMAP4_I2C3_CLKCTRL, NULL, CLKF_SW_SUP, "func_96m_fclk" },
+ { OMAP4_I2C4_CLKCTRL, NULL, CLKF_SW_SUP, "func_96m_fclk" },
+ { OMAP4_L4_PER_CLKCTRL, NULL, 0, "l4_div_ck" },
+- { OMAP4_MCBSP4_CLKCTRL, omap4_mcbsp4_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:00c0:24" },
++ { OMAP4_MCBSP4_CLKCTRL, omap4_mcbsp4_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:00c0:24" },
+ { OMAP4_MCSPI1_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MCSPI2_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MCSPI3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MCSPI4_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MMC3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_MMC4_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+- { OMAP4_SLIMBUS2_CLKCTRL, omap4_slimbus2_bit_data, CLKF_SW_SUP, "l4_per_cm:clk:0118:8" },
++ { OMAP4_SLIMBUS2_CLKCTRL, omap4_slimbus2_bit_data, CLKF_SW_SUP, "l4-per-clkctrl:0118:8" },
+ { OMAP4_UART1_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_UART2_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+ { OMAP4_UART3_CLKCTRL, NULL, CLKF_SW_SUP, "func_48m_fclk" },
+@@ -630,7 +630,7 @@ static const struct omap_clkctrl_reg_data omap4_l4_wkup_clkctrl_regs[] __initcon
+ { OMAP4_L4_WKUP_CLKCTRL, NULL, 0, "l4_wkup_clk_mux_ck" },
+ { OMAP4_WD_TIMER2_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { OMAP4_GPIO1_CLKCTRL, omap4_gpio1_bit_data, CLKF_HW_SUP, "l4_wkup_clk_mux_ck" },
+- { OMAP4_TIMER1_CLKCTRL, omap4_timer1_bit_data, CLKF_SW_SUP, "l4_wkup_cm:clk:0020:24" },
++ { OMAP4_TIMER1_CLKCTRL, omap4_timer1_bit_data, CLKF_SW_SUP, "l4-wkup-clkctrl:0020:24" },
+ { OMAP4_COUNTER_32K_CLKCTRL, NULL, 0, "sys_32k_ck" },
+ { OMAP4_KBD_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { 0 },
+@@ -644,7 +644,7 @@ static const char * const omap4_pmd_stm_clock_mux_ck_parents[] __initconst = {
+ };
+
+ static const char * const omap4_trace_clk_div_div_ck_parents[] __initconst = {
+- "emu_sys_cm:clk:0000:22",
++ "emu-sys-clkctrl:0000:22",
+ NULL,
+ };
+
+@@ -662,7 +662,7 @@ static const struct omap_clkctrl_div_data omap4_trace_clk_div_div_ck_data __init
+ };
+
+ static const char * const omap4_stm_clk_div_ck_parents[] __initconst = {
+- "emu_sys_cm:clk:0000:20",
++ "emu-sys-clkctrl:0000:20",
+ NULL,
+ };
+
+@@ -716,73 +716,73 @@ static struct ti_dt_clk omap44xx_clks[] = {
+ * hwmod support. Once hwmod is removed, these can be removed
+ * also.
+ */
+- DT_CLK(NULL, "aess_fclk", "abe_cm:0008:24"),
+- DT_CLK(NULL, "cm2_dm10_mux", "l4_per_cm:0008:24"),
+- DT_CLK(NULL, "cm2_dm11_mux", "l4_per_cm:0010:24"),
+- DT_CLK(NULL, "cm2_dm2_mux", "l4_per_cm:0018:24"),
+- DT_CLK(NULL, "cm2_dm3_mux", "l4_per_cm:0020:24"),
+- DT_CLK(NULL, "cm2_dm4_mux", "l4_per_cm:0028:24"),
+- DT_CLK(NULL, "cm2_dm9_mux", "l4_per_cm:0030:24"),
+- DT_CLK(NULL, "dmic_sync_mux_ck", "abe_cm:0018:26"),
+- DT_CLK(NULL, "dmt1_clk_mux", "l4_wkup_cm:0020:24"),
+- DT_CLK(NULL, "dss_48mhz_clk", "l3_dss_cm:0000:9"),
+- DT_CLK(NULL, "dss_dss_clk", "l3_dss_cm:0000:8"),
+- DT_CLK(NULL, "dss_sys_clk", "l3_dss_cm:0000:10"),
+- DT_CLK(NULL, "dss_tv_clk", "l3_dss_cm:0000:11"),
+- DT_CLK(NULL, "fdif_fck", "iss_cm:0008:24"),
+- DT_CLK(NULL, "func_dmic_abe_gfclk", "abe_cm:0018:24"),
+- DT_CLK(NULL, "func_mcasp_abe_gfclk", "abe_cm:0020:24"),
+- DT_CLK(NULL, "func_mcbsp1_gfclk", "abe_cm:0028:24"),
+- DT_CLK(NULL, "func_mcbsp2_gfclk", "abe_cm:0030:24"),
+- DT_CLK(NULL, "func_mcbsp3_gfclk", "abe_cm:0038:24"),
+- DT_CLK(NULL, "gpio1_dbclk", "l4_wkup_cm:0018:8"),
+- DT_CLK(NULL, "gpio2_dbclk", "l4_per_cm:0040:8"),
+- DT_CLK(NULL, "gpio3_dbclk", "l4_per_cm:0048:8"),
+- DT_CLK(NULL, "gpio4_dbclk", "l4_per_cm:0050:8"),
+- DT_CLK(NULL, "gpio5_dbclk", "l4_per_cm:0058:8"),
+- DT_CLK(NULL, "gpio6_dbclk", "l4_per_cm:0060:8"),
+- DT_CLK(NULL, "hsi_fck", "l3_init_cm:0018:24"),
+- DT_CLK(NULL, "hsmmc1_fclk", "l3_init_cm:0008:24"),
+- DT_CLK(NULL, "hsmmc2_fclk", "l3_init_cm:0010:24"),
+- DT_CLK(NULL, "iss_ctrlclk", "iss_cm:0000:8"),
+- DT_CLK(NULL, "mcasp_sync_mux_ck", "abe_cm:0020:26"),
+- DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe_cm:0028:26"),
+- DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe_cm:0030:26"),
+- DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe_cm:0038:26"),
+- DT_CLK(NULL, "mcbsp4_sync_mux_ck", "l4_per_cm:00c0:26"),
+- DT_CLK(NULL, "ocp2scp_usb_phy_phy_48m", "l3_init_cm:00c0:8"),
+- DT_CLK(NULL, "otg_60m_gfclk", "l3_init_cm:0040:24"),
+- DT_CLK(NULL, "per_mcbsp4_gfclk", "l4_per_cm:00c0:24"),
+- DT_CLK(NULL, "pmd_stm_clock_mux_ck", "emu_sys_cm:0000:20"),
+- DT_CLK(NULL, "pmd_trace_clk_mux_ck", "emu_sys_cm:0000:22"),
+- DT_CLK(NULL, "sgx_clk_mux", "l3_gfx_cm:0000:24"),
+- DT_CLK(NULL, "slimbus1_fclk_0", "abe_cm:0040:8"),
+- DT_CLK(NULL, "slimbus1_fclk_1", "abe_cm:0040:9"),
+- DT_CLK(NULL, "slimbus1_fclk_2", "abe_cm:0040:10"),
+- DT_CLK(NULL, "slimbus1_slimbus_clk", "abe_cm:0040:11"),
+- DT_CLK(NULL, "slimbus2_fclk_0", "l4_per_cm:0118:8"),
+- DT_CLK(NULL, "slimbus2_fclk_1", "l4_per_cm:0118:9"),
+- DT_CLK(NULL, "slimbus2_slimbus_clk", "l4_per_cm:0118:10"),
+- DT_CLK(NULL, "stm_clk_div_ck", "emu_sys_cm:0000:27"),
+- DT_CLK(NULL, "timer5_sync_mux", "abe_cm:0048:24"),
+- DT_CLK(NULL, "timer6_sync_mux", "abe_cm:0050:24"),
+- DT_CLK(NULL, "timer7_sync_mux", "abe_cm:0058:24"),
+- DT_CLK(NULL, "timer8_sync_mux", "abe_cm:0060:24"),
+- DT_CLK(NULL, "trace_clk_div_div_ck", "emu_sys_cm:0000:24"),
+- DT_CLK(NULL, "usb_host_hs_func48mclk", "l3_init_cm:0038:15"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3_init_cm:0038:13"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3_init_cm:0038:14"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3_init_cm:0038:11"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3_init_cm:0038:12"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3_init_cm:0038:8"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3_init_cm:0038:9"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3_init_cm:0038:10"),
+- DT_CLK(NULL, "usb_otg_hs_xclk", "l3_init_cm:0040:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3_init_cm:0048:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3_init_cm:0048:9"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3_init_cm:0048:10"),
+- DT_CLK(NULL, "utmi_p1_gfclk", "l3_init_cm:0038:24"),
+- DT_CLK(NULL, "utmi_p2_gfclk", "l3_init_cm:0038:25"),
++ DT_CLK(NULL, "aess_fclk", "abe-clkctrl:0008:24"),
++ DT_CLK(NULL, "cm2_dm10_mux", "l4-per-clkctrl:0008:24"),
++ DT_CLK(NULL, "cm2_dm11_mux", "l4-per-clkctrl:0010:24"),
++ DT_CLK(NULL, "cm2_dm2_mux", "l4-per-clkctrl:0018:24"),
++ DT_CLK(NULL, "cm2_dm3_mux", "l4-per-clkctrl:0020:24"),
++ DT_CLK(NULL, "cm2_dm4_mux", "l4-per-clkctrl:0028:24"),
++ DT_CLK(NULL, "cm2_dm9_mux", "l4-per-clkctrl:0030:24"),
++ DT_CLK(NULL, "dmic_sync_mux_ck", "abe-clkctrl:0018:26"),
++ DT_CLK(NULL, "dmt1_clk_mux", "l4-wkup-clkctrl:0020:24"),
++ DT_CLK(NULL, "dss_48mhz_clk", "l3-dss-clkctrl:0000:9"),
++ DT_CLK(NULL, "dss_dss_clk", "l3-dss-clkctrl:0000:8"),
++ DT_CLK(NULL, "dss_sys_clk", "l3-dss-clkctrl:0000:10"),
++ DT_CLK(NULL, "dss_tv_clk", "l3-dss-clkctrl:0000:11"),
++ DT_CLK(NULL, "fdif_fck", "iss-clkctrl:0008:24"),
++ DT_CLK(NULL, "func_dmic_abe_gfclk", "abe-clkctrl:0018:24"),
++ DT_CLK(NULL, "func_mcasp_abe_gfclk", "abe-clkctrl:0020:24"),
++ DT_CLK(NULL, "func_mcbsp1_gfclk", "abe-clkctrl:0028:24"),
++ DT_CLK(NULL, "func_mcbsp2_gfclk", "abe-clkctrl:0030:24"),
++ DT_CLK(NULL, "func_mcbsp3_gfclk", "abe-clkctrl:0038:24"),
++ DT_CLK(NULL, "gpio1_dbclk", "l4-wkup-clkctrl:0018:8"),
++ DT_CLK(NULL, "gpio2_dbclk", "l4-per-clkctrl:0040:8"),
++ DT_CLK(NULL, "gpio3_dbclk", "l4-per-clkctrl:0048:8"),
++ DT_CLK(NULL, "gpio4_dbclk", "l4-per-clkctrl:0050:8"),
++ DT_CLK(NULL, "gpio5_dbclk", "l4-per-clkctrl:0058:8"),
++ DT_CLK(NULL, "gpio6_dbclk", "l4-per-clkctrl:0060:8"),
++ DT_CLK(NULL, "hsi_fck", "l3-init-clkctrl:0018:24"),
++ DT_CLK(NULL, "hsmmc1_fclk", "l3-init-clkctrl:0008:24"),
++ DT_CLK(NULL, "hsmmc2_fclk", "l3-init-clkctrl:0010:24"),
++ DT_CLK(NULL, "iss_ctrlclk", "iss-clkctrl:0000:8"),
++ DT_CLK(NULL, "mcasp_sync_mux_ck", "abe-clkctrl:0020:26"),
++ DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe-clkctrl:0028:26"),
++ DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe-clkctrl:0030:26"),
++ DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe-clkctrl:0038:26"),
++ DT_CLK(NULL, "mcbsp4_sync_mux_ck", "l4-per-clkctrl:00c0:26"),
++ DT_CLK(NULL, "ocp2scp_usb_phy_phy_48m", "l3-init-clkctrl:00c0:8"),
++ DT_CLK(NULL, "otg_60m_gfclk", "l3-init-clkctrl:0040:24"),
++ DT_CLK(NULL, "per_mcbsp4_gfclk", "l4-per-clkctrl:00c0:24"),
++ DT_CLK(NULL, "pmd_stm_clock_mux_ck", "emu-sys-clkctrl:0000:20"),
++ DT_CLK(NULL, "pmd_trace_clk_mux_ck", "emu-sys-clkctrl:0000:22"),
++ DT_CLK(NULL, "sgx_clk_mux", "l3-gfx-clkctrl:0000:24"),
++ DT_CLK(NULL, "slimbus1_fclk_0", "abe-clkctrl:0040:8"),
++ DT_CLK(NULL, "slimbus1_fclk_1", "abe-clkctrl:0040:9"),
++ DT_CLK(NULL, "slimbus1_fclk_2", "abe-clkctrl:0040:10"),
++ DT_CLK(NULL, "slimbus1_slimbus_clk", "abe-clkctrl:0040:11"),
++ DT_CLK(NULL, "slimbus2_fclk_0", "l4-per-clkctrl:0118:8"),
++ DT_CLK(NULL, "slimbus2_fclk_1", "l4-per-clkctrl:0118:9"),
++ DT_CLK(NULL, "slimbus2_slimbus_clk", "l4-per-clkctrl:0118:10"),
++ DT_CLK(NULL, "stm_clk_div_ck", "emu-sys-clkctrl:0000:27"),
++ DT_CLK(NULL, "timer5_sync_mux", "abe-clkctrl:0048:24"),
++ DT_CLK(NULL, "timer6_sync_mux", "abe-clkctrl:0050:24"),
++ DT_CLK(NULL, "timer7_sync_mux", "abe-clkctrl:0058:24"),
++ DT_CLK(NULL, "timer8_sync_mux", "abe-clkctrl:0060:24"),
++ DT_CLK(NULL, "trace_clk_div_div_ck", "emu-sys-clkctrl:0000:24"),
++ DT_CLK(NULL, "usb_host_hs_func48mclk", "l3-init-clkctrl:0038:15"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3-init-clkctrl:0038:13"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3-init-clkctrl:0038:14"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3-init-clkctrl:0038:11"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3-init-clkctrl:0038:12"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3-init-clkctrl:0038:8"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3-init-clkctrl:0038:9"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3-init-clkctrl:0038:10"),
++ DT_CLK(NULL, "usb_otg_hs_xclk", "l3-init-clkctrl:0040:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3-init-clkctrl:0048:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3-init-clkctrl:0048:9"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3-init-clkctrl:0048:10"),
++ DT_CLK(NULL, "utmi_p1_gfclk", "l3-init-clkctrl:0038:24"),
++ DT_CLK(NULL, "utmi_p2_gfclk", "l3-init-clkctrl:0038:25"),
+ { .node_name = NULL },
+ };
+
+diff --git a/drivers/clk/ti/clk-54xx.c b/drivers/clk/ti/clk-54xx.c
+index 90e0a9ea63515..b4aff76eb3735 100644
+--- a/drivers/clk/ti/clk-54xx.c
++++ b/drivers/clk/ti/clk-54xx.c
+@@ -50,7 +50,7 @@ static const struct omap_clkctrl_bit_data omap5_aess_bit_data[] __initconst = {
+ };
+
+ static const char * const omap5_dmic_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0018:26",
++ "abe-clkctrl:0018:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -70,7 +70,7 @@ static const struct omap_clkctrl_bit_data omap5_dmic_bit_data[] __initconst = {
+ };
+
+ static const char * const omap5_mcbsp1_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0028:26",
++ "abe-clkctrl:0028:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -83,7 +83,7 @@ static const struct omap_clkctrl_bit_data omap5_mcbsp1_bit_data[] __initconst =
+ };
+
+ static const char * const omap5_mcbsp2_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0030:26",
++ "abe-clkctrl:0030:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -96,7 +96,7 @@ static const struct omap_clkctrl_bit_data omap5_mcbsp2_bit_data[] __initconst =
+ };
+
+ static const char * const omap5_mcbsp3_gfclk_parents[] __initconst = {
+- "abe_cm:clk:0038:26",
++ "abe-clkctrl:0038:26",
+ "pad_clks_ck",
+ "slimbus_clk",
+ NULL,
+@@ -136,16 +136,16 @@ static const struct omap_clkctrl_bit_data omap5_timer8_bit_data[] __initconst =
+
+ static const struct omap_clkctrl_reg_data omap5_abe_clkctrl_regs[] __initconst = {
+ { OMAP5_L4_ABE_CLKCTRL, NULL, 0, "abe_iclk" },
+- { OMAP5_AESS_CLKCTRL, omap5_aess_bit_data, CLKF_SW_SUP, "abe_cm:clk:0008:24" },
++ { OMAP5_AESS_CLKCTRL, omap5_aess_bit_data, CLKF_SW_SUP, "abe-clkctrl:0008:24" },
+ { OMAP5_MCPDM_CLKCTRL, NULL, CLKF_SW_SUP, "pad_clks_ck" },
+- { OMAP5_DMIC_CLKCTRL, omap5_dmic_bit_data, CLKF_SW_SUP, "abe_cm:clk:0018:24" },
+- { OMAP5_MCBSP1_CLKCTRL, omap5_mcbsp1_bit_data, CLKF_SW_SUP, "abe_cm:clk:0028:24" },
+- { OMAP5_MCBSP2_CLKCTRL, omap5_mcbsp2_bit_data, CLKF_SW_SUP, "abe_cm:clk:0030:24" },
+- { OMAP5_MCBSP3_CLKCTRL, omap5_mcbsp3_bit_data, CLKF_SW_SUP, "abe_cm:clk:0038:24" },
+- { OMAP5_TIMER5_CLKCTRL, omap5_timer5_bit_data, CLKF_SW_SUP, "abe_cm:clk:0048:24" },
+- { OMAP5_TIMER6_CLKCTRL, omap5_timer6_bit_data, CLKF_SW_SUP, "abe_cm:clk:0050:24" },
+- { OMAP5_TIMER7_CLKCTRL, omap5_timer7_bit_data, CLKF_SW_SUP, "abe_cm:clk:0058:24" },
+- { OMAP5_TIMER8_CLKCTRL, omap5_timer8_bit_data, CLKF_SW_SUP, "abe_cm:clk:0060:24" },
++ { OMAP5_DMIC_CLKCTRL, omap5_dmic_bit_data, CLKF_SW_SUP, "abe-clkctrl:0018:24" },
++ { OMAP5_MCBSP1_CLKCTRL, omap5_mcbsp1_bit_data, CLKF_SW_SUP, "abe-clkctrl:0028:24" },
++ { OMAP5_MCBSP2_CLKCTRL, omap5_mcbsp2_bit_data, CLKF_SW_SUP, "abe-clkctrl:0030:24" },
++ { OMAP5_MCBSP3_CLKCTRL, omap5_mcbsp3_bit_data, CLKF_SW_SUP, "abe-clkctrl:0038:24" },
++ { OMAP5_TIMER5_CLKCTRL, omap5_timer5_bit_data, CLKF_SW_SUP, "abe-clkctrl:0048:24" },
++ { OMAP5_TIMER6_CLKCTRL, omap5_timer6_bit_data, CLKF_SW_SUP, "abe-clkctrl:0050:24" },
++ { OMAP5_TIMER7_CLKCTRL, omap5_timer7_bit_data, CLKF_SW_SUP, "abe-clkctrl:0058:24" },
++ { OMAP5_TIMER8_CLKCTRL, omap5_timer8_bit_data, CLKF_SW_SUP, "abe-clkctrl:0060:24" },
+ { 0 },
+ };
+
+@@ -268,12 +268,12 @@ static const struct omap_clkctrl_bit_data omap5_gpio8_bit_data[] __initconst = {
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_l4per_clkctrl_regs[] __initconst = {
+- { OMAP5_TIMER10_CLKCTRL, omap5_timer10_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0008:24" },
+- { OMAP5_TIMER11_CLKCTRL, omap5_timer11_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0010:24" },
+- { OMAP5_TIMER2_CLKCTRL, omap5_timer2_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0018:24" },
+- { OMAP5_TIMER3_CLKCTRL, omap5_timer3_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0020:24" },
+- { OMAP5_TIMER4_CLKCTRL, omap5_timer4_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0028:24" },
+- { OMAP5_TIMER9_CLKCTRL, omap5_timer9_bit_data, CLKF_SW_SUP, "l4per_cm:clk:0030:24" },
++ { OMAP5_TIMER10_CLKCTRL, omap5_timer10_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0008:24" },
++ { OMAP5_TIMER11_CLKCTRL, omap5_timer11_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0010:24" },
++ { OMAP5_TIMER2_CLKCTRL, omap5_timer2_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0018:24" },
++ { OMAP5_TIMER3_CLKCTRL, omap5_timer3_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0020:24" },
++ { OMAP5_TIMER4_CLKCTRL, omap5_timer4_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0028:24" },
++ { OMAP5_TIMER9_CLKCTRL, omap5_timer9_bit_data, CLKF_SW_SUP, "l4per-clkctrl:0030:24" },
+ { OMAP5_GPIO2_CLKCTRL, omap5_gpio2_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ { OMAP5_GPIO3_CLKCTRL, omap5_gpio3_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ { OMAP5_GPIO4_CLKCTRL, omap5_gpio4_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+@@ -345,7 +345,7 @@ static const struct omap_clkctrl_bit_data omap5_dss_core_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_dss_clkctrl_regs[] __initconst = {
+- { OMAP5_DSS_CORE_CLKCTRL, omap5_dss_core_bit_data, CLKF_SW_SUP, "dss_cm:clk:0000:8" },
++ { OMAP5_DSS_CORE_CLKCTRL, omap5_dss_core_bit_data, CLKF_SW_SUP, "dss-clkctrl:0000:8" },
+ { 0 },
+ };
+
+@@ -378,7 +378,7 @@ static const struct omap_clkctrl_bit_data omap5_gpu_core_bit_data[] __initconst
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_gpu_clkctrl_regs[] __initconst = {
+- { OMAP5_GPU_CLKCTRL, omap5_gpu_core_bit_data, CLKF_SW_SUP, "gpu_cm:clk:0000:24" },
++ { OMAP5_GPU_CLKCTRL, omap5_gpu_core_bit_data, CLKF_SW_SUP, "gpu-clkctrl:0000:24" },
+ { 0 },
+ };
+
+@@ -389,7 +389,7 @@ static const char * const omap5_mmc1_fclk_mux_parents[] __initconst = {
+ };
+
+ static const char * const omap5_mmc1_fclk_parents[] __initconst = {
+- "l3init_cm:clk:0008:24",
++ "l3init-clkctrl:0008:24",
+ NULL,
+ };
+
+@@ -405,7 +405,7 @@ static const struct omap_clkctrl_bit_data omap5_mmc1_bit_data[] __initconst = {
+ };
+
+ static const char * const omap5_mmc2_fclk_parents[] __initconst = {
+- "l3init_cm:clk:0010:24",
++ "l3init-clkctrl:0010:24",
+ NULL,
+ };
+
+@@ -430,12 +430,12 @@ static const char * const omap5_usb_host_hs_hsic480m_p3_clk_parents[] __initcons
+ };
+
+ static const char * const omap5_usb_host_hs_utmi_p1_clk_parents[] __initconst = {
+- "l3init_cm:clk:0038:24",
++ "l3init-clkctrl:0038:24",
+ NULL,
+ };
+
+ static const char * const omap5_usb_host_hs_utmi_p2_clk_parents[] __initconst = {
+- "l3init_cm:clk:0038:25",
++ "l3init-clkctrl:0038:25",
+ NULL,
+ };
+
+@@ -494,8 +494,8 @@ static const struct omap_clkctrl_bit_data omap5_usb_otg_ss_bit_data[] __initcons
+ };
+
+ static const struct omap_clkctrl_reg_data omap5_l3init_clkctrl_regs[] __initconst = {
+- { OMAP5_MMC1_CLKCTRL, omap5_mmc1_bit_data, CLKF_SW_SUP, "l3init_cm:clk:0008:25" },
+- { OMAP5_MMC2_CLKCTRL, omap5_mmc2_bit_data, CLKF_SW_SUP, "l3init_cm:clk:0010:25" },
++ { OMAP5_MMC1_CLKCTRL, omap5_mmc1_bit_data, CLKF_SW_SUP, "l3init-clkctrl:0008:25" },
++ { OMAP5_MMC2_CLKCTRL, omap5_mmc2_bit_data, CLKF_SW_SUP, "l3init-clkctrl:0010:25" },
+ { OMAP5_USB_HOST_HS_CLKCTRL, omap5_usb_host_hs_bit_data, CLKF_SW_SUP, "l3init_60m_fclk" },
+ { OMAP5_USB_TLL_HS_CLKCTRL, omap5_usb_tll_hs_bit_data, CLKF_HW_SUP, "l4_root_clk_div" },
+ { OMAP5_SATA_CLKCTRL, omap5_sata_bit_data, CLKF_SW_SUP, "func_48m_fclk" },
+@@ -519,7 +519,7 @@ static const struct omap_clkctrl_reg_data omap5_wkupaon_clkctrl_regs[] __initcon
+ { OMAP5_L4_WKUP_CLKCTRL, NULL, 0, "wkupaon_iclk_mux" },
+ { OMAP5_WD_TIMER2_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { OMAP5_GPIO1_CLKCTRL, omap5_gpio1_bit_data, CLKF_HW_SUP, "wkupaon_iclk_mux" },
+- { OMAP5_TIMER1_CLKCTRL, omap5_timer1_bit_data, CLKF_SW_SUP, "wkupaon_cm:clk:0020:24" },
++ { OMAP5_TIMER1_CLKCTRL, omap5_timer1_bit_data, CLKF_SW_SUP, "wkupaon-clkctrl:0020:24" },
+ { OMAP5_COUNTER_32K_CLKCTRL, NULL, 0, "wkupaon_iclk_mux" },
+ { OMAP5_KBD_CLKCTRL, NULL, CLKF_SW_SUP, "sys_32k_ck" },
+ { 0 },
+@@ -549,58 +549,58 @@ const struct omap_clkctrl_data omap5_clkctrl_data[] __initconst = {
+ static struct ti_dt_clk omap54xx_clks[] = {
+ DT_CLK(NULL, "timer_32k_ck", "sys_32k_ck"),
+ DT_CLK(NULL, "sys_clkin_ck", "sys_clkin"),
+- DT_CLK(NULL, "dmic_gfclk", "abe_cm:0018:24"),
+- DT_CLK(NULL, "dmic_sync_mux_ck", "abe_cm:0018:26"),
+- DT_CLK(NULL, "dss_32khz_clk", "dss_cm:0000:11"),
+- DT_CLK(NULL, "dss_48mhz_clk", "dss_cm:0000:9"),
+- DT_CLK(NULL, "dss_dss_clk", "dss_cm:0000:8"),
+- DT_CLK(NULL, "dss_sys_clk", "dss_cm:0000:10"),
+- DT_CLK(NULL, "gpio1_dbclk", "wkupaon_cm:0018:8"),
+- DT_CLK(NULL, "gpio2_dbclk", "l4per_cm:0040:8"),
+- DT_CLK(NULL, "gpio3_dbclk", "l4per_cm:0048:8"),
+- DT_CLK(NULL, "gpio4_dbclk", "l4per_cm:0050:8"),
+- DT_CLK(NULL, "gpio5_dbclk", "l4per_cm:0058:8"),
+- DT_CLK(NULL, "gpio6_dbclk", "l4per_cm:0060:8"),
+- DT_CLK(NULL, "gpio7_dbclk", "l4per_cm:00f0:8"),
+- DT_CLK(NULL, "gpio8_dbclk", "l4per_cm:00f8:8"),
+- DT_CLK(NULL, "mcbsp1_gfclk", "abe_cm:0028:24"),
+- DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe_cm:0028:26"),
+- DT_CLK(NULL, "mcbsp2_gfclk", "abe_cm:0030:24"),
+- DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe_cm:0030:26"),
+- DT_CLK(NULL, "mcbsp3_gfclk", "abe_cm:0038:24"),
+- DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe_cm:0038:26"),
+- DT_CLK(NULL, "mmc1_32khz_clk", "l3init_cm:0008:8"),
+- DT_CLK(NULL, "mmc1_fclk", "l3init_cm:0008:25"),
+- DT_CLK(NULL, "mmc1_fclk_mux", "l3init_cm:0008:24"),
+- DT_CLK(NULL, "mmc2_fclk", "l3init_cm:0010:25"),
+- DT_CLK(NULL, "mmc2_fclk_mux", "l3init_cm:0010:24"),
+- DT_CLK(NULL, "sata_ref_clk", "l3init_cm:0068:8"),
+- DT_CLK(NULL, "timer10_gfclk_mux", "l4per_cm:0008:24"),
+- DT_CLK(NULL, "timer11_gfclk_mux", "l4per_cm:0010:24"),
+- DT_CLK(NULL, "timer1_gfclk_mux", "wkupaon_cm:0020:24"),
+- DT_CLK(NULL, "timer2_gfclk_mux", "l4per_cm:0018:24"),
+- DT_CLK(NULL, "timer3_gfclk_mux", "l4per_cm:0020:24"),
+- DT_CLK(NULL, "timer4_gfclk_mux", "l4per_cm:0028:24"),
+- DT_CLK(NULL, "timer5_gfclk_mux", "abe_cm:0048:24"),
+- DT_CLK(NULL, "timer6_gfclk_mux", "abe_cm:0050:24"),
+- DT_CLK(NULL, "timer7_gfclk_mux", "abe_cm:0058:24"),
+- DT_CLK(NULL, "timer8_gfclk_mux", "abe_cm:0060:24"),
+- DT_CLK(NULL, "timer9_gfclk_mux", "l4per_cm:0030:24"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3init_cm:0038:13"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3init_cm:0038:14"),
+- DT_CLK(NULL, "usb_host_hs_hsic480m_p3_clk", "l3init_cm:0038:7"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3init_cm:0038:11"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3init_cm:0038:12"),
+- DT_CLK(NULL, "usb_host_hs_hsic60m_p3_clk", "l3init_cm:0038:6"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3init_cm:0038:8"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3init_cm:0038:9"),
+- DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3init_cm:0038:10"),
+- DT_CLK(NULL, "usb_otg_ss_refclk960m", "l3init_cm:00d0:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3init_cm:0048:8"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3init_cm:0048:9"),
+- DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3init_cm:0048:10"),
+- DT_CLK(NULL, "utmi_p1_gfclk", "l3init_cm:0038:24"),
+- DT_CLK(NULL, "utmi_p2_gfclk", "l3init_cm:0038:25"),
++ DT_CLK(NULL, "dmic_gfclk", "abe-clkctrl:0018:24"),
++ DT_CLK(NULL, "dmic_sync_mux_ck", "abe-clkctrl:0018:26"),
++ DT_CLK(NULL, "dss_32khz_clk", "dss-clkctrl:0000:11"),
++ DT_CLK(NULL, "dss_48mhz_clk", "dss-clkctrl:0000:9"),
++ DT_CLK(NULL, "dss_dss_clk", "dss-clkctrl:0000:8"),
++ DT_CLK(NULL, "dss_sys_clk", "dss-clkctrl:0000:10"),
++ DT_CLK(NULL, "gpio1_dbclk", "wkupaon-clkctrl:0018:8"),
++ DT_CLK(NULL, "gpio2_dbclk", "l4per-clkctrl:0040:8"),
++ DT_CLK(NULL, "gpio3_dbclk", "l4per-clkctrl:0048:8"),
++ DT_CLK(NULL, "gpio4_dbclk", "l4per-clkctrl:0050:8"),
++ DT_CLK(NULL, "gpio5_dbclk", "l4per-clkctrl:0058:8"),
++ DT_CLK(NULL, "gpio6_dbclk", "l4per-clkctrl:0060:8"),
++ DT_CLK(NULL, "gpio7_dbclk", "l4per-clkctrl:00f0:8"),
++ DT_CLK(NULL, "gpio8_dbclk", "l4per-clkctrl:00f8:8"),
++ DT_CLK(NULL, "mcbsp1_gfclk", "abe-clkctrl:0028:24"),
++ DT_CLK(NULL, "mcbsp1_sync_mux_ck", "abe-clkctrl:0028:26"),
++ DT_CLK(NULL, "mcbsp2_gfclk", "abe-clkctrl:0030:24"),
++ DT_CLK(NULL, "mcbsp2_sync_mux_ck", "abe-clkctrl:0030:26"),
++ DT_CLK(NULL, "mcbsp3_gfclk", "abe-clkctrl:0038:24"),
++ DT_CLK(NULL, "mcbsp3_sync_mux_ck", "abe-clkctrl:0038:26"),
++ DT_CLK(NULL, "mmc1_32khz_clk", "l3init-clkctrl:0008:8"),
++ DT_CLK(NULL, "mmc1_fclk", "l3init-clkctrl:0008:25"),
++ DT_CLK(NULL, "mmc1_fclk_mux", "l3init-clkctrl:0008:24"),
++ DT_CLK(NULL, "mmc2_fclk", "l3init-clkctrl:0010:25"),
++ DT_CLK(NULL, "mmc2_fclk_mux", "l3init-clkctrl:0010:24"),
++ DT_CLK(NULL, "sata_ref_clk", "l3init-clkctrl:0068:8"),
++ DT_CLK(NULL, "timer10_gfclk_mux", "l4per-clkctrl:0008:24"),
++ DT_CLK(NULL, "timer11_gfclk_mux", "l4per-clkctrl:0010:24"),
++ DT_CLK(NULL, "timer1_gfclk_mux", "wkupaon-clkctrl:0020:24"),
++ DT_CLK(NULL, "timer2_gfclk_mux", "l4per-clkctrl:0018:24"),
++ DT_CLK(NULL, "timer3_gfclk_mux", "l4per-clkctrl:0020:24"),
++ DT_CLK(NULL, "timer4_gfclk_mux", "l4per-clkctrl:0028:24"),
++ DT_CLK(NULL, "timer5_gfclk_mux", "abe-clkctrl:0048:24"),
++ DT_CLK(NULL, "timer6_gfclk_mux", "abe-clkctrl:0050:24"),
++ DT_CLK(NULL, "timer7_gfclk_mux", "abe-clkctrl:0058:24"),
++ DT_CLK(NULL, "timer8_gfclk_mux", "abe-clkctrl:0060:24"),
++ DT_CLK(NULL, "timer9_gfclk_mux", "l4per-clkctrl:0030:24"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p1_clk", "l3init-clkctrl:0038:13"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p2_clk", "l3init-clkctrl:0038:14"),
++ DT_CLK(NULL, "usb_host_hs_hsic480m_p3_clk", "l3init-clkctrl:0038:7"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p1_clk", "l3init-clkctrl:0038:11"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p2_clk", "l3init-clkctrl:0038:12"),
++ DT_CLK(NULL, "usb_host_hs_hsic60m_p3_clk", "l3init-clkctrl:0038:6"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p1_clk", "l3init-clkctrl:0038:8"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p2_clk", "l3init-clkctrl:0038:9"),
++ DT_CLK(NULL, "usb_host_hs_utmi_p3_clk", "l3init-clkctrl:0038:10"),
++ DT_CLK(NULL, "usb_otg_ss_refclk960m", "l3init-clkctrl:00d0:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch0_clk", "l3init-clkctrl:0048:8"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch1_clk", "l3init-clkctrl:0048:9"),
++ DT_CLK(NULL, "usb_tll_hs_usb_ch2_clk", "l3init-clkctrl:0048:10"),
++ DT_CLK(NULL, "utmi_p1_gfclk", "l3init-clkctrl:0038:24"),
++ DT_CLK(NULL, "utmi_p2_gfclk", "l3init-clkctrl:0038:25"),
+ { .node_name = NULL },
+ };
+
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 617360e20d86f..e23bf04586320 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -528,10 +528,6 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ char *c;
+ u16 soc_mask = 0;
+
+- if (!(ti_clk_get_features()->flags & TI_CLK_CLKCTRL_COMPAT) &&
+- of_node_name_eq(node, "clk"))
+- ti_clk_features.flags |= TI_CLK_CLKCTRL_COMPAT;
+-
+ addrp = of_get_address(node, 0, NULL, NULL);
+ addr = (u32)of_translate_address(node, addrp);
+
+diff --git a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+index c741da02b67e9..a183d93bd7e29 100644
+--- a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
++++ b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+@@ -982,6 +982,11 @@ static int dw_axi_dma_chan_slave_config(struct dma_chan *dchan,
+ static void axi_chan_dump_lli(struct axi_dma_chan *chan,
+ struct axi_dma_hw_desc *desc)
+ {
++ if (!desc->lli) {
++ dev_err(dchan2dev(&chan->vc.chan), "NULL LLI\n");
++ return;
++ }
++
+ dev_err(dchan2dev(&chan->vc.chan),
+ "SAR: 0x%llx DAR: 0x%llx LLP: 0x%llx BTS 0x%x CTL: 0x%x:%08x",
+ le64_to_cpu(desc->lli->sar),
+@@ -1049,6 +1054,11 @@ static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan)
+
+ /* The completed descriptor currently is in the head of vc list */
+ vd = vchan_next_desc(&chan->vc);
++ if (!vd) {
++ dev_err(chan2dev(chan), "BUG: %s, IRQ with no descriptors\n",
++ axi_chan_name(chan));
++ goto out;
++ }
+
+ if (chan->cyclic) {
+ desc = vd_to_axi_desc(vd);
+@@ -1078,6 +1088,7 @@ static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan)
+ axi_chan_start_first_queued(chan);
+ }
+
++out:
+ spin_unlock_irqrestore(&chan->vc.lock, flags);
+ }
+
+diff --git a/drivers/dma/sprd-dma.c b/drivers/dma/sprd-dma.c
+index 2138b80435abf..474d3ba8ec9f9 100644
+--- a/drivers/dma/sprd-dma.c
++++ b/drivers/dma/sprd-dma.c
+@@ -1237,11 +1237,8 @@ static int sprd_dma_remove(struct platform_device *pdev)
+ {
+ struct sprd_dma_dev *sdev = platform_get_drvdata(pdev);
+ struct sprd_dma_chn *c, *cn;
+- int ret;
+
+- ret = pm_runtime_get_sync(&pdev->dev);
+- if (ret < 0)
+- return ret;
++ pm_runtime_get_sync(&pdev->dev);
+
+ /* explicitly free the irq */
+ if (sdev->irq > 0)
+diff --git a/drivers/dma/tegra186-gpc-dma.c b/drivers/dma/tegra186-gpc-dma.c
+index 05cd451f541d8..fa9bda4a2bc6f 100644
+--- a/drivers/dma/tegra186-gpc-dma.c
++++ b/drivers/dma/tegra186-gpc-dma.c
+@@ -157,8 +157,8 @@
+ * If any burst is in flight and DMA paused then this is the time to complete
+ * on-flight burst and update DMA status register.
+ */
+-#define TEGRA_GPCDMA_BURST_COMPLETE_TIME 20
+-#define TEGRA_GPCDMA_BURST_COMPLETION_TIMEOUT 100
++#define TEGRA_GPCDMA_BURST_COMPLETE_TIME 10
++#define TEGRA_GPCDMA_BURST_COMPLETION_TIMEOUT 5000 /* 5 msec */
+
+ /* Channel base address offset from GPCDMA base address */
+ #define TEGRA_GPCDMA_CHANNEL_BASE_ADD_OFFSET 0x20000
+@@ -432,6 +432,17 @@ static int tegra_dma_device_resume(struct dma_chan *dc)
+ return 0;
+ }
+
++static inline int tegra_dma_pause_noerr(struct tegra_dma_channel *tdc)
++{
++ /* Return 0 irrespective of PAUSE status.
++ * This is useful to recover channels that can exit out of flush
++ * state when the channel is disabled.
++ */
++
++ tegra_dma_pause(tdc);
++ return 0;
++}
++
+ static void tegra_dma_disable(struct tegra_dma_channel *tdc)
+ {
+ u32 csr, status;
+@@ -1292,6 +1303,14 @@ static const struct tegra_dma_chip_data tegra194_dma_chip_data = {
+ .terminate = tegra_dma_pause,
+ };
+
++static const struct tegra_dma_chip_data tegra234_dma_chip_data = {
++ .nr_channels = 31,
++ .channel_reg_size = SZ_64K,
++ .max_dma_count = SZ_1G,
++ .hw_support_pause = true,
++ .terminate = tegra_dma_pause_noerr,
++};
++
+ static const struct of_device_id tegra_dma_of_match[] = {
+ {
+ .compatible = "nvidia,tegra186-gpcdma",
+@@ -1299,6 +1318,9 @@ static const struct of_device_id tegra_dma_of_match[] = {
+ }, {
+ .compatible = "nvidia,tegra194-gpcdma",
+ .data = &tegra194_dma_chip_data,
++ }, {
++ .compatible = "nvidia,tegra234-gpcdma",
++ .data = &tegra234_dma_chip_data,
+ }, {
+ },
+ };
+diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+index c6cc493a54866..2b97b8a96fb49 100644
+--- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c
++++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c
+@@ -148,30 +148,22 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
+ struct amdgpu_reset_context *reset_context)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
++ struct list_head *reset_device_list = reset_context->reset_device_list;
+ struct amdgpu_device *tmp_adev = NULL;
+- struct list_head reset_device_list;
+ int r = 0;
+
+ dev_dbg(adev->dev, "aldebaran perform hw reset\n");
++
++ if (reset_device_list == NULL)
++ return -EINVAL;
++
+ if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
+ reset_context->hive == NULL) {
+ /* Wrong context, return error */
+ return -EINVAL;
+ }
+
+- INIT_LIST_HEAD(&reset_device_list);
+- if (reset_context->hive) {
+- list_for_each_entry (tmp_adev,
+- &reset_context->hive->device_list,
+- gmc.xgmi.head)
+- list_add_tail(&tmp_adev->reset_list,
+- &reset_device_list);
+- } else {
+- list_add_tail(&reset_context->reset_req_dev->reset_list,
+- &reset_device_list);
+- }
+-
+- list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
++ list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
+ mutex_lock(&tmp_adev->reset_cntl->reset_lock);
+ tmp_adev->reset_cntl->active_reset = AMD_RESET_METHOD_MODE2;
+ }
+@@ -179,7 +171,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
+ * Mode2 reset doesn't need any sync between nodes in XGMI hive, instead launch
+ * them together so that they can be completed asynchronously on multiple nodes
+ */
+- list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
++ list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
+ /* For XGMI run all resets in parallel to speed up the process */
+ if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
+ if (!queue_work(system_unbound_wq,
+@@ -197,7 +189,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
+
+ /* For XGMI wait for all resets to complete before proceed */
+ if (!r) {
+- list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
++ list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
+ if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) {
+ flush_work(&tmp_adev->reset_cntl->reset_work);
+ r = tmp_adev->asic_reset_res;
+@@ -207,7 +199,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
+ }
+ }
+
+- list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
++ list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
+ mutex_unlock(&tmp_adev->reset_cntl->reset_lock);
+ tmp_adev->reset_cntl->active_reset = AMD_RESET_METHOD_NONE;
+ }
+@@ -339,10 +331,13 @@ static int
+ aldebaran_mode2_restore_hwcontext(struct amdgpu_reset_control *reset_ctl,
+ struct amdgpu_reset_context *reset_context)
+ {
++ struct list_head *reset_device_list = reset_context->reset_device_list;
+ struct amdgpu_device *tmp_adev = NULL;
+- struct list_head reset_device_list;
+ int r;
+
++ if (reset_device_list == NULL)
++ return -EINVAL;
++
+ if (reset_context->reset_req_dev->ip_versions[MP1_HWIP][0] ==
+ IP_VERSION(13, 0, 2) &&
+ reset_context->hive == NULL) {
+@@ -350,19 +345,7 @@ aldebaran_mode2_restore_hwcontext(struct amdgpu_reset_control *reset_ctl,
+ return -EINVAL;
+ }
+
+- INIT_LIST_HEAD(&reset_device_list);
+- if (reset_context->hive) {
+- list_for_each_entry (tmp_adev,
+- &reset_context->hive->device_list,
+- gmc.xgmi.head)
+- list_add_tail(&tmp_adev->reset_list,
+- &reset_device_list);
+- } else {
+- list_add_tail(&reset_context->reset_req_dev->reset_list,
+- &reset_device_list);
+- }
+-
+- list_for_each_entry (tmp_adev, &reset_device_list, reset_list) {
++ list_for_each_entry(tmp_adev, reset_device_list, reset_list) {
+ dev_info(tmp_adev->dev,
+ "GPU reset succeeded, trying to resume\n");
+ r = aldebaran_mode2_restore_ip(tmp_adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
+index fd8f3731758ed..b81b77a9efa61 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
+@@ -314,7 +314,7 @@ amdgpu_atomfirmware_get_vram_info(struct amdgpu_device *adev,
+ mem_channel_number = vram_info->v30.channel_num;
+ mem_channel_width = vram_info->v30.channel_width;
+ if (vram_width)
+- *vram_width = mem_channel_number * mem_channel_width;
++ *vram_width = mem_channel_number * (1 << mem_channel_width);
+ break;
+ default:
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index d8f1335bc68f4..b7bae833c804b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -837,16 +837,12 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
+ continue;
+
+ r = amdgpu_vm_bo_update(adev, bo_va, false);
+- if (r) {
+- mutex_unlock(&p->bo_list->bo_list_mutex);
++ if (r)
+ return r;
+- }
+
+ r = amdgpu_sync_fence(&p->job->sync, bo_va->last_pt_update);
+- if (r) {
+- mutex_unlock(&p->bo_list->bo_list_mutex);
++ if (r)
+ return r;
+- }
+ }
+
+ r = amdgpu_vm_handle_moved(adev, vm);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 58df107e3beba..3adebb63680e0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4746,6 +4746,8 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
+ tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device,
+ reset_list);
+ amdgpu_reset_reg_dumps(tmp_adev);
++
++ reset_context->reset_device_list = device_list_handle;
+ r = amdgpu_reset_perform_reset(tmp_adev, reset_context);
+ /* If reset handler not implemented, continue; otherwise return */
+ if (r == -ENOSYS)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
+index 1949dbe28a865..0c3ad85d84a43 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
+@@ -37,6 +37,7 @@ struct amdgpu_reset_context {
+ struct amdgpu_device *reset_req_dev;
+ struct amdgpu_job *job;
+ struct amdgpu_hive_info *hive;
++ struct list_head *reset_device_list;
+ unsigned long flags;
+ };
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
+index 108e8e8a1a367..576849e952964 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
+@@ -496,8 +496,7 @@ static int amdgpu_vkms_sw_init(void *handle)
+ adev_to_drm(adev)->mode_config.max_height = YRES_MAX;
+
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+
+ adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
+index 9c964cd3b5d4e..288fce7dc0ed1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
+@@ -2796,8 +2796,7 @@ static int dce_v10_0_sw_init(void *handle)
+ adev_to_drm(adev)->mode_config.max_height = 16384;
+
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+
+ adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+index e0ad9f27dc3f9..cbe5250b31cb4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+@@ -2914,8 +2914,7 @@ static int dce_v11_0_sw_init(void *handle)
+ adev_to_drm(adev)->mode_config.max_height = 16384;
+
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+
+ adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
+index 3caf6f386042f..982855e6cf52e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
+@@ -2673,8 +2673,7 @@ static int dce_v6_0_sw_init(void *handle)
+ adev_to_drm(adev)->mode_config.max_width = 16384;
+ adev_to_drm(adev)->mode_config.max_height = 16384;
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+ adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true;
+ adev_to_drm(adev)->mode_config.fb_base = adev->gmc.aper_base;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
+index 7c75df5bffed3..cf44d1b054acf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
+@@ -2693,8 +2693,11 @@ static int dce_v8_0_sw_init(void *handle)
+ adev_to_drm(adev)->mode_config.max_height = 16384;
+
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ if (adev->asic_type == CHIP_HAWAII)
++ /* disable prefer shadow for now due to hibernation issues */
++ adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ else
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+
+ adev_to_drm(adev)->mode_config.fb_modifiers_not_supported = true;
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index d055d3c7eed6a..0424570c736fa 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3894,8 +3894,11 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
+ adev_to_drm(adev)->mode_config.max_height = 16384;
+
+ adev_to_drm(adev)->mode_config.preferred_depth = 24;
+- /* disable prefer shadow for now due to hibernation issues */
+- adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ if (adev->asic_type == CHIP_HAWAII)
++ /* disable prefer shadow for now due to hibernation issues */
++ adev_to_drm(adev)->mode_config.prefer_shadow = 0;
++ else
++ adev_to_drm(adev)->mode_config.prefer_shadow = 1;
+ /* indicates support for immediate flip */
+ adev_to_drm(adev)->mode_config.async_page_flip = true;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
+index 76f863eb86ef2..401ccc676ae9a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
+@@ -372,7 +372,7 @@ static struct stream_encoder *dcn303_stream_encoder_create(enum engine_id eng_id
+ int afmt_inst;
+
+ /* Mapping of VPG, AFMT, DME register blocks to DIO block instance */
+- if (eng_id <= ENGINE_ID_DIGE) {
++ if (eng_id <= ENGINE_ID_DIGB) {
+ vpg_inst = eng_id;
+ afmt_inst = eng_id;
+ } else
+diff --git a/drivers/gpu/drm/bridge/lvds-codec.c b/drivers/gpu/drm/bridge/lvds-codec.c
+index 702ea803a743c..39e7004de7200 100644
+--- a/drivers/gpu/drm/bridge/lvds-codec.c
++++ b/drivers/gpu/drm/bridge/lvds-codec.c
+@@ -180,7 +180,7 @@ static int lvds_codec_probe(struct platform_device *pdev)
+ of_node_put(bus_node);
+ if (ret == -ENODEV) {
+ dev_warn(dev, "missing 'data-mapping' DT property\n");
+- } else if (ret) {
++ } else if (ret < 0) {
+ dev_err(dev, "invalid 'data-mapping' DT property\n");
+ return ret;
+ } else {
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
+index 06b1b188ce5a4..cbe607cadd7fb 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
+@@ -268,7 +268,7 @@ static void __i915_gem_object_free_mmaps(struct drm_i915_gem_object *obj)
+ */
+ void __i915_gem_object_pages_fini(struct drm_i915_gem_object *obj)
+ {
+- assert_object_held(obj);
++ assert_object_held_shared(obj);
+
+ if (!list_empty(&obj->vma.list)) {
+ struct i915_vma *vma;
+@@ -331,15 +331,7 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
+ continue;
+ }
+
+- if (!i915_gem_object_trylock(obj, NULL)) {
+- /* busy, toss it back to the pile */
+- if (llist_add(&obj->freed, &i915->mm.free_list))
+- queue_delayed_work(i915->wq, &i915->mm.free_work, msecs_to_jiffies(10));
+- continue;
+- }
+-
+ __i915_gem_object_pages_fini(obj);
+- i915_gem_object_unlock(obj);
+ __i915_gem_free_object(obj);
+
+ /* But keep the pointer alive for RCU-protected lookups */
+@@ -359,7 +351,7 @@ void i915_gem_flush_free_objects(struct drm_i915_private *i915)
+ static void __i915_gem_free_work(struct work_struct *work)
+ {
+ struct drm_i915_private *i915 =
+- container_of(work, struct drm_i915_private, mm.free_work.work);
++ container_of(work, struct drm_i915_private, mm.free_work);
+
+ i915_gem_flush_free_objects(i915);
+ }
+@@ -391,7 +383,7 @@ static void i915_gem_free_object(struct drm_gem_object *gem_obj)
+ */
+
+ if (llist_add(&obj->freed, &i915->mm.free_list))
+- queue_delayed_work(i915->wq, &i915->mm.free_work, 0);
++ queue_work(i915->wq, &i915->mm.free_work);
+ }
+
+ void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
+@@ -719,7 +711,7 @@ bool i915_gem_object_placement_possible(struct drm_i915_gem_object *obj,
+
+ void i915_gem_init__objects(struct drm_i915_private *i915)
+ {
+- INIT_DELAYED_WORK(&i915->mm.free_work, __i915_gem_free_work);
++ INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);
+ }
+
+ void i915_objects_module_exit(void)
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+index 2c88bdb8ff7cc..4e224ef359406 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+@@ -335,7 +335,6 @@ struct drm_i915_gem_object {
+ #define I915_BO_READONLY BIT(7)
+ #define I915_TILING_QUIRK_BIT 8 /* unknown swizzling; do not release! */
+ #define I915_BO_PROTECTED BIT(9)
+-#define I915_BO_WAS_BOUND_BIT 10
+ /**
+ * @mem_flags - Mutable placement-related flags
+ *
+@@ -598,6 +597,8 @@ struct drm_i915_gem_object {
+ * pages were last acquired.
+ */
+ bool dirty:1;
++
++ u32 tlb;
+ } mm;
+
+ struct {
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+index 97c820eee115a..8357dbdcab5cb 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+@@ -6,14 +6,15 @@
+
+ #include <drm/drm_cache.h>
+
++#include "gt/intel_gt.h"
++#include "gt/intel_gt_pm.h"
++
+ #include "i915_drv.h"
+ #include "i915_gem_object.h"
+ #include "i915_scatterlist.h"
+ #include "i915_gem_lmem.h"
+ #include "i915_gem_mman.h"
+
+-#include "gt/intel_gt.h"
+-
+ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
+ struct sg_table *pages,
+ unsigned int sg_page_sizes)
+@@ -190,6 +191,18 @@ static void unmap_object(struct drm_i915_gem_object *obj, void *ptr)
+ vunmap(ptr);
+ }
+
++static void flush_tlb_invalidate(struct drm_i915_gem_object *obj)
++{
++ struct drm_i915_private *i915 = to_i915(obj->base.dev);
++ struct intel_gt *gt = to_gt(i915);
++
++ if (!obj->mm.tlb)
++ return;
++
++ intel_gt_invalidate_tlb(gt, obj->mm.tlb);
++ obj->mm.tlb = 0;
++}
++
+ struct sg_table *
+ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
+ {
+@@ -215,13 +228,7 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
+ __i915_gem_object_reset_page_iter(obj);
+ obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
+
+- if (test_and_clear_bit(I915_BO_WAS_BOUND_BIT, &obj->flags)) {
+- struct drm_i915_private *i915 = to_i915(obj->base.dev);
+- intel_wakeref_t wakeref;
+-
+- with_intel_runtime_pm_if_active(&i915->runtime_pm, wakeref)
+- intel_gt_invalidate_tlbs(to_gt(i915));
+- }
++ flush_tlb_invalidate(obj);
+
+ return pages;
+ }
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
+index 531af6ad70071..a47dcf7663aef 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt.c
+@@ -10,7 +10,9 @@
+ #include "pxp/intel_pxp.h"
+
+ #include "i915_drv.h"
++#include "i915_perf_oa_regs.h"
+ #include "intel_context.h"
++#include "intel_engine_pm.h"
+ #include "intel_engine_regs.h"
+ #include "intel_gt.h"
+ #include "intel_gt_buffer_pool.h"
+@@ -34,8 +36,6 @@ static void __intel_gt_init_early(struct intel_gt *gt)
+ {
+ spin_lock_init(&gt->irq_lock);
+
+- mutex_init(&gt->tlb_invalidate_lock);
+-
+ INIT_LIST_HEAD(&gt->closed_vma);
+ spin_lock_init(&gt->closed_lock);
+
+@@ -46,6 +46,8 @@ static void __intel_gt_init_early(struct intel_gt *gt)
+ intel_gt_init_reset(gt);
+ intel_gt_init_requests(gt);
+ intel_gt_init_timelines(gt);
++ mutex_init(&gt->tlb.invalidate_lock);
++ seqcount_mutex_init(&gt->tlb.seqno, &gt->tlb.invalidate_lock);
+ intel_gt_pm_init_early(gt);
+
+ intel_uc_init_early(&gt->uc);
+@@ -831,6 +833,7 @@ void intel_gt_driver_late_release_all(struct drm_i915_private *i915)
+ intel_gt_fini_requests(gt);
+ intel_gt_fini_reset(gt);
+ intel_gt_fini_timelines(gt);
++ mutex_destroy(&gt->tlb.invalidate_lock);
+ intel_engines_free(gt);
+ }
+ }
+@@ -1163,7 +1166,7 @@ get_reg_and_bit(const struct intel_engine_cs *engine, const bool gen8,
+ return rb;
+ }
+
+-void intel_gt_invalidate_tlbs(struct intel_gt *gt)
++static void mmio_invalidate_full(struct intel_gt *gt)
+ {
+ static const i915_reg_t gen8_regs[] = {
+ [RENDER_CLASS] = GEN8_RTCR,
+@@ -1181,13 +1184,11 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ struct drm_i915_private *i915 = gt->i915;
+ struct intel_uncore *uncore = gt->uncore;
+ struct intel_engine_cs *engine;
++ intel_engine_mask_t awake, tmp;
+ enum intel_engine_id id;
+ const i915_reg_t *regs;
+ unsigned int num = 0;
+
+- if (I915_SELFTEST_ONLY(gt->awake == -ENODEV))
+- return;
+-
+ if (GRAPHICS_VER(i915) == 12) {
+ regs = gen12_regs;
+ num = ARRAY_SIZE(gen12_regs);
+@@ -1202,28 +1203,41 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ "Platform does not implement TLB invalidation!"))
+ return;
+
+- GEM_TRACE("\n");
+-
+- assert_rpm_wakelock_held(&i915->runtime_pm);
+-
+- mutex_lock(&gt->tlb_invalidate_lock);
+ intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
+
+ spin_lock_irq(&uncore->lock); /* serialise invalidate with GT reset */
+
++ awake = 0;
+ for_each_engine(engine, gt, id) {
+ struct reg_and_bit rb;
+
++ if (!intel_engine_pm_is_awake(engine))
++ continue;
++
+ rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
+ if (!i915_mmio_reg_offset(rb.reg))
+ continue;
+
+ intel_uncore_write_fw(uncore, rb.reg, rb.bit);
++ awake |= engine->mask;
+ }
+
++ GT_TRACE(gt, "invalidated engines %08x\n", awake);
++
++ /* Wa_2207587034:tgl,dg1,rkl,adl-s,adl-p */
++ if (awake &&
++ (IS_TIGERLAKE(i915) ||
++ IS_DG1(i915) ||
++ IS_ROCKETLAKE(i915) ||
++ IS_ALDERLAKE_S(i915) ||
++ IS_ALDERLAKE_P(i915)))
++ intel_uncore_write_fw(uncore, GEN12_OA_TLB_INV_CR, 1);
++
+ spin_unlock_irq(&uncore->lock);
+
+- for_each_engine(engine, gt, id) {
++ for_each_engine_masked(engine, gt, awake, tmp) {
++ struct reg_and_bit rb;
++
+ /*
+ * HW architecture suggest typical invalidation time at 40us,
+ * with pessimistic cases up to 100us and a recommendation to
+@@ -1231,12 +1245,8 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ */
+ const unsigned int timeout_us = 100;
+ const unsigned int timeout_ms = 4;
+- struct reg_and_bit rb;
+
+ rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
+- if (!i915_mmio_reg_offset(rb.reg))
+- continue;
+-
+ if (__intel_wait_for_register_fw(uncore,
+ rb.reg, rb.bit, 0,
+ timeout_us, timeout_ms,
+@@ -1253,5 +1263,38 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
+ * transitions.
+ */
+ intel_uncore_forcewake_put_delayed(uncore, FORCEWAKE_ALL);
+- mutex_unlock(&gt->tlb_invalidate_lock);
++}
++
++static bool tlb_seqno_passed(const struct intel_gt *gt, u32 seqno)
++{
++ u32 cur = intel_gt_tlb_seqno(gt);
++
++ /* Only skip if a *full* TLB invalidate barrier has passed */
++ return (s32)(cur - ALIGN(seqno, 2)) > 0;
++}
++
++void intel_gt_invalidate_tlb(struct intel_gt *gt, u32 seqno)
++{
++ intel_wakeref_t wakeref;
++
++ if (I915_SELFTEST_ONLY(gt->awake == -ENODEV))
++ return;
++
++ if (intel_gt_is_wedged(gt))
++ return;
++
++ if (tlb_seqno_passed(gt, seqno))
++ return;
++
++ with_intel_gt_pm_if_awake(gt, wakeref) {
++ mutex_lock(&gt->tlb.invalidate_lock);
++ if (tlb_seqno_passed(gt, seqno))
++ goto unlock;
++
++ mmio_invalidate_full(gt);
++
++ write_seqcount_invalidate(&gt->tlb.seqno);
++unlock:
++ mutex_unlock(&gt->tlb.invalidate_lock);
++ }
+ }
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt.h b/drivers/gpu/drm/i915/gt/intel_gt.h
+index 44c6cb63ccbc8..d5a2af76d6a52 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt.h
+@@ -123,7 +123,17 @@ void intel_gt_info_print(const struct intel_gt_info *info,
+
+ void intel_gt_watchdog_work(struct work_struct *work);
+
+-void intel_gt_invalidate_tlbs(struct intel_gt *gt);
++static inline u32 intel_gt_tlb_seqno(const struct intel_gt *gt)
++{
++ return seqprop_sequence(&gt->tlb.seqno);
++}
++
++static inline u32 intel_gt_next_invalidate_tlb_full(const struct intel_gt *gt)
++{
++ return intel_gt_tlb_seqno(gt) | 1;
++}
++
++void intel_gt_invalidate_tlb(struct intel_gt *gt, u32 seqno);
+
+ struct resource intel_pci_resource(struct pci_dev *pdev, int bar);
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
+index bc898df7a48cc..a334787a4939f 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
+@@ -55,6 +55,9 @@ static inline void intel_gt_pm_might_put(struct intel_gt *gt)
+ for (tmp = 1, intel_gt_pm_get(gt); tmp; \
+ intel_gt_pm_put(gt), tmp = 0)
+
++#define with_intel_gt_pm_if_awake(gt, wf) \
++ for (wf = intel_gt_pm_get_if_awake(gt); wf; intel_gt_pm_put_async(gt), wf = 0)
++
+ static inline int intel_gt_pm_wait_for_idle(struct intel_gt *gt)
+ {
+ return intel_wakeref_wait_for_idle(&gt->wakeref);
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_types.h b/drivers/gpu/drm/i915/gt/intel_gt_types.h
+index edd7a3cf5f5f5..b1120f73dc8c4 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_types.h
++++ b/drivers/gpu/drm/i915/gt/intel_gt_types.h
+@@ -11,6 +11,7 @@
+ #include <linux/llist.h>
+ #include <linux/mutex.h>
+ #include <linux/notifier.h>
++#include <linux/seqlock.h>
+ #include <linux/spinlock.h>
+ #include <linux/types.h>
+ #include <linux/workqueue.h>
+@@ -76,7 +77,22 @@ struct intel_gt {
+ struct intel_uc uc;
+ struct intel_gsc gsc;
+
+- struct mutex tlb_invalidate_lock;
++ struct {
++ /* Serialize global tlb invalidations */
++ struct mutex invalidate_lock;
++
++ /*
++ * Batch TLB invalidations
++ *
++ * After unbinding the PTE, we need to ensure the TLB
++ * are invalidated prior to releasing the physical pages.
++ * But we only need one such invalidation for all unbinds,
++ * so we track how many TLB invalidations have been
++ * performed since unbind the PTE and only emit an extra
++ * invalidate if no full barrier has been passed.
++ */
++ seqcount_mutex_t seqno;
++ } tlb;
+
+ struct i915_wa_list wa_list;
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
+index 2c35324b5f68c..2b10b96b17b5b 100644
+--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
++++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
+@@ -708,7 +708,7 @@ intel_context_migrate_copy(struct intel_context *ce,
+ u8 src_access, dst_access;
+ struct i915_request *rq;
+ int src_sz, dst_sz;
+- bool ccs_is_src;
++ bool ccs_is_src, overwrite_ccs;
+ int err;
+
+ GEM_BUG_ON(ce->vm != ce->engine->gt->migrate.context->vm);
+@@ -749,6 +749,8 @@ intel_context_migrate_copy(struct intel_context *ce,
+ get_ccs_sg_sgt(&it_ccs, bytes_to_cpy);
+ }
+
++ overwrite_ccs = HAS_FLAT_CCS(i915) && !ccs_bytes_to_cpy && dst_is_lmem;
++
+ src_offset = 0;
+ dst_offset = CHUNK_SZ;
+ if (HAS_64K_PAGES(ce->engine->i915)) {
+@@ -852,6 +854,25 @@ intel_context_migrate_copy(struct intel_context *ce,
+ if (err)
+ goto out_rq;
+ ccs_bytes_to_cpy -= ccs_sz;
++ } else if (overwrite_ccs) {
++ err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
++ if (err)
++ goto out_rq;
++
++ /*
++ * While we can't always restore/manage the CCS state,
++ * we still need to ensure we don't leak the CCS state
++ * from the previous user, so make sure we overwrite it
++ * with something.
++ */
++ err = emit_copy_ccs(rq, dst_offset, INDIRECT_ACCESS,
++ dst_offset, DIRECT_ACCESS, len);
++ if (err)
++ goto out_rq;
++
++ err = rq->engine->emit_flush(rq, EMIT_INVALIDATE);
++ if (err)
++ goto out_rq;
+ }
+
+ /* Arbitration is re-enabled between requests. */
+diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
+index d8b94d6385598..6ee8d11270168 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c
++++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
+@@ -206,8 +206,12 @@ void ppgtt_bind_vma(struct i915_address_space *vm,
+ void ppgtt_unbind_vma(struct i915_address_space *vm,
+ struct i915_vma_resource *vma_res)
+ {
+- if (vma_res->allocated)
+- vm->clear_range(vm, vma_res->start, vma_res->vma_size);
++ if (!vma_res->allocated)
++ return;
++
++ vm->clear_range(vm, vma_res->start, vma_res->vma_size);
++ if (vma_res->tlb)
++ vma_invalidate_tlb(vm, vma_res->tlb);
+ }
+
+ static unsigned long pd_count(u64 size, int shift)
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 00d7eeae33bd3..5184d70d48382 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -254,7 +254,7 @@ struct i915_gem_mm {
+ * List of objects which are pending destruction.
+ */
+ struct llist_head free_list;
+- struct delayed_work free_work;
++ struct work_struct free_work;
+ /**
+ * Count of objects pending destructions. Used to skip needlessly
+ * waiting on an RCU barrier if no objects are waiting to be freed.
+@@ -1415,7 +1415,7 @@ static inline void i915_gem_drain_freed_objects(struct drm_i915_private *i915)
+ * armed the work again.
+ */
+ while (atomic_read(&i915->mm.free_count)) {
+- flush_delayed_work(&i915->mm.free_work);
++ flush_work(&i915->mm.free_work);
+ flush_delayed_work(&i915->bdev.wq);
+ rcu_barrier();
+ }
+diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
+index 04d12f278f572..16460b169ed21 100644
+--- a/drivers/gpu/drm/i915/i915_vma.c
++++ b/drivers/gpu/drm/i915/i915_vma.c
+@@ -537,8 +537,6 @@ int i915_vma_bind(struct i915_vma *vma,
+ bind_flags);
+ }
+
+- set_bit(I915_BO_WAS_BOUND_BIT, &vma->obj->flags);
+-
+ atomic_or(bind_flags, &vma->flags);
+ return 0;
+ }
+@@ -1301,6 +1299,19 @@ err_unpin:
+ return err;
+ }
+
++void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb)
++{
++ /*
++ * Before we release the pages that were bound by this vma, we
++ * must invalidate all the TLBs that may still have a reference
++ * back to our physical address. It only needs to be done once,
++ * so after updating the PTE to point away from the pages, record
++ * the most recent TLB invalidation seqno, and if we have not yet
++ * flushed the TLBs upon release, perform a full invalidation.
++ */
++ WRITE_ONCE(*tlb, intel_gt_next_invalidate_tlb_full(vm->gt));
++}
++
+ static void __vma_put_pages(struct i915_vma *vma, unsigned int count)
+ {
+ /* We allocate under vma_get_pages, so beware the shrinker */
+@@ -1927,7 +1938,12 @@ struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async)
+ vma->vm->skip_pte_rewrite;
+ trace_i915_vma_unbind(vma);
+
+- unbind_fence = i915_vma_resource_unbind(vma_res);
++ if (async)
++ unbind_fence = i915_vma_resource_unbind(vma_res,
++ &vma->obj->mm.tlb);
++ else
++ unbind_fence = i915_vma_resource_unbind(vma_res, NULL);
++
+ vma->resource = NULL;
+
+ atomic_and(~(I915_VMA_BIND_MASK | I915_VMA_ERROR | I915_VMA_GGTT_WRITE),
+@@ -1935,10 +1951,13 @@ struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async)
+
+ i915_vma_detach(vma);
+
+- if (!async && unbind_fence) {
+- dma_fence_wait(unbind_fence, false);
+- dma_fence_put(unbind_fence);
+- unbind_fence = NULL;
++ if (!async) {
++ if (unbind_fence) {
++ dma_fence_wait(unbind_fence, false);
++ dma_fence_put(unbind_fence);
++ unbind_fence = NULL;
++ }
++ vma_invalidate_tlb(vma->vm, &vma->obj->mm.tlb);
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
+index 88ca0bd9c9003..33a58f605d75c 100644
+--- a/drivers/gpu/drm/i915/i915_vma.h
++++ b/drivers/gpu/drm/i915/i915_vma.h
+@@ -213,6 +213,7 @@ bool i915_vma_misplaced(const struct i915_vma *vma,
+ u64 size, u64 alignment, u64 flags);
+ void __i915_vma_set_map_and_fenceable(struct i915_vma *vma);
+ void i915_vma_revoke_mmap(struct i915_vma *vma);
++void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb);
+ struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async);
+ int __i915_vma_unbind(struct i915_vma *vma);
+ int __must_check i915_vma_unbind(struct i915_vma *vma);
+diff --git a/drivers/gpu/drm/i915/i915_vma_resource.c b/drivers/gpu/drm/i915/i915_vma_resource.c
+index 27c55027387a0..5a67995ea5fe2 100644
+--- a/drivers/gpu/drm/i915/i915_vma_resource.c
++++ b/drivers/gpu/drm/i915/i915_vma_resource.c
+@@ -223,10 +223,13 @@ i915_vma_resource_fence_notify(struct i915_sw_fence *fence,
+ * Return: A refcounted pointer to a dma-fence that signals when unbinding is
+ * complete.
+ */
+-struct dma_fence *i915_vma_resource_unbind(struct i915_vma_resource *vma_res)
++struct dma_fence *i915_vma_resource_unbind(struct i915_vma_resource *vma_res,
++ u32 *tlb)
+ {
+ struct i915_address_space *vm = vma_res->vm;
+
++ vma_res->tlb = tlb;
++
+ /* Reference for the sw fence */
+ i915_vma_resource_get(vma_res);
+
+diff --git a/drivers/gpu/drm/i915/i915_vma_resource.h b/drivers/gpu/drm/i915/i915_vma_resource.h
+index 5d8427caa2ba2..06923d1816e7e 100644
+--- a/drivers/gpu/drm/i915/i915_vma_resource.h
++++ b/drivers/gpu/drm/i915/i915_vma_resource.h
+@@ -67,6 +67,7 @@ struct i915_page_sizes {
+ * taken when the unbind is scheduled.
+ * @skip_pte_rewrite: During ggtt suspend and vm takedown pte rewriting
+ * needs to be skipped for unbind.
++ * @tlb: pointer for obj->mm.tlb, if async unbind. Otherwise, NULL
+ *
+ * The lifetime of a struct i915_vma_resource is from a binding request to
+ * the actual possible asynchronous unbind has completed.
+@@ -119,6 +120,8 @@ struct i915_vma_resource {
+ bool immediate_unbind:1;
+ bool needs_wakeref:1;
+ bool skip_pte_rewrite:1;
++
++ u32 *tlb;
+ };
+
+ bool i915_vma_resource_hold(struct i915_vma_resource *vma_res,
+@@ -131,7 +134,8 @@ struct i915_vma_resource *i915_vma_resource_alloc(void);
+
+ void i915_vma_resource_free(struct i915_vma_resource *vma_res);
+
+-struct dma_fence *i915_vma_resource_unbind(struct i915_vma_resource *vma_res);
++struct dma_fence *i915_vma_resource_unbind(struct i915_vma_resource *vma_res,
++ u32 *tlb);
+
+ void __i915_vma_resource_init(struct i915_vma_resource *vma_res);
+
+diff --git a/drivers/gpu/drm/imx/dcss/dcss-kms.c b/drivers/gpu/drm/imx/dcss/dcss-kms.c
+index 9b84df34a6a12..8cf3352d88582 100644
+--- a/drivers/gpu/drm/imx/dcss/dcss-kms.c
++++ b/drivers/gpu/drm/imx/dcss/dcss-kms.c
+@@ -142,8 +142,6 @@ struct dcss_kms_dev *dcss_kms_attach(struct dcss_dev *dcss)
+
+ drm_kms_helper_poll_init(drm);
+
+- drm_bridge_connector_enable_hpd(kms->connector);
+-
+ ret = drm_dev_register(drm, 0);
+ if (ret)
+ goto cleanup_crtc;
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 1b70938cfd2c4..bd4ca11d3ff53 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -115,8 +115,11 @@ static bool meson_vpu_has_available_connectors(struct device *dev)
+ for_each_endpoint_of_node(dev->of_node, ep) {
+ /* If the endpoint node exists, consider it enabled */
+ remote = of_graph_get_remote_port(ep);
+- if (remote)
++ if (remote) {
++ of_node_put(remote);
++ of_node_put(ep);
+ return true;
++ }
+ }
+
+ return false;
+diff --git a/drivers/gpu/drm/meson/meson_viu.c b/drivers/gpu/drm/meson/meson_viu.c
+index 259f3e6bec90a..bb7e109534de1 100644
+--- a/drivers/gpu/drm/meson/meson_viu.c
++++ b/drivers/gpu/drm/meson/meson_viu.c
+@@ -469,17 +469,17 @@ void meson_viu_init(struct meson_drm *priv)
+ priv->io_base + _REG(VD2_IF0_LUMA_FIFO_SIZE));
+
+ if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
+- writel_relaxed(VIU_OSD_BLEND_REORDER(0, 1) |
+- VIU_OSD_BLEND_REORDER(1, 0) |
+- VIU_OSD_BLEND_REORDER(2, 0) |
+- VIU_OSD_BLEND_REORDER(3, 0) |
+- VIU_OSD_BLEND_DIN_EN(1) |
+- VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
+- VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
+- VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
+- VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
+- VIU_OSD_BLEND_HOLD_LINES(4),
+- priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
++ u32 val = (u32)VIU_OSD_BLEND_REORDER(0, 1) |
++ (u32)VIU_OSD_BLEND_REORDER(1, 0) |
++ (u32)VIU_OSD_BLEND_REORDER(2, 0) |
++ (u32)VIU_OSD_BLEND_REORDER(3, 0) |
++ (u32)VIU_OSD_BLEND_DIN_EN(1) |
++ (u32)VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
++ (u32)VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
++ (u32)VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
++ (u32)VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
++ (u32)VIU_OSD_BLEND_HOLD_LINES(4);
++ writel_relaxed(val, priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
+
+ writel_relaxed(OSD_BLEND_PATH_SEL_ENABLE,
+ priv->io_base + _REG(OSD1_BLEND_SRC_CTRL));
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+index 62efbd0f38466..b7246b146e51d 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+@@ -2605,6 +2605,27 @@ nv172_chipset = {
+ .fifo = { 0x00000001, ga102_fifo_new },
+ };
+
++static const struct nvkm_device_chip
++nv173_chipset = {
++ .name = "GA103",
++ .bar = { 0x00000001, tu102_bar_new },
++ .bios = { 0x00000001, nvkm_bios_new },
++ .devinit = { 0x00000001, ga100_devinit_new },
++ .fb = { 0x00000001, ga102_fb_new },
++ .gpio = { 0x00000001, ga102_gpio_new },
++ .i2c = { 0x00000001, gm200_i2c_new },
++ .imem = { 0x00000001, nv50_instmem_new },
++ .mc = { 0x00000001, ga100_mc_new },
++ .mmu = { 0x00000001, tu102_mmu_new },
++ .pci = { 0x00000001, gp100_pci_new },
++ .privring = { 0x00000001, gm200_privring_new },
++ .timer = { 0x00000001, gk20a_timer_new },
++ .top = { 0x00000001, ga100_top_new },
++ .disp = { 0x00000001, ga102_disp_new },
++ .dma = { 0x00000001, gv100_dma_new },
++ .fifo = { 0x00000001, ga102_fifo_new },
++};
++
+ static const struct nvkm_device_chip
+ nv174_chipset = {
+ .name = "GA104",
+@@ -3092,6 +3113,7 @@ nvkm_device_ctor(const struct nvkm_device_func *func,
+ case 0x167: device->chip = &nv167_chipset; break;
+ case 0x168: device->chip = &nv168_chipset; break;
+ case 0x172: device->chip = &nv172_chipset; break;
++ case 0x173: device->chip = &nv173_chipset; break;
+ case 0x174: device->chip = &nv174_chipset; break;
+ case 0x176: device->chip = &nv176_chipset; break;
+ case 0x177: device->chip = &nv177_chipset; break;
+diff --git a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+index b4dfa166eccdf..34234a144e87d 100644
+--- a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
++++ b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+@@ -531,7 +531,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ struct drm_display_mode *mode)
+ {
+ struct mipi_dsi_device *device = dsi->device;
+- unsigned int Bpp = mipi_dsi_pixel_format_to_bpp(device->format) / 8;
++ int Bpp = mipi_dsi_pixel_format_to_bpp(device->format) / 8;
+ u16 hbp = 0, hfp = 0, hsa = 0, hblk = 0, vblk = 0;
+ u32 basic_ctl = 0;
+ size_t bytes;
+@@ -555,7 +555,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ * (4 bytes). Its minimal size is therefore 10 bytes
+ */
+ #define HSA_PACKET_OVERHEAD 10
+- hsa = max((unsigned int)HSA_PACKET_OVERHEAD,
++ hsa = max(HSA_PACKET_OVERHEAD,
+ (mode->hsync_end - mode->hsync_start) * Bpp - HSA_PACKET_OVERHEAD);
+
+ /*
+@@ -564,7 +564,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ * therefore 6 bytes
+ */
+ #define HBP_PACKET_OVERHEAD 6
+- hbp = max((unsigned int)HBP_PACKET_OVERHEAD,
++ hbp = max(HBP_PACKET_OVERHEAD,
+ (mode->htotal - mode->hsync_end) * Bpp - HBP_PACKET_OVERHEAD);
+
+ /*
+@@ -574,7 +574,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ * 16 bytes
+ */
+ #define HFP_PACKET_OVERHEAD 16
+- hfp = max((unsigned int)HFP_PACKET_OVERHEAD,
++ hfp = max(HFP_PACKET_OVERHEAD,
+ (mode->hsync_start - mode->hdisplay) * Bpp - HFP_PACKET_OVERHEAD);
+
+ /*
+@@ -583,7 +583,7 @@ static void sun6i_dsi_setup_timings(struct sun6i_dsi *dsi,
+ * bytes). Its minimal size is therefore 10 bytes.
+ */
+ #define HBLK_PACKET_OVERHEAD 10
+- hblk = max((unsigned int)HBLK_PACKET_OVERHEAD,
++ hblk = max(HBLK_PACKET_OVERHEAD,
+ (mode->htotal - (mode->hsync_end - mode->hsync_start)) * Bpp -
+ HBLK_PACKET_OVERHEAD);
+
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index 406e9c324e76a..5bf7124ece96d 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -918,7 +918,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo,
+ /*
+ * We might need to add a TTM.
+ */
+- if (bo->resource->mem_type == TTM_PL_SYSTEM) {
++ if (!bo->resource || bo->resource->mem_type == TTM_PL_SYSTEM) {
+ ret = ttm_tt_create(bo, true);
+ if (ret)
+ return ret;
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 6bb3890b0f2c9..2e72922e36f56 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -194,6 +194,7 @@ static void mt_post_parse(struct mt_device *td, struct mt_application *app);
+ #define MT_CLS_WIN_8_FORCE_MULTI_INPUT 0x0015
+ #define MT_CLS_WIN_8_DISABLE_WAKEUP 0x0016
+ #define MT_CLS_WIN_8_NO_STICKY_FINGERS 0x0017
++#define MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU 0x0018
+
+ /* vendor specific classes */
+ #define MT_CLS_3M 0x0101
+@@ -286,6 +287,15 @@ static const struct mt_class mt_classes[] = {
+ MT_QUIRK_WIN8_PTP_BUTTONS |
+ MT_QUIRK_FORCE_MULTI_INPUT,
+ .export_all_inputs = true },
++ { .name = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
++ .quirks = MT_QUIRK_IGNORE_DUPLICATES |
++ MT_QUIRK_HOVERING |
++ MT_QUIRK_CONTACT_CNT_ACCURATE |
++ MT_QUIRK_STICKY_FINGERS |
++ MT_QUIRK_WIN8_PTP_BUTTONS |
++ MT_QUIRK_FORCE_MULTI_INPUT |
++ MT_QUIRK_NOT_SEEN_MEANS_UP,
++ .export_all_inputs = true },
+ { .name = MT_CLS_WIN_8_DISABLE_WAKEUP,
+ .quirks = MT_QUIRK_ALWAYS_VALID |
+ MT_QUIRK_IGNORE_DUPLICATES |
+@@ -783,6 +793,7 @@ static int mt_touch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ case HID_DG_CONFIDENCE:
+ if ((cls->name == MT_CLS_WIN_8 ||
+ cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT ||
++ cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU ||
+ cls->name == MT_CLS_WIN_8_DISABLE_WAKEUP) &&
+ (field->application == HID_DG_TOUCHPAD ||
+ field->application == HID_DG_TOUCHSCREEN))
+@@ -2035,7 +2046,7 @@ static const struct hid_device_id mt_devices[] = {
+ USB_DEVICE_ID_LENOVO_X1_TAB3) },
+
+ /* Lenovo X12 TAB Gen 1 */
+- { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+ HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
+ USB_VENDOR_ID_LENOVO,
+ USB_DEVICE_ID_LENOVO_X12_TAB) },
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
+index 33869c1d20c31..a7bfea31f7d8f 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.h
++++ b/drivers/hwtracing/coresight/coresight-etm4x.h
+@@ -7,6 +7,7 @@
+ #define _CORESIGHT_CORESIGHT_ETM_H
+
+ #include <asm/local.h>
++#include <linux/const.h>
+ #include <linux/spinlock.h>
+ #include <linux/types.h>
+ #include "coresight-priv.h"
+@@ -515,7 +516,7 @@
+ ({ \
+ u64 __val; \
+ \
+- if (__builtin_constant_p((offset))) \
++ if (__is_constexpr((offset))) \
+ __val = read_etm4x_sysreg_const_offset((offset)); \
+ else \
+ __val = etm4x_sysreg_read((offset), true, (_64bit)); \
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 78fb1a4274a6c..e47fa34656717 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -1572,9 +1572,7 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev);
+ int irq, ret;
+
+- ret = pm_runtime_resume_and_get(&pdev->dev);
+- if (ret < 0)
+- return ret;
++ ret = pm_runtime_get_sync(&pdev->dev);
+
+ hrtimer_cancel(&i2c_imx->slave_timer);
+
+@@ -1585,17 +1583,21 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ if (i2c_imx->dma)
+ i2c_imx_dma_free(i2c_imx);
+
+- /* setup chip registers to defaults */
+- imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR);
+- imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);
+- imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2CR);
+- imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2SR);
++ if (ret == 0) {
++ /* setup chip registers to defaults */
++ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR);
++ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);
++ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2CR);
++ imx_i2c_write_reg(0, i2c_imx, IMX_I2C_I2SR);
++ clk_disable(i2c_imx->clk);
++ }
+
+ clk_notifier_unregister(i2c_imx->clk, &i2c_imx->clk_change_nb);
+ irq = platform_get_irq(pdev, 0);
+ if (irq >= 0)
+ free_irq(irq, i2c_imx);
+- clk_disable_unprepare(i2c_imx->clk);
++
++ clk_unprepare(i2c_imx->clk);
+
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
+index 3bec7c782824a..906df87a89f23 100644
+--- a/drivers/i2c/busses/i2c-qcom-geni.c
++++ b/drivers/i2c/busses/i2c-qcom-geni.c
+@@ -484,12 +484,12 @@ static void geni_i2c_gpi_unmap(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ {
+ if (tx_buf) {
+ dma_unmap_single(gi2c->se.dev->parent, tx_addr, msg->len, DMA_TO_DEVICE);
+- i2c_put_dma_safe_msg_buf(tx_buf, msg, false);
++ i2c_put_dma_safe_msg_buf(tx_buf, msg, !gi2c->err);
+ }
+
+ if (rx_buf) {
+ dma_unmap_single(gi2c->se.dev->parent, rx_addr, msg->len, DMA_FROM_DEVICE);
+- i2c_put_dma_safe_msg_buf(rx_buf, msg, false);
++ i2c_put_dma_safe_msg_buf(rx_buf, msg, !gi2c->err);
+ }
+ }
+
+@@ -553,6 +553,7 @@ static int geni_i2c_gpi(struct geni_i2c_dev *gi2c, struct i2c_msg *msg,
+ desc->callback_param = gi2c;
+
+ dmaengine_submit(desc);
++ *buf = dma_buf;
+ *dma_addr_p = addr;
+
+ return 0;
+diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
+index fce80a4a5147c..04c04e6d24c35 100644
+--- a/drivers/infiniband/core/umem_dmabuf.c
++++ b/drivers/infiniband/core/umem_dmabuf.c
+@@ -18,6 +18,7 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
+ struct scatterlist *sg;
+ unsigned long start, end, cur = 0;
+ unsigned int nmap = 0;
++ long ret;
+ int i;
+
+ dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);
+@@ -67,9 +68,14 @@ wait_fence:
+ * may be not up-to-date. Wait for the exporter to finish
+ * the migration.
+ */
+- return dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv,
++ ret = dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv,
+ DMA_RESV_USAGE_KERNEL,
+ false, MAX_SCHEDULE_TIMEOUT);
++ if (ret < 0)
++ return ret;
++ if (ret == 0)
++ return -ETIMEDOUT;
++ return 0;
+ }
+ EXPORT_SYMBOL(ib_umem_dmabuf_map_pages);
+
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index c16017f6e8db2..14392c942f492 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -2468,31 +2468,24 @@ static int accept_cr(struct c4iw_ep *ep, struct sk_buff *skb,
+ opt2 |= CCTRL_ECN_V(1);
+ }
+
+- skb_get(skb);
+- rpl = cplhdr(skb);
+ if (!is_t4(adapter_type)) {
+- BUILD_BUG_ON(sizeof(*rpl5) != roundup(sizeof(*rpl5), 16));
+- skb_trim(skb, sizeof(*rpl5));
+- rpl5 = (void *)rpl;
+- INIT_TP_WR(rpl5, ep->hwtid);
+- } else {
+- skb_trim(skb, sizeof(*rpl));
+- INIT_TP_WR(rpl, ep->hwtid);
+- }
+- OPCODE_TID(rpl) = cpu_to_be32(MK_OPCODE_TID(CPL_PASS_ACCEPT_RPL,
+- ep->hwtid));
+-
+- if (CHELSIO_CHIP_VERSION(adapter_type) > CHELSIO_T4) {
+ u32 isn = (prandom_u32() & ~7UL) - 1;
++
++ skb = get_skb(skb, roundup(sizeof(*rpl5), 16), GFP_KERNEL);
++ rpl5 = __skb_put_zero(skb, roundup(sizeof(*rpl5), 16));
++ rpl = (void *)rpl5;
++ INIT_TP_WR_CPL(rpl5, CPL_PASS_ACCEPT_RPL, ep->hwtid);
+ opt2 |= T5_OPT_2_VALID_F;
+ opt2 |= CONG_CNTRL_V(CONG_ALG_TAHOE);
+ opt2 |= T5_ISS_F;
+- rpl5 = (void *)rpl;
+- memset_after(rpl5, 0, iss);
+ if (peer2peer)
+ isn += 4;
+ rpl5->iss = cpu_to_be32(isn);
+ pr_debug("iss %u\n", be32_to_cpu(rpl5->iss));
++ } else {
++ skb = get_skb(skb, sizeof(*rpl), GFP_KERNEL);
++ rpl = __skb_put_zero(skb, sizeof(*rpl));
++ INIT_TP_WR_CPL(rpl, CPL_PASS_ACCEPT_RPL, ep->hwtid);
+ }
+
+ rpl->opt0 = cpu_to_be64(opt0);
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index b68fddeac0f12..63c89a72cc352 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2738,26 +2738,24 @@ static int set_has_smi_cap(struct mlx5_ib_dev *dev)
+ int err;
+ int port;
+
+- for (port = 1; port <= ARRAY_SIZE(dev->port_caps); port++) {
+- dev->port_caps[port - 1].has_smi = false;
+- if (MLX5_CAP_GEN(dev->mdev, port_type) ==
+- MLX5_CAP_PORT_TYPE_IB) {
+- if (MLX5_CAP_GEN(dev->mdev, ib_virt)) {
+- err = mlx5_query_hca_vport_context(dev->mdev, 0,
+- port, 0,
+- &vport_ctx);
+- if (err) {
+- mlx5_ib_err(dev, "query_hca_vport_context for port=%d failed %d\n",
+- port, err);
+- return err;
+- }
+- dev->port_caps[port - 1].has_smi =
+- vport_ctx.has_smi;
+- } else {
+- dev->port_caps[port - 1].has_smi = true;
+- }
++ if (MLX5_CAP_GEN(dev->mdev, port_type) != MLX5_CAP_PORT_TYPE_IB)
++ return 0;
++
++ for (port = 1; port <= dev->num_ports; port++) {
++ if (!MLX5_CAP_GEN(dev->mdev, ib_virt)) {
++ dev->port_caps[port - 1].has_smi = true;
++ continue;
+ }
++ err = mlx5_query_hca_vport_context(dev->mdev, 0, port, 0,
++ &vport_ctx);
++ if (err) {
++ mlx5_ib_err(dev, "query_hca_vport_context for port=%d failed %d\n",
++ port, err);
++ return err;
++ }
++ dev->port_caps[port - 1].has_smi = vport_ctx.has_smi;
+ }
++
+ return 0;
+ }
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
+index 37484a559d209..d86253c6d6b56 100644
+--- a/drivers/infiniband/sw/rxe/rxe_loc.h
++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
+@@ -79,7 +79,6 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
+ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
+ int rxe_invalidate_mr(struct rxe_qp *qp, u32 key);
+ int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe);
+-int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr);
+ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
+ void rxe_mr_cleanup(struct rxe_pool_elem *elem);
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
+index 3add521290064..c28b18d59a064 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mr.c
++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
+@@ -24,7 +24,7 @@ u8 rxe_get_next_key(u32 last_key)
+
+ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
+ {
+- struct rxe_map_set *set = mr->cur_map_set;
++
+
+ switch (mr->type) {
+ case IB_MR_TYPE_DMA:
+@@ -32,8 +32,8 @@ int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
+
+ case IB_MR_TYPE_USER:
+ case IB_MR_TYPE_MEM_REG:
+- if (iova < set->iova || length > set->length ||
+- iova > set->iova + set->length - length)
++ if (iova < mr->iova || length > mr->length ||
++ iova > mr->iova + mr->length - length)
+ return -EFAULT;
+ return 0;
+
+@@ -65,89 +65,41 @@ static void rxe_mr_init(int access, struct rxe_mr *mr)
+ mr->map_shift = ilog2(RXE_BUF_PER_MAP);
+ }
+
+-static void rxe_mr_free_map_set(int num_map, struct rxe_map_set *set)
+-{
+- int i;
+-
+- for (i = 0; i < num_map; i++)
+- kfree(set->map[i]);
+-
+- kfree(set->map);
+- kfree(set);
+-}
+-
+-static int rxe_mr_alloc_map_set(int num_map, struct rxe_map_set **setp)
++static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
+ {
+ int i;
+- struct rxe_map_set *set;
++ int num_map;
++ struct rxe_map **map = mr->map;
+
+- set = kmalloc(sizeof(*set), GFP_KERNEL);
+- if (!set)
+- goto err_out;
++ num_map = (num_buf + RXE_BUF_PER_MAP - 1) / RXE_BUF_PER_MAP;
+
+- set->map = kmalloc_array(num_map, sizeof(struct rxe_map *), GFP_KERNEL);
+- if (!set->map)
+- goto err_free_set;
++ mr->map = kmalloc_array(num_map, sizeof(*map), GFP_KERNEL);
++ if (!mr->map)
++ goto err1;
+
+ for (i = 0; i < num_map; i++) {
+- set->map[i] = kmalloc(sizeof(struct rxe_map), GFP_KERNEL);
+- if (!set->map[i])
+- goto err_free_map;
++ mr->map[i] = kmalloc(sizeof(**map), GFP_KERNEL);
++ if (!mr->map[i])
++ goto err2;
+ }
+
+- *setp = set;
+-
+- return 0;
+-
+-err_free_map:
+- for (i--; i >= 0; i--)
+- kfree(set->map[i]);
+-
+- kfree(set->map);
+-err_free_set:
+- kfree(set);
+-err_out:
+- return -ENOMEM;
+-}
+-
+-/**
+- * rxe_mr_alloc() - Allocate memory map array(s) for MR
+- * @mr: Memory region
+- * @num_buf: Number of buffer descriptors to support
+- * @both: If non zero allocate both mr->map and mr->next_map
+- * else just allocate mr->map. Used for fast MRs
+- *
+- * Return: 0 on success else an error
+- */
+-static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf, int both)
+-{
+- int ret;
+- int num_map;
+-
+ BUILD_BUG_ON(!is_power_of_2(RXE_BUF_PER_MAP));
+- num_map = (num_buf + RXE_BUF_PER_MAP - 1) / RXE_BUF_PER_MAP;
+
+ mr->map_shift = ilog2(RXE_BUF_PER_MAP);
+ mr->map_mask = RXE_BUF_PER_MAP - 1;
++
+ mr->num_buf = num_buf;
+- mr->max_buf = num_map * RXE_BUF_PER_MAP;
+ mr->num_map = num_map;
+-
+- ret = rxe_mr_alloc_map_set(num_map, &mr->cur_map_set);
+- if (ret)
+- return -ENOMEM;
+-
+- if (both) {
+- ret = rxe_mr_alloc_map_set(num_map, &mr->next_map_set);
+- if (ret)
+- goto err_free;
+- }
++ mr->max_buf = num_map * RXE_BUF_PER_MAP;
+
+ return 0;
+
+-err_free:
+- rxe_mr_free_map_set(mr->num_map, mr->cur_map_set);
+- mr->cur_map_set = NULL;
++err2:
++ for (i--; i >= 0; i--)
++ kfree(mr->map[i]);
++
++ kfree(mr->map);
++err1:
+ return -ENOMEM;
+ }
+
+@@ -164,7 +116,6 @@ void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr)
+ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ int access, struct rxe_mr *mr)
+ {
+- struct rxe_map_set *set;
+ struct rxe_map **map;
+ struct rxe_phys_buf *buf = NULL;
+ struct ib_umem *umem;
+@@ -172,6 +123,7 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ int num_buf;
+ void *vaddr;
+ int err;
++ int i;
+
+ umem = ib_umem_get(pd->ibpd.device, start, length, access);
+ if (IS_ERR(umem)) {
+@@ -185,20 +137,18 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+
+ rxe_mr_init(access, mr);
+
+- err = rxe_mr_alloc(mr, num_buf, 0);
++ err = rxe_mr_alloc(mr, num_buf);
+ if (err) {
+ pr_warn("%s: Unable to allocate memory for map\n",
+ __func__);
+ goto err_release_umem;
+ }
+
+- set = mr->cur_map_set;
+- set->page_shift = PAGE_SHIFT;
+- set->page_mask = PAGE_SIZE - 1;
+-
+- num_buf = 0;
+- map = set->map;
++ mr->page_shift = PAGE_SHIFT;
++ mr->page_mask = PAGE_SIZE - 1;
+
++ num_buf = 0;
++ map = mr->map;
+ if (length > 0) {
+ buf = map[0]->buf;
+
+@@ -214,29 +164,33 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
+ pr_warn("%s: Unable to get virtual address\n",
+ __func__);
+ err = -ENOMEM;
+- goto err_release_umem;
++ goto err_cleanup_map;
+ }
+
+ buf->addr = (uintptr_t)vaddr;
+ buf->size = PAGE_SIZE;
+ num_buf++;
+ buf++;
++
+ }
+ }
+
+ mr->ibmr.pd = &pd->ibpd;
+ mr->umem = umem;
+ mr->access = access;
++ mr->length = length;
++ mr->iova = iova;
++ mr->va = start;
++ mr->offset = ib_umem_offset(umem);
+ mr->state = RXE_MR_STATE_VALID;
+ mr->type = IB_MR_TYPE_USER;
+
+- set->length = length;
+- set->iova = iova;
+- set->va = start;
+- set->offset = ib_umem_offset(umem);
+-
+ return 0;
+
++err_cleanup_map:
++ for (i = 0; i < mr->num_map; i++)
++ kfree(mr->map[i]);
++ kfree(mr->map);
+ err_release_umem:
+ ib_umem_release(umem);
+ err_out:
+@@ -250,7 +204,7 @@ int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr)
+ /* always allow remote access for FMRs */
+ rxe_mr_init(IB_ACCESS_REMOTE, mr);
+
+- err = rxe_mr_alloc(mr, max_pages, 1);
++ err = rxe_mr_alloc(mr, max_pages);
+ if (err)
+ goto err1;
+
+@@ -268,24 +222,21 @@ err1:
+ static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
+ size_t *offset_out)
+ {
+- struct rxe_map_set *set = mr->cur_map_set;
+- size_t offset = iova - set->iova + set->offset;
++ size_t offset = iova - mr->iova + mr->offset;
+ int map_index;
+ int buf_index;
+ u64 length;
+- struct rxe_map *map;
+
+- if (likely(set->page_shift)) {
+- *offset_out = offset & set->page_mask;
+- offset >>= set->page_shift;
++ if (likely(mr->page_shift)) {
++ *offset_out = offset & mr->page_mask;
++ offset >>= mr->page_shift;
+ *n_out = offset & mr->map_mask;
+ *m_out = offset >> mr->map_shift;
+ } else {
+ map_index = 0;
+ buf_index = 0;
+
+- map = set->map[map_index];
+- length = map->buf[buf_index].size;
++ length = mr->map[map_index]->buf[buf_index].size;
+
+ while (offset >= length) {
+ offset -= length;
+@@ -295,8 +246,7 @@ static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
+ map_index++;
+ buf_index = 0;
+ }
+- map = set->map[map_index];
+- length = map->buf[buf_index].size;
++ length = mr->map[map_index]->buf[buf_index].size;
+ }
+
+ *m_out = map_index;
+@@ -317,7 +267,7 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
+ goto out;
+ }
+
+- if (!mr->cur_map_set) {
++ if (!mr->map) {
+ addr = (void *)(uintptr_t)iova;
+ goto out;
+ }
+@@ -330,13 +280,13 @@ void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
+
+ lookup_iova(mr, iova, &m, &n, &offset);
+
+- if (offset + length > mr->cur_map_set->map[m]->buf[n].size) {
++ if (offset + length > mr->map[m]->buf[n].size) {
+ pr_warn("crosses page boundary\n");
+ addr = NULL;
+ goto out;
+ }
+
+- addr = (void *)(uintptr_t)mr->cur_map_set->map[m]->buf[n].addr + offset;
++ addr = (void *)(uintptr_t)mr->map[m]->buf[n].addr + offset;
+
+ out:
+ return addr;
+@@ -372,7 +322,7 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
+ return 0;
+ }
+
+- WARN_ON_ONCE(!mr->cur_map_set);
++ WARN_ON_ONCE(!mr->map);
+
+ err = mr_check_range(mr, iova, length);
+ if (err) {
+@@ -382,7 +332,7 @@ int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
+
+ lookup_iova(mr, iova, &m, &i, &offset);
+
+- map = mr->cur_map_set->map + m;
++ map = mr->map + m;
+ buf = map[0]->buf + i;
+
+ while (length > 0) {
+@@ -628,9 +578,8 @@ err:
+ int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ {
+ struct rxe_mr *mr = to_rmr(wqe->wr.wr.reg.mr);
+- u32 key = wqe->wr.wr.reg.key & 0xff;
++ u32 key = wqe->wr.wr.reg.key;
+ u32 access = wqe->wr.wr.reg.access;
+- struct rxe_map_set *set;
+
+ /* user can only register MR in free state */
+ if (unlikely(mr->state != RXE_MR_STATE_FREE)) {
+@@ -646,36 +595,19 @@ int rxe_reg_fast_mr(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ return -EINVAL;
+ }
+
++ /* user is only allowed to change key portion of l/rkey */
++ if (unlikely((mr->lkey & ~0xff) != (key & ~0xff))) {
++ pr_warn("%s: key = 0x%x has wrong index mr->lkey = 0x%x\n",
++ __func__, key, mr->lkey);
++ return -EINVAL;
++ }
++
+ mr->access = access;
+- mr->lkey = (mr->lkey & ~0xff) | key;
+- mr->rkey = (access & IB_ACCESS_REMOTE) ? mr->lkey : 0;
++ mr->lkey = key;
++ mr->rkey = (access & IB_ACCESS_REMOTE) ? key : 0;
++ mr->iova = wqe->wr.wr.reg.mr->iova;
+ mr->state = RXE_MR_STATE_VALID;
+
+- set = mr->cur_map_set;
+- mr->cur_map_set = mr->next_map_set;
+- mr->cur_map_set->iova = wqe->wr.wr.reg.mr->iova;
+- mr->next_map_set = set;
+-
+- return 0;
+-}
+-
+-int rxe_mr_set_page(struct ib_mr *ibmr, u64 addr)
+-{
+- struct rxe_mr *mr = to_rmr(ibmr);
+- struct rxe_map_set *set = mr->next_map_set;
+- struct rxe_map *map;
+- struct rxe_phys_buf *buf;
+-
+- if (unlikely(set->nbuf == mr->num_buf))
+- return -ENOMEM;
+-
+- map = set->map[set->nbuf / RXE_BUF_PER_MAP];
+- buf = &map->buf[set->nbuf % RXE_BUF_PER_MAP];
+-
+- buf->addr = addr;
+- buf->size = ibmr->page_size;
+- set->nbuf++;
+-
+ return 0;
+ }
+
+@@ -695,14 +627,15 @@ int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
+ void rxe_mr_cleanup(struct rxe_pool_elem *elem)
+ {
+ struct rxe_mr *mr = container_of(elem, typeof(*mr), elem);
++ int i;
+
+ rxe_put(mr_pd(mr));
+-
+ ib_umem_release(mr->umem);
+
+- if (mr->cur_map_set)
+- rxe_mr_free_map_set(mr->num_map, mr->cur_map_set);
++ if (mr->map) {
++ for (i = 0; i < mr->num_map; i++)
++ kfree(mr->map[i]);
+
+- if (mr->next_map_set)
+- rxe_mr_free_map_set(mr->num_map, mr->next_map_set);
++ kfree(mr->map);
++ }
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
+index 824739008d5b6..6c24bc4318e82 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mw.c
++++ b/drivers/infiniband/sw/rxe/rxe_mw.c
+@@ -112,15 +112,15 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+
+ /* C10-75 */
+ if (mw->access & IB_ZERO_BASED) {
+- if (unlikely(wqe->wr.wr.mw.length > mr->cur_map_set->length)) {
++ if (unlikely(wqe->wr.wr.mw.length > mr->length)) {
+ pr_err_once(
+ "attempt to bind a ZB MW outside of the MR\n");
+ return -EINVAL;
+ }
+ } else {
+- if (unlikely((wqe->wr.wr.mw.addr < mr->cur_map_set->iova) ||
++ if (unlikely((wqe->wr.wr.mw.addr < mr->iova) ||
+ ((wqe->wr.wr.mw.addr + wqe->wr.wr.mw.length) >
+- (mr->cur_map_set->iova + mr->cur_map_set->length)))) {
++ (mr->iova + mr->length)))) {
+ pr_err_once(
+ "attempt to bind a VA MW outside of the MR\n");
+ return -EINVAL;
+diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
+index 568a7cbd13d4c..86c7a8bf3cbbd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_param.h
++++ b/drivers/infiniband/sw/rxe/rxe_param.h
+@@ -105,6 +105,12 @@ enum rxe_device_param {
+ RXE_INFLIGHT_SKBS_PER_QP_HIGH = 64,
+ RXE_INFLIGHT_SKBS_PER_QP_LOW = 16,
+
++ /* Max number of interations of each tasklet
++ * before yielding the cpu to let other
++ * work make progress
++ */
++ RXE_MAX_ITERATIONS = 1024,
++
+ /* Delay before calling arbiter timer */
+ RXE_NSEC_ARB_TIMER_DELAY = 200,
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_task.c b/drivers/infiniband/sw/rxe/rxe_task.c
+index 0c4db5bb17d75..2248cf33d7766 100644
+--- a/drivers/infiniband/sw/rxe/rxe_task.c
++++ b/drivers/infiniband/sw/rxe/rxe_task.c
+@@ -8,7 +8,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/hardirq.h>
+
+-#include "rxe_task.h"
++#include "rxe.h"
+
+ int __rxe_do_task(struct rxe_task *task)
+
+@@ -33,6 +33,7 @@ void rxe_do_task(struct tasklet_struct *t)
+ int cont;
+ int ret;
+ struct rxe_task *task = from_tasklet(task, t, tasklet);
++ unsigned int iterations = RXE_MAX_ITERATIONS;
+
+ spin_lock_bh(&task->state_lock);
+ switch (task->state) {
+@@ -61,13 +62,20 @@ void rxe_do_task(struct tasklet_struct *t)
+ spin_lock_bh(&task->state_lock);
+ switch (task->state) {
+ case TASK_STATE_BUSY:
+- if (ret)
++ if (ret) {
+ task->state = TASK_STATE_START;
+- else
++ } else if (iterations--) {
+ cont = 1;
++ } else {
++ /* reschedule the tasklet and exit
++ * the loop to give up the cpu
++ */
++ tasklet_schedule(&task->tasklet);
++ task->state = TASK_STATE_START;
++ }
+ break;
+
+- /* soneone tried to run the task since the last time we called
++ /* someone tried to run the task since the last time we called
+ * func, so we will call one more time regardless of the
+ * return value
+ */
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 9d995854a1749..d2b4e68402d45 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -967,26 +967,41 @@ err1:
+ return ERR_PTR(err);
+ }
+
+-/* build next_map_set from scatterlist
+- * The IB_WR_REG_MR WR will swap map_sets
+- */
++static int rxe_set_page(struct ib_mr *ibmr, u64 addr)
++{
++ struct rxe_mr *mr = to_rmr(ibmr);
++ struct rxe_map *map;
++ struct rxe_phys_buf *buf;
++
++ if (unlikely(mr->nbuf == mr->num_buf))
++ return -ENOMEM;
++
++ map = mr->map[mr->nbuf / RXE_BUF_PER_MAP];
++ buf = &map->buf[mr->nbuf % RXE_BUF_PER_MAP];
++
++ buf->addr = addr;
++ buf->size = ibmr->page_size;
++ mr->nbuf++;
++
++ return 0;
++}
++
+ static int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+ int sg_nents, unsigned int *sg_offset)
+ {
+ struct rxe_mr *mr = to_rmr(ibmr);
+- struct rxe_map_set *set = mr->next_map_set;
+ int n;
+
+- set->nbuf = 0;
++ mr->nbuf = 0;
+
+- n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_mr_set_page);
++ n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rxe_set_page);
+
+- set->va = ibmr->iova;
+- set->iova = ibmr->iova;
+- set->length = ibmr->length;
+- set->page_shift = ilog2(ibmr->page_size);
+- set->page_mask = ibmr->page_size - 1;
+- set->offset = set->iova & set->page_mask;
++ mr->va = ibmr->iova;
++ mr->iova = ibmr->iova;
++ mr->length = ibmr->length;
++ mr->page_shift = ilog2(ibmr->page_size);
++ mr->page_mask = ibmr->page_size - 1;
++ mr->offset = mr->iova & mr->page_mask;
+
+ return n;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index 9bdf333465114..3d524238e5c4e 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -289,17 +289,6 @@ struct rxe_map {
+ struct rxe_phys_buf buf[RXE_BUF_PER_MAP];
+ };
+
+-struct rxe_map_set {
+- struct rxe_map **map;
+- u64 va;
+- u64 iova;
+- size_t length;
+- u32 offset;
+- u32 nbuf;
+- int page_shift;
+- int page_mask;
+-};
+-
+ static inline int rkey_is_mw(u32 rkey)
+ {
+ u32 index = rkey >> 8;
+@@ -317,20 +306,26 @@ struct rxe_mr {
+ u32 rkey;
+ enum rxe_mr_state state;
+ enum ib_mr_type type;
++ u64 va;
++ u64 iova;
++ size_t length;
++ u32 offset;
+ int access;
+
++ int page_shift;
++ int page_mask;
+ int map_shift;
+ int map_mask;
+
+ u32 num_buf;
++ u32 nbuf;
+
+ u32 max_buf;
+ u32 num_map;
+
+ atomic_t num_mw;
+
+- struct rxe_map_set *cur_map_set;
+- struct rxe_map_set *next_map_set;
++ struct rxe_map **map;
+ };
+
+ enum rxe_mw_state {
+diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
+index bd5f3b5e17278..7b83f48f60c5e 100644
+--- a/drivers/infiniband/ulp/iser/iser_initiator.c
++++ b/drivers/infiniband/ulp/iser/iser_initiator.c
+@@ -537,6 +537,7 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc)
+ struct iscsi_hdr *hdr;
+ char *data;
+ int length;
++ bool full_feature_phase;
+
+ if (unlikely(wc->status != IB_WC_SUCCESS)) {
+ iser_err_comp(wc, "login_rsp");
+@@ -550,6 +551,9 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc)
+ hdr = desc->rsp + sizeof(struct iser_ctrl);
+ data = desc->rsp + ISER_HEADERS_LEN;
+ length = wc->byte_len - ISER_HEADERS_LEN;
++ full_feature_phase = ((hdr->flags & ISCSI_FULL_FEATURE_PHASE) ==
++ ISCSI_FULL_FEATURE_PHASE) &&
++ (hdr->flags & ISCSI_FLAG_CMD_FINAL);
+
+ iser_dbg("op 0x%x itt 0x%x dlen %d\n", hdr->opcode,
+ hdr->itt, length);
+@@ -560,7 +564,8 @@ void iser_login_rsp(struct ib_cq *cq, struct ib_wc *wc)
+ desc->rsp_dma, ISER_RX_LOGIN_SIZE,
+ DMA_FROM_DEVICE);
+
+- if (iser_conn->iscsi_conn->session->discovery_sess)
++ if (!full_feature_phase ||
++ iser_conn->iscsi_conn->session->discovery_sess)
+ return;
+
+ /* Post the first RX buffer that is skipped in iser_post_rx_bufs() */
+diff --git a/drivers/input/keyboard/mt6779-keypad.c b/drivers/input/keyboard/mt6779-keypad.c
+index 2e7c9187c10f2..bd86cb95bde30 100644
+--- a/drivers/input/keyboard/mt6779-keypad.c
++++ b/drivers/input/keyboard/mt6779-keypad.c
+@@ -42,7 +42,7 @@ static irqreturn_t mt6779_keypad_irq_handler(int irq, void *dev_id)
+ const unsigned short *keycode = keypad->input_dev->keycode;
+ DECLARE_BITMAP(new_state, MTK_KPD_NUM_BITS);
+ DECLARE_BITMAP(change, MTK_KPD_NUM_BITS);
+- unsigned int bit_nr;
++ unsigned int bit_nr, key;
+ unsigned int row, col;
+ unsigned int scancode;
+ unsigned int row_shift = get_count_order(keypad->n_cols);
+@@ -61,8 +61,10 @@ static irqreturn_t mt6779_keypad_irq_handler(int irq, void *dev_id)
+ if (bit_nr % 32 >= 16)
+ continue;
+
+- row = bit_nr / 32;
+- col = bit_nr % 32;
++ key = bit_nr / 32 * 16 + bit_nr % 32;
++ row = key / 9;
++ col = key % 9;
++
+ scancode = MATRIX_SCAN_CODE(row, col, row_shift);
+ /* 1: not pressed, 0: pressed */
+ pressed = !test_bit(bit_nr, new_state);
+diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c
+index 6b4138771a3f2..b2e8097a2e6d9 100644
+--- a/drivers/input/misc/iqs7222.c
++++ b/drivers/input/misc/iqs7222.c
+@@ -40,7 +40,6 @@
+ #define IQS7222_SLDR_SETUP_2_RES_MASK GENMASK(15, 8)
+ #define IQS7222_SLDR_SETUP_2_RES_SHIFT 8
+ #define IQS7222_SLDR_SETUP_2_TOP_SPEED_MASK GENMASK(7, 0)
+-#define IQS7222_SLDR_SETUP_3_CHAN_SEL_MASK GENMASK(9, 0)
+
+ #define IQS7222_GPIO_SETUP_0_GPIO_EN BIT(0)
+
+@@ -54,6 +53,9 @@
+ #define IQS7222_SYS_SETUP_ACK_RESET BIT(0)
+
+ #define IQS7222_EVENT_MASK_ATI BIT(12)
++#define IQS7222_EVENT_MASK_SLDR BIT(10)
++#define IQS7222_EVENT_MASK_TOUCH BIT(1)
++#define IQS7222_EVENT_MASK_PROX BIT(0)
+
+ #define IQS7222_COMMS_HOLD BIT(0)
+ #define IQS7222_COMMS_ERROR 0xEEEE
+@@ -92,11 +94,11 @@ enum iqs7222_reg_key_id {
+
+ enum iqs7222_reg_grp_id {
+ IQS7222_REG_GRP_STAT,
++ IQS7222_REG_GRP_FILT,
+ IQS7222_REG_GRP_CYCLE,
+ IQS7222_REG_GRP_GLBL,
+ IQS7222_REG_GRP_BTN,
+ IQS7222_REG_GRP_CHAN,
+- IQS7222_REG_GRP_FILT,
+ IQS7222_REG_GRP_SLDR,
+ IQS7222_REG_GRP_GPIO,
+ IQS7222_REG_GRP_SYS,
+@@ -135,12 +137,12 @@ struct iqs7222_event_desc {
+ static const struct iqs7222_event_desc iqs7222_kp_events[] = {
+ {
+ .name = "event-prox",
+- .enable = BIT(0),
++ .enable = IQS7222_EVENT_MASK_PROX,
+ .reg_key = IQS7222_REG_KEY_PROX,
+ },
+ {
+ .name = "event-touch",
+- .enable = BIT(1),
++ .enable = IQS7222_EVENT_MASK_TOUCH,
+ .reg_key = IQS7222_REG_KEY_TOUCH,
+ },
+ };
+@@ -555,13 +557,6 @@ static const struct iqs7222_prop_desc iqs7222_props[] = {
+ .reg_width = 4,
+ .label = "current reference trim",
+ },
+- {
+- .name = "azoteq,rf-filt-enable",
+- .reg_grp = IQS7222_REG_GRP_GLBL,
+- .reg_offset = 0,
+- .reg_shift = 15,
+- .reg_width = 1,
+- },
+ {
+ .name = "azoteq,max-counts",
+ .reg_grp = IQS7222_REG_GRP_GLBL,
+@@ -1272,9 +1267,22 @@ static int iqs7222_ati_trigger(struct iqs7222_private *iqs7222)
+ struct i2c_client *client = iqs7222->client;
+ ktime_t ati_timeout;
+ u16 sys_status = 0;
+- u16 sys_setup = iqs7222->sys_setup[0] & ~IQS7222_SYS_SETUP_ACK_RESET;
++ u16 sys_setup;
+ int error, i;
+
++ /*
++ * The reserved fields of the system setup register may have changed
++ * as a result of other registers having been written. As such, read
++ * the register's latest value to avoid unexpected behavior when the
++ * register is written in the loop that follows.
++ */
++ error = iqs7222_read_word(iqs7222, IQS7222_SYS_SETUP, &sys_setup);
++ if (error)
++ return error;
++
++ sys_setup &= ~IQS7222_SYS_SETUP_INTF_MODE_MASK;
++ sys_setup &= ~IQS7222_SYS_SETUP_PWR_MODE_MASK;
++
+ for (i = 0; i < IQS7222_NUM_RETRIES; i++) {
+ /*
+ * Trigger ATI from streaming and normal-power modes so that
+@@ -1299,12 +1307,15 @@ static int iqs7222_ati_trigger(struct iqs7222_private *iqs7222)
+ if (error)
+ return error;
+
+- if (sys_status & IQS7222_SYS_STATUS_ATI_ACTIVE)
+- continue;
++ if (sys_status & IQS7222_SYS_STATUS_RESET)
++ return 0;
+
+ if (sys_status & IQS7222_SYS_STATUS_ATI_ERROR)
+ break;
+
++ if (sys_status & IQS7222_SYS_STATUS_ATI_ACTIVE)
++ continue;
++
+ /*
+ * Use stream-in-touch mode if either slider reports
+ * absolute position.
+@@ -1321,7 +1332,7 @@ static int iqs7222_ati_trigger(struct iqs7222_private *iqs7222)
+ dev_err(&client->dev,
+ "ATI attempt %d of %d failed with status 0x%02X, %s\n",
+ i + 1, IQS7222_NUM_RETRIES, (u8)sys_status,
+- i < IQS7222_NUM_RETRIES ? "retrying..." : "stopping");
++ i + 1 < IQS7222_NUM_RETRIES ? "retrying" : "stopping");
+ }
+
+ return -ETIMEDOUT;
+@@ -1333,6 +1344,34 @@ static int iqs7222_dev_init(struct iqs7222_private *iqs7222, int dir)
+ int comms_offset = dev_desc->comms_offset;
+ int error, i, j, k;
+
++ /*
++ * Acknowledge reset before writing any registers in case the device
++ * suffers a spurious reset during initialization. Because this step
++ * may change the reserved fields of the second filter beta register,
++ * its cache must be updated.
++ *
++ * Writing the second filter beta register, in turn, may clobber the
++ * system status register. As such, the filter beta register pair is
++ * written first to protect against this hazard.
++ */
++ if (dir == WRITE) {
++ u16 reg = dev_desc->reg_grps[IQS7222_REG_GRP_FILT].base + 1;
++ u16 filt_setup;
++
++ error = iqs7222_write_word(iqs7222, IQS7222_SYS_SETUP,
++ iqs7222->sys_setup[0] |
++ IQS7222_SYS_SETUP_ACK_RESET);
++ if (error)
++ return error;
++
++ error = iqs7222_read_word(iqs7222, reg, &filt_setup);
++ if (error)
++ return error;
++
++ iqs7222->filt_setup[1] &= GENMASK(7, 0);
++ iqs7222->filt_setup[1] |= (filt_setup & ~GENMASK(7, 0));
++ }
++
+ /*
+ * Take advantage of the stop-bit disable function, if available, to
+ * save the trouble of having to reopen a communication window after
+@@ -1957,8 +1996,8 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
+ int num_chan = dev_desc->reg_grps[IQS7222_REG_GRP_CHAN].num_row;
+ int ext_chan = rounddown(num_chan, 10);
+ int count, error, reg_offset, i;
++ u16 *event_mask = &iqs7222->sys_setup[dev_desc->event_offset];
+ u16 *sldr_setup = iqs7222->sldr_setup[sldr_index];
+- u16 *sys_setup = iqs7222->sys_setup;
+ unsigned int chan_sel[4], val;
+
+ error = iqs7222_parse_props(iqs7222, &sldr_node, sldr_index,
+@@ -2003,7 +2042,7 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
+ reg_offset = dev_desc->sldr_res < U16_MAX ? 0 : 1;
+
+ sldr_setup[0] |= count;
+- sldr_setup[3 + reg_offset] &= ~IQS7222_SLDR_SETUP_3_CHAN_SEL_MASK;
++ sldr_setup[3 + reg_offset] &= ~GENMASK(ext_chan - 1, 0);
+
+ for (i = 0; i < ARRAY_SIZE(chan_sel); i++) {
+ sldr_setup[5 + reg_offset + i] = 0;
+@@ -2081,17 +2120,19 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
+ sldr_setup[0] |= dev_desc->wheel_enable;
+ }
+
++ /*
++ * The absence of a register offset makes it safe to assume the device
++ * supports gestures, each of which is first disabled until explicitly
++ * enabled.
++ */
++ if (!reg_offset)
++ for (i = 0; i < ARRAY_SIZE(iqs7222_sl_events); i++)
++ sldr_setup[9] &= ~iqs7222_sl_events[i].enable;
++
+ for (i = 0; i < ARRAY_SIZE(iqs7222_sl_events); i++) {
+ const char *event_name = iqs7222_sl_events[i].name;
+ struct fwnode_handle *event_node;
+
+- /*
+- * The absence of a register offset means the remaining fields
+- * in the group represent gesture settings.
+- */
+- if (iqs7222_sl_events[i].enable && !reg_offset)
+- sldr_setup[9] &= ~iqs7222_sl_events[i].enable;
+-
+ event_node = fwnode_get_named_child_node(sldr_node, event_name);
+ if (!event_node)
+ continue;
+@@ -2104,6 +2145,22 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
+ if (error)
+ return error;
+
++ /*
++ * The press/release event does not expose a direct GPIO link,
++ * but one can be emulated by tying each of the participating
++ * channels to the same GPIO.
++ */
++ error = iqs7222_gpio_select(iqs7222, event_node,
++ i ? iqs7222_sl_events[i].enable
++ : sldr_setup[3 + reg_offset],
++ i ? 1568 + sldr_index * 30
++ : sldr_setup[4 + reg_offset]);
++ if (error)
++ return error;
++
++ if (!reg_offset)
++ sldr_setup[9] |= iqs7222_sl_events[i].enable;
++
+ error = fwnode_property_read_u32(event_node, "linux,code",
+ &val);
+ if (error) {
+@@ -2115,26 +2172,20 @@ static int iqs7222_parse_sldr(struct iqs7222_private *iqs7222, int sldr_index)
+ iqs7222->sl_code[sldr_index][i] = val;
+ input_set_capability(iqs7222->keypad, EV_KEY, val);
+
+- /*
+- * The press/release event is determined based on whether the
+- * coordinate field reports 0xFFFF and has no explicit enable
+- * control.
+- */
+- if (!iqs7222_sl_events[i].enable || reg_offset)
+- continue;
+-
+- sldr_setup[9] |= iqs7222_sl_events[i].enable;
+-
+- error = iqs7222_gpio_select(iqs7222, event_node,
+- iqs7222_sl_events[i].enable,
+- 1568 + sldr_index * 30);
+- if (error)
+- return error;
+-
+ if (!dev_desc->event_offset)
+ continue;
+
+- sys_setup[dev_desc->event_offset] |= BIT(10 + sldr_index);
++ /*
++ * The press/release event is determined based on whether the
++ * coordinate field reports 0xFFFF and solely relies on touch
++ * or proximity interrupts to be unmasked.
++ */
++ if (i && !reg_offset)
++ *event_mask |= (IQS7222_EVENT_MASK_SLDR << sldr_index);
++ else if (sldr_setup[4 + reg_offset] == dev_desc->touch_link)
++ *event_mask |= IQS7222_EVENT_MASK_TOUCH;
++ else
++ *event_mask |= IQS7222_EVENT_MASK_PROX;
+ }
+
+ /*
+@@ -2227,11 +2278,6 @@ static int iqs7222_parse_all(struct iqs7222_private *iqs7222)
+ return error;
+ }
+
+- sys_setup[0] &= ~IQS7222_SYS_SETUP_INTF_MODE_MASK;
+- sys_setup[0] &= ~IQS7222_SYS_SETUP_PWR_MODE_MASK;
+-
+- sys_setup[0] |= IQS7222_SYS_SETUP_ACK_RESET;
+-
+ return iqs7222_parse_props(iqs7222, NULL, 0, IQS7222_REG_GRP_SYS,
+ IQS7222_REG_KEY_NONE);
+ }
+@@ -2299,29 +2345,37 @@ static int iqs7222_report(struct iqs7222_private *iqs7222)
+ input_report_abs(iqs7222->keypad, iqs7222->sl_axis[i],
+ sldr_pos);
+
+- for (j = 0; j < ARRAY_SIZE(iqs7222_sl_events); j++) {
+- u16 mask = iqs7222_sl_events[j].mask;
+- u16 val = iqs7222_sl_events[j].val;
++ input_report_key(iqs7222->keypad, iqs7222->sl_code[i][0],
++ sldr_pos < dev_desc->sldr_res);
+
+- if (!iqs7222_sl_events[j].enable) {
+- input_report_key(iqs7222->keypad,
+- iqs7222->sl_code[i][j],
+- sldr_pos < dev_desc->sldr_res);
+- continue;
+- }
++ /*
++ * A maximum resolution indicates the device does not support
++ * gestures, in which case the remaining fields are ignored.
++ */
++ if (dev_desc->sldr_res == U16_MAX)
++ continue;
+
+- /*
+- * The remaining offsets represent gesture state, and
+- * are discarded in the case of IQS7222C because only
+- * absolute position is reported.
+- */
+- if (num_stat < IQS7222_MAX_COLS_STAT)
+- continue;
++ if (!(le16_to_cpu(status[1]) & IQS7222_EVENT_MASK_SLDR << i))
++ continue;
++
++ /*
++ * Skip the press/release event, as it does not have separate
++ * status fields and is handled separately.
++ */
++ for (j = 1; j < ARRAY_SIZE(iqs7222_sl_events); j++) {
++ u16 mask = iqs7222_sl_events[j].mask;
++ u16 val = iqs7222_sl_events[j].val;
+
+ input_report_key(iqs7222->keypad,
+ iqs7222->sl_code[i][j],
+ (state & mask) == val);
+ }
++
++ input_sync(iqs7222->keypad);
++
++ for (j = 1; j < ARRAY_SIZE(iqs7222_sl_events); j++)
++ input_report_key(iqs7222->keypad,
++ iqs7222->sl_code[i][j], 0);
+ }
+
+ input_sync(iqs7222->keypad);
+diff --git a/drivers/input/touchscreen/exc3000.c b/drivers/input/touchscreen/exc3000.c
+index cbe0dd4129121..4b7eee01c6aad 100644
+--- a/drivers/input/touchscreen/exc3000.c
++++ b/drivers/input/touchscreen/exc3000.c
+@@ -220,6 +220,7 @@ static int exc3000_vendor_data_request(struct exc3000_data *data, u8 *request,
+ {
+ u8 buf[EXC3000_LEN_VENDOR_REQUEST] = { 0x67, 0x00, 0x42, 0x00, 0x03 };
+ int ret;
++ unsigned long time_left;
+
+ mutex_lock(&data->query_lock);
+
+@@ -233,9 +234,9 @@ static int exc3000_vendor_data_request(struct exc3000_data *data, u8 *request,
+ goto out_unlock;
+
+ if (response) {
+- ret = wait_for_completion_timeout(&data->wait_event,
+- timeout * HZ);
+- if (ret <= 0) {
++ time_left = wait_for_completion_timeout(&data->wait_event,
++ timeout * HZ);
++ if (time_left == 0) {
+ ret = -ETIMEDOUT;
+ goto out_unlock;
+ }
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index be066c1503d37..ba3115fd0f86a 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -182,14 +182,8 @@ static bool arm_v7s_is_mtk_enabled(struct io_pgtable_cfg *cfg)
+ (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_EXT);
+ }
+
+-static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
+- struct io_pgtable_cfg *cfg)
++static arm_v7s_iopte to_mtk_iopte(phys_addr_t paddr, arm_v7s_iopte pte)
+ {
+- arm_v7s_iopte pte = paddr & ARM_V7S_LVL_MASK(lvl);
+-
+- if (!arm_v7s_is_mtk_enabled(cfg))
+- return pte;
+-
+ if (paddr & BIT_ULL(32))
+ pte |= ARM_V7S_ATTR_MTK_PA_BIT32;
+ if (paddr & BIT_ULL(33))
+@@ -199,6 +193,17 @@ static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
+ return pte;
+ }
+
++static arm_v7s_iopte paddr_to_iopte(phys_addr_t paddr, int lvl,
++ struct io_pgtable_cfg *cfg)
++{
++ arm_v7s_iopte pte = paddr & ARM_V7S_LVL_MASK(lvl);
++
++ if (arm_v7s_is_mtk_enabled(cfg))
++ return to_mtk_iopte(paddr, pte);
++
++ return pte;
++}
++
+ static phys_addr_t iopte_to_paddr(arm_v7s_iopte pte, int lvl,
+ struct io_pgtable_cfg *cfg)
+ {
+@@ -240,10 +245,17 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ dma_addr_t dma;
+ size_t size = ARM_V7S_TABLE_SIZE(lvl, cfg);
+ void *table = NULL;
++ gfp_t gfp_l1;
++
++ /*
++ * ARM_MTK_TTBR_EXT extend the translation table base support larger
++ * memory address.
++ */
++ gfp_l1 = cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT ?
++ GFP_KERNEL : ARM_V7S_TABLE_GFP_DMA;
+
+ if (lvl == 1)
+- table = (void *)__get_free_pages(
+- __GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
++ table = (void *)__get_free_pages(gfp_l1 | __GFP_ZERO, get_order(size));
+ else if (lvl == 2)
+ table = kmem_cache_zalloc(data->l2_tables, gfp);
+
+@@ -251,7 +263,8 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ return NULL;
+
+ phys = virt_to_phys(table);
+- if (phys != (arm_v7s_iopte)phys) {
++ if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT ?
++ phys >= (1ULL << cfg->oas) : phys != (arm_v7s_iopte)phys) {
+ /* Doesn't fit in PTE */
+ dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
+ goto out_free;
+@@ -457,9 +470,14 @@ static arm_v7s_iopte arm_v7s_install_table(arm_v7s_iopte *table,
+ arm_v7s_iopte curr,
+ struct io_pgtable_cfg *cfg)
+ {
++ phys_addr_t phys = virt_to_phys(table);
+ arm_v7s_iopte old, new;
+
+- new = virt_to_phys(table) | ARM_V7S_PTE_TYPE_TABLE;
++ new = phys | ARM_V7S_PTE_TYPE_TABLE;
++
++ if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT)
++ new = to_mtk_iopte(phys, new);
++
+ if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
+ new |= ARM_V7S_ATTR_NS_TABLE;
+
+@@ -779,6 +797,8 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+ void *cookie)
+ {
+ struct arm_v7s_io_pgtable *data;
++ slab_flags_t slab_flag;
++ phys_addr_t paddr;
+
+ if (cfg->ias > (arm_v7s_is_mtk_enabled(cfg) ? 34 : ARM_V7S_ADDR_BITS))
+ return NULL;
+@@ -788,7 +808,8 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+
+ if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
+ IO_PGTABLE_QUIRK_NO_PERMS |
+- IO_PGTABLE_QUIRK_ARM_MTK_EXT))
++ IO_PGTABLE_QUIRK_ARM_MTK_EXT |
++ IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT))
+ return NULL;
+
+ /* If ARM_MTK_4GB is enabled, the NO_PERMS is also expected. */
+@@ -796,15 +817,27 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+ !(cfg->quirks & IO_PGTABLE_QUIRK_NO_PERMS))
+ return NULL;
+
++ if ((cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT) &&
++ !arm_v7s_is_mtk_enabled(cfg))
++ return NULL;
++
+ data = kmalloc(sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return NULL;
+
+ spin_lock_init(&data->split_lock);
++
++ /*
++ * ARM_MTK_TTBR_EXT extend the translation table base support larger
++ * memory address.
++ */
++ slab_flag = cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT ?
++ 0 : ARM_V7S_TABLE_SLAB_FLAGS;
++
+ data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
+ ARM_V7S_TABLE_SIZE(2, cfg),
+ ARM_V7S_TABLE_SIZE(2, cfg),
+- ARM_V7S_TABLE_SLAB_FLAGS, NULL);
++ slab_flag, NULL);
+ if (!data->l2_tables)
+ goto out_free_data;
+
+@@ -850,12 +883,16 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+ wmb();
+
+ /* TTBR */
+- cfg->arm_v7s_cfg.ttbr = virt_to_phys(data->pgd) | ARM_V7S_TTBR_S |
+- (cfg->coherent_walk ? (ARM_V7S_TTBR_NOS |
+- ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_WBWA) |
+- ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA)) :
+- (ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_NC) |
+- ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_NC)));
++ paddr = virt_to_phys(data->pgd);
++ if (arm_v7s_is_mtk_enabled(cfg))
++ cfg->arm_v7s_cfg.ttbr = paddr | upper_32_bits(paddr);
++ else
++ cfg->arm_v7s_cfg.ttbr = paddr | ARM_V7S_TTBR_S |
++ (cfg->coherent_walk ? (ARM_V7S_TTBR_NOS |
++ ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_WBWA) |
++ ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_WBWA)) :
++ (ARM_V7S_TTBR_IRGN_ATTR(ARM_V7S_RGN_NC) |
++ ARM_V7S_TTBR_ORGN_ATTR(ARM_V7S_RGN_NC)));
+ return &data->iop;
+
+ out_free_data:
+diff --git a/drivers/irqchip/irq-tegra.c b/drivers/irqchip/irq-tegra.c
+index e1f771c72fc4c..ad3e2c1b3c87b 100644
+--- a/drivers/irqchip/irq-tegra.c
++++ b/drivers/irqchip/irq-tegra.c
+@@ -148,10 +148,10 @@ static int tegra_ictlr_suspend(void)
+ lic->cop_iep[i] = readl_relaxed(ictlr + ICTLR_COP_IEP_CLASS);
+
+ /* Disable COP interrupts */
+- writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
++ writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
+
+ /* Disable CPU interrupts */
+- writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
++ writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
+
+ /* Enable the wakeup sources of ictlr */
+ writel_relaxed(lic->ictlr_wake_mask[i], ictlr + ICTLR_CPU_IER_SET);
+@@ -172,12 +172,12 @@ static void tegra_ictlr_resume(void)
+
+ writel_relaxed(lic->cpu_iep[i],
+ ictlr + ICTLR_CPU_IEP_CLASS);
+- writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
++ writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
+ writel_relaxed(lic->cpu_ier[i],
+ ictlr + ICTLR_CPU_IER_SET);
+ writel_relaxed(lic->cop_iep[i],
+ ictlr + ICTLR_COP_IEP_CLASS);
+- writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
++ writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
+ writel_relaxed(lic->cop_ier[i],
+ ictlr + ICTLR_COP_IER_SET);
+ }
+@@ -312,7 +312,7 @@ static int __init tegra_ictlr_init(struct device_node *node,
+ lic->base[i] = base;
+
+ /* Disable all interrupts */
+- writel_relaxed(~0UL, base + ICTLR_CPU_IER_CLR);
++ writel_relaxed(GENMASK(31, 0), base + ICTLR_CPU_IER_CLR);
+ /* All interrupts target IRQ */
+ writel_relaxed(0, base + ICTLR_CPU_IEP_CLASS);
+
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 660c52d48256d..522b3d6b8c46b 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -9466,6 +9466,7 @@ void md_reap_sync_thread(struct mddev *mddev)
+ wake_up(&resync_wait);
+ /* flag recovery needed just to double check */
+ set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
++ sysfs_notify_dirent_safe(mddev->sysfs_completed);
+ sysfs_notify_dirent_safe(mddev->sysfs_action);
+ md_new_event();
+ if (mddev->event_work.func)
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index c8539d0e12dd7..1c1310d539f2e 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -2881,10 +2881,10 @@ static void raid5_end_write_request(struct bio *bi)
+ if (!test_and_clear_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags))
+ clear_bit(R5_LOCKED, &sh->dev[i].flags);
+ set_bit(STRIPE_HANDLE, &sh->state);
+- raid5_release_stripe(sh);
+
+ if (sh->batch_head && sh != sh->batch_head)
+ raid5_release_stripe(sh->batch_head);
++ raid5_release_stripe(sh);
+ }
+
+ static void raid5_error(struct mddev *mddev, struct md_rdev *rdev)
+@@ -5841,7 +5841,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
+ if ((bi->bi_opf & REQ_NOWAIT) &&
+ (conf->reshape_progress != MaxSector) &&
+ (mddev->reshape_backwards
+- ? (logical_sector > conf->reshape_progress && logical_sector <= conf->reshape_safe)
++ ? (logical_sector >= conf->reshape_progress && logical_sector < conf->reshape_safe)
+ : (logical_sector >= conf->reshape_safe && logical_sector < conf->reshape_progress))) {
+ bio_wouldblock_error(bi);
+ if (rw == WRITE)
+diff --git a/drivers/media/platform/qcom/venus/pm_helpers.c b/drivers/media/platform/qcom/venus/pm_helpers.c
+index cb48c5ff3dee2..c93d2906e4c7d 100644
+--- a/drivers/media/platform/qcom/venus/pm_helpers.c
++++ b/drivers/media/platform/qcom/venus/pm_helpers.c
+@@ -875,7 +875,7 @@ static int vcodec_domains_get(struct venus_core *core)
+ }
+
+ skip_pmdomains:
+- if (!core->has_opp_table)
++ if (!core->res->opp_pmdomain)
+ return 0;
+
+ /* Attach the power domain for setting performance state */
+@@ -1007,6 +1007,10 @@ static int core_get_v4(struct venus_core *core)
+ if (ret)
+ return ret;
+
++ ret = vcodec_domains_get(core);
++ if (ret)
++ return ret;
++
+ if (core->res->opp_pmdomain) {
+ ret = devm_pm_opp_of_add_table(dev);
+ if (!ret) {
+@@ -1017,10 +1021,6 @@ static int core_get_v4(struct venus_core *core)
+ }
+ }
+
+- ret = vcodec_domains_get(core);
+- if (ret)
+- return ret;
+-
+ return 0;
+ }
+
+diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
+index 5f0e2dcebb349..ac3795a7e1f66 100644
+--- a/drivers/misc/cxl/irq.c
++++ b/drivers/misc/cxl/irq.c
+@@ -350,6 +350,7 @@ int afu_allocate_irqs(struct cxl_context *ctx, u32 count)
+
+ out:
+ cxl_ops->release_irq_ranges(&ctx->irqs, ctx->afu->adapter);
++ bitmap_free(ctx->irq_bitmap);
+ afu_irq_name_free(ctx);
+ return -ENOMEM;
+ }
+diff --git a/drivers/misc/habanalabs/common/sysfs.c b/drivers/misc/habanalabs/common/sysfs.c
+index 9ebeb18ab85e9..da81810688951 100644
+--- a/drivers/misc/habanalabs/common/sysfs.c
++++ b/drivers/misc/habanalabs/common/sysfs.c
+@@ -73,6 +73,7 @@ static DEVICE_ATTR_RO(clk_cur_freq_mhz);
+ static struct attribute *hl_dev_clk_attrs[] = {
+ &dev_attr_clk_max_freq_mhz.attr,
+ &dev_attr_clk_cur_freq_mhz.attr,
++ NULL,
+ };
+
+ static ssize_t vrm_ver_show(struct device *dev, struct device_attribute *attr, char *buf)
+@@ -93,6 +94,7 @@ static DEVICE_ATTR_RO(vrm_ver);
+
+ static struct attribute *hl_dev_vrm_attrs[] = {
+ &dev_attr_vrm_ver.attr,
++ NULL,
+ };
+
+ static ssize_t uboot_ver_show(struct device *dev, struct device_attribute *attr,
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index fba3222410960..b33616aacb33b 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -3339,19 +3339,19 @@ static void gaudi_init_nic_qman(struct hl_device *hdev, u32 nic_offset,
+ u32 nic_qm_err_cfg, irq_handler_offset;
+ u32 q_off;
+
+- mtr_base_en_lo = lower_32_bits(CFG_BASE +
++ mtr_base_en_lo = lower_32_bits((CFG_BASE & U32_MAX) +
+ mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_MON_PAY_ADDRL_0);
+ mtr_base_en_hi = upper_32_bits(CFG_BASE +
+ mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_MON_PAY_ADDRL_0);
+- so_base_en_lo = lower_32_bits(CFG_BASE +
++ so_base_en_lo = lower_32_bits((CFG_BASE & U32_MAX) +
+ mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_SOB_OBJ_0);
+ so_base_en_hi = upper_32_bits(CFG_BASE +
+ mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_SOB_OBJ_0);
+- mtr_base_ws_lo = lower_32_bits(CFG_BASE +
++ mtr_base_ws_lo = lower_32_bits((CFG_BASE & U32_MAX) +
+ mmSYNC_MNGR_W_S_SYNC_MNGR_OBJS_MON_PAY_ADDRL_0);
+ mtr_base_ws_hi = upper_32_bits(CFG_BASE +
+ mmSYNC_MNGR_W_S_SYNC_MNGR_OBJS_MON_PAY_ADDRL_0);
+- so_base_ws_lo = lower_32_bits(CFG_BASE +
++ so_base_ws_lo = lower_32_bits((CFG_BASE & U32_MAX) +
+ mmSYNC_MNGR_W_S_SYNC_MNGR_OBJS_SOB_OBJ_0);
+ so_base_ws_hi = upper_32_bits(CFG_BASE +
+ mmSYNC_MNGR_W_S_SYNC_MNGR_OBJS_SOB_OBJ_0);
+@@ -5654,15 +5654,17 @@ static int gaudi_parse_cb_no_ext_queue(struct hl_device *hdev,
+ {
+ struct asic_fixed_properties *asic_prop = &hdev->asic_prop;
+ struct gaudi_device *gaudi = hdev->asic_specific;
+- u32 nic_mask_q_id = 1 << (HW_CAP_NIC_SHIFT +
+- ((parser->hw_queue_id - GAUDI_QUEUE_ID_NIC_0_0) >> 2));
++ u32 nic_queue_offset, nic_mask_q_id;
+
+ if ((parser->hw_queue_id >= GAUDI_QUEUE_ID_NIC_0_0) &&
+- (parser->hw_queue_id <= GAUDI_QUEUE_ID_NIC_9_3) &&
+- (!(gaudi->hw_cap_initialized & nic_mask_q_id))) {
+- dev_err(hdev->dev, "h/w queue %d is disabled\n",
+- parser->hw_queue_id);
+- return -EINVAL;
++ (parser->hw_queue_id <= GAUDI_QUEUE_ID_NIC_9_3)) {
++ nic_queue_offset = parser->hw_queue_id - GAUDI_QUEUE_ID_NIC_0_0;
++ nic_mask_q_id = 1 << (HW_CAP_NIC_SHIFT + (nic_queue_offset >> 2));
++
++ if (!(gaudi->hw_cap_initialized & nic_mask_q_id)) {
++ dev_err(hdev->dev, "h/w queue %d is disabled\n", parser->hw_queue_id);
++ return -EINVAL;
++ }
+ }
+
+ /* For internal queue jobs just check if CB address is valid */
+@@ -7717,10 +7719,10 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ struct gaudi_device *gaudi = hdev->asic_specific;
+ u64 data = le64_to_cpu(eq_entry->data[0]);
+ u32 ctl = le32_to_cpu(eq_entry->hdr.ctl);
+- u32 fw_fatal_err_flag = 0;
++ u32 fw_fatal_err_flag = 0, flags = 0;
+ u16 event_type = ((ctl & EQ_CTL_EVENT_TYPE_MASK)
+ >> EQ_CTL_EVENT_TYPE_SHIFT);
+- bool reset_required;
++ bool reset_required, reset_direct = false;
+ u8 cause;
+ int rc;
+
+@@ -7808,7 +7810,8 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ dev_err(hdev->dev, "reset required due to %s\n",
+ gaudi_irq_map_table[event_type].name);
+
+- hl_device_reset(hdev, 0);
++ reset_direct = true;
++ goto reset_device;
+ } else {
+ hl_fw_unmask_irq(hdev, event_type);
+ }
+@@ -7830,7 +7833,8 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ dev_err(hdev->dev, "reset required due to %s\n",
+ gaudi_irq_map_table[event_type].name);
+
+- hl_device_reset(hdev, 0);
++ reset_direct = true;
++ goto reset_device;
+ } else {
+ hl_fw_unmask_irq(hdev, event_type);
+ }
+@@ -7981,12 +7985,17 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ return;
+
+ reset_device:
+- if (hdev->asic_prop.fw_security_enabled)
+- hl_device_reset(hdev, HL_DRV_RESET_HARD
+- | HL_DRV_RESET_BYPASS_REQ_TO_FW
+- | fw_fatal_err_flag);
++ reset_required = true;
++
++ if (hdev->asic_prop.fw_security_enabled && !reset_direct)
++ flags = HL_DRV_RESET_HARD | HL_DRV_RESET_BYPASS_REQ_TO_FW | fw_fatal_err_flag;
+ else if (hdev->hard_reset_on_fw_events)
+- hl_device_reset(hdev, HL_DRV_RESET_HARD | HL_DRV_RESET_DELAY | fw_fatal_err_flag);
++ flags = HL_DRV_RESET_HARD | HL_DRV_RESET_DELAY | fw_fatal_err_flag;
++ else
++ reset_required = false;
++
++ if (reset_required)
++ hl_device_reset(hdev, flags);
+ else
+ hl_fw_unmask_irq(hdev, event_type);
+ }
+@@ -9187,6 +9196,7 @@ static DEVICE_ATTR_RO(infineon_ver);
+
+ static struct attribute *gaudi_vrm_dev_attrs[] = {
+ &dev_attr_infineon_ver.attr,
++ NULL,
+ };
+
+ static void gaudi_add_device_attr(struct hl_device *hdev, struct attribute_group *dev_clk_attr_grp,
+diff --git a/drivers/misc/habanalabs/goya/goya_hwmgr.c b/drivers/misc/habanalabs/goya/goya_hwmgr.c
+index 6580fc6a486a8..b595721751c1b 100644
+--- a/drivers/misc/habanalabs/goya/goya_hwmgr.c
++++ b/drivers/misc/habanalabs/goya/goya_hwmgr.c
+@@ -359,6 +359,7 @@ static struct attribute *goya_clk_dev_attrs[] = {
+ &dev_attr_pm_mng_profile.attr,
+ &dev_attr_tpc_clk.attr,
+ &dev_attr_tpc_clk_curr.attr,
++ NULL,
+ };
+
+ static ssize_t infineon_ver_show(struct device *dev, struct device_attribute *attr, char *buf)
+@@ -375,6 +376,7 @@ static DEVICE_ATTR_RO(infineon_ver);
+
+ static struct attribute *goya_vrm_dev_attrs[] = {
+ &dev_attr_infineon_ver.attr,
++ NULL,
+ };
+
+ void goya_add_device_attr(struct hl_device *hdev, struct attribute_group *dev_clk_attr_grp,
+diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
+index 281c54003edc4..b70a013139c74 100644
+--- a/drivers/misc/uacce/uacce.c
++++ b/drivers/misc/uacce/uacce.c
+@@ -9,43 +9,38 @@
+
+ static struct class *uacce_class;
+ static dev_t uacce_devt;
+-static DEFINE_MUTEX(uacce_mutex);
+ static DEFINE_XARRAY_ALLOC(uacce_xa);
+
+-static int uacce_start_queue(struct uacce_queue *q)
++/*
++ * If the parent driver or the device disappears, the queue state is invalid and
++ * ops are not usable anymore.
++ */
++static bool uacce_queue_is_valid(struct uacce_queue *q)
+ {
+- int ret = 0;
++ return q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED;
++}
+
+- mutex_lock(&uacce_mutex);
++static int uacce_start_queue(struct uacce_queue *q)
++{
++ int ret;
+
+- if (q->state != UACCE_Q_INIT) {
+- ret = -EINVAL;
+- goto out_with_lock;
+- }
++ if (q->state != UACCE_Q_INIT)
++ return -EINVAL;
+
+ if (q->uacce->ops->start_queue) {
+ ret = q->uacce->ops->start_queue(q);
+ if (ret < 0)
+- goto out_with_lock;
++ return ret;
+ }
+
+ q->state = UACCE_Q_STARTED;
+-
+-out_with_lock:
+- mutex_unlock(&uacce_mutex);
+-
+- return ret;
++ return 0;
+ }
+
+ static int uacce_put_queue(struct uacce_queue *q)
+ {
+ struct uacce_device *uacce = q->uacce;
+
+- mutex_lock(&uacce_mutex);
+-
+- if (q->state == UACCE_Q_ZOMBIE)
+- goto out;
+-
+ if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue)
+ uacce->ops->stop_queue(q);
+
+@@ -54,8 +49,6 @@ static int uacce_put_queue(struct uacce_queue *q)
+ uacce->ops->put_queue(q);
+
+ q->state = UACCE_Q_ZOMBIE;
+-out:
+- mutex_unlock(&uacce_mutex);
+
+ return 0;
+ }
+@@ -65,20 +58,36 @@ static long uacce_fops_unl_ioctl(struct file *filep,
+ {
+ struct uacce_queue *q = filep->private_data;
+ struct uacce_device *uacce = q->uacce;
++ long ret = -ENXIO;
++
++ /*
++ * uacce->ops->ioctl() may take the mmap_lock when copying arg to/from
++ * user. Avoid a circular lock dependency with uacce_fops_mmap(), which
++ * gets called with mmap_lock held, by taking uacce->mutex instead of
++ * q->mutex. Doing this in uacce_fops_mmap() is not possible because
++ * uacce_fops_open() calls iommu_sva_bind_device(), which takes
++ * mmap_lock, while holding uacce->mutex.
++ */
++ mutex_lock(&uacce->mutex);
++ if (!uacce_queue_is_valid(q))
++ goto out_unlock;
+
+ switch (cmd) {
+ case UACCE_CMD_START_Q:
+- return uacce_start_queue(q);
+-
++ ret = uacce_start_queue(q);
++ break;
+ case UACCE_CMD_PUT_Q:
+- return uacce_put_queue(q);
+-
++ ret = uacce_put_queue(q);
++ break;
+ default:
+- if (!uacce->ops->ioctl)
+- return -EINVAL;
+-
+- return uacce->ops->ioctl(q, cmd, arg);
++ if (uacce->ops->ioctl)
++ ret = uacce->ops->ioctl(q, cmd, arg);
++ else
++ ret = -EINVAL;
+ }
++out_unlock:
++ mutex_unlock(&uacce->mutex);
++ return ret;
+ }
+
+ #ifdef CONFIG_COMPAT
+@@ -136,6 +145,13 @@ static int uacce_fops_open(struct inode *inode, struct file *filep)
+ if (!q)
+ return -ENOMEM;
+
++ mutex_lock(&uacce->mutex);
++
++ if (!uacce->parent) {
++ ret = -EINVAL;
++ goto out_with_mem;
++ }
++
+ ret = uacce_bind_queue(uacce, q);
+ if (ret)
+ goto out_with_mem;
+@@ -152,10 +168,9 @@ static int uacce_fops_open(struct inode *inode, struct file *filep)
+ filep->private_data = q;
+ uacce->inode = inode;
+ q->state = UACCE_Q_INIT;
+-
+- mutex_lock(&uacce->queues_lock);
++ mutex_init(&q->mutex);
+ list_add(&q->list, &uacce->queues);
+- mutex_unlock(&uacce->queues_lock);
++ mutex_unlock(&uacce->mutex);
+
+ return 0;
+
+@@ -163,18 +178,20 @@ out_with_bond:
+ uacce_unbind_queue(q);
+ out_with_mem:
+ kfree(q);
++ mutex_unlock(&uacce->mutex);
+ return ret;
+ }
+
+ static int uacce_fops_release(struct inode *inode, struct file *filep)
+ {
+ struct uacce_queue *q = filep->private_data;
++ struct uacce_device *uacce = q->uacce;
+
+- mutex_lock(&q->uacce->queues_lock);
+- list_del(&q->list);
+- mutex_unlock(&q->uacce->queues_lock);
++ mutex_lock(&uacce->mutex);
+ uacce_put_queue(q);
+ uacce_unbind_queue(q);
++ list_del(&q->list);
++ mutex_unlock(&uacce->mutex);
+ kfree(q);
+
+ return 0;
+@@ -217,10 +234,9 @@ static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
+ vma->vm_private_data = q;
+ qfr->type = type;
+
+- mutex_lock(&uacce_mutex);
+-
+- if (q->state != UACCE_Q_INIT && q->state != UACCE_Q_STARTED) {
+- ret = -EINVAL;
++ mutex_lock(&q->mutex);
++ if (!uacce_queue_is_valid(q)) {
++ ret = -ENXIO;
+ goto out_with_lock;
+ }
+
+@@ -248,12 +264,12 @@ static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
+ }
+
+ q->qfrs[type] = qfr;
+- mutex_unlock(&uacce_mutex);
++ mutex_unlock(&q->mutex);
+
+ return ret;
+
+ out_with_lock:
+- mutex_unlock(&uacce_mutex);
++ mutex_unlock(&q->mutex);
+ kfree(qfr);
+ return ret;
+ }
+@@ -262,12 +278,20 @@ static __poll_t uacce_fops_poll(struct file *file, poll_table *wait)
+ {
+ struct uacce_queue *q = file->private_data;
+ struct uacce_device *uacce = q->uacce;
++ __poll_t ret = 0;
++
++ mutex_lock(&q->mutex);
++ if (!uacce_queue_is_valid(q))
++ goto out_unlock;
+
+ poll_wait(file, &q->wait, wait);
++
+ if (uacce->ops->is_q_updated && uacce->ops->is_q_updated(q))
+- return EPOLLIN | EPOLLRDNORM;
++ ret = EPOLLIN | EPOLLRDNORM;
+
+- return 0;
++out_unlock:
++ mutex_unlock(&q->mutex);
++ return ret;
+ }
+
+ static const struct file_operations uacce_fops = {
+@@ -450,7 +474,7 @@ struct uacce_device *uacce_alloc(struct device *parent,
+ goto err_with_uacce;
+
+ INIT_LIST_HEAD(&uacce->queues);
+- mutex_init(&uacce->queues_lock);
++ mutex_init(&uacce->mutex);
+ device_initialize(&uacce->dev);
+ uacce->dev.devt = MKDEV(MAJOR(uacce_devt), uacce->dev_id);
+ uacce->dev.class = uacce_class;
+@@ -507,13 +531,23 @@ void uacce_remove(struct uacce_device *uacce)
+ if (uacce->inode)
+ unmap_mapping_range(uacce->inode->i_mapping, 0, 0, 1);
+
++ /*
++ * uacce_fops_open() may be running concurrently, even after we remove
++ * the cdev. Holding uacce->mutex ensures that open() does not obtain a
++ * removed uacce device.
++ */
++ mutex_lock(&uacce->mutex);
+ /* ensure no open queue remains */
+- mutex_lock(&uacce->queues_lock);
+ list_for_each_entry_safe(q, next_q, &uacce->queues, list) {
++ /*
++ * Taking q->mutex ensures that fops do not use the defunct
++ * uacce->ops after the queue is disabled.
++ */
++ mutex_lock(&q->mutex);
+ uacce_put_queue(q);
++ mutex_unlock(&q->mutex);
+ uacce_unbind_queue(q);
+ }
+- mutex_unlock(&uacce->queues_lock);
+
+ /* disable sva now since no opened queues */
+ uacce_disable_sva(uacce);
+@@ -521,6 +555,13 @@ void uacce_remove(struct uacce_device *uacce)
+ if (uacce->cdev)
+ cdev_device_del(uacce->cdev, &uacce->dev);
+ xa_erase(&uacce_xa, uacce->dev_id);
++ /*
++ * uacce exists as long as there are open fds, but ops will be freed
++ * now. Ensure that bugs cause NULL deref rather than use-after-free.
++ */
++ uacce->ops = NULL;
++ uacce->parent = NULL;
++ mutex_unlock(&uacce->mutex);
+ put_device(&uacce->dev);
+ }
+ EXPORT_SYMBOL_GPL(uacce_remove);
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index 2f08d442e5577..fc462995cf94a 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -1172,8 +1172,10 @@ static int meson_mmc_probe(struct platform_device *pdev)
+ }
+
+ ret = device_reset_optional(&pdev->dev);
+- if (ret)
+- return dev_err_probe(&pdev->dev, ret, "device reset failed\n");
++ if (ret) {
++ dev_err_probe(&pdev->dev, ret, "device reset failed\n");
++ goto free_host;
++ }
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ host->regs = devm_ioremap_resource(&pdev->dev, res);
+diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
+index 0db9490dc6595..e4003f6058eb5 100644
+--- a/drivers/mmc/host/pxamci.c
++++ b/drivers/mmc/host/pxamci.c
+@@ -648,7 +648,7 @@ static int pxamci_probe(struct platform_device *pdev)
+
+ ret = pxamci_of_init(pdev, mmc);
+ if (ret)
+- return ret;
++ goto out;
+
+ host = mmc_priv(mmc);
+ host->mmc = mmc;
+@@ -672,7 +672,7 @@ static int pxamci_probe(struct platform_device *pdev)
+
+ ret = pxamci_init_ocr(host);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ mmc->caps = 0;
+ host->cmdat = 0;
+diff --git a/drivers/mmc/host/renesas_sdhi.h b/drivers/mmc/host/renesas_sdhi.h
+index 1a1e3e020a8c2..c4abfee1ebae1 100644
+--- a/drivers/mmc/host/renesas_sdhi.h
++++ b/drivers/mmc/host/renesas_sdhi.h
+@@ -43,6 +43,7 @@ struct renesas_sdhi_quirks {
+ bool hs400_4taps;
+ bool fixed_addr_mode;
+ bool dma_one_rx_only;
++ bool manual_tap_correction;
+ u32 hs400_bad_taps;
+ const u8 (*hs400_calib_table)[SDHI_CALIB_TABLE_MAX];
+ };
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 0d258b6e1a436..6edbf5c161ab9 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -49,9 +49,6 @@
+ #define HOST_MODE_GEN3_32BIT (HOST_MODE_GEN3_WMODE | HOST_MODE_GEN3_BUSWIDTH)
+ #define HOST_MODE_GEN3_64BIT 0
+
+-#define CTL_SDIF_MODE 0xe6
+-#define SDIF_MODE_HS400 BIT(0)
+-
+ #define SDHI_VER_GEN2_SDR50 0x490c
+ #define SDHI_VER_RZ_A1 0x820b
+ /* very old datasheets said 0x490c for SDR104, too. They are wrong! */
+@@ -383,8 +380,7 @@ static void renesas_sdhi_hs400_complete(struct mmc_host *mmc)
+ sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_DT2FF,
+ priv->scc_tappos_hs400);
+
+- /* Gen3 can't do automatic tap correction with HS400, so disable it */
+- if (sd_ctrl_read16(host, CTL_VERSION) == SDHI_VER_GEN3_SDMMC)
++ if (priv->quirks && priv->quirks->manual_tap_correction)
+ sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL,
+ ~SH_MOBILE_SDHI_SCC_RVSCNTL_RVSEN &
+ sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_RVSCNTL));
+@@ -562,23 +558,25 @@ static void renesas_sdhi_scc_reset(struct tmio_mmc_host *host, struct renesas_sd
+ }
+
+ /* only populated for TMIO_MMC_MIN_RCAR2 */
+-static void renesas_sdhi_reset(struct tmio_mmc_host *host)
++static void renesas_sdhi_reset(struct tmio_mmc_host *host, bool preserve)
+ {
+ struct renesas_sdhi *priv = host_to_priv(host);
+ int ret;
+ u16 val;
+
+- if (priv->rstc) {
+- reset_control_reset(priv->rstc);
+- /* Unknown why but without polling reset status, it will hang */
+- read_poll_timeout(reset_control_status, ret, ret == 0, 1, 100,
+- false, priv->rstc);
+- /* At least SDHI_VER_GEN2_SDR50 needs manual release of reset */
+- sd_ctrl_write16(host, CTL_RESET_SD, 0x0001);
+- priv->needs_adjust_hs400 = false;
+- renesas_sdhi_set_clock(host, host->clk_cache);
+- } else if (priv->scc_ctl) {
+- renesas_sdhi_scc_reset(host, priv);
++ if (!preserve) {
++ if (priv->rstc) {
++ reset_control_reset(priv->rstc);
++ /* Unknown why but without polling reset status, it will hang */
++ read_poll_timeout(reset_control_status, ret, ret == 0, 1, 100,
++ false, priv->rstc);
++ /* At least SDHI_VER_GEN2_SDR50 needs manual release of reset */
++ sd_ctrl_write16(host, CTL_RESET_SD, 0x0001);
++ priv->needs_adjust_hs400 = false;
++ renesas_sdhi_set_clock(host, host->clk_cache);
++ } else if (priv->scc_ctl) {
++ renesas_sdhi_scc_reset(host, priv);
++ }
+ }
+
+ if (sd_ctrl_read16(host, CTL_VERSION) >= SDHI_VER_GEN3_SD) {
+@@ -719,7 +717,7 @@ static bool renesas_sdhi_manual_correction(struct tmio_mmc_host *host, bool use_
+ sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_RVSREQ, 0);
+
+ /* Change TAP position according to correction status */
+- if (sd_ctrl_read16(host, CTL_VERSION) == SDHI_VER_GEN3_SDMMC &&
++ if (priv->quirks && priv->quirks->manual_tap_correction &&
+ host->mmc->ios.timing == MMC_TIMING_MMC_HS400) {
+ u32 bad_taps = priv->quirks ? priv->quirks->hs400_bad_taps : 0;
+ /*
+diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+index 3084b15ae2cbb..52915404eb071 100644
+--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c
++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c
+@@ -170,6 +170,7 @@ static const struct renesas_sdhi_quirks sdhi_quirks_4tap_nohs400_one_rx = {
+ static const struct renesas_sdhi_quirks sdhi_quirks_4tap = {
+ .hs400_4taps = true,
+ .hs400_bad_taps = BIT(2) | BIT(3) | BIT(6) | BIT(7),
++ .manual_tap_correction = true,
+ };
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_nohs400 = {
+@@ -182,25 +183,30 @@ static const struct renesas_sdhi_quirks sdhi_quirks_fixed_addr = {
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_bad_taps1357 = {
+ .hs400_bad_taps = BIT(1) | BIT(3) | BIT(5) | BIT(7),
++ .manual_tap_correction = true,
+ };
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_bad_taps2367 = {
+ .hs400_bad_taps = BIT(2) | BIT(3) | BIT(6) | BIT(7),
++ .manual_tap_correction = true,
+ };
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_r8a7796_es13 = {
+ .hs400_4taps = true,
+ .hs400_bad_taps = BIT(2) | BIT(3) | BIT(6) | BIT(7),
+ .hs400_calib_table = r8a7796_es13_calib_table,
++ .manual_tap_correction = true,
+ };
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_r8a77965 = {
+ .hs400_bad_taps = BIT(2) | BIT(3) | BIT(6) | BIT(7),
+ .hs400_calib_table = r8a77965_calib_table,
++ .manual_tap_correction = true,
+ };
+
+ static const struct renesas_sdhi_quirks sdhi_quirks_r8a77990 = {
+ .hs400_calib_table = r8a77990_calib_table,
++ .manual_tap_correction = true,
+ };
+
+ /*
+diff --git a/drivers/mmc/host/tmio_mmc.c b/drivers/mmc/host/tmio_mmc.c
+index b55a29c53d9c3..53a2ad9a24b87 100644
+--- a/drivers/mmc/host/tmio_mmc.c
++++ b/drivers/mmc/host/tmio_mmc.c
+@@ -75,7 +75,7 @@ static void tmio_mmc_set_clock(struct tmio_mmc_host *host,
+ tmio_mmc_clk_start(host);
+ }
+
+-static void tmio_mmc_reset(struct tmio_mmc_host *host)
++static void tmio_mmc_reset(struct tmio_mmc_host *host, bool preserve)
+ {
+ sd_ctrl_write16(host, CTL_RESET_SDIO, 0x0000);
+ usleep_range(10000, 11000);
+diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h
+index e754bb3f5c323..501613c744063 100644
+--- a/drivers/mmc/host/tmio_mmc.h
++++ b/drivers/mmc/host/tmio_mmc.h
+@@ -42,6 +42,7 @@
+ #define CTL_DMA_ENABLE 0xd8
+ #define CTL_RESET_SD 0xe0
+ #define CTL_VERSION 0xe2
++#define CTL_SDIF_MODE 0xe6 /* only known on R-Car 2+ */
+
+ /* Definitions for values the CTL_STOP_INTERNAL_ACTION register can take */
+ #define TMIO_STOP_STP BIT(0)
+@@ -98,6 +99,9 @@
+ /* Definitions for values the CTL_DMA_ENABLE register can take */
+ #define DMA_ENABLE_DMASDRW BIT(1)
+
++/* Definitions for values the CTL_SDIF_MODE register can take */
++#define SDIF_MODE_HS400 BIT(0) /* only known on R-Car 2+ */
++
+ /* Define some IRQ masks */
+ /* This is the mask used at reset by the chip */
+ #define TMIO_MASK_ALL 0x837f031d
+@@ -181,7 +185,7 @@ struct tmio_mmc_host {
+ int (*multi_io_quirk)(struct mmc_card *card,
+ unsigned int direction, int blk_size);
+ int (*write16_hook)(struct tmio_mmc_host *host, int addr);
+- void (*reset)(struct tmio_mmc_host *host);
++ void (*reset)(struct tmio_mmc_host *host, bool preserve);
+ bool (*check_retune)(struct tmio_mmc_host *host, struct mmc_request *mrq);
+ void (*fixup_request)(struct tmio_mmc_host *host, struct mmc_request *mrq);
+ unsigned int (*get_timeout_cycles)(struct tmio_mmc_host *host);
+diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
+index a5850d83908be..437048bb80273 100644
+--- a/drivers/mmc/host/tmio_mmc_core.c
++++ b/drivers/mmc/host/tmio_mmc_core.c
+@@ -179,8 +179,17 @@ static void tmio_mmc_set_bus_width(struct tmio_mmc_host *host,
+ sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, reg);
+ }
+
+-static void tmio_mmc_reset(struct tmio_mmc_host *host)
++static void tmio_mmc_reset(struct tmio_mmc_host *host, bool preserve)
+ {
++ u16 card_opt, clk_ctrl, sdif_mode;
++
++ if (preserve) {
++ card_opt = sd_ctrl_read16(host, CTL_SD_MEM_CARD_OPT);
++ clk_ctrl = sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL);
++ if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
++ sdif_mode = sd_ctrl_read16(host, CTL_SDIF_MODE);
++ }
++
+ /* FIXME - should we set stop clock reg here */
+ sd_ctrl_write16(host, CTL_RESET_SD, 0x0000);
+ usleep_range(10000, 11000);
+@@ -190,7 +199,7 @@ static void tmio_mmc_reset(struct tmio_mmc_host *host)
+ tmio_mmc_abort_dma(host);
+
+ if (host->reset)
+- host->reset(host);
++ host->reset(host, preserve);
+
+ sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, host->sdcard_irq_mask_all);
+ host->sdcard_irq_mask = host->sdcard_irq_mask_all;
+@@ -206,6 +215,13 @@ static void tmio_mmc_reset(struct tmio_mmc_host *host)
+ sd_ctrl_write16(host, CTL_TRANSACTION_CTL, 0x0001);
+ }
+
++ if (preserve) {
++ sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, card_opt);
++ sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, clk_ctrl);
++ if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
++ sd_ctrl_write16(host, CTL_SDIF_MODE, sdif_mode);
++ }
++
+ if (host->mmc->card)
+ mmc_retune_needed(host->mmc);
+ }
+@@ -248,7 +264,7 @@ static void tmio_mmc_reset_work(struct work_struct *work)
+
+ spin_unlock_irqrestore(&host->lock, flags);
+
+- tmio_mmc_reset(host);
++ tmio_mmc_reset(host, true);
+
+ /* Ready for new calls */
+ host->mrq = NULL;
+@@ -961,7 +977,7 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ tmio_mmc_power_off(host);
+ /* For R-Car Gen2+, we need to reset SDHI specific SCC */
+ if (host->pdata->flags & TMIO_MMC_MIN_RCAR2)
+- tmio_mmc_reset(host);
++ tmio_mmc_reset(host, false);
+
+ host->set_clock(host, 0);
+ break;
+@@ -1189,7 +1205,7 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host)
+ _host->sdcard_irq_mask_all = TMIO_MASK_ALL;
+
+ _host->set_clock(_host, 0);
+- tmio_mmc_reset(_host);
++ tmio_mmc_reset(_host, false);
+
+ spin_lock_init(&_host->lock);
+ mutex_init(&_host->ios_lock);
+@@ -1285,7 +1301,7 @@ int tmio_mmc_host_runtime_resume(struct device *dev)
+ struct tmio_mmc_host *host = dev_get_drvdata(dev);
+
+ tmio_mmc_clk_enable(host);
+- tmio_mmc_reset(host);
++ tmio_mmc_reset(host, false);
+
+ if (host->clk_cache)
+ host->set_clock(host, host->clk_cache);
+diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
+index 666a4505a55a9..fa46a54d22ed4 100644
+--- a/drivers/net/can/spi/mcp251x.c
++++ b/drivers/net/can/spi/mcp251x.c
+@@ -1069,9 +1069,6 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+
+ mcp251x_read_2regs(spi, CANINTF, &intf, &eflag);
+
+- /* mask out flags we don't care about */
+- intf &= CANINTF_RX | CANINTF_TX | CANINTF_ERR;
+-
+ /* receive buffer 0 */
+ if (intf & CANINTF_RX0IF) {
+ mcp251x_hw_rx(spi, 0);
+@@ -1081,6 +1078,18 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+ if (mcp251x_is_2510(spi))
+ mcp251x_write_bits(spi, CANINTF,
+ CANINTF_RX0IF, 0x00);
++
++ /* check if buffer 1 is already known to be full, no need to re-read */
++ if (!(intf & CANINTF_RX1IF)) {
++ u8 intf1, eflag1;
++
++ /* intf needs to be read again to avoid a race condition */
++ mcp251x_read_2regs(spi, CANINTF, &intf1, &eflag1);
++
++ /* combine flags from both operations for error handling */
++ intf |= intf1;
++ eflag |= eflag1;
++ }
+ }
+
+ /* receive buffer 1 */
+@@ -1091,6 +1100,9 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
+ clear_intf |= CANINTF_RX1IF;
+ }
+
++ /* mask out flags we don't care about */
++ intf &= CANINTF_RX | CANINTF_TX | CANINTF_ERR;
++
+ /* any error or tx interrupt we need to clear? */
+ if (intf & (CANINTF_ERR | CANINTF_TX))
+ clear_intf |= intf & (CANINTF_ERR | CANINTF_TX);
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index bbec3311d8934..e09b6732117cf 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -194,7 +194,7 @@ struct __packed ems_cpc_msg {
+ __le32 ts_sec; /* timestamp in seconds */
+ __le32 ts_nsec; /* timestamp in nano seconds */
+
+- union {
++ union __packed {
+ u8 generic[64];
+ struct cpc_can_msg can_msg;
+ struct cpc_can_params can_params;
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index ab40b700cf1af..ebad795e4e95f 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -658,6 +658,9 @@ static int ksz9477_port_fdb_dump(struct dsa_switch *ds, int port,
+ goto exit;
+ }
+
++ if (!(ksz_data & ALU_VALID))
++ continue;
++
+ /* read ALU table */
+ ksz9477_read_table(dev, alu_table);
+
+diff --git a/drivers/net/dsa/mv88e6060.c b/drivers/net/dsa/mv88e6060.c
+index a4c6eb9a52d0d..83dca9179aa07 100644
+--- a/drivers/net/dsa/mv88e6060.c
++++ b/drivers/net/dsa/mv88e6060.c
+@@ -118,6 +118,9 @@ static int mv88e6060_setup_port(struct mv88e6060_priv *priv, int p)
+ int addr = REG_PORT(p);
+ int ret;
+
++ if (dsa_is_unused_port(priv->ds, p))
++ return 0;
++
+ /* Do not force flow control, disable Ingress and Egress
+ * Header tagging, disable VLAN tunneling, and set the port
+ * state to Forwarding. Additionally, if this is the CPU
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index 859196898a7d0..aadb0bd7c24f1 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -610,6 +610,9 @@ static int felix_change_tag_protocol(struct dsa_switch *ds,
+
+ old_proto_ops = felix->tag_proto_ops;
+
++ if (proto_ops == old_proto_ops)
++ return 0;
++
+ err = proto_ops->setup(ds);
+ if (err)
+ goto setup_failed;
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index d0920f5a8f04f..6439b56f381f9 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -280,19 +280,23 @@ static const u32 vsc9959_sys_regmap[] = {
+ REG(SYS_COUNT_RX_64, 0x000024),
+ REG(SYS_COUNT_RX_65_127, 0x000028),
+ REG(SYS_COUNT_RX_128_255, 0x00002c),
+- REG(SYS_COUNT_RX_256_1023, 0x000030),
+- REG(SYS_COUNT_RX_1024_1526, 0x000034),
+- REG(SYS_COUNT_RX_1527_MAX, 0x000038),
+- REG(SYS_COUNT_RX_LONGS, 0x000044),
++ REG(SYS_COUNT_RX_256_511, 0x000030),
++ REG(SYS_COUNT_RX_512_1023, 0x000034),
++ REG(SYS_COUNT_RX_1024_1526, 0x000038),
++ REG(SYS_COUNT_RX_1527_MAX, 0x00003c),
++ REG(SYS_COUNT_RX_PAUSE, 0x000040),
++ REG(SYS_COUNT_RX_CONTROL, 0x000044),
++ REG(SYS_COUNT_RX_LONGS, 0x000048),
+ REG(SYS_COUNT_TX_OCTETS, 0x000200),
+ REG(SYS_COUNT_TX_COLLISION, 0x000210),
+ REG(SYS_COUNT_TX_DROPS, 0x000214),
+ REG(SYS_COUNT_TX_64, 0x00021c),
+ REG(SYS_COUNT_TX_65_127, 0x000220),
+- REG(SYS_COUNT_TX_128_511, 0x000224),
+- REG(SYS_COUNT_TX_512_1023, 0x000228),
+- REG(SYS_COUNT_TX_1024_1526, 0x00022c),
+- REG(SYS_COUNT_TX_1527_MAX, 0x000230),
++ REG(SYS_COUNT_TX_128_255, 0x000224),
++ REG(SYS_COUNT_TX_256_511, 0x000228),
++ REG(SYS_COUNT_TX_512_1023, 0x00022c),
++ REG(SYS_COUNT_TX_1024_1526, 0x000230),
++ REG(SYS_COUNT_TX_1527_MAX, 0x000234),
+ REG(SYS_COUNT_TX_AGING, 0x000278),
+ REG(SYS_RESET_CFG, 0x000e00),
+ REG(SYS_SR_ETYPE_CFG, 0x000e04),
+@@ -546,100 +550,379 @@ static const struct reg_field vsc9959_regfields[REGFIELD_MAX] = {
+ [SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 7, 4),
+ };
+
+-static const struct ocelot_stat_layout vsc9959_stats_layout[] = {
+- { .offset = 0x00, .name = "rx_octets", },
+- { .offset = 0x01, .name = "rx_unicast", },
+- { .offset = 0x02, .name = "rx_multicast", },
+- { .offset = 0x03, .name = "rx_broadcast", },
+- { .offset = 0x04, .name = "rx_shorts", },
+- { .offset = 0x05, .name = "rx_fragments", },
+- { .offset = 0x06, .name = "rx_jabbers", },
+- { .offset = 0x07, .name = "rx_crc_align_errs", },
+- { .offset = 0x08, .name = "rx_sym_errs", },
+- { .offset = 0x09, .name = "rx_frames_below_65_octets", },
+- { .offset = 0x0A, .name = "rx_frames_65_to_127_octets", },
+- { .offset = 0x0B, .name = "rx_frames_128_to_255_octets", },
+- { .offset = 0x0C, .name = "rx_frames_256_to_511_octets", },
+- { .offset = 0x0D, .name = "rx_frames_512_to_1023_octets", },
+- { .offset = 0x0E, .name = "rx_frames_1024_to_1526_octets", },
+- { .offset = 0x0F, .name = "rx_frames_over_1526_octets", },
+- { .offset = 0x10, .name = "rx_pause", },
+- { .offset = 0x11, .name = "rx_control", },
+- { .offset = 0x12, .name = "rx_longs", },
+- { .offset = 0x13, .name = "rx_classified_drops", },
+- { .offset = 0x14, .name = "rx_red_prio_0", },
+- { .offset = 0x15, .name = "rx_red_prio_1", },
+- { .offset = 0x16, .name = "rx_red_prio_2", },
+- { .offset = 0x17, .name = "rx_red_prio_3", },
+- { .offset = 0x18, .name = "rx_red_prio_4", },
+- { .offset = 0x19, .name = "rx_red_prio_5", },
+- { .offset = 0x1A, .name = "rx_red_prio_6", },
+- { .offset = 0x1B, .name = "rx_red_prio_7", },
+- { .offset = 0x1C, .name = "rx_yellow_prio_0", },
+- { .offset = 0x1D, .name = "rx_yellow_prio_1", },
+- { .offset = 0x1E, .name = "rx_yellow_prio_2", },
+- { .offset = 0x1F, .name = "rx_yellow_prio_3", },
+- { .offset = 0x20, .name = "rx_yellow_prio_4", },
+- { .offset = 0x21, .name = "rx_yellow_prio_5", },
+- { .offset = 0x22, .name = "rx_yellow_prio_6", },
+- { .offset = 0x23, .name = "rx_yellow_prio_7", },
+- { .offset = 0x24, .name = "rx_green_prio_0", },
+- { .offset = 0x25, .name = "rx_green_prio_1", },
+- { .offset = 0x26, .name = "rx_green_prio_2", },
+- { .offset = 0x27, .name = "rx_green_prio_3", },
+- { .offset = 0x28, .name = "rx_green_prio_4", },
+- { .offset = 0x29, .name = "rx_green_prio_5", },
+- { .offset = 0x2A, .name = "rx_green_prio_6", },
+- { .offset = 0x2B, .name = "rx_green_prio_7", },
+- { .offset = 0x80, .name = "tx_octets", },
+- { .offset = 0x81, .name = "tx_unicast", },
+- { .offset = 0x82, .name = "tx_multicast", },
+- { .offset = 0x83, .name = "tx_broadcast", },
+- { .offset = 0x84, .name = "tx_collision", },
+- { .offset = 0x85, .name = "tx_drops", },
+- { .offset = 0x86, .name = "tx_pause", },
+- { .offset = 0x87, .name = "tx_frames_below_65_octets", },
+- { .offset = 0x88, .name = "tx_frames_65_to_127_octets", },
+- { .offset = 0x89, .name = "tx_frames_128_255_octets", },
+- { .offset = 0x8B, .name = "tx_frames_256_511_octets", },
+- { .offset = 0x8C, .name = "tx_frames_1024_1526_octets", },
+- { .offset = 0x8D, .name = "tx_frames_over_1526_octets", },
+- { .offset = 0x8E, .name = "tx_yellow_prio_0", },
+- { .offset = 0x8F, .name = "tx_yellow_prio_1", },
+- { .offset = 0x90, .name = "tx_yellow_prio_2", },
+- { .offset = 0x91, .name = "tx_yellow_prio_3", },
+- { .offset = 0x92, .name = "tx_yellow_prio_4", },
+- { .offset = 0x93, .name = "tx_yellow_prio_5", },
+- { .offset = 0x94, .name = "tx_yellow_prio_6", },
+- { .offset = 0x95, .name = "tx_yellow_prio_7", },
+- { .offset = 0x96, .name = "tx_green_prio_0", },
+- { .offset = 0x97, .name = "tx_green_prio_1", },
+- { .offset = 0x98, .name = "tx_green_prio_2", },
+- { .offset = 0x99, .name = "tx_green_prio_3", },
+- { .offset = 0x9A, .name = "tx_green_prio_4", },
+- { .offset = 0x9B, .name = "tx_green_prio_5", },
+- { .offset = 0x9C, .name = "tx_green_prio_6", },
+- { .offset = 0x9D, .name = "tx_green_prio_7", },
+- { .offset = 0x9E, .name = "tx_aged", },
+- { .offset = 0x100, .name = "drop_local", },
+- { .offset = 0x101, .name = "drop_tail", },
+- { .offset = 0x102, .name = "drop_yellow_prio_0", },
+- { .offset = 0x103, .name = "drop_yellow_prio_1", },
+- { .offset = 0x104, .name = "drop_yellow_prio_2", },
+- { .offset = 0x105, .name = "drop_yellow_prio_3", },
+- { .offset = 0x106, .name = "drop_yellow_prio_4", },
+- { .offset = 0x107, .name = "drop_yellow_prio_5", },
+- { .offset = 0x108, .name = "drop_yellow_prio_6", },
+- { .offset = 0x109, .name = "drop_yellow_prio_7", },
+- { .offset = 0x10A, .name = "drop_green_prio_0", },
+- { .offset = 0x10B, .name = "drop_green_prio_1", },
+- { .offset = 0x10C, .name = "drop_green_prio_2", },
+- { .offset = 0x10D, .name = "drop_green_prio_3", },
+- { .offset = 0x10E, .name = "drop_green_prio_4", },
+- { .offset = 0x10F, .name = "drop_green_prio_5", },
+- { .offset = 0x110, .name = "drop_green_prio_6", },
+- { .offset = 0x111, .name = "drop_green_prio_7", },
+- OCELOT_STAT_END
++static const struct ocelot_stat_layout vsc9959_stats_layout[OCELOT_NUM_STATS] = {
++ [OCELOT_STAT_RX_OCTETS] = {
++ .name = "rx_octets",
++ .offset = 0x00,
++ },
++ [OCELOT_STAT_RX_UNICAST] = {
++ .name = "rx_unicast",
++ .offset = 0x01,
++ },
++ [OCELOT_STAT_RX_MULTICAST] = {
++ .name = "rx_multicast",
++ .offset = 0x02,
++ },
++ [OCELOT_STAT_RX_BROADCAST] = {
++ .name = "rx_broadcast",
++ .offset = 0x03,
++ },
++ [OCELOT_STAT_RX_SHORTS] = {
++ .name = "rx_shorts",
++ .offset = 0x04,
++ },
++ [OCELOT_STAT_RX_FRAGMENTS] = {
++ .name = "rx_fragments",
++ .offset = 0x05,
++ },
++ [OCELOT_STAT_RX_JABBERS] = {
++ .name = "rx_jabbers",
++ .offset = 0x06,
++ },
++ [OCELOT_STAT_RX_CRC_ALIGN_ERRS] = {
++ .name = "rx_crc_align_errs",
++ .offset = 0x07,
++ },
++ [OCELOT_STAT_RX_SYM_ERRS] = {
++ .name = "rx_sym_errs",
++ .offset = 0x08,
++ },
++ [OCELOT_STAT_RX_64] = {
++ .name = "rx_frames_below_65_octets",
++ .offset = 0x09,
++ },
++ [OCELOT_STAT_RX_65_127] = {
++ .name = "rx_frames_65_to_127_octets",
++ .offset = 0x0A,
++ },
++ [OCELOT_STAT_RX_128_255] = {
++ .name = "rx_frames_128_to_255_octets",
++ .offset = 0x0B,
++ },
++ [OCELOT_STAT_RX_256_511] = {
++ .name = "rx_frames_256_to_511_octets",
++ .offset = 0x0C,
++ },
++ [OCELOT_STAT_RX_512_1023] = {
++ .name = "rx_frames_512_to_1023_octets",
++ .offset = 0x0D,
++ },
++ [OCELOT_STAT_RX_1024_1526] = {
++ .name = "rx_frames_1024_to_1526_octets",
++ .offset = 0x0E,
++ },
++ [OCELOT_STAT_RX_1527_MAX] = {
++ .name = "rx_frames_over_1526_octets",
++ .offset = 0x0F,
++ },
++ [OCELOT_STAT_RX_PAUSE] = {
++ .name = "rx_pause",
++ .offset = 0x10,
++ },
++ [OCELOT_STAT_RX_CONTROL] = {
++ .name = "rx_control",
++ .offset = 0x11,
++ },
++ [OCELOT_STAT_RX_LONGS] = {
++ .name = "rx_longs",
++ .offset = 0x12,
++ },
++ [OCELOT_STAT_RX_CLASSIFIED_DROPS] = {
++ .name = "rx_classified_drops",
++ .offset = 0x13,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_0] = {
++ .name = "rx_red_prio_0",
++ .offset = 0x14,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_1] = {
++ .name = "rx_red_prio_1",
++ .offset = 0x15,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_2] = {
++ .name = "rx_red_prio_2",
++ .offset = 0x16,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_3] = {
++ .name = "rx_red_prio_3",
++ .offset = 0x17,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_4] = {
++ .name = "rx_red_prio_4",
++ .offset = 0x18,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_5] = {
++ .name = "rx_red_prio_5",
++ .offset = 0x19,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_6] = {
++ .name = "rx_red_prio_6",
++ .offset = 0x1A,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_7] = {
++ .name = "rx_red_prio_7",
++ .offset = 0x1B,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_0] = {
++ .name = "rx_yellow_prio_0",
++ .offset = 0x1C,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_1] = {
++ .name = "rx_yellow_prio_1",
++ .offset = 0x1D,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_2] = {
++ .name = "rx_yellow_prio_2",
++ .offset = 0x1E,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_3] = {
++ .name = "rx_yellow_prio_3",
++ .offset = 0x1F,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_4] = {
++ .name = "rx_yellow_prio_4",
++ .offset = 0x20,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_5] = {
++ .name = "rx_yellow_prio_5",
++ .offset = 0x21,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_6] = {
++ .name = "rx_yellow_prio_6",
++ .offset = 0x22,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_7] = {
++ .name = "rx_yellow_prio_7",
++ .offset = 0x23,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_0] = {
++ .name = "rx_green_prio_0",
++ .offset = 0x24,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_1] = {
++ .name = "rx_green_prio_1",
++ .offset = 0x25,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_2] = {
++ .name = "rx_green_prio_2",
++ .offset = 0x26,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_3] = {
++ .name = "rx_green_prio_3",
++ .offset = 0x27,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_4] = {
++ .name = "rx_green_prio_4",
++ .offset = 0x28,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_5] = {
++ .name = "rx_green_prio_5",
++ .offset = 0x29,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_6] = {
++ .name = "rx_green_prio_6",
++ .offset = 0x2A,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_7] = {
++ .name = "rx_green_prio_7",
++ .offset = 0x2B,
++ },
++ [OCELOT_STAT_TX_OCTETS] = {
++ .name = "tx_octets",
++ .offset = 0x80,
++ },
++ [OCELOT_STAT_TX_UNICAST] = {
++ .name = "tx_unicast",
++ .offset = 0x81,
++ },
++ [OCELOT_STAT_TX_MULTICAST] = {
++ .name = "tx_multicast",
++ .offset = 0x82,
++ },
++ [OCELOT_STAT_TX_BROADCAST] = {
++ .name = "tx_broadcast",
++ .offset = 0x83,
++ },
++ [OCELOT_STAT_TX_COLLISION] = {
++ .name = "tx_collision",
++ .offset = 0x84,
++ },
++ [OCELOT_STAT_TX_DROPS] = {
++ .name = "tx_drops",
++ .offset = 0x85,
++ },
++ [OCELOT_STAT_TX_PAUSE] = {
++ .name = "tx_pause",
++ .offset = 0x86,
++ },
++ [OCELOT_STAT_TX_64] = {
++ .name = "tx_frames_below_65_octets",
++ .offset = 0x87,
++ },
++ [OCELOT_STAT_TX_65_127] = {
++ .name = "tx_frames_65_to_127_octets",
++ .offset = 0x88,
++ },
++ [OCELOT_STAT_TX_128_255] = {
++ .name = "tx_frames_128_255_octets",
++ .offset = 0x89,
++ },
++ [OCELOT_STAT_TX_256_511] = {
++ .name = "tx_frames_256_511_octets",
++ .offset = 0x8A,
++ },
++ [OCELOT_STAT_TX_512_1023] = {
++ .name = "tx_frames_512_1023_octets",
++ .offset = 0x8B,
++ },
++ [OCELOT_STAT_TX_1024_1526] = {
++ .name = "tx_frames_1024_1526_octets",
++ .offset = 0x8C,
++ },
++ [OCELOT_STAT_TX_1527_MAX] = {
++ .name = "tx_frames_over_1526_octets",
++ .offset = 0x8D,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_0] = {
++ .name = "tx_yellow_prio_0",
++ .offset = 0x8E,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_1] = {
++ .name = "tx_yellow_prio_1",
++ .offset = 0x8F,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_2] = {
++ .name = "tx_yellow_prio_2",
++ .offset = 0x90,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_3] = {
++ .name = "tx_yellow_prio_3",
++ .offset = 0x91,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_4] = {
++ .name = "tx_yellow_prio_4",
++ .offset = 0x92,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_5] = {
++ .name = "tx_yellow_prio_5",
++ .offset = 0x93,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_6] = {
++ .name = "tx_yellow_prio_6",
++ .offset = 0x94,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_7] = {
++ .name = "tx_yellow_prio_7",
++ .offset = 0x95,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_0] = {
++ .name = "tx_green_prio_0",
++ .offset = 0x96,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_1] = {
++ .name = "tx_green_prio_1",
++ .offset = 0x97,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_2] = {
++ .name = "tx_green_prio_2",
++ .offset = 0x98,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_3] = {
++ .name = "tx_green_prio_3",
++ .offset = 0x99,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_4] = {
++ .name = "tx_green_prio_4",
++ .offset = 0x9A,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_5] = {
++ .name = "tx_green_prio_5",
++ .offset = 0x9B,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_6] = {
++ .name = "tx_green_prio_6",
++ .offset = 0x9C,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_7] = {
++ .name = "tx_green_prio_7",
++ .offset = 0x9D,
++ },
++ [OCELOT_STAT_TX_AGED] = {
++ .name = "tx_aged",
++ .offset = 0x9E,
++ },
++ [OCELOT_STAT_DROP_LOCAL] = {
++ .name = "drop_local",
++ .offset = 0x100,
++ },
++ [OCELOT_STAT_DROP_TAIL] = {
++ .name = "drop_tail",
++ .offset = 0x101,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_0] = {
++ .name = "drop_yellow_prio_0",
++ .offset = 0x102,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_1] = {
++ .name = "drop_yellow_prio_1",
++ .offset = 0x103,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_2] = {
++ .name = "drop_yellow_prio_2",
++ .offset = 0x104,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_3] = {
++ .name = "drop_yellow_prio_3",
++ .offset = 0x105,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_4] = {
++ .name = "drop_yellow_prio_4",
++ .offset = 0x106,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_5] = {
++ .name = "drop_yellow_prio_5",
++ .offset = 0x107,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_6] = {
++ .name = "drop_yellow_prio_6",
++ .offset = 0x108,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_7] = {
++ .name = "drop_yellow_prio_7",
++ .offset = 0x109,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_0] = {
++ .name = "drop_green_prio_0",
++ .offset = 0x10A,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_1] = {
++ .name = "drop_green_prio_1",
++ .offset = 0x10B,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_2] = {
++ .name = "drop_green_prio_2",
++ .offset = 0x10C,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_3] = {
++ .name = "drop_green_prio_3",
++ .offset = 0x10D,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_4] = {
++ .name = "drop_green_prio_4",
++ .offset = 0x10E,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_5] = {
++ .name = "drop_green_prio_5",
++ .offset = 0x10F,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_6] = {
++ .name = "drop_green_prio_6",
++ .offset = 0x110,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_7] = {
++ .name = "drop_green_prio_7",
++ .offset = 0x111,
++ },
+ };
+
+ static const struct vcap_field vsc9959_vcap_es0_keys[] = {
+@@ -2172,7 +2455,7 @@ static void vsc9959_psfp_sgi_table_del(struct ocelot *ocelot,
+ static void vsc9959_psfp_counters_get(struct ocelot *ocelot, u32 index,
+ struct felix_stream_filter_counters *counters)
+ {
+- mutex_lock(&ocelot->stats_lock);
++ spin_lock(&ocelot->stats_lock);
+
+ ocelot_rmw(ocelot, SYS_STAT_CFG_STAT_VIEW(index),
+ SYS_STAT_CFG_STAT_VIEW_M,
+@@ -2189,7 +2472,7 @@ static void vsc9959_psfp_counters_get(struct ocelot *ocelot, u32 index,
+ SYS_STAT_CFG_STAT_CLEAR_SHOT(0x10),
+ SYS_STAT_CFG);
+
+- mutex_unlock(&ocelot->stats_lock);
++ spin_unlock(&ocelot->stats_lock);
+ }
+
+ static int vsc9959_psfp_filter_add(struct ocelot *ocelot, int port,
+diff --git a/drivers/net/dsa/ocelot/seville_vsc9953.c b/drivers/net/dsa/ocelot/seville_vsc9953.c
+index ea06492113568..fe5d4642d0bcb 100644
+--- a/drivers/net/dsa/ocelot/seville_vsc9953.c
++++ b/drivers/net/dsa/ocelot/seville_vsc9953.c
+@@ -277,19 +277,21 @@ static const u32 vsc9953_sys_regmap[] = {
+ REG(SYS_COUNT_RX_64, 0x000024),
+ REG(SYS_COUNT_RX_65_127, 0x000028),
+ REG(SYS_COUNT_RX_128_255, 0x00002c),
+- REG(SYS_COUNT_RX_256_1023, 0x000030),
+- REG(SYS_COUNT_RX_1024_1526, 0x000034),
+- REG(SYS_COUNT_RX_1527_MAX, 0x000038),
++ REG(SYS_COUNT_RX_256_511, 0x000030),
++ REG(SYS_COUNT_RX_512_1023, 0x000034),
++ REG(SYS_COUNT_RX_1024_1526, 0x000038),
++ REG(SYS_COUNT_RX_1527_MAX, 0x00003c),
+ REG(SYS_COUNT_RX_LONGS, 0x000048),
+ REG(SYS_COUNT_TX_OCTETS, 0x000100),
+ REG(SYS_COUNT_TX_COLLISION, 0x000110),
+ REG(SYS_COUNT_TX_DROPS, 0x000114),
+ REG(SYS_COUNT_TX_64, 0x00011c),
+ REG(SYS_COUNT_TX_65_127, 0x000120),
+- REG(SYS_COUNT_TX_128_511, 0x000124),
+- REG(SYS_COUNT_TX_512_1023, 0x000128),
+- REG(SYS_COUNT_TX_1024_1526, 0x00012c),
+- REG(SYS_COUNT_TX_1527_MAX, 0x000130),
++ REG(SYS_COUNT_TX_128_255, 0x000124),
++ REG(SYS_COUNT_TX_256_511, 0x000128),
++ REG(SYS_COUNT_TX_512_1023, 0x00012c),
++ REG(SYS_COUNT_TX_1024_1526, 0x000130),
++ REG(SYS_COUNT_TX_1527_MAX, 0x000134),
+ REG(SYS_COUNT_TX_AGING, 0x000178),
+ REG(SYS_RESET_CFG, 0x000318),
+ REG_RESERVED(SYS_SR_ETYPE_CFG),
+@@ -543,101 +545,379 @@ static const struct reg_field vsc9953_regfields[REGFIELD_MAX] = {
+ [SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 11, 4),
+ };
+
+-static const struct ocelot_stat_layout vsc9953_stats_layout[] = {
+- { .offset = 0x00, .name = "rx_octets", },
+- { .offset = 0x01, .name = "rx_unicast", },
+- { .offset = 0x02, .name = "rx_multicast", },
+- { .offset = 0x03, .name = "rx_broadcast", },
+- { .offset = 0x04, .name = "rx_shorts", },
+- { .offset = 0x05, .name = "rx_fragments", },
+- { .offset = 0x06, .name = "rx_jabbers", },
+- { .offset = 0x07, .name = "rx_crc_align_errs", },
+- { .offset = 0x08, .name = "rx_sym_errs", },
+- { .offset = 0x09, .name = "rx_frames_below_65_octets", },
+- { .offset = 0x0A, .name = "rx_frames_65_to_127_octets", },
+- { .offset = 0x0B, .name = "rx_frames_128_to_255_octets", },
+- { .offset = 0x0C, .name = "rx_frames_256_to_511_octets", },
+- { .offset = 0x0D, .name = "rx_frames_512_to_1023_octets", },
+- { .offset = 0x0E, .name = "rx_frames_1024_to_1526_octets", },
+- { .offset = 0x0F, .name = "rx_frames_over_1526_octets", },
+- { .offset = 0x10, .name = "rx_pause", },
+- { .offset = 0x11, .name = "rx_control", },
+- { .offset = 0x12, .name = "rx_longs", },
+- { .offset = 0x13, .name = "rx_classified_drops", },
+- { .offset = 0x14, .name = "rx_red_prio_0", },
+- { .offset = 0x15, .name = "rx_red_prio_1", },
+- { .offset = 0x16, .name = "rx_red_prio_2", },
+- { .offset = 0x17, .name = "rx_red_prio_3", },
+- { .offset = 0x18, .name = "rx_red_prio_4", },
+- { .offset = 0x19, .name = "rx_red_prio_5", },
+- { .offset = 0x1A, .name = "rx_red_prio_6", },
+- { .offset = 0x1B, .name = "rx_red_prio_7", },
+- { .offset = 0x1C, .name = "rx_yellow_prio_0", },
+- { .offset = 0x1D, .name = "rx_yellow_prio_1", },
+- { .offset = 0x1E, .name = "rx_yellow_prio_2", },
+- { .offset = 0x1F, .name = "rx_yellow_prio_3", },
+- { .offset = 0x20, .name = "rx_yellow_prio_4", },
+- { .offset = 0x21, .name = "rx_yellow_prio_5", },
+- { .offset = 0x22, .name = "rx_yellow_prio_6", },
+- { .offset = 0x23, .name = "rx_yellow_prio_7", },
+- { .offset = 0x24, .name = "rx_green_prio_0", },
+- { .offset = 0x25, .name = "rx_green_prio_1", },
+- { .offset = 0x26, .name = "rx_green_prio_2", },
+- { .offset = 0x27, .name = "rx_green_prio_3", },
+- { .offset = 0x28, .name = "rx_green_prio_4", },
+- { .offset = 0x29, .name = "rx_green_prio_5", },
+- { .offset = 0x2A, .name = "rx_green_prio_6", },
+- { .offset = 0x2B, .name = "rx_green_prio_7", },
+- { .offset = 0x40, .name = "tx_octets", },
+- { .offset = 0x41, .name = "tx_unicast", },
+- { .offset = 0x42, .name = "tx_multicast", },
+- { .offset = 0x43, .name = "tx_broadcast", },
+- { .offset = 0x44, .name = "tx_collision", },
+- { .offset = 0x45, .name = "tx_drops", },
+- { .offset = 0x46, .name = "tx_pause", },
+- { .offset = 0x47, .name = "tx_frames_below_65_octets", },
+- { .offset = 0x48, .name = "tx_frames_65_to_127_octets", },
+- { .offset = 0x49, .name = "tx_frames_128_255_octets", },
+- { .offset = 0x4A, .name = "tx_frames_256_511_octets", },
+- { .offset = 0x4B, .name = "tx_frames_512_1023_octets", },
+- { .offset = 0x4C, .name = "tx_frames_1024_1526_octets", },
+- { .offset = 0x4D, .name = "tx_frames_over_1526_octets", },
+- { .offset = 0x4E, .name = "tx_yellow_prio_0", },
+- { .offset = 0x4F, .name = "tx_yellow_prio_1", },
+- { .offset = 0x50, .name = "tx_yellow_prio_2", },
+- { .offset = 0x51, .name = "tx_yellow_prio_3", },
+- { .offset = 0x52, .name = "tx_yellow_prio_4", },
+- { .offset = 0x53, .name = "tx_yellow_prio_5", },
+- { .offset = 0x54, .name = "tx_yellow_prio_6", },
+- { .offset = 0x55, .name = "tx_yellow_prio_7", },
+- { .offset = 0x56, .name = "tx_green_prio_0", },
+- { .offset = 0x57, .name = "tx_green_prio_1", },
+- { .offset = 0x58, .name = "tx_green_prio_2", },
+- { .offset = 0x59, .name = "tx_green_prio_3", },
+- { .offset = 0x5A, .name = "tx_green_prio_4", },
+- { .offset = 0x5B, .name = "tx_green_prio_5", },
+- { .offset = 0x5C, .name = "tx_green_prio_6", },
+- { .offset = 0x5D, .name = "tx_green_prio_7", },
+- { .offset = 0x5E, .name = "tx_aged", },
+- { .offset = 0x80, .name = "drop_local", },
+- { .offset = 0x81, .name = "drop_tail", },
+- { .offset = 0x82, .name = "drop_yellow_prio_0", },
+- { .offset = 0x83, .name = "drop_yellow_prio_1", },
+- { .offset = 0x84, .name = "drop_yellow_prio_2", },
+- { .offset = 0x85, .name = "drop_yellow_prio_3", },
+- { .offset = 0x86, .name = "drop_yellow_prio_4", },
+- { .offset = 0x87, .name = "drop_yellow_prio_5", },
+- { .offset = 0x88, .name = "drop_yellow_prio_6", },
+- { .offset = 0x89, .name = "drop_yellow_prio_7", },
+- { .offset = 0x8A, .name = "drop_green_prio_0", },
+- { .offset = 0x8B, .name = "drop_green_prio_1", },
+- { .offset = 0x8C, .name = "drop_green_prio_2", },
+- { .offset = 0x8D, .name = "drop_green_prio_3", },
+- { .offset = 0x8E, .name = "drop_green_prio_4", },
+- { .offset = 0x8F, .name = "drop_green_prio_5", },
+- { .offset = 0x90, .name = "drop_green_prio_6", },
+- { .offset = 0x91, .name = "drop_green_prio_7", },
+- OCELOT_STAT_END
++static const struct ocelot_stat_layout vsc9953_stats_layout[OCELOT_NUM_STATS] = {
++ [OCELOT_STAT_RX_OCTETS] = {
++ .name = "rx_octets",
++ .offset = 0x00,
++ },
++ [OCELOT_STAT_RX_UNICAST] = {
++ .name = "rx_unicast",
++ .offset = 0x01,
++ },
++ [OCELOT_STAT_RX_MULTICAST] = {
++ .name = "rx_multicast",
++ .offset = 0x02,
++ },
++ [OCELOT_STAT_RX_BROADCAST] = {
++ .name = "rx_broadcast",
++ .offset = 0x03,
++ },
++ [OCELOT_STAT_RX_SHORTS] = {
++ .name = "rx_shorts",
++ .offset = 0x04,
++ },
++ [OCELOT_STAT_RX_FRAGMENTS] = {
++ .name = "rx_fragments",
++ .offset = 0x05,
++ },
++ [OCELOT_STAT_RX_JABBERS] = {
++ .name = "rx_jabbers",
++ .offset = 0x06,
++ },
++ [OCELOT_STAT_RX_CRC_ALIGN_ERRS] = {
++ .name = "rx_crc_align_errs",
++ .offset = 0x07,
++ },
++ [OCELOT_STAT_RX_SYM_ERRS] = {
++ .name = "rx_sym_errs",
++ .offset = 0x08,
++ },
++ [OCELOT_STAT_RX_64] = {
++ .name = "rx_frames_below_65_octets",
++ .offset = 0x09,
++ },
++ [OCELOT_STAT_RX_65_127] = {
++ .name = "rx_frames_65_to_127_octets",
++ .offset = 0x0A,
++ },
++ [OCELOT_STAT_RX_128_255] = {
++ .name = "rx_frames_128_to_255_octets",
++ .offset = 0x0B,
++ },
++ [OCELOT_STAT_RX_256_511] = {
++ .name = "rx_frames_256_to_511_octets",
++ .offset = 0x0C,
++ },
++ [OCELOT_STAT_RX_512_1023] = {
++ .name = "rx_frames_512_to_1023_octets",
++ .offset = 0x0D,
++ },
++ [OCELOT_STAT_RX_1024_1526] = {
++ .name = "rx_frames_1024_to_1526_octets",
++ .offset = 0x0E,
++ },
++ [OCELOT_STAT_RX_1527_MAX] = {
++ .name = "rx_frames_over_1526_octets",
++ .offset = 0x0F,
++ },
++ [OCELOT_STAT_RX_PAUSE] = {
++ .name = "rx_pause",
++ .offset = 0x10,
++ },
++ [OCELOT_STAT_RX_CONTROL] = {
++ .name = "rx_control",
++ .offset = 0x11,
++ },
++ [OCELOT_STAT_RX_LONGS] = {
++ .name = "rx_longs",
++ .offset = 0x12,
++ },
++ [OCELOT_STAT_RX_CLASSIFIED_DROPS] = {
++ .name = "rx_classified_drops",
++ .offset = 0x13,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_0] = {
++ .name = "rx_red_prio_0",
++ .offset = 0x14,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_1] = {
++ .name = "rx_red_prio_1",
++ .offset = 0x15,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_2] = {
++ .name = "rx_red_prio_2",
++ .offset = 0x16,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_3] = {
++ .name = "rx_red_prio_3",
++ .offset = 0x17,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_4] = {
++ .name = "rx_red_prio_4",
++ .offset = 0x18,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_5] = {
++ .name = "rx_red_prio_5",
++ .offset = 0x19,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_6] = {
++ .name = "rx_red_prio_6",
++ .offset = 0x1A,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_7] = {
++ .name = "rx_red_prio_7",
++ .offset = 0x1B,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_0] = {
++ .name = "rx_yellow_prio_0",
++ .offset = 0x1C,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_1] = {
++ .name = "rx_yellow_prio_1",
++ .offset = 0x1D,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_2] = {
++ .name = "rx_yellow_prio_2",
++ .offset = 0x1E,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_3] = {
++ .name = "rx_yellow_prio_3",
++ .offset = 0x1F,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_4] = {
++ .name = "rx_yellow_prio_4",
++ .offset = 0x20,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_5] = {
++ .name = "rx_yellow_prio_5",
++ .offset = 0x21,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_6] = {
++ .name = "rx_yellow_prio_6",
++ .offset = 0x22,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_7] = {
++ .name = "rx_yellow_prio_7",
++ .offset = 0x23,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_0] = {
++ .name = "rx_green_prio_0",
++ .offset = 0x24,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_1] = {
++ .name = "rx_green_prio_1",
++ .offset = 0x25,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_2] = {
++ .name = "rx_green_prio_2",
++ .offset = 0x26,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_3] = {
++ .name = "rx_green_prio_3",
++ .offset = 0x27,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_4] = {
++ .name = "rx_green_prio_4",
++ .offset = 0x28,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_5] = {
++ .name = "rx_green_prio_5",
++ .offset = 0x29,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_6] = {
++ .name = "rx_green_prio_6",
++ .offset = 0x2A,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_7] = {
++ .name = "rx_green_prio_7",
++ .offset = 0x2B,
++ },
++ [OCELOT_STAT_TX_OCTETS] = {
++ .name = "tx_octets",
++ .offset = 0x40,
++ },
++ [OCELOT_STAT_TX_UNICAST] = {
++ .name = "tx_unicast",
++ .offset = 0x41,
++ },
++ [OCELOT_STAT_TX_MULTICAST] = {
++ .name = "tx_multicast",
++ .offset = 0x42,
++ },
++ [OCELOT_STAT_TX_BROADCAST] = {
++ .name = "tx_broadcast",
++ .offset = 0x43,
++ },
++ [OCELOT_STAT_TX_COLLISION] = {
++ .name = "tx_collision",
++ .offset = 0x44,
++ },
++ [OCELOT_STAT_TX_DROPS] = {
++ .name = "tx_drops",
++ .offset = 0x45,
++ },
++ [OCELOT_STAT_TX_PAUSE] = {
++ .name = "tx_pause",
++ .offset = 0x46,
++ },
++ [OCELOT_STAT_TX_64] = {
++ .name = "tx_frames_below_65_octets",
++ .offset = 0x47,
++ },
++ [OCELOT_STAT_TX_65_127] = {
++ .name = "tx_frames_65_to_127_octets",
++ .offset = 0x48,
++ },
++ [OCELOT_STAT_TX_128_255] = {
++ .name = "tx_frames_128_255_octets",
++ .offset = 0x49,
++ },
++ [OCELOT_STAT_TX_256_511] = {
++ .name = "tx_frames_256_511_octets",
++ .offset = 0x4A,
++ },
++ [OCELOT_STAT_TX_512_1023] = {
++ .name = "tx_frames_512_1023_octets",
++ .offset = 0x4B,
++ },
++ [OCELOT_STAT_TX_1024_1526] = {
++ .name = "tx_frames_1024_1526_octets",
++ .offset = 0x4C,
++ },
++ [OCELOT_STAT_TX_1527_MAX] = {
++ .name = "tx_frames_over_1526_octets",
++ .offset = 0x4D,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_0] = {
++ .name = "tx_yellow_prio_0",
++ .offset = 0x4E,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_1] = {
++ .name = "tx_yellow_prio_1",
++ .offset = 0x4F,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_2] = {
++ .name = "tx_yellow_prio_2",
++ .offset = 0x50,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_3] = {
++ .name = "tx_yellow_prio_3",
++ .offset = 0x51,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_4] = {
++ .name = "tx_yellow_prio_4",
++ .offset = 0x52,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_5] = {
++ .name = "tx_yellow_prio_5",
++ .offset = 0x53,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_6] = {
++ .name = "tx_yellow_prio_6",
++ .offset = 0x54,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_7] = {
++ .name = "tx_yellow_prio_7",
++ .offset = 0x55,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_0] = {
++ .name = "tx_green_prio_0",
++ .offset = 0x56,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_1] = {
++ .name = "tx_green_prio_1",
++ .offset = 0x57,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_2] = {
++ .name = "tx_green_prio_2",
++ .offset = 0x58,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_3] = {
++ .name = "tx_green_prio_3",
++ .offset = 0x59,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_4] = {
++ .name = "tx_green_prio_4",
++ .offset = 0x5A,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_5] = {
++ .name = "tx_green_prio_5",
++ .offset = 0x5B,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_6] = {
++ .name = "tx_green_prio_6",
++ .offset = 0x5C,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_7] = {
++ .name = "tx_green_prio_7",
++ .offset = 0x5D,
++ },
++ [OCELOT_STAT_TX_AGED] = {
++ .name = "tx_aged",
++ .offset = 0x5E,
++ },
++ [OCELOT_STAT_DROP_LOCAL] = {
++ .name = "drop_local",
++ .offset = 0x80,
++ },
++ [OCELOT_STAT_DROP_TAIL] = {
++ .name = "drop_tail",
++ .offset = 0x81,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_0] = {
++ .name = "drop_yellow_prio_0",
++ .offset = 0x82,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_1] = {
++ .name = "drop_yellow_prio_1",
++ .offset = 0x83,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_2] = {
++ .name = "drop_yellow_prio_2",
++ .offset = 0x84,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_3] = {
++ .name = "drop_yellow_prio_3",
++ .offset = 0x85,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_4] = {
++ .name = "drop_yellow_prio_4",
++ .offset = 0x86,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_5] = {
++ .name = "drop_yellow_prio_5",
++ .offset = 0x87,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_6] = {
++ .name = "drop_yellow_prio_6",
++ .offset = 0x88,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_7] = {
++ .name = "drop_yellow_prio_7",
++ .offset = 0x89,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_0] = {
++ .name = "drop_green_prio_0",
++ .offset = 0x8A,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_1] = {
++ .name = "drop_green_prio_1",
++ .offset = 0x8B,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_2] = {
++ .name = "drop_green_prio_2",
++ .offset = 0x8C,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_3] = {
++ .name = "drop_green_prio_3",
++ .offset = 0x8D,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_4] = {
++ .name = "drop_green_prio_4",
++ .offset = 0x8E,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_5] = {
++ .name = "drop_green_prio_5",
++ .offset = 0x8F,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_6] = {
++ .name = "drop_green_prio_6",
++ .offset = 0x90,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_7] = {
++ .name = "drop_green_prio_7",
++ .offset = 0x91,
++ },
+ };
+
+ static const struct vcap_field vsc9953_vcap_es0_keys[] = {
+diff --git a/drivers/net/dsa/sja1105/sja1105_devlink.c b/drivers/net/dsa/sja1105/sja1105_devlink.c
+index 0569ff066634d..10c6fea1227fa 100644
+--- a/drivers/net/dsa/sja1105/sja1105_devlink.c
++++ b/drivers/net/dsa/sja1105/sja1105_devlink.c
+@@ -93,7 +93,7 @@ static int sja1105_setup_devlink_regions(struct dsa_switch *ds)
+
+ region = dsa_devlink_region_create(ds, ops, 1, size);
+ if (IS_ERR(region)) {
+- while (i-- >= 0)
++ while (--i >= 0)
+ dsa_devlink_region_destroy(priv->regions[i]);
+ return PTR_ERR(region);
+ }
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index e11cc29d3264c..06508eebb5853 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -265,12 +265,10 @@ static void aq_nic_service_timer_cb(struct timer_list *t)
+ static void aq_nic_polling_timer_cb(struct timer_list *t)
+ {
+ struct aq_nic_s *self = from_timer(self, t, polling_timer);
+- struct aq_vec_s *aq_vec = NULL;
+ unsigned int i = 0U;
+
+- for (i = 0U, aq_vec = self->aq_vec[0];
+- self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i])
+- aq_vec_isr(i, (void *)aq_vec);
++ for (i = 0U; self->aq_vecs > i; ++i)
++ aq_vec_isr(i, (void *)self->aq_vec[i]);
+
+ mod_timer(&self->polling_timer, jiffies +
+ AQ_CFG_POLLING_TIMER_INTERVAL);
+@@ -1014,7 +1012,6 @@ int aq_nic_get_regs_count(struct aq_nic_s *self)
+
+ u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
+ {
+- struct aq_vec_s *aq_vec = NULL;
+ struct aq_stats_s *stats;
+ unsigned int count = 0U;
+ unsigned int i = 0U;
+@@ -1064,11 +1061,11 @@ u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
+ data += i;
+
+ for (tc = 0U; tc < self->aq_nic_cfg.tcs; tc++) {
+- for (i = 0U, aq_vec = self->aq_vec[0];
+- aq_vec && self->aq_vecs > i;
+- ++i, aq_vec = self->aq_vec[i]) {
++ for (i = 0U; self->aq_vecs > i; ++i) {
++ if (!self->aq_vec[i])
++ break;
+ data += count;
+- count = aq_vec_get_sw_stats(aq_vec, tc, data);
++ count = aq_vec_get_sw_stats(self->aq_vec[i], tc, data);
+ }
+ }
+
+@@ -1382,7 +1379,6 @@ int aq_nic_set_loopback(struct aq_nic_s *self)
+
+ int aq_nic_stop(struct aq_nic_s *self)
+ {
+- struct aq_vec_s *aq_vec = NULL;
+ unsigned int i = 0U;
+
+ netif_tx_disable(self->ndev);
+@@ -1400,9 +1396,8 @@ int aq_nic_stop(struct aq_nic_s *self)
+
+ aq_ptp_irq_free(self);
+
+- for (i = 0U, aq_vec = self->aq_vec[0];
+- self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i])
+- aq_vec_stop(aq_vec);
++ for (i = 0U; self->aq_vecs > i; ++i)
++ aq_vec_stop(self->aq_vec[i]);
+
+ aq_ptp_ring_stop(self);
+
+diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
+index 2dfc1e32bbb31..93580484a3f4e 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.c
++++ b/drivers/net/ethernet/broadcom/bgmac.c
+@@ -189,8 +189,8 @@ static netdev_tx_t bgmac_dma_tx_add(struct bgmac *bgmac,
+ }
+
+ slot->skb = skb;
+- ring->end += nr_frags + 1;
+ netdev_sent_queue(net_dev, skb->len);
++ ring->end += nr_frags + 1;
+
+ wmb();
+
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index c888ddee1fc41..7ded559842e83 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -393,6 +393,9 @@ int bcmgenet_mii_probe(struct net_device *dev)
+ if (priv->internal_phy && !GENET_IS_V5(priv))
+ dev->phydev->irq = PHY_MAC_INTERRUPT;
+
++ /* Indicate that the MAC is responsible for PHY PM */
++ dev->phydev->mac_managed_pm = true;
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+index 26433a62d7f0d..fed5f93bf620a 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+@@ -497,7 +497,7 @@ struct cpl_t5_pass_accept_rpl {
+ __be32 opt2;
+ __be64 opt0;
+ __be32 iss;
+- __be32 rsvd[3];
++ __be32 rsvd;
+ };
+
+ struct cpl_act_open_req {
+diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
+index cb069a0af7b92..251ea16ed0fae 100644
+--- a/drivers/net/ethernet/engleder/tsnep_main.c
++++ b/drivers/net/ethernet/engleder/tsnep_main.c
+@@ -340,14 +340,14 @@ static int tsnep_tx_map(struct sk_buff *skb, struct tsnep_tx *tx, int count)
+ return 0;
+ }
+
+-static void tsnep_tx_unmap(struct tsnep_tx *tx, int count)
++static void tsnep_tx_unmap(struct tsnep_tx *tx, int index, int count)
+ {
+ struct device *dmadev = tx->adapter->dmadev;
+ struct tsnep_tx_entry *entry;
+ int i;
+
+ for (i = 0; i < count; i++) {
+- entry = &tx->entry[(tx->read + i) % TSNEP_RING_SIZE];
++ entry = &tx->entry[(index + i) % TSNEP_RING_SIZE];
+
+ if (entry->len) {
+ if (i == 0)
+@@ -395,7 +395,7 @@ static netdev_tx_t tsnep_xmit_frame_ring(struct sk_buff *skb,
+
+ retval = tsnep_tx_map(skb, tx, count);
+ if (retval != 0) {
+- tsnep_tx_unmap(tx, count);
++ tsnep_tx_unmap(tx, tx->write, count);
+ dev_kfree_skb_any(entry->skb);
+ entry->skb = NULL;
+
+@@ -464,7 +464,7 @@ static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)
+ if (skb_shinfo(entry->skb)->nr_frags > 0)
+ count += skb_shinfo(entry->skb)->nr_frags;
+
+- tsnep_tx_unmap(tx, count);
++ tsnep_tx_unmap(tx, tx->read, count);
+
+ if ((skb_shinfo(entry->skb)->tx_flags & SKBTX_IN_PROGRESS) &&
+ (__le32_to_cpu(entry->desc_wb->properties) &
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index cd9ec80522e75..75d51572693d6 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -1660,8 +1660,8 @@ static int dpaa2_eth_add_bufs(struct dpaa2_eth_priv *priv,
+ buf_array[i] = addr;
+
+ /* tracing point */
+- trace_dpaa2_eth_buf_seed(priv->net_dev,
+- page, DPAA2_ETH_RX_BUF_RAW_SIZE,
++ trace_dpaa2_eth_buf_seed(priv->net_dev, page_address(page),
++ DPAA2_ETH_RX_BUF_RAW_SIZE,
+ addr, priv->rx_buf_size,
+ bpid);
+ }
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index 7d49c28215f31..3dc3c0b626c21 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -135,11 +135,7 @@ static int fec_ptp_enable_pps(struct fec_enet_private *fep, uint enable)
+ * NSEC_PER_SEC - ts.tv_nsec. Add the remaining nanoseconds
+ * to current timer would be next second.
+ */
+- tempval = readl(fep->hwp + FEC_ATIME_CTRL);
+- tempval |= FEC_T_CTRL_CAPTURE;
+- writel(tempval, fep->hwp + FEC_ATIME_CTRL);
+-
+- tempval = readl(fep->hwp + FEC_ATIME);
++ tempval = fep->cc.read(&fep->cc);
+ /* Convert the ptp local counter to 1588 timestamp */
+ ns = timecounter_cyc2time(&fep->tc, tempval);
+ ts = ns_to_timespec64(ns);
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 685556e968f20..71a8e1698ed48 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -384,7 +384,9 @@ static void i40e_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+ set_bit(__I40E_GLOBAL_RESET_REQUESTED, pf->state);
+ break;
+ default:
+- netdev_err(netdev, "tx_timeout recovery unsuccessful\n");
++ netdev_err(netdev, "tx_timeout recovery unsuccessful, device is in non-recoverable state.\n");
++ set_bit(__I40E_DOWN_REQUESTED, pf->state);
++ set_bit(__I40E_VSI_DOWN_REQUESTED, vsi->state);
+ break;
+ }
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 7bc1174edf6b9..af69ccc6e8d2f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -3204,11 +3204,13 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
+
+ protocol = vlan_get_protocol(skb);
+
+- if (eth_p_mpls(protocol))
++ if (eth_p_mpls(protocol)) {
+ ip.hdr = skb_inner_network_header(skb);
+- else
++ l4.hdr = skb_checksum_start(skb);
++ } else {
+ ip.hdr = skb_network_header(skb);
+- l4.hdr = skb_checksum_start(skb);
++ l4.hdr = skb_transport_header(skb);
++ }
+
+ /* set the tx_flags to indicate the IP protocol type. this is
+ * required so that checksum header computation below is accurate.
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_adminq.c b/drivers/net/ethernet/intel/iavf/iavf_adminq.c
+index cd4e6a22d0f9f..9ffbd24d83cb6 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_adminq.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_adminq.c
+@@ -324,6 +324,7 @@ static enum iavf_status iavf_config_arq_regs(struct iavf_hw *hw)
+ static enum iavf_status iavf_init_asq(struct iavf_hw *hw)
+ {
+ enum iavf_status ret_code = 0;
++ int i;
+
+ if (hw->aq.asq.count > 0) {
+ /* queue already initialized */
+@@ -354,12 +355,17 @@ static enum iavf_status iavf_init_asq(struct iavf_hw *hw)
+ /* initialize base registers */
+ ret_code = iavf_config_asq_regs(hw);
+ if (ret_code)
+- goto init_adminq_free_rings;
++ goto init_free_asq_bufs;
+
+ /* success! */
+ hw->aq.asq.count = hw->aq.num_asq_entries;
+ goto init_adminq_exit;
+
++init_free_asq_bufs:
++ for (i = 0; i < hw->aq.num_asq_entries; i++)
++ iavf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
++ iavf_free_virt_mem(hw, &hw->aq.asq.dma_head);
++
+ init_adminq_free_rings:
+ iavf_free_adminq_asq(hw);
+
+@@ -383,6 +389,7 @@ init_adminq_exit:
+ static enum iavf_status iavf_init_arq(struct iavf_hw *hw)
+ {
+ enum iavf_status ret_code = 0;
++ int i;
+
+ if (hw->aq.arq.count > 0) {
+ /* queue already initialized */
+@@ -413,12 +420,16 @@ static enum iavf_status iavf_init_arq(struct iavf_hw *hw)
+ /* initialize base registers */
+ ret_code = iavf_config_arq_regs(hw);
+ if (ret_code)
+- goto init_adminq_free_rings;
++ goto init_free_arq_bufs;
+
+ /* success! */
+ hw->aq.arq.count = hw->aq.num_arq_entries;
+ goto init_adminq_exit;
+
++init_free_arq_bufs:
++ for (i = 0; i < hw->aq.num_arq_entries; i++)
++ iavf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
++ iavf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+ init_adminq_free_rings:
+ iavf_free_adminq_arq(hw);
+
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 3dbfaead2ac74..6d159334da9ec 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -2281,7 +2281,7 @@ static void iavf_init_get_resources(struct iavf_adapter *adapter)
+ err = iavf_get_vf_config(adapter);
+ if (err == -EALREADY) {
+ err = iavf_send_vf_config_msg(adapter);
+- goto err_alloc;
++ goto err;
+ } else if (err == -EINVAL) {
+ /* We only get -EINVAL if the device is in a very bad
+ * state or if we've been disabled for previous bad
+@@ -2998,12 +2998,15 @@ continue_reset:
+
+ return;
+ reset_err:
++ if (running) {
++ set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
++ iavf_free_traffic_irqs(adapter);
++ }
++ iavf_disable_vf(adapter);
++
+ mutex_unlock(&adapter->client_lock);
+ mutex_unlock(&adapter->crit_lock);
+- if (running)
+- iavf_change_state(adapter, __IAVF_RUNNING);
+ dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");
+- iavf_close(netdev);
+ }
+
+ /**
+@@ -3986,8 +3989,17 @@ static int iavf_open(struct net_device *netdev)
+ return -EIO;
+ }
+
+- while (!mutex_trylock(&adapter->crit_lock))
++ while (!mutex_trylock(&adapter->crit_lock)) {
++ /* If we are in __IAVF_INIT_CONFIG_ADAPTER state the crit_lock
++ * is already taken and iavf_open is called from an upper
++ * device's notifier reacting on NETDEV_REGISTER event.
++ * We have to leave here to avoid dead lock.
++ */
++ if (adapter->state == __IAVF_INIT_CONFIG_ADAPTER)
++ return -EBUSY;
++
+ usleep_range(500, 1000);
++ }
+
+ if (adapter->state != __IAVF_DOWN) {
+ err = -EBUSY;
+diff --git a/drivers/net/ethernet/intel/ice/ice_fltr.c b/drivers/net/ethernet/intel/ice/ice_fltr.c
+index 85a94483c2edc..40e678cfb5078 100644
+--- a/drivers/net/ethernet/intel/ice/ice_fltr.c
++++ b/drivers/net/ethernet/intel/ice/ice_fltr.c
+@@ -62,7 +62,7 @@ ice_fltr_set_vlan_vsi_promisc(struct ice_hw *hw, struct ice_vsi *vsi,
+ int result;
+
+ result = ice_set_vlan_vsi_promisc(hw, vsi->idx, promisc_mask, false);
+- if (result)
++ if (result && result != -EEXIST)
+ dev_err(ice_pf_to_dev(pf),
+ "Error setting promisc mode on VSI %i (rc=%d)\n",
+ vsi->vsi_num, result);
+@@ -86,7 +86,7 @@ ice_fltr_clear_vlan_vsi_promisc(struct ice_hw *hw, struct ice_vsi *vsi,
+ int result;
+
+ result = ice_set_vlan_vsi_promisc(hw, vsi->idx, promisc_mask, true);
+- if (result)
++ if (result && result != -EEXIST)
+ dev_err(ice_pf_to_dev(pf),
+ "Error clearing promisc mode on VSI %i (rc=%d)\n",
+ vsi->vsi_num, result);
+@@ -109,7 +109,7 @@ ice_fltr_clear_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+ int result;
+
+ result = ice_clear_vsi_promisc(hw, vsi_handle, promisc_mask, vid);
+- if (result)
++ if (result && result != -EEXIST)
+ dev_err(ice_pf_to_dev(pf),
+ "Error clearing promisc mode on VSI %i for VID %u (rc=%d)\n",
+ ice_get_hw_vsi_num(hw, vsi_handle), vid, result);
+@@ -132,7 +132,7 @@ ice_fltr_set_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+ int result;
+
+ result = ice_set_vsi_promisc(hw, vsi_handle, promisc_mask, vid);
+- if (result)
++ if (result && result != -EEXIST)
+ dev_err(ice_pf_to_dev(pf),
+ "Error setting promisc mode on VSI %i for VID %u (rc=%d)\n",
+ ice_get_hw_vsi_num(hw, vsi_handle), vid, result);
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index f7f9c973ec54d..d6aafa272fb0b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -3178,7 +3178,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
+
+ pf = vsi->back;
+ vtype = vsi->type;
+- if (WARN_ON(vtype == ICE_VSI_VF) && !vsi->vf)
++ if (WARN_ON(vtype == ICE_VSI_VF && !vsi->vf))
+ return -EINVAL;
+
+ ice_vsi_init_vlan_ops(vsi);
+@@ -4078,7 +4078,11 @@ int ice_vsi_del_vlan_zero(struct ice_vsi *vsi)
+ if (err && err != -EEXIST)
+ return err;
+
+- return 0;
++ /* when deleting the last VLAN filter, make sure to disable the VLAN
++ * promisc mode so the filter isn't left by accident
++ */
++ return ice_clear_vsi_promisc(&vsi->back->hw, vsi->idx,
++ ICE_MCAST_VLAN_PROMISC_BITS, 0);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index bc68dc5c6927d..bfd97a9a8f2e0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -267,8 +267,10 @@ static int ice_set_promisc(struct ice_vsi *vsi, u8 promisc_m)
+ status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx,
+ promisc_m, 0);
+ }
++ if (status && status != -EEXIST)
++ return status;
+
+- return status;
++ return 0;
+ }
+
+ /**
+@@ -3572,6 +3574,14 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid)
+ while (test_and_set_bit(ICE_CFG_BUSY, vsi->state))
+ usleep_range(1000, 2000);
+
++ ret = ice_clear_vsi_promisc(&vsi->back->hw, vsi->idx,
++ ICE_MCAST_VLAN_PROMISC_BITS, vid);
++ if (ret) {
++ netdev_err(netdev, "Error clearing multicast promiscuous mode on VSI %i\n",
++ vsi->vsi_num);
++ vsi->current_netdev_flags |= IFF_ALLMULTI;
++ }
++
+ vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
+
+ /* Make sure VLAN delete is successful before updating VLAN
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 9b2872e891518..b78a79c058bfc 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -4414,6 +4414,13 @@ ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+ goto free_fltr_list;
+
+ list_for_each_entry(list_itr, &vsi_list_head, list_entry) {
++ /* Avoid enabling or disabling VLAN zero twice when in double
++ * VLAN mode
++ */
++ if (ice_is_dvm_ena(hw) &&
++ list_itr->fltr_info.l_data.vlan.tpid == 0)
++ continue;
++
+ vlan_id = list_itr->fltr_info.l_data.vlan.vlan_id;
+ if (rm_vlan_promisc)
+ status = ice_clear_vsi_promisc(hw, vsi_handle,
+@@ -4421,7 +4428,7 @@ ice_set_vlan_vsi_promisc(struct ice_hw *hw, u16 vsi_handle, u8 promisc_mask,
+ else
+ status = ice_set_vsi_promisc(hw, vsi_handle,
+ promisc_mask, vlan_id);
+- if (status)
++ if (status && status != -EEXIST)
+ break;
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+index 7adf9ddf129eb..7775aaa8cc439 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+@@ -505,8 +505,10 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
+
+ if (ice_is_vf_disabled(vf)) {
+ vsi = ice_get_vf_vsi(vf);
+- if (WARN_ON(!vsi))
++ if (!vsi) {
++ dev_dbg(dev, "VF is already removed\n");
+ return -EINVAL;
++ }
+ ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, vf->vf_id);
+ ice_vsi_stop_all_rx_rings(vsi);
+ dev_dbg(dev, "VF is already disabled, there is no need for resetting it, telling VM, all is fine %d\n",
+@@ -705,13 +707,16 @@ static int ice_cfg_mac_antispoof(struct ice_vsi *vsi, bool enable)
+ static int ice_vsi_ena_spoofchk(struct ice_vsi *vsi)
+ {
+ struct ice_vsi_vlan_ops *vlan_ops;
+- int err;
++ int err = 0;
+
+ vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);
+
+- err = vlan_ops->ena_tx_filtering(vsi);
+- if (err)
+- return err;
++ /* Allow VF with VLAN 0 only to send all tagged traffic */
++ if (vsi->type != ICE_VSI_VF || ice_vsi_has_non_zero_vlans(vsi)) {
++ err = vlan_ops->ena_tx_filtering(vsi);
++ if (err)
++ return err;
++ }
+
+ return ice_cfg_mac_antispoof(vsi, true);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index 24188ec594d5a..a241c0bdc1507 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -2264,6 +2264,15 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
+
+ /* Enable VLAN filtering on first non-zero VLAN */
+ if (!vlan_promisc && vid && !ice_is_dvm_ena(&pf->hw)) {
++ if (vf->spoofchk) {
++ status = vsi->inner_vlan_ops.ena_tx_filtering(vsi);
++ if (status) {
++ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++ dev_err(dev, "Enable VLAN anti-spoofing on VLAN ID: %d failed error-%d\n",
++ vid, status);
++ goto error_param;
++ }
++ }
+ if (vsi->inner_vlan_ops.ena_rx_filtering(vsi)) {
+ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ dev_err(dev, "Enable VLAN pruning on VLAN ID: %d failed error-%d\n",
+@@ -2309,8 +2318,10 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
+ }
+
+ /* Disable VLAN filtering when only VLAN 0 is left */
+- if (!ice_vsi_has_non_zero_vlans(vsi))
++ if (!ice_vsi_has_non_zero_vlans(vsi)) {
++ vsi->inner_vlan_ops.dis_tx_filtering(vsi);
+ vsi->inner_vlan_ops.dis_rx_filtering(vsi);
++ }
+
+ if (vlan_promisc)
+ ice_vf_dis_vlan_promisc(vsi, &vlan);
+@@ -2814,6 +2825,13 @@ ice_vc_del_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
+
+ if (vlan_promisc)
+ ice_vf_dis_vlan_promisc(vsi, &vlan);
++
++ /* Disable VLAN filtering when only VLAN 0 is left */
++ if (!ice_vsi_has_non_zero_vlans(vsi) && ice_is_dvm_ena(&vsi->back->hw)) {
++ err = vsi->outer_vlan_ops.dis_tx_filtering(vsi);
++ if (err)
++ return err;
++ }
+ }
+
+ vc_vlan = &vlan_fltr->inner;
+@@ -2829,8 +2847,17 @@ ice_vc_del_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
+ /* no support for VLAN promiscuous on inner VLAN unless
+ * we are in Single VLAN Mode (SVM)
+ */
+- if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc)
+- ice_vf_dis_vlan_promisc(vsi, &vlan);
++ if (!ice_is_dvm_ena(&vsi->back->hw)) {
++ if (vlan_promisc)
++ ice_vf_dis_vlan_promisc(vsi, &vlan);
++
++ /* Disable VLAN filtering when only VLAN 0 is left */
++ if (!ice_vsi_has_non_zero_vlans(vsi)) {
++ err = vsi->inner_vlan_ops.dis_tx_filtering(vsi);
++ if (err)
++ return err;
++ }
++ }
+ }
+ }
+
+@@ -2907,6 +2934,13 @@ ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
+ if (err)
+ return err;
+ }
++
++ /* Enable VLAN filtering on first non-zero VLAN */
++ if (vf->spoofchk && vlan.vid && ice_is_dvm_ena(&vsi->back->hw)) {
++ err = vsi->outer_vlan_ops.ena_tx_filtering(vsi);
++ if (err)
++ return err;
++ }
+ }
+
+ vc_vlan = &vlan_fltr->inner;
+@@ -2922,10 +2956,19 @@ ice_vc_add_vlans(struct ice_vf *vf, struct ice_vsi *vsi,
+ /* no support for VLAN promiscuous on inner VLAN unless
+ * we are in Single VLAN Mode (SVM)
+ */
+- if (!ice_is_dvm_ena(&vsi->back->hw) && vlan_promisc) {
+- err = ice_vf_ena_vlan_promisc(vsi, &vlan);
+- if (err)
+- return err;
++ if (!ice_is_dvm_ena(&vsi->back->hw)) {
++ if (vlan_promisc) {
++ err = ice_vf_ena_vlan_promisc(vsi, &vlan);
++ if (err)
++ return err;
++ }
++
++ /* Enable VLAN filtering on first non-zero VLAN */
++ if (vf->spoofchk && vlan.vid) {
++ err = vsi->inner_vlan_ops.ena_tx_filtering(vsi);
++ if (err)
++ return err;
++ }
+ }
+ }
+ }
+diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
+index 2d3daf022651c..015b781441149 100644
+--- a/drivers/net/ethernet/intel/igb/igb.h
++++ b/drivers/net/ethernet/intel/igb/igb.h
+@@ -664,6 +664,8 @@ struct igb_adapter {
+ struct igb_mac_addr *mac_table;
+ struct vf_mac_filter vf_macs;
+ struct vf_mac_filter *vf_mac_list;
++ /* lock for VF resources */
++ spinlock_t vfs_lock;
+ };
+
+ /* flags controlling PTP/1588 function */
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index c5f04c40284bf..281a3b21d4257 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -3637,6 +3637,7 @@ static int igb_disable_sriov(struct pci_dev *pdev)
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct igb_adapter *adapter = netdev_priv(netdev);
+ struct e1000_hw *hw = &adapter->hw;
++ unsigned long flags;
+
+ /* reclaim resources allocated to VFs */
+ if (adapter->vf_data) {
+@@ -3649,12 +3650,13 @@ static int igb_disable_sriov(struct pci_dev *pdev)
+ pci_disable_sriov(pdev);
+ msleep(500);
+ }
+-
++ spin_lock_irqsave(&adapter->vfs_lock, flags);
+ kfree(adapter->vf_mac_list);
+ adapter->vf_mac_list = NULL;
+ kfree(adapter->vf_data);
+ adapter->vf_data = NULL;
+ adapter->vfs_allocated_count = 0;
++ spin_unlock_irqrestore(&adapter->vfs_lock, flags);
+ wr32(E1000_IOVCTL, E1000_IOVCTL_REUSE_VFQ);
+ wrfl();
+ msleep(100);
+@@ -3814,7 +3816,9 @@ static void igb_remove(struct pci_dev *pdev)
+ igb_release_hw_control(adapter);
+
+ #ifdef CONFIG_PCI_IOV
++ rtnl_lock();
+ igb_disable_sriov(pdev);
++ rtnl_unlock();
+ #endif
+
+ unregister_netdev(netdev);
+@@ -3974,6 +3978,9 @@ static int igb_sw_init(struct igb_adapter *adapter)
+
+ spin_lock_init(&adapter->nfc_lock);
+ spin_lock_init(&adapter->stats64_lock);
++
++ /* init spinlock to avoid concurrency of VF resources */
++ spin_lock_init(&adapter->vfs_lock);
+ #ifdef CONFIG_PCI_IOV
+ switch (hw->mac.type) {
+ case e1000_82576:
+@@ -7924,8 +7931,10 @@ unlock:
+ static void igb_msg_task(struct igb_adapter *adapter)
+ {
+ struct e1000_hw *hw = &adapter->hw;
++ unsigned long flags;
+ u32 vf;
+
++ spin_lock_irqsave(&adapter->vfs_lock, flags);
+ for (vf = 0; vf < adapter->vfs_allocated_count; vf++) {
+ /* process any reset requests */
+ if (!igb_check_for_rst(hw, vf))
+@@ -7939,6 +7948,7 @@ static void igb_msg_task(struct igb_adapter *adapter)
+ if (!igb_check_for_ack(hw, vf))
+ igb_rcv_ack_from_vf(adapter, vf);
+ }
++ spin_unlock_irqrestore(&adapter->vfs_lock, flags);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 54e1b27a7dfec..1484d332e5949 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -2564,6 +2564,12 @@ static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc)
+ rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NPA);
+ rvu_reset_lmt_map_tbl(rvu, pcifunc);
+ rvu_detach_rsrcs(rvu, NULL, pcifunc);
++ /* In scenarios where PF/VF drivers detach NIXLF without freeing MCAM
++ * entries, check and free the MCAM entries explicitly to avoid leak.
++ * Since LF is detached use LF number as -1.
++ */
++ rvu_npc_free_mcam_entries(rvu, pcifunc, -1);
++
+ mutex_unlock(&rvu->flr_lock);
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 3a31fb8cc1554..13f8dfaa2ecb1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -1096,6 +1096,9 @@ static void npc_enadis_default_entries(struct rvu *rvu, u16 pcifunc,
+
+ void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
+ {
++ if (nixlf < 0)
++ return;
++
+ npc_enadis_default_entries(rvu, pcifunc, nixlf, false);
+
+ /* Delete multicast and promisc MCAM entries */
+@@ -1107,6 +1110,9 @@ void rvu_npc_disable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
+
+ void rvu_npc_enable_default_entries(struct rvu *rvu, u16 pcifunc, int nixlf)
+ {
++ if (nixlf < 0)
++ return;
++
+ /* Enables only broadcast match entry. Promisc/Allmulti are enabled
+ * in set_rx_mode mbox handler.
+ */
+@@ -1650,7 +1656,7 @@ static void npc_load_kpu_profile(struct rvu *rvu)
+ * Firmware database method.
+ * Default KPU profile.
+ */
+- if (!request_firmware(&fw, kpu_profile, rvu->dev)) {
++ if (!request_firmware_direct(&fw, kpu_profile, rvu->dev)) {
+ dev_info(rvu->dev, "Loading KPU profile from firmware: %s\n",
+ kpu_profile);
+ rvu->kpu_fwdata = kzalloc(fw->size, GFP_KERNEL);
+@@ -1915,6 +1921,7 @@ static void rvu_npc_hw_init(struct rvu *rvu, int blkaddr)
+
+ static void rvu_npc_setup_interfaces(struct rvu *rvu, int blkaddr)
+ {
++ struct npc_mcam_kex *mkex = rvu->kpu.mkex;
+ struct npc_mcam *mcam = &rvu->hw->mcam;
+ struct rvu_hwinfo *hw = rvu->hw;
+ u64 nibble_ena, rx_kex, tx_kex;
+@@ -1927,15 +1934,15 @@ static void rvu_npc_setup_interfaces(struct rvu *rvu, int blkaddr)
+ mcam->counters.max--;
+ mcam->rx_miss_act_cntr = mcam->counters.max;
+
+- rx_kex = npc_mkex_default.keyx_cfg[NIX_INTF_RX];
+- tx_kex = npc_mkex_default.keyx_cfg[NIX_INTF_TX];
++ rx_kex = mkex->keyx_cfg[NIX_INTF_RX];
++ tx_kex = mkex->keyx_cfg[NIX_INTF_TX];
+ nibble_ena = FIELD_GET(NPC_PARSE_NIBBLE, rx_kex);
+
+ nibble_ena = rvu_npc_get_tx_nibble_cfg(rvu, nibble_ena);
+ if (nibble_ena) {
+ tx_kex &= ~NPC_PARSE_NIBBLE;
+ tx_kex |= FIELD_PREP(NPC_PARSE_NIBBLE, nibble_ena);
+- npc_mkex_default.keyx_cfg[NIX_INTF_TX] = tx_kex;
++ mkex->keyx_cfg[NIX_INTF_TX] = tx_kex;
+ }
+
+ /* Configure RX interfaces */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+index 19c53e591d0da..64654bd118845 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+@@ -445,7 +445,8 @@ do { \
+ NPC_SCAN_HDR(NPC_VLAN_TAG1, NPC_LID_LB, NPC_LT_LB_CTAG, 2, 2);
+ NPC_SCAN_HDR(NPC_VLAN_TAG2, NPC_LID_LB, NPC_LT_LB_STAG_QINQ, 2, 2);
+ NPC_SCAN_HDR(NPC_DMAC, NPC_LID_LA, la_ltype, la_start, 6);
+- NPC_SCAN_HDR(NPC_SMAC, NPC_LID_LA, la_ltype, la_start, 6);
++ /* SMAC follows the DMAC(which is 6 bytes) */
++ NPC_SCAN_HDR(NPC_SMAC, NPC_LID_LA, la_ltype, la_start + 6, 6);
+ /* PF_FUNC is 2 bytes at 0th byte of NPC_LT_LA_IH_NIX_ETHER */
+ NPC_SCAN_HDR(NPC_PF_FUNC, NPC_LID_LA, NPC_LT_LA_IH_NIX_ETHER, 0, 2);
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index fb8db5888d2f7..d686c7b6252f4 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -632,6 +632,12 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl)
+ req->num_regs++;
+ req->reg[1] = NIX_AF_TL3X_SCHEDULE(schq);
+ req->regval[1] = dwrr_val;
++ if (lvl == hw->txschq_link_cfg_lvl) {
++ req->num_regs++;
++ req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
++ /* Enable this queue and backpressure */
++ req->regval[2] = BIT_ULL(13) | BIT_ULL(12);
++ }
+ } else if (lvl == NIX_TXSCH_LVL_TL2) {
+ parent = hw->txschq_list[NIX_TXSCH_LVL_TL1][0];
+ req->reg[0] = NIX_AF_TL2X_PARENT(schq);
+@@ -641,11 +647,12 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl)
+ req->reg[1] = NIX_AF_TL2X_SCHEDULE(schq);
+ req->regval[1] = TXSCH_TL1_DFLT_RR_PRIO << 24 | dwrr_val;
+
+- req->num_regs++;
+- req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
+- /* Enable this queue and backpressure */
+- req->regval[2] = BIT_ULL(13) | BIT_ULL(12);
+-
++ if (lvl == hw->txschq_link_cfg_lvl) {
++ req->num_regs++;
++ req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
++ /* Enable this queue and backpressure */
++ req->regval[2] = BIT_ULL(13) | BIT_ULL(12);
++ }
+ } else if (lvl == NIX_TXSCH_LVL_TL1) {
+ /* Default config for TL1.
+ * For VF this is always ignored.
+@@ -1591,6 +1598,8 @@ void mbox_handler_nix_txsch_alloc(struct otx2_nic *pf,
+ for (schq = 0; schq < rsp->schq[lvl]; schq++)
+ pf->hw.txschq_list[lvl][schq] =
+ rsp->schq_list[lvl][schq];
++
++ pf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl;
+ }
+ EXPORT_SYMBOL(mbox_handler_nix_txsch_alloc);
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index ce2766317c0b8..f9c0d2f08e872 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -195,6 +195,7 @@ struct otx2_hw {
+ u16 sqb_size;
+
+ /* NIX */
++ u8 txschq_link_cfg_lvl;
+ u16 txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+ u16 matchall_ipolicer;
+ u32 dwrr_mtu;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index d87bbb0be7c86..e6f64d890fb34 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -506,7 +506,7 @@ int mlx5e_tc_tun_create_header_ipv6(struct mlx5e_priv *priv,
+ int err;
+
+ attr.ttl = tun_key->ttl;
+- attr.fl.fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tun_key->tos), tun_key->label);
++ attr.fl.fl6.flowlabel = ip6_make_flowinfo(tun_key->tos, tun_key->label);
+ attr.fl.fl6.daddr = tun_key->u.ipv6.dst;
+ attr.fl.fl6.saddr = tun_key->u.ipv6.src;
+
+@@ -620,7 +620,7 @@ int mlx5e_tc_tun_update_header_ipv6(struct mlx5e_priv *priv,
+
+ attr.ttl = tun_key->ttl;
+
+- attr.fl.fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tun_key->tos), tun_key->label);
++ attr.fl.fl6.flowlabel = ip6_make_flowinfo(tun_key->tos, tun_key->label);
+ attr.fl.fl6.daddr = tun_key->u.ipv6.dst;
+ attr.fl.fl6.saddr = tun_key->u.ipv6.src;
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index cafd206e8d7e9..49a9dca93529a 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -1822,9 +1822,9 @@ static void mlxsw_sp_port_remove(struct mlxsw_sp *mlxsw_sp, u16 local_port)
+
+ cancel_delayed_work_sync(&mlxsw_sp_port->periodic_hw_stats.update_dw);
+ cancel_delayed_work_sync(&mlxsw_sp_port->ptp.shaper_dw);
+- mlxsw_sp_port_ptp_clear(mlxsw_sp_port);
+ mlxsw_core_port_clear(mlxsw_sp->core, local_port, mlxsw_sp);
+ unregister_netdev(mlxsw_sp_port->dev); /* This calls ndo_stop */
++ mlxsw_sp_port_ptp_clear(mlxsw_sp_port);
+ mlxsw_sp_port_vlan_classification_set(mlxsw_sp_port, true, true);
+ mlxsw_sp->ports[local_port] = NULL;
+ mlxsw_sp_port_vlan_flush(mlxsw_sp_port, true);
+diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
+index a3214a762e4b3..f11f1cb92025f 100644
+--- a/drivers/net/ethernet/moxa/moxart_ether.c
++++ b/drivers/net/ethernet/moxa/moxart_ether.c
+@@ -77,7 +77,7 @@ static void moxart_mac_free_memory(struct net_device *ndev)
+ int i;
+
+ for (i = 0; i < RX_DESC_NUM; i++)
+- dma_unmap_single(&ndev->dev, priv->rx_mapping[i],
++ dma_unmap_single(&priv->pdev->dev, priv->rx_mapping[i],
+ priv->rx_buf_size, DMA_FROM_DEVICE);
+
+ if (priv->tx_desc_base)
+@@ -147,11 +147,11 @@ static void moxart_mac_setup_desc_ring(struct net_device *ndev)
+ desc + RX_REG_OFFSET_DESC1);
+
+ priv->rx_buf[i] = priv->rx_buf_base + priv->rx_buf_size * i;
+- priv->rx_mapping[i] = dma_map_single(&ndev->dev,
++ priv->rx_mapping[i] = dma_map_single(&priv->pdev->dev,
+ priv->rx_buf[i],
+ priv->rx_buf_size,
+ DMA_FROM_DEVICE);
+- if (dma_mapping_error(&ndev->dev, priv->rx_mapping[i]))
++ if (dma_mapping_error(&priv->pdev->dev, priv->rx_mapping[i]))
+ netdev_err(ndev, "DMA mapping error\n");
+
+ moxart_desc_write(priv->rx_mapping[i],
+@@ -240,7 +240,7 @@ static int moxart_rx_poll(struct napi_struct *napi, int budget)
+ if (len > RX_BUF_SIZE)
+ len = RX_BUF_SIZE;
+
+- dma_sync_single_for_cpu(&ndev->dev,
++ dma_sync_single_for_cpu(&priv->pdev->dev,
+ priv->rx_mapping[rx_head],
+ priv->rx_buf_size, DMA_FROM_DEVICE);
+ skb = netdev_alloc_skb_ip_align(ndev, len);
+@@ -294,7 +294,7 @@ static void moxart_tx_finished(struct net_device *ndev)
+ unsigned int tx_tail = priv->tx_tail;
+
+ while (tx_tail != tx_head) {
+- dma_unmap_single(&ndev->dev, priv->tx_mapping[tx_tail],
++ dma_unmap_single(&priv->pdev->dev, priv->tx_mapping[tx_tail],
+ priv->tx_len[tx_tail], DMA_TO_DEVICE);
+
+ ndev->stats.tx_packets++;
+@@ -358,9 +358,9 @@ static netdev_tx_t moxart_mac_start_xmit(struct sk_buff *skb,
+
+ len = skb->len > TX_BUF_SIZE ? TX_BUF_SIZE : skb->len;
+
+- priv->tx_mapping[tx_head] = dma_map_single(&ndev->dev, skb->data,
++ priv->tx_mapping[tx_head] = dma_map_single(&priv->pdev->dev, skb->data,
+ len, DMA_TO_DEVICE);
+- if (dma_mapping_error(&ndev->dev, priv->tx_mapping[tx_head])) {
++ if (dma_mapping_error(&priv->pdev->dev, priv->tx_mapping[tx_head])) {
+ netdev_err(ndev, "DMA mapping error\n");
+ goto out_unlock;
+ }
+@@ -379,7 +379,7 @@ static netdev_tx_t moxart_mac_start_xmit(struct sk_buff *skb,
+ len = ETH_ZLEN;
+ }
+
+- dma_sync_single_for_device(&ndev->dev, priv->tx_mapping[tx_head],
++ dma_sync_single_for_device(&priv->pdev->dev, priv->tx_mapping[tx_head],
+ priv->tx_buf_size, DMA_TO_DEVICE);
+
+ txdes1 = TX_DESC1_LTS | TX_DESC1_FTS | (len & TX_DESC1_BUF_SIZE_MASK);
+@@ -493,7 +493,7 @@ static int moxart_mac_probe(struct platform_device *pdev)
+ priv->tx_buf_size = TX_BUF_SIZE;
+ priv->rx_buf_size = RX_BUF_SIZE;
+
+- priv->tx_desc_base = dma_alloc_coherent(&pdev->dev, TX_REG_DESC_SIZE *
++ priv->tx_desc_base = dma_alloc_coherent(p_dev, TX_REG_DESC_SIZE *
+ TX_DESC_NUM, &priv->tx_base,
+ GFP_DMA | GFP_KERNEL);
+ if (!priv->tx_desc_base) {
+@@ -501,7 +501,7 @@ static int moxart_mac_probe(struct platform_device *pdev)
+ goto init_fail;
+ }
+
+- priv->rx_desc_base = dma_alloc_coherent(&pdev->dev, RX_REG_DESC_SIZE *
++ priv->rx_desc_base = dma_alloc_coherent(p_dev, RX_REG_DESC_SIZE *
+ RX_DESC_NUM, &priv->rx_base,
+ GFP_DMA | GFP_KERNEL);
+ if (!priv->rx_desc_base) {
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index d4649e4ee0e7f..68991b021c560 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1860,16 +1860,20 @@ void ocelot_get_strings(struct ocelot *ocelot, int port, u32 sset, u8 *data)
+ if (sset != ETH_SS_STATS)
+ return;
+
+- for (i = 0; i < ocelot->num_stats; i++)
++ for (i = 0; i < OCELOT_NUM_STATS; i++) {
++ if (ocelot->stats_layout[i].name[0] == '\0')
++ continue;
++
+ memcpy(data + i * ETH_GSTRING_LEN, ocelot->stats_layout[i].name,
+ ETH_GSTRING_LEN);
++ }
+ }
+ EXPORT_SYMBOL(ocelot_get_strings);
+
+ /* Caller must hold &ocelot->stats_lock */
+ static int ocelot_port_update_stats(struct ocelot *ocelot, int port)
+ {
+- unsigned int idx = port * ocelot->num_stats;
++ unsigned int idx = port * OCELOT_NUM_STATS;
+ struct ocelot_stats_region *region;
+ int err, j;
+
+@@ -1906,13 +1910,13 @@ static void ocelot_check_stats_work(struct work_struct *work)
+ stats_work);
+ int i, err;
+
+- mutex_lock(&ocelot->stats_lock);
++ spin_lock(&ocelot->stats_lock);
+ for (i = 0; i < ocelot->num_phys_ports; i++) {
+ err = ocelot_port_update_stats(ocelot, i);
+ if (err)
+ break;
+ }
+- mutex_unlock(&ocelot->stats_lock);
++ spin_unlock(&ocelot->stats_lock);
+
+ if (err)
+ dev_err(ocelot->dev, "Error %d updating ethtool stats\n", err);
+@@ -1925,16 +1929,22 @@ void ocelot_get_ethtool_stats(struct ocelot *ocelot, int port, u64 *data)
+ {
+ int i, err;
+
+- mutex_lock(&ocelot->stats_lock);
++ spin_lock(&ocelot->stats_lock);
+
+ /* check and update now */
+ err = ocelot_port_update_stats(ocelot, port);
+
+- /* Copy all counters */
+- for (i = 0; i < ocelot->num_stats; i++)
+- *data++ = ocelot->stats[port * ocelot->num_stats + i];
++ /* Copy all supported counters */
++ for (i = 0; i < OCELOT_NUM_STATS; i++) {
++ int index = port * OCELOT_NUM_STATS + i;
++
++ if (ocelot->stats_layout[i].name[0] == '\0')
++ continue;
++
++ *data++ = ocelot->stats[index];
++ }
+
+- mutex_unlock(&ocelot->stats_lock);
++ spin_unlock(&ocelot->stats_lock);
+
+ if (err)
+ dev_err(ocelot->dev, "Error %d updating ethtool stats\n", err);
+@@ -1943,10 +1953,16 @@ EXPORT_SYMBOL(ocelot_get_ethtool_stats);
+
+ int ocelot_get_sset_count(struct ocelot *ocelot, int port, int sset)
+ {
++ int i, num_stats = 0;
++
+ if (sset != ETH_SS_STATS)
+ return -EOPNOTSUPP;
+
+- return ocelot->num_stats;
++ for (i = 0; i < OCELOT_NUM_STATS; i++)
++ if (ocelot->stats_layout[i].name[0] != '\0')
++ num_stats++;
++
++ return num_stats;
+ }
+ EXPORT_SYMBOL(ocelot_get_sset_count);
+
+@@ -1958,7 +1974,10 @@ static int ocelot_prepare_stats_regions(struct ocelot *ocelot)
+
+ INIT_LIST_HEAD(&ocelot->stats_regions);
+
+- for (i = 0; i < ocelot->num_stats; i++) {
++ for (i = 0; i < OCELOT_NUM_STATS; i++) {
++ if (ocelot->stats_layout[i].name[0] == '\0')
++ continue;
++
+ if (region && ocelot->stats_layout[i].offset == last + 1) {
+ region->count++;
+ } else {
+@@ -3340,7 +3359,6 @@ static void ocelot_detect_features(struct ocelot *ocelot)
+
+ int ocelot_init(struct ocelot *ocelot)
+ {
+- const struct ocelot_stat_layout *stat;
+ char queue_name[32];
+ int i, ret;
+ u32 port;
+@@ -3353,17 +3371,13 @@ int ocelot_init(struct ocelot *ocelot)
+ }
+ }
+
+- ocelot->num_stats = 0;
+- for_each_stat(ocelot, stat)
+- ocelot->num_stats++;
+-
+ ocelot->stats = devm_kcalloc(ocelot->dev,
+- ocelot->num_phys_ports * ocelot->num_stats,
++ ocelot->num_phys_ports * OCELOT_NUM_STATS,
+ sizeof(u64), GFP_KERNEL);
+ if (!ocelot->stats)
+ return -ENOMEM;
+
+- mutex_init(&ocelot->stats_lock);
++ spin_lock_init(&ocelot->stats_lock);
+ mutex_init(&ocelot->ptp_lock);
+ mutex_init(&ocelot->mact_lock);
+ mutex_init(&ocelot->fwd_domain_lock);
+@@ -3511,7 +3525,6 @@ void ocelot_deinit(struct ocelot *ocelot)
+ cancel_delayed_work(&ocelot->stats_work);
+ destroy_workqueue(ocelot->stats_queue);
+ destroy_workqueue(ocelot->owq);
+- mutex_destroy(&ocelot->stats_lock);
+ }
+ EXPORT_SYMBOL(ocelot_deinit);
+
+diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c
+index 5e6136e80282b..330d30841cdc4 100644
+--- a/drivers/net/ethernet/mscc/ocelot_net.c
++++ b/drivers/net/ethernet/mscc/ocelot_net.c
+@@ -725,37 +725,42 @@ static void ocelot_get_stats64(struct net_device *dev,
+ struct ocelot_port_private *priv = netdev_priv(dev);
+ struct ocelot *ocelot = priv->port.ocelot;
+ int port = priv->port.index;
++ u64 *s;
+
+- /* Configure the port to read the stats from */
+- ocelot_write(ocelot, SYS_STAT_CFG_STAT_VIEW(port),
+- SYS_STAT_CFG);
++ spin_lock(&ocelot->stats_lock);
++
++ s = &ocelot->stats[port * OCELOT_NUM_STATS];
+
+ /* Get Rx stats */
+- stats->rx_bytes = ocelot_read(ocelot, SYS_COUNT_RX_OCTETS);
+- stats->rx_packets = ocelot_read(ocelot, SYS_COUNT_RX_SHORTS) +
+- ocelot_read(ocelot, SYS_COUNT_RX_FRAGMENTS) +
+- ocelot_read(ocelot, SYS_COUNT_RX_JABBERS) +
+- ocelot_read(ocelot, SYS_COUNT_RX_LONGS) +
+- ocelot_read(ocelot, SYS_COUNT_RX_64) +
+- ocelot_read(ocelot, SYS_COUNT_RX_65_127) +
+- ocelot_read(ocelot, SYS_COUNT_RX_128_255) +
+- ocelot_read(ocelot, SYS_COUNT_RX_256_1023) +
+- ocelot_read(ocelot, SYS_COUNT_RX_1024_1526) +
+- ocelot_read(ocelot, SYS_COUNT_RX_1527_MAX);
+- stats->multicast = ocelot_read(ocelot, SYS_COUNT_RX_MULTICAST);
++ stats->rx_bytes = s[OCELOT_STAT_RX_OCTETS];
++ stats->rx_packets = s[OCELOT_STAT_RX_SHORTS] +
++ s[OCELOT_STAT_RX_FRAGMENTS] +
++ s[OCELOT_STAT_RX_JABBERS] +
++ s[OCELOT_STAT_RX_LONGS] +
++ s[OCELOT_STAT_RX_64] +
++ s[OCELOT_STAT_RX_65_127] +
++ s[OCELOT_STAT_RX_128_255] +
++ s[OCELOT_STAT_RX_256_511] +
++ s[OCELOT_STAT_RX_512_1023] +
++ s[OCELOT_STAT_RX_1024_1526] +
++ s[OCELOT_STAT_RX_1527_MAX];
++ stats->multicast = s[OCELOT_STAT_RX_MULTICAST];
+ stats->rx_dropped = dev->stats.rx_dropped;
+
+ /* Get Tx stats */
+- stats->tx_bytes = ocelot_read(ocelot, SYS_COUNT_TX_OCTETS);
+- stats->tx_packets = ocelot_read(ocelot, SYS_COUNT_TX_64) +
+- ocelot_read(ocelot, SYS_COUNT_TX_65_127) +
+- ocelot_read(ocelot, SYS_COUNT_TX_128_511) +
+- ocelot_read(ocelot, SYS_COUNT_TX_512_1023) +
+- ocelot_read(ocelot, SYS_COUNT_TX_1024_1526) +
+- ocelot_read(ocelot, SYS_COUNT_TX_1527_MAX);
+- stats->tx_dropped = ocelot_read(ocelot, SYS_COUNT_TX_DROPS) +
+- ocelot_read(ocelot, SYS_COUNT_TX_AGING);
+- stats->collisions = ocelot_read(ocelot, SYS_COUNT_TX_COLLISION);
++ stats->tx_bytes = s[OCELOT_STAT_TX_OCTETS];
++ stats->tx_packets = s[OCELOT_STAT_TX_64] +
++ s[OCELOT_STAT_TX_65_127] +
++ s[OCELOT_STAT_TX_128_255] +
++ s[OCELOT_STAT_TX_256_511] +
++ s[OCELOT_STAT_TX_512_1023] +
++ s[OCELOT_STAT_TX_1024_1526] +
++ s[OCELOT_STAT_TX_1527_MAX];
++ stats->tx_dropped = s[OCELOT_STAT_TX_DROPS] +
++ s[OCELOT_STAT_TX_AGED];
++ stats->collisions = s[OCELOT_STAT_TX_COLLISION];
++
++ spin_unlock(&ocelot->stats_lock);
+ }
+
+ static int ocelot_port_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+diff --git a/drivers/net/ethernet/mscc/ocelot_vsc7514.c b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+index 961f803aca192..9ff9105600438 100644
+--- a/drivers/net/ethernet/mscc/ocelot_vsc7514.c
++++ b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+@@ -96,101 +96,379 @@ static const struct reg_field ocelot_regfields[REGFIELD_MAX] = {
+ [SYS_PAUSE_CFG_PAUSE_ENA] = REG_FIELD_ID(SYS_PAUSE_CFG, 0, 1, 12, 4),
+ };
+
+-static const struct ocelot_stat_layout ocelot_stats_layout[] = {
+- { .name = "rx_octets", .offset = 0x00, },
+- { .name = "rx_unicast", .offset = 0x01, },
+- { .name = "rx_multicast", .offset = 0x02, },
+- { .name = "rx_broadcast", .offset = 0x03, },
+- { .name = "rx_shorts", .offset = 0x04, },
+- { .name = "rx_fragments", .offset = 0x05, },
+- { .name = "rx_jabbers", .offset = 0x06, },
+- { .name = "rx_crc_align_errs", .offset = 0x07, },
+- { .name = "rx_sym_errs", .offset = 0x08, },
+- { .name = "rx_frames_below_65_octets", .offset = 0x09, },
+- { .name = "rx_frames_65_to_127_octets", .offset = 0x0A, },
+- { .name = "rx_frames_128_to_255_octets", .offset = 0x0B, },
+- { .name = "rx_frames_256_to_511_octets", .offset = 0x0C, },
+- { .name = "rx_frames_512_to_1023_octets", .offset = 0x0D, },
+- { .name = "rx_frames_1024_to_1526_octets", .offset = 0x0E, },
+- { .name = "rx_frames_over_1526_octets", .offset = 0x0F, },
+- { .name = "rx_pause", .offset = 0x10, },
+- { .name = "rx_control", .offset = 0x11, },
+- { .name = "rx_longs", .offset = 0x12, },
+- { .name = "rx_classified_drops", .offset = 0x13, },
+- { .name = "rx_red_prio_0", .offset = 0x14, },
+- { .name = "rx_red_prio_1", .offset = 0x15, },
+- { .name = "rx_red_prio_2", .offset = 0x16, },
+- { .name = "rx_red_prio_3", .offset = 0x17, },
+- { .name = "rx_red_prio_4", .offset = 0x18, },
+- { .name = "rx_red_prio_5", .offset = 0x19, },
+- { .name = "rx_red_prio_6", .offset = 0x1A, },
+- { .name = "rx_red_prio_7", .offset = 0x1B, },
+- { .name = "rx_yellow_prio_0", .offset = 0x1C, },
+- { .name = "rx_yellow_prio_1", .offset = 0x1D, },
+- { .name = "rx_yellow_prio_2", .offset = 0x1E, },
+- { .name = "rx_yellow_prio_3", .offset = 0x1F, },
+- { .name = "rx_yellow_prio_4", .offset = 0x20, },
+- { .name = "rx_yellow_prio_5", .offset = 0x21, },
+- { .name = "rx_yellow_prio_6", .offset = 0x22, },
+- { .name = "rx_yellow_prio_7", .offset = 0x23, },
+- { .name = "rx_green_prio_0", .offset = 0x24, },
+- { .name = "rx_green_prio_1", .offset = 0x25, },
+- { .name = "rx_green_prio_2", .offset = 0x26, },
+- { .name = "rx_green_prio_3", .offset = 0x27, },
+- { .name = "rx_green_prio_4", .offset = 0x28, },
+- { .name = "rx_green_prio_5", .offset = 0x29, },
+- { .name = "rx_green_prio_6", .offset = 0x2A, },
+- { .name = "rx_green_prio_7", .offset = 0x2B, },
+- { .name = "tx_octets", .offset = 0x40, },
+- { .name = "tx_unicast", .offset = 0x41, },
+- { .name = "tx_multicast", .offset = 0x42, },
+- { .name = "tx_broadcast", .offset = 0x43, },
+- { .name = "tx_collision", .offset = 0x44, },
+- { .name = "tx_drops", .offset = 0x45, },
+- { .name = "tx_pause", .offset = 0x46, },
+- { .name = "tx_frames_below_65_octets", .offset = 0x47, },
+- { .name = "tx_frames_65_to_127_octets", .offset = 0x48, },
+- { .name = "tx_frames_128_255_octets", .offset = 0x49, },
+- { .name = "tx_frames_256_511_octets", .offset = 0x4A, },
+- { .name = "tx_frames_512_1023_octets", .offset = 0x4B, },
+- { .name = "tx_frames_1024_1526_octets", .offset = 0x4C, },
+- { .name = "tx_frames_over_1526_octets", .offset = 0x4D, },
+- { .name = "tx_yellow_prio_0", .offset = 0x4E, },
+- { .name = "tx_yellow_prio_1", .offset = 0x4F, },
+- { .name = "tx_yellow_prio_2", .offset = 0x50, },
+- { .name = "tx_yellow_prio_3", .offset = 0x51, },
+- { .name = "tx_yellow_prio_4", .offset = 0x52, },
+- { .name = "tx_yellow_prio_5", .offset = 0x53, },
+- { .name = "tx_yellow_prio_6", .offset = 0x54, },
+- { .name = "tx_yellow_prio_7", .offset = 0x55, },
+- { .name = "tx_green_prio_0", .offset = 0x56, },
+- { .name = "tx_green_prio_1", .offset = 0x57, },
+- { .name = "tx_green_prio_2", .offset = 0x58, },
+- { .name = "tx_green_prio_3", .offset = 0x59, },
+- { .name = "tx_green_prio_4", .offset = 0x5A, },
+- { .name = "tx_green_prio_5", .offset = 0x5B, },
+- { .name = "tx_green_prio_6", .offset = 0x5C, },
+- { .name = "tx_green_prio_7", .offset = 0x5D, },
+- { .name = "tx_aged", .offset = 0x5E, },
+- { .name = "drop_local", .offset = 0x80, },
+- { .name = "drop_tail", .offset = 0x81, },
+- { .name = "drop_yellow_prio_0", .offset = 0x82, },
+- { .name = "drop_yellow_prio_1", .offset = 0x83, },
+- { .name = "drop_yellow_prio_2", .offset = 0x84, },
+- { .name = "drop_yellow_prio_3", .offset = 0x85, },
+- { .name = "drop_yellow_prio_4", .offset = 0x86, },
+- { .name = "drop_yellow_prio_5", .offset = 0x87, },
+- { .name = "drop_yellow_prio_6", .offset = 0x88, },
+- { .name = "drop_yellow_prio_7", .offset = 0x89, },
+- { .name = "drop_green_prio_0", .offset = 0x8A, },
+- { .name = "drop_green_prio_1", .offset = 0x8B, },
+- { .name = "drop_green_prio_2", .offset = 0x8C, },
+- { .name = "drop_green_prio_3", .offset = 0x8D, },
+- { .name = "drop_green_prio_4", .offset = 0x8E, },
+- { .name = "drop_green_prio_5", .offset = 0x8F, },
+- { .name = "drop_green_prio_6", .offset = 0x90, },
+- { .name = "drop_green_prio_7", .offset = 0x91, },
+- OCELOT_STAT_END
++static const struct ocelot_stat_layout ocelot_stats_layout[OCELOT_NUM_STATS] = {
++ [OCELOT_STAT_RX_OCTETS] = {
++ .name = "rx_octets",
++ .offset = 0x00,
++ },
++ [OCELOT_STAT_RX_UNICAST] = {
++ .name = "rx_unicast",
++ .offset = 0x01,
++ },
++ [OCELOT_STAT_RX_MULTICAST] = {
++ .name = "rx_multicast",
++ .offset = 0x02,
++ },
++ [OCELOT_STAT_RX_BROADCAST] = {
++ .name = "rx_broadcast",
++ .offset = 0x03,
++ },
++ [OCELOT_STAT_RX_SHORTS] = {
++ .name = "rx_shorts",
++ .offset = 0x04,
++ },
++ [OCELOT_STAT_RX_FRAGMENTS] = {
++ .name = "rx_fragments",
++ .offset = 0x05,
++ },
++ [OCELOT_STAT_RX_JABBERS] = {
++ .name = "rx_jabbers",
++ .offset = 0x06,
++ },
++ [OCELOT_STAT_RX_CRC_ALIGN_ERRS] = {
++ .name = "rx_crc_align_errs",
++ .offset = 0x07,
++ },
++ [OCELOT_STAT_RX_SYM_ERRS] = {
++ .name = "rx_sym_errs",
++ .offset = 0x08,
++ },
++ [OCELOT_STAT_RX_64] = {
++ .name = "rx_frames_below_65_octets",
++ .offset = 0x09,
++ },
++ [OCELOT_STAT_RX_65_127] = {
++ .name = "rx_frames_65_to_127_octets",
++ .offset = 0x0A,
++ },
++ [OCELOT_STAT_RX_128_255] = {
++ .name = "rx_frames_128_to_255_octets",
++ .offset = 0x0B,
++ },
++ [OCELOT_STAT_RX_256_511] = {
++ .name = "rx_frames_256_to_511_octets",
++ .offset = 0x0C,
++ },
++ [OCELOT_STAT_RX_512_1023] = {
++ .name = "rx_frames_512_to_1023_octets",
++ .offset = 0x0D,
++ },
++ [OCELOT_STAT_RX_1024_1526] = {
++ .name = "rx_frames_1024_to_1526_octets",
++ .offset = 0x0E,
++ },
++ [OCELOT_STAT_RX_1527_MAX] = {
++ .name = "rx_frames_over_1526_octets",
++ .offset = 0x0F,
++ },
++ [OCELOT_STAT_RX_PAUSE] = {
++ .name = "rx_pause",
++ .offset = 0x10,
++ },
++ [OCELOT_STAT_RX_CONTROL] = {
++ .name = "rx_control",
++ .offset = 0x11,
++ },
++ [OCELOT_STAT_RX_LONGS] = {
++ .name = "rx_longs",
++ .offset = 0x12,
++ },
++ [OCELOT_STAT_RX_CLASSIFIED_DROPS] = {
++ .name = "rx_classified_drops",
++ .offset = 0x13,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_0] = {
++ .name = "rx_red_prio_0",
++ .offset = 0x14,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_1] = {
++ .name = "rx_red_prio_1",
++ .offset = 0x15,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_2] = {
++ .name = "rx_red_prio_2",
++ .offset = 0x16,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_3] = {
++ .name = "rx_red_prio_3",
++ .offset = 0x17,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_4] = {
++ .name = "rx_red_prio_4",
++ .offset = 0x18,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_5] = {
++ .name = "rx_red_prio_5",
++ .offset = 0x19,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_6] = {
++ .name = "rx_red_prio_6",
++ .offset = 0x1A,
++ },
++ [OCELOT_STAT_RX_RED_PRIO_7] = {
++ .name = "rx_red_prio_7",
++ .offset = 0x1B,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_0] = {
++ .name = "rx_yellow_prio_0",
++ .offset = 0x1C,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_1] = {
++ .name = "rx_yellow_prio_1",
++ .offset = 0x1D,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_2] = {
++ .name = "rx_yellow_prio_2",
++ .offset = 0x1E,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_3] = {
++ .name = "rx_yellow_prio_3",
++ .offset = 0x1F,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_4] = {
++ .name = "rx_yellow_prio_4",
++ .offset = 0x20,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_5] = {
++ .name = "rx_yellow_prio_5",
++ .offset = 0x21,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_6] = {
++ .name = "rx_yellow_prio_6",
++ .offset = 0x22,
++ },
++ [OCELOT_STAT_RX_YELLOW_PRIO_7] = {
++ .name = "rx_yellow_prio_7",
++ .offset = 0x23,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_0] = {
++ .name = "rx_green_prio_0",
++ .offset = 0x24,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_1] = {
++ .name = "rx_green_prio_1",
++ .offset = 0x25,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_2] = {
++ .name = "rx_green_prio_2",
++ .offset = 0x26,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_3] = {
++ .name = "rx_green_prio_3",
++ .offset = 0x27,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_4] = {
++ .name = "rx_green_prio_4",
++ .offset = 0x28,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_5] = {
++ .name = "rx_green_prio_5",
++ .offset = 0x29,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_6] = {
++ .name = "rx_green_prio_6",
++ .offset = 0x2A,
++ },
++ [OCELOT_STAT_RX_GREEN_PRIO_7] = {
++ .name = "rx_green_prio_7",
++ .offset = 0x2B,
++ },
++ [OCELOT_STAT_TX_OCTETS] = {
++ .name = "tx_octets",
++ .offset = 0x40,
++ },
++ [OCELOT_STAT_TX_UNICAST] = {
++ .name = "tx_unicast",
++ .offset = 0x41,
++ },
++ [OCELOT_STAT_TX_MULTICAST] = {
++ .name = "tx_multicast",
++ .offset = 0x42,
++ },
++ [OCELOT_STAT_TX_BROADCAST] = {
++ .name = "tx_broadcast",
++ .offset = 0x43,
++ },
++ [OCELOT_STAT_TX_COLLISION] = {
++ .name = "tx_collision",
++ .offset = 0x44,
++ },
++ [OCELOT_STAT_TX_DROPS] = {
++ .name = "tx_drops",
++ .offset = 0x45,
++ },
++ [OCELOT_STAT_TX_PAUSE] = {
++ .name = "tx_pause",
++ .offset = 0x46,
++ },
++ [OCELOT_STAT_TX_64] = {
++ .name = "tx_frames_below_65_octets",
++ .offset = 0x47,
++ },
++ [OCELOT_STAT_TX_65_127] = {
++ .name = "tx_frames_65_to_127_octets",
++ .offset = 0x48,
++ },
++ [OCELOT_STAT_TX_128_255] = {
++ .name = "tx_frames_128_255_octets",
++ .offset = 0x49,
++ },
++ [OCELOT_STAT_TX_256_511] = {
++ .name = "tx_frames_256_511_octets",
++ .offset = 0x4A,
++ },
++ [OCELOT_STAT_TX_512_1023] = {
++ .name = "tx_frames_512_1023_octets",
++ .offset = 0x4B,
++ },
++ [OCELOT_STAT_TX_1024_1526] = {
++ .name = "tx_frames_1024_1526_octets",
++ .offset = 0x4C,
++ },
++ [OCELOT_STAT_TX_1527_MAX] = {
++ .name = "tx_frames_over_1526_octets",
++ .offset = 0x4D,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_0] = {
++ .name = "tx_yellow_prio_0",
++ .offset = 0x4E,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_1] = {
++ .name = "tx_yellow_prio_1",
++ .offset = 0x4F,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_2] = {
++ .name = "tx_yellow_prio_2",
++ .offset = 0x50,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_3] = {
++ .name = "tx_yellow_prio_3",
++ .offset = 0x51,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_4] = {
++ .name = "tx_yellow_prio_4",
++ .offset = 0x52,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_5] = {
++ .name = "tx_yellow_prio_5",
++ .offset = 0x53,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_6] = {
++ .name = "tx_yellow_prio_6",
++ .offset = 0x54,
++ },
++ [OCELOT_STAT_TX_YELLOW_PRIO_7] = {
++ .name = "tx_yellow_prio_7",
++ .offset = 0x55,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_0] = {
++ .name = "tx_green_prio_0",
++ .offset = 0x56,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_1] = {
++ .name = "tx_green_prio_1",
++ .offset = 0x57,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_2] = {
++ .name = "tx_green_prio_2",
++ .offset = 0x58,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_3] = {
++ .name = "tx_green_prio_3",
++ .offset = 0x59,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_4] = {
++ .name = "tx_green_prio_4",
++ .offset = 0x5A,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_5] = {
++ .name = "tx_green_prio_5",
++ .offset = 0x5B,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_6] = {
++ .name = "tx_green_prio_6",
++ .offset = 0x5C,
++ },
++ [OCELOT_STAT_TX_GREEN_PRIO_7] = {
++ .name = "tx_green_prio_7",
++ .offset = 0x5D,
++ },
++ [OCELOT_STAT_TX_AGED] = {
++ .name = "tx_aged",
++ .offset = 0x5E,
++ },
++ [OCELOT_STAT_DROP_LOCAL] = {
++ .name = "drop_local",
++ .offset = 0x80,
++ },
++ [OCELOT_STAT_DROP_TAIL] = {
++ .name = "drop_tail",
++ .offset = 0x81,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_0] = {
++ .name = "drop_yellow_prio_0",
++ .offset = 0x82,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_1] = {
++ .name = "drop_yellow_prio_1",
++ .offset = 0x83,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_2] = {
++ .name = "drop_yellow_prio_2",
++ .offset = 0x84,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_3] = {
++ .name = "drop_yellow_prio_3",
++ .offset = 0x85,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_4] = {
++ .name = "drop_yellow_prio_4",
++ .offset = 0x86,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_5] = {
++ .name = "drop_yellow_prio_5",
++ .offset = 0x87,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_6] = {
++ .name = "drop_yellow_prio_6",
++ .offset = 0x88,
++ },
++ [OCELOT_STAT_DROP_YELLOW_PRIO_7] = {
++ .name = "drop_yellow_prio_7",
++ .offset = 0x89,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_0] = {
++ .name = "drop_green_prio_0",
++ .offset = 0x8A,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_1] = {
++ .name = "drop_green_prio_1",
++ .offset = 0x8B,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_2] = {
++ .name = "drop_green_prio_2",
++ .offset = 0x8C,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_3] = {
++ .name = "drop_green_prio_3",
++ .offset = 0x8D,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_4] = {
++ .name = "drop_green_prio_4",
++ .offset = 0x8E,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_5] = {
++ .name = "drop_green_prio_5",
++ .offset = 0x8F,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_6] = {
++ .name = "drop_green_prio_6",
++ .offset = 0x90,
++ },
++ [OCELOT_STAT_DROP_GREEN_PRIO_7] = {
++ .name = "drop_green_prio_7",
++ .offset = 0x91,
++ },
+ };
+
+ static void ocelot_pll5_init(struct ocelot *ocelot)
+diff --git a/drivers/net/ethernet/mscc/vsc7514_regs.c b/drivers/net/ethernet/mscc/vsc7514_regs.c
+index c2af4eb8ca5d3..8ff935f7f150c 100644
+--- a/drivers/net/ethernet/mscc/vsc7514_regs.c
++++ b/drivers/net/ethernet/mscc/vsc7514_regs.c
+@@ -180,13 +180,14 @@ const u32 vsc7514_sys_regmap[] = {
+ REG(SYS_COUNT_RX_64, 0x000024),
+ REG(SYS_COUNT_RX_65_127, 0x000028),
+ REG(SYS_COUNT_RX_128_255, 0x00002c),
+- REG(SYS_COUNT_RX_256_1023, 0x000030),
+- REG(SYS_COUNT_RX_1024_1526, 0x000034),
+- REG(SYS_COUNT_RX_1527_MAX, 0x000038),
+- REG(SYS_COUNT_RX_PAUSE, 0x00003c),
+- REG(SYS_COUNT_RX_CONTROL, 0x000040),
+- REG(SYS_COUNT_RX_LONGS, 0x000044),
+- REG(SYS_COUNT_RX_CLASSIFIED_DROPS, 0x000048),
++ REG(SYS_COUNT_RX_256_511, 0x000030),
++ REG(SYS_COUNT_RX_512_1023, 0x000034),
++ REG(SYS_COUNT_RX_1024_1526, 0x000038),
++ REG(SYS_COUNT_RX_1527_MAX, 0x00003c),
++ REG(SYS_COUNT_RX_PAUSE, 0x000040),
++ REG(SYS_COUNT_RX_CONTROL, 0x000044),
++ REG(SYS_COUNT_RX_LONGS, 0x000048),
++ REG(SYS_COUNT_RX_CLASSIFIED_DROPS, 0x00004c),
+ REG(SYS_COUNT_TX_OCTETS, 0x000100),
+ REG(SYS_COUNT_TX_UNICAST, 0x000104),
+ REG(SYS_COUNT_TX_MULTICAST, 0x000108),
+@@ -196,11 +197,12 @@ const u32 vsc7514_sys_regmap[] = {
+ REG(SYS_COUNT_TX_PAUSE, 0x000118),
+ REG(SYS_COUNT_TX_64, 0x00011c),
+ REG(SYS_COUNT_TX_65_127, 0x000120),
+- REG(SYS_COUNT_TX_128_511, 0x000124),
+- REG(SYS_COUNT_TX_512_1023, 0x000128),
+- REG(SYS_COUNT_TX_1024_1526, 0x00012c),
+- REG(SYS_COUNT_TX_1527_MAX, 0x000130),
+- REG(SYS_COUNT_TX_AGING, 0x000170),
++ REG(SYS_COUNT_TX_128_255, 0x000124),
++ REG(SYS_COUNT_TX_256_511, 0x000128),
++ REG(SYS_COUNT_TX_512_1023, 0x00012c),
++ REG(SYS_COUNT_TX_1024_1526, 0x000130),
++ REG(SYS_COUNT_TX_1527_MAX, 0x000134),
++ REG(SYS_COUNT_TX_AGING, 0x000178),
+ REG(SYS_RESET_CFG, 0x000508),
+ REG(SYS_CMID, 0x00050c),
+ REG(SYS_VLAN_ETYPE_CFG, 0x000510),
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index df0afd271a21e..e6ee45afd80c7 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -1230,6 +1230,8 @@ nfp_port_get_module_info(struct net_device *netdev,
+ u8 data;
+
+ port = nfp_port_from_netdev(netdev);
++ /* update port state to get latest interface */
++ set_bit(NFP_PORT_CHANGED, &port->flags);
+ eth_port = nfp_port_get_eth_port(port);
+ if (!eth_port)
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index 3fe720c5dc9fc..aec0d973ced0e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -1104,6 +1104,7 @@ static void intel_eth_pci_remove(struct pci_dev *pdev)
+
+ stmmac_dvr_remove(&pdev->dev);
+
++ clk_disable_unprepare(priv->plat->stmmac_clk);
+ clk_unregister_fixed_rate(priv->plat->stmmac_clk);
+
+ pcim_iounmap_regions(pdev, BIT(0));
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 018d365f9debf..7962c37b3f14b 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -797,7 +797,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ struct geneve_sock *gs4,
+ struct flowi4 *fl4,
+ const struct ip_tunnel_info *info,
+- __be16 dport, __be16 sport)
++ __be16 dport, __be16 sport,
++ __u8 *full_tos)
+ {
+ bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ struct geneve_dev *geneve = netdev_priv(dev);
+@@ -823,6 +824,8 @@ static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
+ use_cache = false;
+ }
+ fl4->flowi4_tos = RT_TOS(tos);
++ if (full_tos)
++ *full_tos = tos;
+
+ dst_cache = (struct dst_cache *)&info->dst_cache;
+ if (use_cache) {
+@@ -876,8 +879,7 @@ static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
+ use_cache = false;
+ }
+
+- fl6->flowlabel = ip6_make_flowinfo(RT_TOS(prio),
+- info->key.label);
++ fl6->flowlabel = ip6_make_flowinfo(prio, info->key.label);
+ dst_cache = (struct dst_cache *)&info->dst_cache;
+ if (use_cache) {
+ dst = dst_cache_get_ip6(dst_cache, &fl6->saddr);
+@@ -911,6 +913,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ const struct ip_tunnel_key *key = &info->key;
+ struct rtable *rt;
+ struct flowi4 fl4;
++ __u8 full_tos;
+ __u8 tos, ttl;
+ __be16 df = 0;
+ __be16 sport;
+@@ -921,7 +924,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+
+ sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+- geneve->cfg.info.key.tp_dst, sport);
++ geneve->cfg.info.key.tp_dst, sport, &full_tos);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+
+@@ -965,7 +968,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+
+ df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0;
+ } else {
+- tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, ip_hdr(skb), skb);
++ tos = ip_tunnel_ecn_encap(full_tos, ip_hdr(skb), skb);
+ if (geneve->cfg.ttl_inherit)
+ ttl = ip_tunnel_get_ttl(ip_hdr(skb), skb);
+ else
+@@ -1149,7 +1152,7 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
+ 1, USHRT_MAX, true);
+
+ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+- geneve->cfg.info.key.tp_dst, sport);
++ geneve->cfg.info.key.tp_dst, sport, NULL);
+ if (IS_ERR(rt))
+ return PTR_ERR(rt);
+
+diff --git a/drivers/net/phy/phy-c45.c b/drivers/net/phy/phy-c45.c
+index 29b1df03f3e8b..a87a4b3ffce4e 100644
+--- a/drivers/net/phy/phy-c45.c
++++ b/drivers/net/phy/phy-c45.c
+@@ -190,44 +190,42 @@ EXPORT_SYMBOL_GPL(genphy_c45_pma_setup_forced);
+ */
+ static int genphy_c45_baset1_an_config_aneg(struct phy_device *phydev)
+ {
++ u16 adv_l_mask, adv_l = 0;
++ u16 adv_m_mask, adv_m = 0;
+ int changed = 0;
+- u16 adv_l = 0;
+- u16 adv_m = 0;
+ int ret;
+
++ adv_l_mask = MDIO_AN_T1_ADV_L_FORCE_MS | MDIO_AN_T1_ADV_L_PAUSE_CAP |
++ MDIO_AN_T1_ADV_L_PAUSE_ASYM;
++ adv_m_mask = MDIO_AN_T1_ADV_M_MST | MDIO_AN_T1_ADV_M_B10L;
++
+ switch (phydev->master_slave_set) {
+ case MASTER_SLAVE_CFG_MASTER_FORCE:
++ adv_m |= MDIO_AN_T1_ADV_M_MST;
++ fallthrough;
+ case MASTER_SLAVE_CFG_SLAVE_FORCE:
+ adv_l |= MDIO_AN_T1_ADV_L_FORCE_MS;
+ break;
+ case MASTER_SLAVE_CFG_MASTER_PREFERRED:
++ adv_m |= MDIO_AN_T1_ADV_M_MST;
++ fallthrough;
+ case MASTER_SLAVE_CFG_SLAVE_PREFERRED:
+ break;
+ case MASTER_SLAVE_CFG_UNKNOWN:
+ case MASTER_SLAVE_CFG_UNSUPPORTED:
+- return 0;
++ /* if master/slave role is not specified, do not overwrite it */
++ adv_l_mask &= ~MDIO_AN_T1_ADV_L_FORCE_MS;
++ adv_m_mask &= ~MDIO_AN_T1_ADV_M_MST;
++ break;
+ default:
+ phydev_warn(phydev, "Unsupported Master/Slave mode\n");
+ return -EOPNOTSUPP;
+ }
+
+- switch (phydev->master_slave_set) {
+- case MASTER_SLAVE_CFG_MASTER_FORCE:
+- case MASTER_SLAVE_CFG_MASTER_PREFERRED:
+- adv_m |= MDIO_AN_T1_ADV_M_MST;
+- break;
+- case MASTER_SLAVE_CFG_SLAVE_FORCE:
+- case MASTER_SLAVE_CFG_SLAVE_PREFERRED:
+- break;
+- default:
+- break;
+- }
+-
+ adv_l |= linkmode_adv_to_mii_t1_adv_l_t(phydev->advertising);
+
+ ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_T1_ADV_L,
+- (MDIO_AN_T1_ADV_L_FORCE_MS | MDIO_AN_T1_ADV_L_PAUSE_CAP
+- | MDIO_AN_T1_ADV_L_PAUSE_ASYM), adv_l);
++ adv_l_mask, adv_l);
+ if (ret < 0)
+ return ret;
+ if (ret > 0)
+@@ -236,7 +234,7 @@ static int genphy_c45_baset1_an_config_aneg(struct phy_device *phydev)
+ adv_m |= linkmode_adv_to_mii_t1_adv_m_t(phydev->advertising);
+
+ ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_T1_ADV_M,
+- MDIO_AN_T1_ADV_M_MST | MDIO_AN_T1_ADV_M_B10L, adv_m);
++ adv_m_mask, adv_m);
+ if (ret < 0)
+ return ret;
+ if (ret > 0)
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 46acddd865a78..608de5a94165f 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -316,6 +316,12 @@ static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
+
+ phydev->suspended_by_mdio_bus = 0;
+
++ /* If we managed to get here with the PHY state machine in a state other
++ * than PHY_HALTED this is an indication that something went wrong and
++ * we should most likely be using MAC managed PM and we are not.
++ */
++ WARN_ON(phydev->state != PHY_HALTED && !phydev->mac_managed_pm);
++
+ ret = phy_init_hw(phydev);
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/net/plip/plip.c b/drivers/net/plip/plip.c
+index dafd3e9ebbf87..c8791e9b451d2 100644
+--- a/drivers/net/plip/plip.c
++++ b/drivers/net/plip/plip.c
+@@ -1111,7 +1111,7 @@ plip_open(struct net_device *dev)
+ /* Any address will do - we take the first. We already
+ have the first two bytes filled with 0xfc, from
+ plip_init_dev(). */
+- const struct in_ifaddr *ifa = rcu_dereference(in_dev->ifa_list);
++ const struct in_ifaddr *ifa = rtnl_dereference(in_dev->ifa_list);
+ if (ifa != NULL) {
+ dev_addr_mod(dev, 2, &ifa->ifa_local, 4);
+ }
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index c3d42062559dd..9e75ed3f08ce5 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -716,10 +716,20 @@ static ssize_t tap_get_user(struct tap_queue *q, void *msg_control,
+ skb_reset_mac_header(skb);
+ skb->protocol = eth_hdr(skb)->h_proto;
+
++ rcu_read_lock();
++ tap = rcu_dereference(q->tap);
++ if (!tap) {
++ kfree_skb(skb);
++ rcu_read_unlock();
++ return total_len;
++ }
++ skb->dev = tap->dev;
++
+ if (vnet_hdr_len) {
+ err = virtio_net_hdr_to_skb(skb, &vnet_hdr,
+ tap_is_little_endian(q));
+ if (err) {
++ rcu_read_unlock();
+ drop_reason = SKB_DROP_REASON_DEV_HDR;
+ goto err_kfree;
+ }
+@@ -732,8 +742,6 @@ static ssize_t tap_get_user(struct tap_queue *q, void *msg_control,
+ __vlan_get_protocol(skb, skb->protocol, &depth) != 0)
+ skb_set_network_header(skb, depth);
+
+- rcu_read_lock();
+- tap = rcu_dereference(q->tap);
+ /* copy skb_ubuf_info for callback when skb has no error */
+ if (zerocopy) {
+ skb_zcopy_init(skb, msg_control);
+@@ -742,14 +750,8 @@ static ssize_t tap_get_user(struct tap_queue *q, void *msg_control,
+ uarg->callback(NULL, uarg, false);
+ }
+
+- if (tap) {
+- skb->dev = tap->dev;
+- dev_queue_xmit(skb);
+- } else {
+- kfree_skb(skb);
+- }
++ dev_queue_xmit(skb);
+ rcu_read_unlock();
+-
+ return total_len;
+
+ err_kfree:
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index ec8e1b3108c3a..d4e0a775b1ba7 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1057,8 +1057,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ case XDP_TX:
+ stats->xdp_tx++;
+ xdpf = xdp_convert_buff_to_frame(&xdp);
+- if (unlikely(!xdpf))
++ if (unlikely(!xdpf)) {
++ if (unlikely(xdp_page != page))
++ put_page(xdp_page);
+ goto err_xdp;
++ }
+ err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
+ if (unlikely(!err)) {
+ xdp_return_frame_rx_napi(xdpf);
+@@ -1196,7 +1199,7 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
+ if (!hdr_hash || !skb)
+ return;
+
+- switch ((int)hdr_hash->hash_report) {
++ switch (__le16_to_cpu(hdr_hash->hash_report)) {
+ case VIRTIO_NET_HASH_REPORT_TCPv4:
+ case VIRTIO_NET_HASH_REPORT_UDPv4:
+ case VIRTIO_NET_HASH_REPORT_TCPv6:
+@@ -1214,7 +1217,7 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
+ default:
+ rss_hash_type = PKT_HASH_TYPE_NONE;
+ }
+- skb_set_hash(skb, (unsigned int)hdr_hash->hash_value, rss_hash_type);
++ skb_set_hash(skb, __le32_to_cpu(hdr_hash->hash_value), rss_hash_type);
+ }
+
+ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 6991bf7c1cf03..52f58f2fc4627 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -2321,7 +2321,7 @@ static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
+ fl6.flowi6_oif = oif;
+ fl6.daddr = *daddr;
+ fl6.saddr = *saddr;
+- fl6.flowlabel = ip6_make_flowinfo(RT_TOS(tos), label);
++ fl6.flowlabel = ip6_make_flowinfo(tos, label);
+ fl6.flowi6_mark = skb->mark;
+ fl6.flowi6_proto = IPPROTO_UDP;
+ fl6.fl6_dport = dport;
+diff --git a/drivers/ntb/test/ntb_tool.c b/drivers/ntb/test/ntb_tool.c
+index b7bf3f863d79b..5ee0afa621a95 100644
+--- a/drivers/ntb/test/ntb_tool.c
++++ b/drivers/ntb/test/ntb_tool.c
+@@ -367,14 +367,16 @@ static ssize_t tool_fn_write(struct tool_ctx *tc,
+ u64 bits;
+ int n;
+
++ if (*offp)
++ return 0;
++
+ buf = kmalloc(size + 1, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+- ret = simple_write_to_buffer(buf, size, offp, ubuf, size);
+- if (ret < 0) {
++ if (copy_from_user(buf, ubuf, size)) {
+ kfree(buf);
+- return ret;
++ return -EFAULT;
+ }
+
+ buf[size] = 0;
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 3c778bb0c2944..4aff83b1b0c05 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -3880,6 +3880,7 @@ static int fc_parse_cgrpid(const char *buf, u64 *id)
+ static ssize_t fc_appid_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+ {
++ size_t orig_count = count;
+ u64 cgrp_id;
+ int appid_len = 0;
+ int cgrpid_len = 0;
+@@ -3904,7 +3905,7 @@ static ssize_t fc_appid_store(struct device *dev,
+ ret = blkcg_set_fc_appid(app_id, cgrp_id, sizeof(app_id));
+ if (ret < 0)
+ return ret;
+- return count;
++ return orig_count;
+ }
+ static DEVICE_ATTR(appid_store, 0200, NULL, fc_appid_store);
+ #endif /* CONFIG_BLK_CGROUP_FC_APPID */
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 0a9542599ad1c..dc3b4dc8fe08b 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1839,7 +1839,8 @@ static int __init nvmet_tcp_init(void)
+ {
+ int ret;
+
+- nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq", WQ_HIGHPRI, 0);
++ nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq",
++ WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+ if (!nvmet_tcp_wq)
+ return -ENOMEM;
+
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index 4044ddcb02c60..84a8d402009cb 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -903,12 +903,6 @@ static int of_overlay_apply(struct overlay_changeset *ovcs)
+ {
+ int ret = 0, ret_revert, ret_tmp;
+
+- if (devicetree_corrupt()) {
+- pr_err("devicetree state suspect, refuse to apply overlay\n");
+- ret = -EBUSY;
+- goto out;
+- }
+-
+ ret = of_resolve_phandles(ovcs->overlay_root);
+ if (ret)
+ goto out;
+@@ -983,6 +977,11 @@ int of_overlay_fdt_apply(const void *overlay_fdt, u32 overlay_fdt_size,
+
+ *ret_ovcs_id = 0;
+
++ if (devicetree_corrupt()) {
++ pr_err("devicetree state suspect, refuse to apply overlay\n");
++ return -EBUSY;
++ }
++
+ if (overlay_fdt_size < sizeof(struct fdt_header) ||
+ fdt_check_header(overlay_fdt)) {
+ pr_err("Invalid overlay_fdt header\n");
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index ffec82c8a523f..62db476a86514 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -8,6 +8,7 @@
+ * Author: Hezi Shahmoon <hezi.shahmoon@marvell.com>
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/delay.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/interrupt.h>
+@@ -857,14 +858,11 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+
+
+ switch (reg) {
+- case PCI_EXP_SLTCTL:
+- *value = PCI_EXP_SLTSTA_PDS << 16;
+- return PCI_BRIDGE_EMUL_HANDLED;
+-
+ /*
+- * PCI_EXP_RTCTL and PCI_EXP_RTSTA are also supported, but do not need
+- * to be handled here, because their values are stored in emulated
+- * config space buffer, and we read them from there when needed.
++ * PCI_EXP_SLTCAP, PCI_EXP_SLTCTL, PCI_EXP_RTCTL and PCI_EXP_RTSTA are
++ * also supported, but do not need to be handled here, because their
++ * values are stored in emulated config space buffer, and we read them
++ * from there when needed.
+ */
+
+ case PCI_EXP_LNKCAP: {
+@@ -977,8 +975,25 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ /* Support interrupt A for MSI feature */
+ bridge->conf.intpin = PCI_INTERRUPT_INTA;
+
+- /* Aardvark HW provides PCIe Capability structure in version 2 */
+- bridge->pcie_conf.cap = cpu_to_le16(2);
++ /*
++ * Aardvark HW provides PCIe Capability structure in version 2 and
++ * indicate slot support, which is emulated.
++ */
++ bridge->pcie_conf.cap = cpu_to_le16(2 | PCI_EXP_FLAGS_SLOT);
++
++ /*
++ * Set Presence Detect State bit permanently since there is no support
++ * for unplugging the card nor detecting whether it is plugged. (If a
++ * platform exists in the future that supports it, via a GPIO for
++ * example, it should be implemented via this bit.)
++ *
++ * Set physical slot number to 1 since there is only one port and zero
++ * value is reserved for ports within the same silicon as Root Port
++ * which is not our case.
++ */
++ bridge->pcie_conf.slotcap = cpu_to_le32(FIELD_PREP(PCI_EXP_SLTCAP_PSN,
++ 1));
++ bridge->pcie_conf.slotsta = cpu_to_le16(PCI_EXP_SLTSTA_PDS);
+
+ /* Indicates supports for Completion Retry Status */
+ bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 41aeaa2351322..2e68f50bc7ae4 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4924,6 +4924,9 @@ static const struct pci_dev_acs_enabled {
+ { PCI_VENDOR_ID_AMPERE, 0xE00C, pci_quirk_xgene_acs },
+ /* Broadcom multi-function device */
+ { PCI_VENDOR_ID_BROADCOM, 0x16D7, pci_quirk_mf_endpoint_acs },
++ { PCI_VENDOR_ID_BROADCOM, 0x1750, pci_quirk_mf_endpoint_acs },
++ { PCI_VENDOR_ID_BROADCOM, 0x1751, pci_quirk_mf_endpoint_acs },
++ { PCI_VENDOR_ID_BROADCOM, 0x1752, pci_quirk_mf_endpoint_acs },
+ { PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
+ /* Amazon Annapurna Labs */
+ { PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs },
+diff --git a/drivers/phy/samsung/phy-exynos-pcie.c b/drivers/phy/samsung/phy-exynos-pcie.c
+index 578cfe07d07ab..53c9230c29078 100644
+--- a/drivers/phy/samsung/phy-exynos-pcie.c
++++ b/drivers/phy/samsung/phy-exynos-pcie.c
+@@ -51,6 +51,13 @@ static int exynos5433_pcie_phy_init(struct phy *phy)
+ {
+ struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
+
++ regmap_update_bits(ep->pmureg, EXYNOS5433_PMU_PCIE_PHY_OFFSET,
++ BIT(0), 1);
++ regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_GLOBAL_RESET,
++ PCIE_APP_REQ_EXIT_L1_MODE, 0);
++ regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_L1SUB_CM_CON,
++ PCIE_REFCLK_GATING_EN, 0);
++
+ regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_COMMON_RESET,
+ PCIE_PHY_RESET, 1);
+ regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_MAC_RESET,
+@@ -109,20 +116,7 @@ static int exynos5433_pcie_phy_init(struct phy *phy)
+ return 0;
+ }
+
+-static int exynos5433_pcie_phy_power_on(struct phy *phy)
+-{
+- struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
+-
+- regmap_update_bits(ep->pmureg, EXYNOS5433_PMU_PCIE_PHY_OFFSET,
+- BIT(0), 1);
+- regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_GLOBAL_RESET,
+- PCIE_APP_REQ_EXIT_L1_MODE, 0);
+- regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_L1SUB_CM_CON,
+- PCIE_REFCLK_GATING_EN, 0);
+- return 0;
+-}
+-
+-static int exynos5433_pcie_phy_power_off(struct phy *phy)
++static int exynos5433_pcie_phy_exit(struct phy *phy)
+ {
+ struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
+
+@@ -135,8 +129,7 @@ static int exynos5433_pcie_phy_power_off(struct phy *phy)
+
+ static const struct phy_ops exynos5433_phy_ops = {
+ .init = exynos5433_pcie_phy_init,
+- .power_on = exynos5433_pcie_phy_power_on,
+- .power_off = exynos5433_pcie_phy_power_off,
++ .exit = exynos5433_pcie_phy_exit,
+ .owner = THIS_MODULE,
+ };
+
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index ffc045f7bf00b..fd093e36c3a88 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -1641,16 +1641,14 @@ EXPORT_SYMBOL_GPL(intel_pinctrl_probe_by_uid);
+
+ const struct intel_pinctrl_soc_data *intel_pinctrl_get_soc_data(struct platform_device *pdev)
+ {
++ const struct intel_pinctrl_soc_data * const *table;
+ const struct intel_pinctrl_soc_data *data = NULL;
+- const struct intel_pinctrl_soc_data **table;
+- struct acpi_device *adev;
+- unsigned int i;
+
+- adev = ACPI_COMPANION(&pdev->dev);
+- if (adev) {
+- const void *match = device_get_match_data(&pdev->dev);
++ table = device_get_match_data(&pdev->dev);
++ if (table) {
++ struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
++ unsigned int i;
+
+- table = (const struct intel_pinctrl_soc_data **)match;
+ for (i = 0; table[i]; i++) {
+ if (!strcmp(adev->pnp.unique_id, table[i]->uid)) {
+ data = table[i];
+@@ -1664,7 +1662,7 @@ const struct intel_pinctrl_soc_data *intel_pinctrl_get_soc_data(struct platform_
+ if (!id)
+ return ERR_PTR(-ENODEV);
+
+- table = (const struct intel_pinctrl_soc_data **)id->driver_data;
++ table = (const struct intel_pinctrl_soc_data * const *)id->driver_data;
+ data = table[pdev->id];
+ }
+
+diff --git a/drivers/pinctrl/nomadik/pinctrl-nomadik.c b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+index 640e50d94f27e..f5014d09d81a2 100644
+--- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c
++++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+@@ -1421,8 +1421,10 @@ static int nmk_pinctrl_dt_subnode_to_map(struct pinctrl_dev *pctldev,
+
+ has_config = nmk_pinctrl_dt_get_config(np, &configs);
+ np_config = of_parse_phandle(np, "ste,config", 0);
+- if (np_config)
++ if (np_config) {
+ has_config |= nmk_pinctrl_dt_get_config(np_config, &configs);
++ of_node_put(np_config);
++ }
+ if (has_config) {
+ const char *gpio_name;
+ const char *pin;
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 0645c2c24f508..06923e8859b78 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -917,6 +917,7 @@ static int amd_gpio_suspend(struct device *dev)
+ {
+ struct amd_gpio *gpio_dev = dev_get_drvdata(dev);
+ struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
++ unsigned long flags;
+ int i;
+
+ for (i = 0; i < desc->npins; i++) {
+@@ -925,7 +926,9 @@ static int amd_gpio_suspend(struct device *dev)
+ if (!amd_gpio_should_save(gpio_dev, pin))
+ continue;
+
+- gpio_dev->saved_regs[i] = readl(gpio_dev->base + pin*4);
++ raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++ gpio_dev->saved_regs[i] = readl(gpio_dev->base + pin * 4) & ~PIN_IRQ_PENDING;
++ raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ }
+
+ return 0;
+@@ -935,6 +938,7 @@ static int amd_gpio_resume(struct device *dev)
+ {
+ struct amd_gpio *gpio_dev = dev_get_drvdata(dev);
+ struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
++ unsigned long flags;
+ int i;
+
+ for (i = 0; i < desc->npins; i++) {
+@@ -943,7 +947,10 @@ static int amd_gpio_resume(struct device *dev)
+ if (!amd_gpio_should_save(gpio_dev, pin))
+ continue;
+
+- writel(gpio_dev->saved_regs[i], gpio_dev->base + pin*4);
++ raw_spin_lock_irqsave(&gpio_dev->lock, flags);
++ gpio_dev->saved_regs[i] |= readl(gpio_dev->base + pin * 4) & PIN_IRQ_PENDING;
++ writel(gpio_dev->saved_regs[i], gpio_dev->base + pin * 4);
++ raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+ }
+
+ return 0;
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm8916.c b/drivers/pinctrl/qcom/pinctrl-msm8916.c
+index 396db12ae9048..bf68913ba8212 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm8916.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm8916.c
+@@ -844,8 +844,8 @@ static const struct msm_pingroup msm8916_groups[] = {
+ PINGROUP(28, pwr_modem_enabled_a, NA, NA, NA, NA, NA, qdss_tracedata_b, NA, atest_combodac),
+ PINGROUP(29, cci_i2c, NA, NA, NA, NA, NA, qdss_tracedata_b, NA, atest_combodac),
+ PINGROUP(30, cci_i2c, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+- PINGROUP(31, cci_timer0, NA, NA, NA, NA, NA, NA, NA, NA),
+- PINGROUP(32, cci_timer1, NA, NA, NA, NA, NA, NA, NA, NA),
++ PINGROUP(31, cci_timer0, flash_strobe, NA, NA, NA, NA, NA, NA, NA),
++ PINGROUP(32, cci_timer1, flash_strobe, NA, NA, NA, NA, NA, NA, NA),
+ PINGROUP(33, cci_async, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+ PINGROUP(34, pwr_nav_enabled_a, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+ PINGROUP(35, pwr_crypto_enabled_a, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b),
+diff --git a/drivers/pinctrl/qcom/pinctrl-sm8250.c b/drivers/pinctrl/qcom/pinctrl-sm8250.c
+index af144e724bd9c..3bd7f9fedcc34 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sm8250.c
++++ b/drivers/pinctrl/qcom/pinctrl-sm8250.c
+@@ -1316,7 +1316,7 @@ static const struct msm_pingroup sm8250_groups[] = {
+ static const struct msm_gpio_wakeirq_map sm8250_pdc_map[] = {
+ { 0, 79 }, { 1, 84 }, { 2, 80 }, { 3, 82 }, { 4, 107 }, { 7, 43 },
+ { 11, 42 }, { 14, 44 }, { 15, 52 }, { 19, 67 }, { 23, 68 }, { 24, 105 },
+- { 27, 92 }, { 28, 106 }, { 31, 69 }, { 35, 70 }, { 39, 37 },
++ { 27, 92 }, { 28, 106 }, { 31, 69 }, { 35, 70 }, { 39, 73 },
+ { 40, 108 }, { 43, 71 }, { 45, 72 }, { 47, 83 }, { 51, 74 }, { 55, 77 },
+ { 59, 78 }, { 63, 75 }, { 64, 81 }, { 65, 87 }, { 66, 88 }, { 67, 89 },
+ { 68, 54 }, { 70, 85 }, { 77, 46 }, { 80, 90 }, { 81, 91 }, { 83, 97 },
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index a48cac55152ce..c3cdf52b72945 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -517,6 +517,8 @@ static int rzg2l_pinctrl_pinconf_get(struct pinctrl_dev *pctldev,
+ if (!(cfg & PIN_CFG_IEN))
+ return -EINVAL;
+ arg = rzg2l_read_pin_config(pctrl, IEN(port_offset), bit, IEN_MASK);
++ if (!arg)
++ return -EINVAL;
+ break;
+
+ case PIN_CONFIG_POWER_SOURCE: {
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c b/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c
+index c7d90c44e87aa..7b4b9f3d45558 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sun50i-h6-r.c
+@@ -107,6 +107,7 @@ static const struct sunxi_pinctrl_desc sun50i_h6_r_pinctrl_data = {
+ .npins = ARRAY_SIZE(sun50i_h6_r_pins),
+ .pin_base = PL_BASE,
+ .irq_banks = 2,
++ .io_bias_cfg_variant = BIAS_VOLTAGE_PIO_POW_MODE_SEL,
+ };
+
+ static int sun50i_h6_r_pinctrl_probe(struct platform_device *pdev)
+diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+index dd928402af997..09639e1d67098 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c
++++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c
+@@ -624,7 +624,7 @@ static int sunxi_pinctrl_set_io_bias_cfg(struct sunxi_pinctrl *pctl,
+ unsigned pin,
+ struct regulator *supply)
+ {
+- unsigned short bank = pin / PINS_PER_BANK;
++ unsigned short bank;
+ unsigned long flags;
+ u32 val, reg;
+ int uV;
+@@ -640,6 +640,9 @@ static int sunxi_pinctrl_set_io_bias_cfg(struct sunxi_pinctrl *pctl,
+ if (uV == 0)
+ return 0;
+
++ pin -= pctl->desc->pin_base;
++ bank = pin / PINS_PER_BANK;
++
+ switch (pctl->desc->io_bias_cfg_variant) {
+ case BIAS_VOLTAGE_GRP_CONFIG:
+ /*
+@@ -657,8 +660,6 @@ static int sunxi_pinctrl_set_io_bias_cfg(struct sunxi_pinctrl *pctl,
+ else
+ val = 0xD; /* 3.3V */
+
+- pin -= pctl->desc->pin_base;
+-
+ reg = readl(pctl->membase + sunxi_grp_config_reg(pin));
+ reg &= ~IO_BIAS_MASK;
+ writel(reg | val, pctl->membase + sunxi_grp_config_reg(pin));
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index ff767dccdf0f6..40dc048d18ad3 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -509,13 +509,13 @@ int cros_ec_query_all(struct cros_ec_device *ec_dev)
+ ret = cros_ec_get_host_command_version_mask(ec_dev,
+ EC_CMD_GET_NEXT_EVENT,
+ &ver_mask);
+- if (ret < 0 || ver_mask == 0)
++ if (ret < 0 || ver_mask == 0) {
+ ec_dev->mkbp_event_supported = 0;
+- else
++ } else {
+ ec_dev->mkbp_event_supported = fls(ver_mask);
+
+- dev_dbg(ec_dev->dev, "MKBP support version %u\n",
+- ec_dev->mkbp_event_supported - 1);
++ dev_dbg(ec_dev->dev, "MKBP support version %u\n", ec_dev->mkbp_event_supported - 1);
++ }
+
+ /* Probe if host sleep v1 is supported for S0ix failure detection. */
+ ret = cros_ec_get_host_command_version_mask(ec_dev,
+diff --git a/drivers/rtc/rtc-spear.c b/drivers/rtc/rtc-spear.c
+index d4777b01ab220..736fe535cd457 100644
+--- a/drivers/rtc/rtc-spear.c
++++ b/drivers/rtc/rtc-spear.c
+@@ -388,7 +388,7 @@ static int spear_rtc_probe(struct platform_device *pdev)
+
+ config->rtc->ops = &spear_rtc_ops;
+ config->rtc->range_min = RTC_TIMESTAMP_BEGIN_0000;
+- config->rtc->range_min = RTC_TIMESTAMP_END_9999;
++ config->rtc->range_max = RTC_TIMESTAMP_END_9999;
+
+ status = devm_rtc_register_device(config->rtc);
+ if (status)
+diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c
+index 0a9045b49c508..052a9114b2a6c 100644
+--- a/drivers/s390/crypto/ap_bus.c
++++ b/drivers/s390/crypto/ap_bus.c
+@@ -2068,6 +2068,9 @@ static inline void ap_scan_adapter(int ap)
+ */
+ static bool ap_get_configuration(void)
+ {
++ if (!ap_qci_info) /* QCI not supported */
++ return false;
++
+ memcpy(ap_qci_info_old, ap_qci_info, sizeof(*ap_qci_info));
+ ap_fetch_qci_info(ap_qci_info);
+
+diff --git a/drivers/s390/crypto/ap_bus.h b/drivers/s390/crypto/ap_bus.h
+index 0c40af157df23..0f17933954fb2 100644
+--- a/drivers/s390/crypto/ap_bus.h
++++ b/drivers/s390/crypto/ap_bus.h
+@@ -148,12 +148,16 @@ struct ap_driver {
+ /*
+ * Called at the start of the ap bus scan function when
+ * the crypto config information (qci) has changed.
++ * This callback is not invoked if there is no AP
++ * QCI support available.
+ */
+ void (*on_config_changed)(struct ap_config_info *new_config_info,
+ struct ap_config_info *old_config_info);
+ /*
+ * Called at the end of the ap bus scan function when
+ * the crypto config information (qci) has changed.
++ * This callback is not invoked if there is no AP
++ * QCI support available.
+ */
+ void (*on_scan_complete)(struct ap_config_info *new_config_info,
+ struct ap_config_info *old_config_info);
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 7b24c932e8126..25deacc92b020 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -2607,8 +2607,8 @@ lpfc_debugfs_multixripools_write(struct file *file, const char __user *buf,
+ struct lpfc_sli4_hdw_queue *qp;
+ struct lpfc_multixri_pool *multixri_pool;
+
+- if (nbytes > 64)
+- nbytes = 64;
++ if (nbytes > sizeof(mybuf) - 1)
++ nbytes = sizeof(mybuf) - 1;
+
+ memset(mybuf, 0, sizeof(mybuf));
+
+@@ -2688,8 +2688,8 @@ lpfc_debugfs_nvmestat_write(struct file *file, const char __user *buf,
+ if (!phba->targetport)
+ return -ENXIO;
+
+- if (nbytes > 64)
+- nbytes = 64;
++ if (nbytes > sizeof(mybuf) - 1)
++ nbytes = sizeof(mybuf) - 1;
+
+ memset(mybuf, 0, sizeof(mybuf));
+
+@@ -2826,8 +2826,8 @@ lpfc_debugfs_ioktime_write(struct file *file, const char __user *buf,
+ char mybuf[64];
+ char *pbuf;
+
+- if (nbytes > 64)
+- nbytes = 64;
++ if (nbytes > sizeof(mybuf) - 1)
++ nbytes = sizeof(mybuf) - 1;
+
+ memset(mybuf, 0, sizeof(mybuf));
+
+@@ -2954,8 +2954,8 @@ lpfc_debugfs_nvmeio_trc_write(struct file *file, const char __user *buf,
+ char mybuf[64];
+ char *pbuf;
+
+- if (nbytes > 63)
+- nbytes = 63;
++ if (nbytes > sizeof(mybuf) - 1)
++ nbytes = sizeof(mybuf) - 1;
+
+ memset(mybuf, 0, sizeof(mybuf));
+
+@@ -3060,8 +3060,8 @@ lpfc_debugfs_hdwqstat_write(struct file *file, const char __user *buf,
+ char *pbuf;
+ int i;
+
+- if (nbytes > 64)
+- nbytes = 64;
++ if (nbytes > sizeof(mybuf) - 1)
++ nbytes = sizeof(mybuf) - 1;
+
+ memset(mybuf, 0, sizeof(mybuf));
+
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 80ac3a051c192..e2127e85ff325 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -2003,10 +2003,12 @@ initpath:
+
+ sync_buf->cmd_flag |= LPFC_IO_CMF;
+ ret_val = lpfc_sli4_issue_wqe(phba, &phba->sli4_hba.hdwq[0], sync_buf);
+- if (ret_val)
++ if (ret_val) {
+ lpfc_printf_log(phba, KERN_INFO, LOG_CGN_MGMT,
+ "6214 Cannot issue CMF_SYNC_WQE: x%x\n",
+ ret_val);
++ __lpfc_sli_release_iocbq(phba, sync_buf);
++ }
+ out_unlock:
+ spin_unlock_irqrestore(&phba->hbalock, iflags);
+ return ret_val;
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 2a38cd2d24eff..02899e8849dd6 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2143,8 +2143,6 @@ static int iscsi_iter_destroy_conn_fn(struct device *dev, void *data)
+ return 0;
+
+ iscsi_remove_conn(iscsi_dev_to_conn(dev));
+- iscsi_put_conn(iscsi_dev_to_conn(dev));
+-
+ return 0;
+ }
+
+diff --git a/drivers/spi/spi-meson-spicc.c b/drivers/spi/spi-meson-spicc.c
+index 0bc7daa7afc83..e4cb52e1fe261 100644
+--- a/drivers/spi/spi-meson-spicc.c
++++ b/drivers/spi/spi-meson-spicc.c
+@@ -156,6 +156,7 @@ struct meson_spicc_device {
+ void __iomem *base;
+ struct clk *core;
+ struct clk *pclk;
++ struct clk_divider pow2_div;
+ struct clk *clk;
+ struct spi_message *message;
+ struct spi_transfer *xfer;
+@@ -168,6 +169,8 @@ struct meson_spicc_device {
+ unsigned long xfer_remain;
+ };
+
++#define pow2_clk_to_spicc(_div) container_of(_div, struct meson_spicc_device, pow2_div)
++
+ static void meson_spicc_oen_enable(struct meson_spicc_device *spicc)
+ {
+ u32 conf;
+@@ -421,7 +424,7 @@ static int meson_spicc_prepare_message(struct spi_master *master,
+ {
+ struct meson_spicc_device *spicc = spi_master_get_devdata(master);
+ struct spi_device *spi = message->spi;
+- u32 conf = 0;
++ u32 conf = readl_relaxed(spicc->base + SPICC_CONREG) & SPICC_DATARATE_MASK;
+
+ /* Store current message */
+ spicc->message = message;
+@@ -458,8 +461,6 @@ static int meson_spicc_prepare_message(struct spi_master *master,
+ /* Select CS */
+ conf |= FIELD_PREP(SPICC_CS_MASK, spi->chip_select);
+
+- /* Default Clock rate core/4 */
+-
+ /* Default 8bit word */
+ conf |= FIELD_PREP(SPICC_BITLENGTH_MASK, 8 - 1);
+
+@@ -476,12 +477,16 @@ static int meson_spicc_prepare_message(struct spi_master *master,
+ static int meson_spicc_unprepare_transfer(struct spi_master *master)
+ {
+ struct meson_spicc_device *spicc = spi_master_get_devdata(master);
++ u32 conf = readl_relaxed(spicc->base + SPICC_CONREG) & SPICC_DATARATE_MASK;
+
+ /* Disable all IRQs */
+ writel(0, spicc->base + SPICC_INTREG);
+
+ device_reset_optional(&spicc->pdev->dev);
+
++ /* Set default configuration, keeping datarate field */
++ writel_relaxed(conf, spicc->base + SPICC_CONREG);
++
+ return 0;
+ }
+
+@@ -518,14 +523,60 @@ static void meson_spicc_cleanup(struct spi_device *spi)
+ * Clk path for G12A series:
+ * pclk -> pow2 fixed div -> pow2 div -> mux -> out
+ * pclk -> enh fixed div -> enh div -> mux -> out
++ *
++ * The pow2 divider is tied to the controller HW state, and the
++ * divider is only valid when the controller is initialized.
++ *
++ * A set of clock ops is added to make sure we don't read/set this
++ * clock rate while the controller is in an unknown state.
+ */
+
+-static int meson_spicc_clk_init(struct meson_spicc_device *spicc)
++static unsigned long meson_spicc_pow2_recalc_rate(struct clk_hw *hw,
++ unsigned long parent_rate)
++{
++ struct clk_divider *divider = to_clk_divider(hw);
++ struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
++
++ if (!spicc->master->cur_msg || !spicc->master->busy)
++ return 0;
++
++ return clk_divider_ops.recalc_rate(hw, parent_rate);
++}
++
++static int meson_spicc_pow2_determine_rate(struct clk_hw *hw,
++ struct clk_rate_request *req)
++{
++ struct clk_divider *divider = to_clk_divider(hw);
++ struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
++
++ if (!spicc->master->cur_msg || !spicc->master->busy)
++ return -EINVAL;
++
++ return clk_divider_ops.determine_rate(hw, req);
++}
++
++static int meson_spicc_pow2_set_rate(struct clk_hw *hw, unsigned long rate,
++ unsigned long parent_rate)
++{
++ struct clk_divider *divider = to_clk_divider(hw);
++ struct meson_spicc_device *spicc = pow2_clk_to_spicc(divider);
++
++ if (!spicc->master->cur_msg || !spicc->master->busy)
++ return -EINVAL;
++
++ return clk_divider_ops.set_rate(hw, rate, parent_rate);
++}
++
++const struct clk_ops meson_spicc_pow2_clk_ops = {
++ .recalc_rate = meson_spicc_pow2_recalc_rate,
++ .determine_rate = meson_spicc_pow2_determine_rate,
++ .set_rate = meson_spicc_pow2_set_rate,
++};
++
++static int meson_spicc_pow2_clk_init(struct meson_spicc_device *spicc)
+ {
+ struct device *dev = &spicc->pdev->dev;
+- struct clk_fixed_factor *pow2_fixed_div, *enh_fixed_div;
+- struct clk_divider *pow2_div, *enh_div;
+- struct clk_mux *mux;
++ struct clk_fixed_factor *pow2_fixed_div;
+ struct clk_init_data init;
+ struct clk *clk;
+ struct clk_parent_data parent_data[2];
+@@ -560,31 +611,45 @@ static int meson_spicc_clk_init(struct meson_spicc_device *spicc)
+ if (WARN_ON(IS_ERR(clk)))
+ return PTR_ERR(clk);
+
+- pow2_div = devm_kzalloc(dev, sizeof(*pow2_div), GFP_KERNEL);
+- if (!pow2_div)
+- return -ENOMEM;
+-
+ snprintf(name, sizeof(name), "%s#pow2_div", dev_name(dev));
+ init.name = name;
+- init.ops = &clk_divider_ops;
+- init.flags = CLK_SET_RATE_PARENT;
++ init.ops = &meson_spicc_pow2_clk_ops;
++ /*
++ * Set NOCACHE here to make sure we read the actual HW value
++ * since we reset the HW after each transfer.
++ */
++ init.flags = CLK_SET_RATE_PARENT | CLK_GET_RATE_NOCACHE;
+ parent_data[0].hw = &pow2_fixed_div->hw;
+ init.num_parents = 1;
+
+- pow2_div->shift = 16,
+- pow2_div->width = 3,
+- pow2_div->flags = CLK_DIVIDER_POWER_OF_TWO,
+- pow2_div->reg = spicc->base + SPICC_CONREG;
+- pow2_div->hw.init = &init;
++ spicc->pow2_div.shift = 16,
++ spicc->pow2_div.width = 3,
++ spicc->pow2_div.flags = CLK_DIVIDER_POWER_OF_TWO,
++ spicc->pow2_div.reg = spicc->base + SPICC_CONREG;
++ spicc->pow2_div.hw.init = &init;
+
+- clk = devm_clk_register(dev, &pow2_div->hw);
+- if (WARN_ON(IS_ERR(clk)))
+- return PTR_ERR(clk);
++ spicc->clk = devm_clk_register(dev, &spicc->pow2_div.hw);
++ if (WARN_ON(IS_ERR(spicc->clk)))
++ return PTR_ERR(spicc->clk);
+
+- if (!spicc->data->has_enhance_clk_div) {
+- spicc->clk = clk;
+- return 0;
+- }
++ return 0;
++}
++
++static int meson_spicc_enh_clk_init(struct meson_spicc_device *spicc)
++{
++ struct device *dev = &spicc->pdev->dev;
++ struct clk_fixed_factor *enh_fixed_div;
++ struct clk_divider *enh_div;
++ struct clk_mux *mux;
++ struct clk_init_data init;
++ struct clk *clk;
++ struct clk_parent_data parent_data[2];
++ char name[64];
++
++ memset(&init, 0, sizeof(init));
++ memset(&parent_data, 0, sizeof(parent_data));
++
++ init.parent_data = parent_data;
+
+ /* algorithm for enh div: rate = freq / 2 / (N + 1) */
+
+@@ -637,7 +702,7 @@ static int meson_spicc_clk_init(struct meson_spicc_device *spicc)
+ snprintf(name, sizeof(name), "%s#sel", dev_name(dev));
+ init.name = name;
+ init.ops = &clk_mux_ops;
+- parent_data[0].hw = &pow2_div->hw;
++ parent_data[0].hw = &spicc->pow2_div.hw;
+ parent_data[1].hw = &enh_div->hw;
+ init.num_parents = 2;
+ init.flags = CLK_SET_RATE_PARENT;
+@@ -754,12 +819,20 @@ static int meson_spicc_probe(struct platform_device *pdev)
+
+ meson_spicc_oen_enable(spicc);
+
+- ret = meson_spicc_clk_init(spicc);
++ ret = meson_spicc_pow2_clk_init(spicc);
+ if (ret) {
+- dev_err(&pdev->dev, "clock registration failed\n");
++ dev_err(&pdev->dev, "pow2 clock registration failed\n");
+ goto out_clk;
+ }
+
++ if (spicc->data->has_enhance_clk_div) {
++ ret = meson_spicc_enh_clk_init(spicc);
++ if (ret) {
++ dev_err(&pdev->dev, "clock registration failed\n");
++ goto out_clk;
++ }
++ }
++
+ ret = devm_spi_register_master(&pdev->dev, master);
+ if (ret) {
+ dev_err(&pdev->dev, "spi master registration failed\n");
+diff --git a/drivers/staging/r8188eu/core/rtw_cmd.c b/drivers/staging/r8188eu/core/rtw_cmd.c
+index 06523d91939a6..5b6a891b5d67e 100644
+--- a/drivers/staging/r8188eu/core/rtw_cmd.c
++++ b/drivers/staging/r8188eu/core/rtw_cmd.c
+@@ -898,8 +898,12 @@ static void traffic_status_watchdog(struct adapter *padapter)
+ static void rtl8188e_sreset_xmit_status_check(struct adapter *padapter)
+ {
+ u32 txdma_status;
++ int res;
++
++ res = rtw_read32(padapter, REG_TXDMA_STATUS, &txdma_status);
++ if (res)
++ return;
+
+- txdma_status = rtw_read32(padapter, REG_TXDMA_STATUS);
+ if (txdma_status != 0x00)
+ rtw_write32(padapter, REG_TXDMA_STATUS, txdma_status);
+ /* total xmit irp = 4 */
+@@ -1177,7 +1181,14 @@ exit:
+
+ static bool rtw_is_hi_queue_empty(struct adapter *adapter)
+ {
+- return (rtw_read32(adapter, REG_HGQ_INFORMATION) & 0x0000ff00) == 0;
++ int res;
++ u32 reg;
++
++ res = rtw_read32(adapter, REG_HGQ_INFORMATION, &reg);
++ if (res)
++ return false;
++
++ return (reg & 0x0000ff00) == 0;
+ }
+
+ static void rtw_chk_hi_queue_hdl(struct adapter *padapter)
+diff --git a/drivers/staging/r8188eu/core/rtw_efuse.c b/drivers/staging/r8188eu/core/rtw_efuse.c
+index 0e0e606388802..8005ed8d3a203 100644
+--- a/drivers/staging/r8188eu/core/rtw_efuse.c
++++ b/drivers/staging/r8188eu/core/rtw_efuse.c
+@@ -28,22 +28,35 @@ ReadEFuseByte(
+ u32 value32;
+ u8 readbyte;
+ u16 retry;
++ int res;
+
+ /* Write Address */
+ rtw_write8(Adapter, EFUSE_CTRL + 1, (_offset & 0xff));
+- readbyte = rtw_read8(Adapter, EFUSE_CTRL + 2);
++ res = rtw_read8(Adapter, EFUSE_CTRL + 2, &readbyte);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, EFUSE_CTRL + 2, ((_offset >> 8) & 0x03) | (readbyte & 0xfc));
+
+ /* Write bit 32 0 */
+- readbyte = rtw_read8(Adapter, EFUSE_CTRL + 3);
++ res = rtw_read8(Adapter, EFUSE_CTRL + 3, &readbyte);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, EFUSE_CTRL + 3, (readbyte & 0x7f));
+
+ /* Check bit 32 read-ready */
+- retry = 0;
+- value32 = rtw_read32(Adapter, EFUSE_CTRL);
+- while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < 10000)) {
+- value32 = rtw_read32(Adapter, EFUSE_CTRL);
+- retry++;
++ res = rtw_read32(Adapter, EFUSE_CTRL, &value32);
++ if (res)
++ return;
++
++ for (retry = 0; retry < 10000; retry++) {
++ res = rtw_read32(Adapter, EFUSE_CTRL, &value32);
++ if (res)
++ continue;
++
++ if (((value32 >> 24) & 0xff) & 0x80)
++ break;
+ }
+
+ /* 20100205 Joseph: Add delay suggested by SD1 Victor. */
+@@ -51,9 +64,13 @@ ReadEFuseByte(
+ /* Designer says that there shall be some delay after ready bit is set, or the */
+ /* result will always stay on last data we read. */
+ udelay(50);
+- value32 = rtw_read32(Adapter, EFUSE_CTRL);
++ res = rtw_read32(Adapter, EFUSE_CTRL, &value32);
++ if (res)
++ return;
+
+ *pbuf = (u8)(value32 & 0xff);
++
++ /* FIXME: return an error to caller */
+ }
+
+ /*-----------------------------------------------------------------------------
+diff --git a/drivers/staging/r8188eu/core/rtw_fw.c b/drivers/staging/r8188eu/core/rtw_fw.c
+index 0451e51776448..04f25e0b3bca5 100644
+--- a/drivers/staging/r8188eu/core/rtw_fw.c
++++ b/drivers/staging/r8188eu/core/rtw_fw.c
+@@ -44,18 +44,28 @@ static_assert(sizeof(struct rt_firmware_hdr) == 32);
+ static void fw_download_enable(struct adapter *padapter, bool enable)
+ {
+ u8 tmp;
++ int res;
+
+ if (enable) {
+ /* MCU firmware download enable. */
+- tmp = rtw_read8(padapter, REG_MCUFWDL);
++ res = rtw_read8(padapter, REG_MCUFWDL, &tmp);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_MCUFWDL, tmp | 0x01);
+
+ /* 8051 reset */
+- tmp = rtw_read8(padapter, REG_MCUFWDL + 2);
++ res = rtw_read8(padapter, REG_MCUFWDL + 2, &tmp);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_MCUFWDL + 2, tmp & 0xf7);
+ } else {
+ /* MCU firmware download disable. */
+- tmp = rtw_read8(padapter, REG_MCUFWDL);
++ res = rtw_read8(padapter, REG_MCUFWDL, &tmp);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_MCUFWDL, tmp & 0xfe);
+
+ /* Reserved for fw extension. */
+@@ -125,8 +135,13 @@ static int page_write(struct adapter *padapter, u32 page, u8 *buffer, u32 size)
+ {
+ u8 value8;
+ u8 u8Page = (u8)(page & 0x07);
++ int res;
++
++ res = rtw_read8(padapter, REG_MCUFWDL + 2, &value8);
++ if (res)
++ return _FAIL;
+
+- value8 = (rtw_read8(padapter, REG_MCUFWDL + 2) & 0xF8) | u8Page;
++ value8 = (value8 & 0xF8) | u8Page;
+ rtw_write8(padapter, REG_MCUFWDL + 2, value8);
+
+ return block_write(padapter, buffer, size);
+@@ -165,8 +180,12 @@ exit:
+ void rtw_reset_8051(struct adapter *padapter)
+ {
+ u8 val8;
++ int res;
++
++ res = rtw_read8(padapter, REG_SYS_FUNC_EN + 1, &val8);
++ if (res)
++ return;
+
+- val8 = rtw_read8(padapter, REG_SYS_FUNC_EN + 1);
+ rtw_write8(padapter, REG_SYS_FUNC_EN + 1, val8 & (~BIT(2)));
+ rtw_write8(padapter, REG_SYS_FUNC_EN + 1, val8 | (BIT(2)));
+ }
+@@ -175,10 +194,14 @@ static int fw_free_to_go(struct adapter *padapter)
+ {
+ u32 counter = 0;
+ u32 value32;
++ int res;
+
+ /* polling CheckSum report */
+ do {
+- value32 = rtw_read32(padapter, REG_MCUFWDL);
++ res = rtw_read32(padapter, REG_MCUFWDL, &value32);
++ if (res)
++ continue;
++
+ if (value32 & FWDL_CHKSUM_RPT)
+ break;
+ } while (counter++ < POLLING_READY_TIMEOUT_COUNT);
+@@ -186,7 +209,10 @@ static int fw_free_to_go(struct adapter *padapter)
+ if (counter >= POLLING_READY_TIMEOUT_COUNT)
+ return _FAIL;
+
+- value32 = rtw_read32(padapter, REG_MCUFWDL);
++ res = rtw_read32(padapter, REG_MCUFWDL, &value32);
++ if (res)
++ return _FAIL;
++
+ value32 |= MCUFWDL_RDY;
+ value32 &= ~WINTINI_RDY;
+ rtw_write32(padapter, REG_MCUFWDL, value32);
+@@ -196,9 +222,10 @@ static int fw_free_to_go(struct adapter *padapter)
+ /* polling for FW ready */
+ counter = 0;
+ do {
+- value32 = rtw_read32(padapter, REG_MCUFWDL);
+- if (value32 & WINTINI_RDY)
++ res = rtw_read32(padapter, REG_MCUFWDL, &value32);
++ if (!res && value32 & WINTINI_RDY)
+ return _SUCCESS;
++
+ udelay(5);
+ } while (counter++ < POLLING_READY_TIMEOUT_COUNT);
+
+@@ -239,7 +266,7 @@ exit:
+ int rtl8188e_firmware_download(struct adapter *padapter)
+ {
+ int ret = _SUCCESS;
+- u8 write_fw_retry = 0;
++ u8 reg;
+ unsigned long fwdl_timeout;
+ struct dvobj_priv *dvobj = adapter_to_dvobj(padapter);
+ struct device *device = dvobj_to_dev(dvobj);
+@@ -269,23 +296,34 @@ int rtl8188e_firmware_download(struct adapter *padapter)
+
+ /* Suggested by Filen. If 8051 is running in RAM code, driver should inform Fw to reset by itself, */
+ /* or it will cause download Fw fail. 2010.02.01. by tynli. */
+- if (rtw_read8(padapter, REG_MCUFWDL) & RAM_DL_SEL) { /* 8051 RAM code */
++ ret = rtw_read8(padapter, REG_MCUFWDL, &reg);
++ if (ret) {
++ ret = _FAIL;
++ goto exit;
++ }
++
++ if (reg & RAM_DL_SEL) { /* 8051 RAM code */
+ rtw_write8(padapter, REG_MCUFWDL, 0x00);
+ rtw_reset_8051(padapter);
+ }
+
+ fw_download_enable(padapter, true);
+ fwdl_timeout = jiffies + msecs_to_jiffies(500);
+- while (1) {
++ do {
+ /* reset the FWDL chksum */
+- rtw_write8(padapter, REG_MCUFWDL, rtw_read8(padapter, REG_MCUFWDL) | FWDL_CHKSUM_RPT);
++ ret = rtw_read8(padapter, REG_MCUFWDL, &reg);
++ if (ret) {
++ ret = _FAIL;
++ continue;
++ }
+
+- ret = write_fw(padapter, fw_data, fw_size);
++ rtw_write8(padapter, REG_MCUFWDL, reg | FWDL_CHKSUM_RPT);
+
+- if (ret == _SUCCESS ||
+- (time_after(jiffies, fwdl_timeout) && write_fw_retry++ >= 3))
++ ret = write_fw(padapter, fw_data, fw_size);
++ if (ret == _SUCCESS)
+ break;
+- }
++ } while (!time_after(jiffies, fwdl_timeout));
++
+ fw_download_enable(padapter, false);
+ if (ret != _SUCCESS)
+ goto exit;
+diff --git a/drivers/staging/r8188eu/core/rtw_led.c b/drivers/staging/r8188eu/core/rtw_led.c
+index 2f3000428af76..25989acf52599 100644
+--- a/drivers/staging/r8188eu/core/rtw_led.c
++++ b/drivers/staging/r8188eu/core/rtw_led.c
+@@ -35,11 +35,15 @@ static void ResetLedStatus(struct LED_871x *pLed)
+ static void SwLedOn(struct adapter *padapter, struct LED_871x *pLed)
+ {
+ u8 LedCfg;
++ int res;
+
+ if (padapter->bSurpriseRemoved || padapter->bDriverStopped)
+ return;
+
+- LedCfg = rtw_read8(padapter, REG_LEDCFG2);
++ res = rtw_read8(padapter, REG_LEDCFG2, &LedCfg);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_LEDCFG2, (LedCfg & 0xf0) | BIT(5) | BIT(6)); /* SW control led0 on. */
+ pLed->bLedOn = true;
+ }
+@@ -47,15 +51,21 @@ static void SwLedOn(struct adapter *padapter, struct LED_871x *pLed)
+ static void SwLedOff(struct adapter *padapter, struct LED_871x *pLed)
+ {
+ u8 LedCfg;
++ int res;
+
+ if (padapter->bSurpriseRemoved || padapter->bDriverStopped)
+ goto exit;
+
+- LedCfg = rtw_read8(padapter, REG_LEDCFG2);/* 0x4E */
++ res = rtw_read8(padapter, REG_LEDCFG2, &LedCfg);/* 0x4E */
++ if (res)
++ goto exit;
+
+ LedCfg &= 0x90; /* Set to software control. */
+ rtw_write8(padapter, REG_LEDCFG2, (LedCfg | BIT(3)));
+- LedCfg = rtw_read8(padapter, REG_MAC_PINMUX_CFG);
++ res = rtw_read8(padapter, REG_MAC_PINMUX_CFG, &LedCfg);
++ if (res)
++ goto exit;
++
+ LedCfg &= 0xFE;
+ rtw_write8(padapter, REG_MAC_PINMUX_CFG, LedCfg);
+ exit:
+diff --git a/drivers/staging/r8188eu/core/rtw_mlme_ext.c b/drivers/staging/r8188eu/core/rtw_mlme_ext.c
+index faf23fc950c53..88a4953d31d81 100644
+--- a/drivers/staging/r8188eu/core/rtw_mlme_ext.c
++++ b/drivers/staging/r8188eu/core/rtw_mlme_ext.c
+@@ -5667,14 +5667,28 @@ unsigned int send_beacon(struct adapter *padapter)
+
+ bool get_beacon_valid_bit(struct adapter *adapter)
+ {
++ int res;
++ u8 reg;
++
++ res = rtw_read8(adapter, REG_TDECTRL + 2, &reg);
++ if (res)
++ return false;
++
+ /* BIT(16) of REG_TDECTRL = BIT(0) of REG_TDECTRL+2 */
+- return BIT(0) & rtw_read8(adapter, REG_TDECTRL + 2);
++ return BIT(0) & reg;
+ }
+
+ void clear_beacon_valid_bit(struct adapter *adapter)
+ {
++ int res;
++ u8 reg;
++
++ res = rtw_read8(adapter, REG_TDECTRL + 2, &reg);
++ if (res)
++ return;
++
+ /* BIT(16) of REG_TDECTRL = BIT(0) of REG_TDECTRL+2, write 1 to clear, Clear by sw */
+- rtw_write8(adapter, REG_TDECTRL + 2, rtw_read8(adapter, REG_TDECTRL + 2) | BIT(0));
++ rtw_write8(adapter, REG_TDECTRL + 2, reg | BIT(0));
+ }
+
+ /****************************************************************************
+@@ -6002,7 +6016,9 @@ static void rtw_set_bssid(struct adapter *adapter, u8 *bssid)
+ static void mlme_join(struct adapter *adapter, int type)
+ {
+ struct mlme_priv *mlmepriv = &adapter->mlmepriv;
+- u8 retry_limit = 0x30;
++ u8 retry_limit = 0x30, reg;
++ u32 reg32;
++ int res;
+
+ switch (type) {
+ case 0:
+@@ -6010,8 +6026,12 @@ static void mlme_join(struct adapter *adapter, int type)
+ /* enable to rx data frame, accept all data frame */
+ rtw_write16(adapter, REG_RXFLTMAP2, 0xFFFF);
+
++ res = rtw_read32(adapter, REG_RCR, &reg32);
++ if (res)
++ return;
++
+ rtw_write32(adapter, REG_RCR,
+- rtw_read32(adapter, REG_RCR) | RCR_CBSSID_DATA | RCR_CBSSID_BCN);
++ reg32 | RCR_CBSSID_DATA | RCR_CBSSID_BCN);
+
+ if (check_fwstate(mlmepriv, WIFI_STATION_STATE)) {
+ retry_limit = 48;
+@@ -6027,7 +6047,11 @@ static void mlme_join(struct adapter *adapter, int type)
+ case 2:
+ /* sta add event call back */
+ /* enable update TSF */
+- rtw_write8(adapter, REG_BCN_CTRL, rtw_read8(adapter, REG_BCN_CTRL) & (~BIT(4)));
++ res = rtw_read8(adapter, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapter, REG_BCN_CTRL, reg & (~BIT(4)));
+
+ if (check_fwstate(mlmepriv, WIFI_ADHOC_STATE | WIFI_ADHOC_MASTER_STATE))
+ retry_limit = 0x7;
+@@ -6748,6 +6772,9 @@ void mlmeext_sta_add_event_callback(struct adapter *padapter, struct sta_info *p
+
+ static void mlme_disconnect(struct adapter *adapter)
+ {
++ int res;
++ u8 reg;
++
+ /* Set RCR to not to receive data frame when NO LINK state */
+ /* reject all data frames */
+ rtw_write16(adapter, REG_RXFLTMAP2, 0x00);
+@@ -6756,7 +6783,12 @@ static void mlme_disconnect(struct adapter *adapter)
+ rtw_write8(adapter, REG_DUAL_TSF_RST, (BIT(0) | BIT(1)));
+
+ /* disable update TSF */
+- rtw_write8(adapter, REG_BCN_CTRL, rtw_read8(adapter, REG_BCN_CTRL) | BIT(4));
++
++ res = rtw_read8(adapter, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapter, REG_BCN_CTRL, reg | BIT(4));
+ }
+
+ void mlmeext_sta_del_event_callback(struct adapter *padapter)
+@@ -6810,14 +6842,20 @@ static u8 chk_ap_is_alive(struct sta_info *psta)
+ return ret;
+ }
+
+-static void rtl8188e_sreset_linked_status_check(struct adapter *padapter)
++static int rtl8188e_sreset_linked_status_check(struct adapter *padapter)
+ {
+- u32 rx_dma_status = rtw_read32(padapter, REG_RXDMA_STATUS);
++ u32 rx_dma_status;
++ int res;
++ u8 reg;
++
++ res = rtw_read32(padapter, REG_RXDMA_STATUS, &rx_dma_status);
++ if (res)
++ return res;
+
+ if (rx_dma_status != 0x00)
+ rtw_write32(padapter, REG_RXDMA_STATUS, rx_dma_status);
+
+- rtw_read8(padapter, REG_FMETHR);
++ return rtw_read8(padapter, REG_FMETHR, &reg);
+ }
+
+ void linked_status_chk(struct adapter *padapter)
+@@ -7219,6 +7257,7 @@ u8 disconnect_hdl(struct adapter *padapter, unsigned char *pbuf)
+ struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info;
+ struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)(&pmlmeinfo->network);
+ u8 val8;
++ int res;
+
+ if (is_client_associated_to_ap(padapter))
+ issue_deauth_ex(padapter, pnetwork->MacAddress, WLAN_REASON_DEAUTH_LEAVING, param->deauth_timeout_ms / 100, 100);
+@@ -7231,7 +7270,10 @@ u8 disconnect_hdl(struct adapter *padapter, unsigned char *pbuf)
+
+ if (((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE) || ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE)) {
+ /* Stop BCN */
+- val8 = rtw_read8(padapter, REG_BCN_CTRL);
++ res = rtw_read8(padapter, REG_BCN_CTRL, &val8);
++ if (res)
++ return H2C_DROPPED;
++
+ rtw_write8(padapter, REG_BCN_CTRL, val8 & (~(EN_BCN_FUNCTION | EN_TXBCN_RPT)));
+ }
+
+diff --git a/drivers/staging/r8188eu/core/rtw_pwrctrl.c b/drivers/staging/r8188eu/core/rtw_pwrctrl.c
+index 7b816b824947d..45e85b593665f 100644
+--- a/drivers/staging/r8188eu/core/rtw_pwrctrl.c
++++ b/drivers/staging/r8188eu/core/rtw_pwrctrl.c
+@@ -229,6 +229,9 @@ void rtw_set_ps_mode(struct adapter *padapter, u8 ps_mode, u8 smart_ps, u8 bcn_a
+
+ static bool lps_rf_on(struct adapter *adapter)
+ {
++ int res;
++ u32 reg;
++
+ /* When we halt NIC, we should check if FW LPS is leave. */
+ if (adapter->pwrctrlpriv.rf_pwrstate == rf_off) {
+ /* If it is in HW/SW Radio OFF or IPS state, we do not check Fw LPS Leave, */
+@@ -236,7 +239,11 @@ static bool lps_rf_on(struct adapter *adapter)
+ return true;
+ }
+
+- if (rtw_read32(adapter, REG_RCR) & 0x00070000)
++ res = rtw_read32(adapter, REG_RCR, &reg);
++ if (res)
++ return false;
++
++ if (reg & 0x00070000)
+ return false;
+
+ return true;
+diff --git a/drivers/staging/r8188eu/core/rtw_wlan_util.c b/drivers/staging/r8188eu/core/rtw_wlan_util.c
+index 392a65783f324..9bd059b86d0c4 100644
+--- a/drivers/staging/r8188eu/core/rtw_wlan_util.c
++++ b/drivers/staging/r8188eu/core/rtw_wlan_util.c
+@@ -279,8 +279,13 @@ void Restore_DM_Func_Flag(struct adapter *padapter)
+ void Set_MSR(struct adapter *padapter, u8 type)
+ {
+ u8 val8;
++ int res;
+
+- val8 = rtw_read8(padapter, MSR) & 0x0c;
++ res = rtw_read8(padapter, MSR, &val8);
++ if (res)
++ return;
++
++ val8 &= 0x0c;
+ val8 |= type;
+ rtw_write8(padapter, MSR, val8);
+ }
+@@ -505,7 +510,11 @@ int WMM_param_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE)
+
+ static void set_acm_ctrl(struct adapter *adapter, u8 acm_mask)
+ {
+- u8 acmctrl = rtw_read8(adapter, REG_ACMHWCTRL);
++ u8 acmctrl;
++ int res = rtw_read8(adapter, REG_ACMHWCTRL, &acmctrl);
++
++ if (res)
++ return;
+
+ if (acm_mask > 1)
+ acmctrl = acmctrl | 0x1;
+@@ -765,6 +774,7 @@ void HT_info_handler(struct adapter *padapter, struct ndis_802_11_var_ie *pIE)
+ static void set_min_ampdu_spacing(struct adapter *adapter, u8 spacing)
+ {
+ u8 sec_spacing;
++ int res;
+
+ if (spacing <= 7) {
+ switch (adapter->securitypriv.dot11PrivacyAlgrthm) {
+@@ -786,8 +796,12 @@ static void set_min_ampdu_spacing(struct adapter *adapter, u8 spacing)
+ if (spacing < sec_spacing)
+ spacing = sec_spacing;
+
++ res = rtw_read8(adapter, REG_AMPDU_MIN_SPACE, &sec_spacing);
++ if (res)
++ return;
++
+ rtw_write8(adapter, REG_AMPDU_MIN_SPACE,
+- (rtw_read8(adapter, REG_AMPDU_MIN_SPACE) & 0xf8) | spacing);
++ (sec_spacing & 0xf8) | spacing);
+ }
+ }
+
+diff --git a/drivers/staging/r8188eu/hal/Hal8188ERateAdaptive.c b/drivers/staging/r8188eu/hal/Hal8188ERateAdaptive.c
+index 57e8f55738467..3cefdf90d6e00 100644
+--- a/drivers/staging/r8188eu/hal/Hal8188ERateAdaptive.c
++++ b/drivers/staging/r8188eu/hal/Hal8188ERateAdaptive.c
+@@ -279,6 +279,7 @@ static int odm_ARFBRefresh_8188E(struct odm_dm_struct *dm_odm, struct odm_ra_inf
+ { /* Wilson 2011/10/26 */
+ u32 MaskFromReg;
+ s8 i;
++ int res;
+
+ switch (pRaInfo->RateID) {
+ case RATR_INX_WIRELESS_NGB:
+@@ -303,19 +304,31 @@ static int odm_ARFBRefresh_8188E(struct odm_dm_struct *dm_odm, struct odm_ra_inf
+ pRaInfo->RAUseRate = (pRaInfo->RateMask) & 0x0000000d;
+ break;
+ case 12:
+- MaskFromReg = rtw_read32(dm_odm->Adapter, REG_ARFR0);
++ res = rtw_read32(dm_odm->Adapter, REG_ARFR0, &MaskFromReg);
++ if (res)
++ return res;
++
+ pRaInfo->RAUseRate = (pRaInfo->RateMask) & MaskFromReg;
+ break;
+ case 13:
+- MaskFromReg = rtw_read32(dm_odm->Adapter, REG_ARFR1);
++ res = rtw_read32(dm_odm->Adapter, REG_ARFR1, &MaskFromReg);
++ if (res)
++ return res;
++
+ pRaInfo->RAUseRate = (pRaInfo->RateMask) & MaskFromReg;
+ break;
+ case 14:
+- MaskFromReg = rtw_read32(dm_odm->Adapter, REG_ARFR2);
++ res = rtw_read32(dm_odm->Adapter, REG_ARFR2, &MaskFromReg);
++ if (res)
++ return res;
++
+ pRaInfo->RAUseRate = (pRaInfo->RateMask) & MaskFromReg;
+ break;
+ case 15:
+- MaskFromReg = rtw_read32(dm_odm->Adapter, REG_ARFR3);
++ res = rtw_read32(dm_odm->Adapter, REG_ARFR3, &MaskFromReg);
++ if (res)
++ return res;
++
+ pRaInfo->RAUseRate = (pRaInfo->RateMask) & MaskFromReg;
+ break;
+ default:
+diff --git a/drivers/staging/r8188eu/hal/HalPhyRf_8188e.c b/drivers/staging/r8188eu/hal/HalPhyRf_8188e.c
+index b944c8071a3b9..525deab10820b 100644
+--- a/drivers/staging/r8188eu/hal/HalPhyRf_8188e.c
++++ b/drivers/staging/r8188eu/hal/HalPhyRf_8188e.c
+@@ -463,6 +463,7 @@ void _PHY_SaveADDARegisters(struct adapter *adapt, u32 *ADDAReg, u32 *ADDABackup
+ }
+ }
+
++/* FIXME: return an error to caller */
+ static void _PHY_SaveMACRegisters(
+ struct adapter *adapt,
+ u32 *MACReg,
+@@ -470,11 +471,20 @@ static void _PHY_SaveMACRegisters(
+ )
+ {
+ u32 i;
++ int res;
+
+- for (i = 0; i < (IQK_MAC_REG_NUM - 1); i++)
+- MACBackup[i] = rtw_read8(adapt, MACReg[i]);
++ for (i = 0; i < (IQK_MAC_REG_NUM - 1); i++) {
++ u8 reg;
++
++ res = rtw_read8(adapt, MACReg[i], &reg);
++ if (res)
++ return;
+
+- MACBackup[i] = rtw_read32(adapt, MACReg[i]);
++ MACBackup[i] = reg;
++ }
++
++ res = rtw_read32(adapt, MACReg[i], MACBackup + i);
++ (void)res;
+ }
+
+ static void reload_adda_reg(struct adapter *adapt, u32 *ADDAReg, u32 *ADDABackup, u32 RegiesterNum)
+@@ -739,9 +749,12 @@ static void phy_LCCalibrate_8188E(struct adapter *adapt)
+ {
+ u8 tmpreg;
+ u32 RF_Amode = 0, LC_Cal;
++ int res;
+
+ /* Check continuous TX and Packet TX */
+- tmpreg = rtw_read8(adapt, 0xd03);
++ res = rtw_read8(adapt, 0xd03, &tmpreg);
++ if (res)
++ return;
+
+ if ((tmpreg & 0x70) != 0) /* Deal with contisuous TX case */
+ rtw_write8(adapt, 0xd03, tmpreg & 0x8F); /* disable all continuous TX */
+diff --git a/drivers/staging/r8188eu/hal/HalPwrSeqCmd.c b/drivers/staging/r8188eu/hal/HalPwrSeqCmd.c
+index 150ea380c39e9..4a4563b900b39 100644
+--- a/drivers/staging/r8188eu/hal/HalPwrSeqCmd.c
++++ b/drivers/staging/r8188eu/hal/HalPwrSeqCmd.c
+@@ -12,6 +12,7 @@ u8 HalPwrSeqCmdParsing(struct adapter *padapter, struct wl_pwr_cfg pwrseqcmd[])
+ u32 offset = 0;
+ u32 poll_count = 0; /* polling autoload done. */
+ u32 max_poll_count = 5000;
++ int res;
+
+ do {
+ pwrcfgcmd = pwrseqcmd[aryidx];
+@@ -21,7 +22,9 @@ u8 HalPwrSeqCmdParsing(struct adapter *padapter, struct wl_pwr_cfg pwrseqcmd[])
+ offset = GET_PWR_CFG_OFFSET(pwrcfgcmd);
+
+ /* Read the value from system register */
+- value = rtw_read8(padapter, offset);
++ res = rtw_read8(padapter, offset, &value);
++ if (res)
++ return false;
+
+ value &= ~(GET_PWR_CFG_MASK(pwrcfgcmd));
+ value |= (GET_PWR_CFG_VALUE(pwrcfgcmd) & GET_PWR_CFG_MASK(pwrcfgcmd));
+@@ -33,7 +36,9 @@ u8 HalPwrSeqCmdParsing(struct adapter *padapter, struct wl_pwr_cfg pwrseqcmd[])
+ poll_bit = false;
+ offset = GET_PWR_CFG_OFFSET(pwrcfgcmd);
+ do {
+- value = rtw_read8(padapter, offset);
++ res = rtw_read8(padapter, offset, &value);
++ if (res)
++ return false;
+
+ value &= GET_PWR_CFG_MASK(pwrcfgcmd);
+ if (value == (GET_PWR_CFG_VALUE(pwrcfgcmd) & GET_PWR_CFG_MASK(pwrcfgcmd)))
+diff --git a/drivers/staging/r8188eu/hal/hal_com.c b/drivers/staging/r8188eu/hal/hal_com.c
+index 910cc07f656ca..e9a32dd84a8ef 100644
+--- a/drivers/staging/r8188eu/hal/hal_com.c
++++ b/drivers/staging/r8188eu/hal/hal_com.c
+@@ -303,7 +303,9 @@ s32 c2h_evt_read(struct adapter *adapter, u8 *buf)
+ if (!buf)
+ goto exit;
+
+- trigger = rtw_read8(adapter, REG_C2HEVT_CLEAR);
++ ret = rtw_read8(adapter, REG_C2HEVT_CLEAR, &trigger);
++ if (ret)
++ return _FAIL;
+
+ if (trigger == C2H_EVT_HOST_CLOSE)
+ goto exit; /* Not ready */
+@@ -314,13 +316,26 @@ s32 c2h_evt_read(struct adapter *adapter, u8 *buf)
+
+ memset(c2h_evt, 0, 16);
+
+- *buf = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL);
+- *(buf + 1) = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL + 1);
++ ret = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL, buf);
++ if (ret) {
++ ret = _FAIL;
++ goto clear_evt;
++ }
+
++ ret = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL + 1, buf + 1);
++ if (ret) {
++ ret = _FAIL;
++ goto clear_evt;
++ }
+ /* Read the content */
+- for (i = 0; i < c2h_evt->plen; i++)
+- c2h_evt->payload[i] = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL +
+- sizeof(*c2h_evt) + i);
++ for (i = 0; i < c2h_evt->plen; i++) {
++ ret = rtw_read8(adapter, REG_C2HEVT_MSG_NORMAL +
++ sizeof(*c2h_evt) + i, c2h_evt->payload + i);
++ if (ret) {
++ ret = _FAIL;
++ goto clear_evt;
++ }
++ }
+
+ ret = _SUCCESS;
+
+diff --git a/drivers/staging/r8188eu/hal/rtl8188e_cmd.c b/drivers/staging/r8188eu/hal/rtl8188e_cmd.c
+index 475650dc73011..b01ee1695fee2 100644
+--- a/drivers/staging/r8188eu/hal/rtl8188e_cmd.c
++++ b/drivers/staging/r8188eu/hal/rtl8188e_cmd.c
+@@ -18,13 +18,18 @@
+
+ static u8 _is_fw_read_cmd_down(struct adapter *adapt, u8 msgbox_num)
+ {
+- u8 read_down = false;
++ u8 read_down = false, reg;
+ int retry_cnts = 100;
++ int res;
+
+ u8 valid;
+
+ do {
+- valid = rtw_read8(adapt, REG_HMETFR) & BIT(msgbox_num);
++ res = rtw_read8(adapt, REG_HMETFR, &reg);
++ if (res)
++ continue;
++
++ valid = reg & BIT(msgbox_num);
+ if (0 == valid)
+ read_down = true;
+ } while ((!read_down) && (retry_cnts--));
+@@ -533,6 +538,8 @@ void rtl8188e_set_FwJoinBssReport_cmd(struct adapter *adapt, u8 mstatus)
+ bool bcn_valid = false;
+ u8 DLBcnCount = 0;
+ u32 poll = 0;
++ u8 reg;
++ int res;
+
+ if (mstatus == 1) {
+ /* We should set AID, correct TSF, HW seq enable before set JoinBssReport to Fw in 88/92C. */
+@@ -547,8 +554,17 @@ void rtl8188e_set_FwJoinBssReport_cmd(struct adapter *adapt, u8 mstatus)
+ /* Disable Hw protection for a time which revserd for Hw sending beacon. */
+ /* Fix download reserved page packet fail that access collision with the protection time. */
+ /* 2010.05.11. Added by tynli. */
+- rtw_write8(adapt, REG_BCN_CTRL, rtw_read8(adapt, REG_BCN_CTRL) & (~BIT(3)));
+- rtw_write8(adapt, REG_BCN_CTRL, rtw_read8(adapt, REG_BCN_CTRL) | BIT(4));
++ res = rtw_read8(adapt, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapt, REG_BCN_CTRL, reg & (~BIT(3)));
++
++ res = rtw_read8(adapt, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapt, REG_BCN_CTRL, reg | BIT(4));
+
+ if (haldata->RegFwHwTxQCtrl & BIT(6))
+ bSendBeacon = true;
+@@ -581,8 +597,17 @@ void rtl8188e_set_FwJoinBssReport_cmd(struct adapter *adapt, u8 mstatus)
+ /* */
+
+ /* Enable Bcn */
+- rtw_write8(adapt, REG_BCN_CTRL, rtw_read8(adapt, REG_BCN_CTRL) | BIT(3));
+- rtw_write8(adapt, REG_BCN_CTRL, rtw_read8(adapt, REG_BCN_CTRL) & (~BIT(4)));
++ res = rtw_read8(adapt, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapt, REG_BCN_CTRL, reg | BIT(3));
++
++ res = rtw_read8(adapt, REG_BCN_CTRL, &reg);
++ if (res)
++ return;
++
++ rtw_write8(adapt, REG_BCN_CTRL, reg & (~BIT(4)));
+
+ /* To make sure that if there exists an adapter which would like to send beacon. */
+ /* If exists, the origianl value of 0x422[6] will be 1, we should check this to */
+diff --git a/drivers/staging/r8188eu/hal/rtl8188e_dm.c b/drivers/staging/r8188eu/hal/rtl8188e_dm.c
+index 6d28e3dc0d261..0399872c45460 100644
+--- a/drivers/staging/r8188eu/hal/rtl8188e_dm.c
++++ b/drivers/staging/r8188eu/hal/rtl8188e_dm.c
+@@ -12,8 +12,12 @@
+ static void dm_InitGPIOSetting(struct adapter *Adapter)
+ {
+ u8 tmp1byte;
++ int res;
++
++ res = rtw_read8(Adapter, REG_GPIO_MUXCFG, &tmp1byte);
++ if (res)
++ return;
+
+- tmp1byte = rtw_read8(Adapter, REG_GPIO_MUXCFG);
+ tmp1byte &= (GPIOSEL_GPIO | ~GPIOSEL_ENBT);
+
+ rtw_write8(Adapter, REG_GPIO_MUXCFG, tmp1byte);
+diff --git a/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c b/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
+index e17375a74f179..5549e7be334ab 100644
+--- a/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
++++ b/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
+@@ -13,10 +13,14 @@
+ static void iol_mode_enable(struct adapter *padapter, u8 enable)
+ {
+ u8 reg_0xf0 = 0;
++ int res;
+
+ if (enable) {
+ /* Enable initial offload */
+- reg_0xf0 = rtw_read8(padapter, REG_SYS_CFG);
++ res = rtw_read8(padapter, REG_SYS_CFG, &reg_0xf0);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_SYS_CFG, reg_0xf0 | SW_OFFLOAD_EN);
+
+ if (!padapter->bFWReady)
+@@ -24,7 +28,10 @@ static void iol_mode_enable(struct adapter *padapter, u8 enable)
+
+ } else {
+ /* disable initial offload */
+- reg_0xf0 = rtw_read8(padapter, REG_SYS_CFG);
++ res = rtw_read8(padapter, REG_SYS_CFG, &reg_0xf0);
++ if (res)
++ return;
++
+ rtw_write8(padapter, REG_SYS_CFG, reg_0xf0 & ~SW_OFFLOAD_EN);
+ }
+ }
+@@ -34,17 +41,31 @@ static s32 iol_execute(struct adapter *padapter, u8 control)
+ s32 status = _FAIL;
+ u8 reg_0x88 = 0;
+ unsigned long timeout;
++ int res;
+
+ control = control & 0x0f;
+- reg_0x88 = rtw_read8(padapter, REG_HMEBOX_E0);
++ res = rtw_read8(padapter, REG_HMEBOX_E0, &reg_0x88);
++ if (res)
++ return _FAIL;
++
+ rtw_write8(padapter, REG_HMEBOX_E0, reg_0x88 | control);
+
+ timeout = jiffies + msecs_to_jiffies(1000);
+- while ((reg_0x88 = rtw_read8(padapter, REG_HMEBOX_E0)) & control &&
+- time_before(jiffies, timeout))
+- ;
+
+- reg_0x88 = rtw_read8(padapter, REG_HMEBOX_E0);
++ do {
++ res = rtw_read8(padapter, REG_HMEBOX_E0, &reg_0x88);
++ if (res)
++ continue;
++
++ if (!(reg_0x88 & control))
++ break;
++
++ } while (time_before(jiffies, timeout));
++
++ res = rtw_read8(padapter, REG_HMEBOX_E0, &reg_0x88);
++ if (res)
++ return _FAIL;
++
+ status = (reg_0x88 & control) ? _FAIL : _SUCCESS;
+ if (reg_0x88 & control << 4)
+ status = _FAIL;
+@@ -179,7 +200,8 @@ exit:
+ kfree(eFuseWord);
+ }
+
+-static void efuse_read_phymap_from_txpktbuf(
++/* FIXME: add error handling in callers */
++static int efuse_read_phymap_from_txpktbuf(
+ struct adapter *adapter,
+ int bcnhead, /* beacon head, where FW store len(2-byte) and efuse physical map. */
+ u8 *content, /* buffer to store efuse physical map */
+@@ -190,13 +212,19 @@ static void efuse_read_phymap_from_txpktbuf(
+ u16 dbg_addr = 0;
+ __le32 lo32 = 0, hi32 = 0;
+ u16 len = 0, count = 0;
+- int i = 0;
++ int i = 0, res;
+ u16 limit = *size;
+-
++ u8 reg;
+ u8 *pos = content;
++ u32 reg32;
+
+- if (bcnhead < 0) /* if not valid */
+- bcnhead = rtw_read8(adapter, REG_TDECTRL + 1);
++ if (bcnhead < 0) { /* if not valid */
++ res = rtw_read8(adapter, REG_TDECTRL + 1, &reg);
++ if (res)
++ return res;
++
++ bcnhead = reg;
++ }
+
+ rtw_write8(adapter, REG_PKT_BUFF_ACCESS_CTRL, TXPKT_BUF_SELECT);
+
+@@ -207,19 +235,40 @@ static void efuse_read_phymap_from_txpktbuf(
+
+ rtw_write8(adapter, REG_TXPKTBUF_DBG, 0);
+ timeout = jiffies + msecs_to_jiffies(1000);
+- while (!rtw_read8(adapter, REG_TXPKTBUF_DBG) && time_before(jiffies, timeout))
++ do {
++ res = rtw_read8(adapter, REG_TXPKTBUF_DBG, &reg);
++ if (res)
++ continue;
++
++ if (reg)
++ break;
++
+ rtw_usleep_os(100);
++ } while (time_before(jiffies, timeout));
+
+ /* data from EEPROM needs to be in LE */
+- lo32 = cpu_to_le32(rtw_read32(adapter, REG_PKTBUF_DBG_DATA_L));
+- hi32 = cpu_to_le32(rtw_read32(adapter, REG_PKTBUF_DBG_DATA_H));
++ res = rtw_read32(adapter, REG_PKTBUF_DBG_DATA_L, &reg32);
++ if (res)
++ return res;
++
++ lo32 = cpu_to_le32(reg32);
++
++ res = rtw_read32(adapter, REG_PKTBUF_DBG_DATA_H, &reg32);
++ if (res)
++ return res;
++
++ hi32 = cpu_to_le32(reg32);
+
+ if (i == 0) {
++ u16 reg;
++
+ /* Although lenc is only used in a debug statement,
+ * do not remove it as the rtw_read16() call consumes
+ * 2 bytes from the EEPROM source.
+ */
+- rtw_read16(adapter, REG_PKTBUF_DBG_DATA_L);
++ res = rtw_read16(adapter, REG_PKTBUF_DBG_DATA_L, &reg);
++ if (res)
++ return res;
+
+ len = le32_to_cpu(lo32) & 0x0000ffff;
+
+@@ -246,6 +295,8 @@ static void efuse_read_phymap_from_txpktbuf(
+ }
+ rtw_write8(adapter, REG_PKT_BUFF_ACCESS_CTRL, DISABLE_TRXPKT_BUF_ACCESS);
+ *size = count;
++
++ return 0;
+ }
+
+ static s32 iol_read_efuse(struct adapter *padapter, u8 txpktbuf_bndy, u16 offset, u16 size_byte, u8 *logical_map)
+@@ -321,25 +372,35 @@ exit:
+ void rtl8188e_EfusePowerSwitch(struct adapter *pAdapter, u8 PwrState)
+ {
+ u16 tmpV16;
++ int res;
+
+ if (PwrState) {
+ rtw_write8(pAdapter, REG_EFUSE_ACCESS, EFUSE_ACCESS_ON);
+
+ /* 1.2V Power: From VDDON with Power Cut(0x0000h[15]), defualt valid */
+- tmpV16 = rtw_read16(pAdapter, REG_SYS_ISO_CTRL);
++ res = rtw_read16(pAdapter, REG_SYS_ISO_CTRL, &tmpV16);
++ if (res)
++ return;
++
+ if (!(tmpV16 & PWC_EV12V)) {
+ tmpV16 |= PWC_EV12V;
+ rtw_write16(pAdapter, REG_SYS_ISO_CTRL, tmpV16);
+ }
+ /* Reset: 0x0000h[28], default valid */
+- tmpV16 = rtw_read16(pAdapter, REG_SYS_FUNC_EN);
++ res = rtw_read16(pAdapter, REG_SYS_FUNC_EN, &tmpV16);
++ if (res)
++ return;
++
+ if (!(tmpV16 & FEN_ELDR)) {
+ tmpV16 |= FEN_ELDR;
+ rtw_write16(pAdapter, REG_SYS_FUNC_EN, tmpV16);
+ }
+
+ /* Clock: Gated(0x0008h[5]) 8M(0x0008h[1]) clock from ANA, default valid */
+- tmpV16 = rtw_read16(pAdapter, REG_SYS_CLKR);
++ res = rtw_read16(pAdapter, REG_SYS_CLKR, &tmpV16);
++ if (res)
++ return;
++
+ if ((!(tmpV16 & LOADER_CLK_EN)) || (!(tmpV16 & ANA8M))) {
+ tmpV16 |= (LOADER_CLK_EN | ANA8M);
+ rtw_write16(pAdapter, REG_SYS_CLKR, tmpV16);
+@@ -497,8 +558,12 @@ void rtl8188e_read_chip_version(struct adapter *padapter)
+ u32 value32;
+ struct HAL_VERSION ChipVersion;
+ struct hal_data_8188e *pHalData = &padapter->haldata;
++ int res;
++
++ res = rtw_read32(padapter, REG_SYS_CFG, &value32);
++ if (res)
++ return;
+
+- value32 = rtw_read32(padapter, REG_SYS_CFG);
+ ChipVersion.ChipType = ((value32 & RTL_ID) ? TEST_CHIP : NORMAL_CHIP);
+
+ ChipVersion.VendorType = ((value32 & VENDOR_ID) ? CHIP_VENDOR_UMC : CHIP_VENDOR_TSMC);
+@@ -525,10 +590,17 @@ void rtl8188e_SetHalODMVar(struct adapter *Adapter, void *pValue1, bool bSet)
+
+ void hal_notch_filter_8188e(struct adapter *adapter, bool enable)
+ {
++ int res;
++ u8 reg;
++
++ res = rtw_read8(adapter, rOFDM0_RxDSP + 1, &reg);
++ if (res)
++ return;
++
+ if (enable)
+- rtw_write8(adapter, rOFDM0_RxDSP + 1, rtw_read8(adapter, rOFDM0_RxDSP + 1) | BIT(1));
++ rtw_write8(adapter, rOFDM0_RxDSP + 1, reg | BIT(1));
+ else
+- rtw_write8(adapter, rOFDM0_RxDSP + 1, rtw_read8(adapter, rOFDM0_RxDSP + 1) & ~BIT(1));
++ rtw_write8(adapter, rOFDM0_RxDSP + 1, reg & ~BIT(1));
+ }
+
+ /* */
+@@ -538,26 +610,24 @@ void hal_notch_filter_8188e(struct adapter *adapter, bool enable)
+ /* */
+ static s32 _LLTWrite(struct adapter *padapter, u32 address, u32 data)
+ {
+- s32 status = _SUCCESS;
+- s32 count = 0;
++ s32 count;
+ u32 value = _LLT_INIT_ADDR(address) | _LLT_INIT_DATA(data) | _LLT_OP(_LLT_WRITE_ACCESS);
+ u16 LLTReg = REG_LLT_INIT;
++ int res;
+
+ rtw_write32(padapter, LLTReg, value);
+
+ /* polling */
+- do {
+- value = rtw_read32(padapter, LLTReg);
+- if (_LLT_NO_ACTIVE == _LLT_OP_VALUE(value))
+- break;
++ for (count = 0; count <= POLLING_LLT_THRESHOLD; count++) {
++ res = rtw_read32(padapter, LLTReg, &value);
++ if (res)
++ continue;
+
+- if (count > POLLING_LLT_THRESHOLD) {
+- status = _FAIL;
++ if (_LLT_NO_ACTIVE == _LLT_OP_VALUE(value))
+ break;
+- }
+- } while (count++);
++ }
+
+- return status;
++ return count > POLLING_LLT_THRESHOLD ? _FAIL : _SUCCESS;
+ }
+
+ s32 InitLLTTable(struct adapter *padapter, u8 txpktbuf_bndy)
+diff --git a/drivers/staging/r8188eu/hal/rtl8188e_phycfg.c b/drivers/staging/r8188eu/hal/rtl8188e_phycfg.c
+index 4864dafd887b9..dea6d915a1f40 100644
+--- a/drivers/staging/r8188eu/hal/rtl8188e_phycfg.c
++++ b/drivers/staging/r8188eu/hal/rtl8188e_phycfg.c
+@@ -56,8 +56,12 @@ rtl8188e_PHY_QueryBBReg(
+ )
+ {
+ u32 ReturnValue = 0, OriginalValue, BitShift;
++ int res;
++
++ res = rtw_read32(Adapter, RegAddr, &OriginalValue);
++ if (res)
++ return 0;
+
+- OriginalValue = rtw_read32(Adapter, RegAddr);
+ BitShift = phy_CalculateBitShift(BitMask);
+ ReturnValue = (OriginalValue & BitMask) >> BitShift;
+ return ReturnValue;
+@@ -84,9 +88,13 @@ rtl8188e_PHY_QueryBBReg(
+ void rtl8188e_PHY_SetBBReg(struct adapter *Adapter, u32 RegAddr, u32 BitMask, u32 Data)
+ {
+ u32 OriginalValue, BitShift;
++ int res;
+
+ if (BitMask != bMaskDWord) { /* if not "double word" write */
+- OriginalValue = rtw_read32(Adapter, RegAddr);
++ res = rtw_read32(Adapter, RegAddr, &OriginalValue);
++ if (res)
++ return;
++
+ BitShift = phy_CalculateBitShift(BitMask);
+ Data = ((OriginalValue & (~BitMask)) | (Data << BitShift));
+ }
+@@ -484,13 +492,17 @@ PHY_BBConfig8188E(
+ {
+ int rtStatus = _SUCCESS;
+ struct hal_data_8188e *pHalData = &Adapter->haldata;
+- u32 RegVal;
++ u16 RegVal;
+ u8 CrystalCap;
++ int res;
+
+ phy_InitBBRFRegisterDefinition(Adapter);
+
+ /* Enable BB and RF */
+- RegVal = rtw_read16(Adapter, REG_SYS_FUNC_EN);
++ res = rtw_read16(Adapter, REG_SYS_FUNC_EN, &RegVal);
++ if (res)
++ return _FAIL;
++
+ rtw_write16(Adapter, REG_SYS_FUNC_EN, (u16)(RegVal | BIT(13) | BIT(0) | BIT(1)));
+
+ /* 20090923 Joseph: Advised by Steven and Jenyu. Power sequence before init RF. */
+@@ -594,6 +606,7 @@ _PHY_SetBWMode92C(
+ struct hal_data_8188e *pHalData = &Adapter->haldata;
+ u8 regBwOpMode;
+ u8 regRRSR_RSC;
++ int res;
+
+ if (Adapter->bDriverStopped)
+ return;
+@@ -602,8 +615,13 @@ _PHY_SetBWMode92C(
+ /* 3<1>Set MAC register */
+ /* 3 */
+
+- regBwOpMode = rtw_read8(Adapter, REG_BWOPMODE);
+- regRRSR_RSC = rtw_read8(Adapter, REG_RRSR + 2);
++ res = rtw_read8(Adapter, REG_BWOPMODE, &regBwOpMode);
++ if (res)
++ return;
++
++ res = rtw_read8(Adapter, REG_RRSR + 2, &regRRSR_RSC);
++ if (res)
++ return;
+
+ switch (pHalData->CurrentChannelBW) {
+ case HT_CHANNEL_WIDTH_20:
+diff --git a/drivers/staging/r8188eu/hal/usb_halinit.c b/drivers/staging/r8188eu/hal/usb_halinit.c
+index a217272a07f85..0afde5038b3f7 100644
+--- a/drivers/staging/r8188eu/hal/usb_halinit.c
++++ b/drivers/staging/r8188eu/hal/usb_halinit.c
+@@ -52,6 +52,8 @@ void rtl8188eu_interface_configure(struct adapter *adapt)
+ u32 rtl8188eu_InitPowerOn(struct adapter *adapt)
+ {
+ u16 value16;
++ int res;
++
+ /* HW Power on sequence */
+ struct hal_data_8188e *haldata = &adapt->haldata;
+ if (haldata->bMacPwrCtrlOn)
+@@ -65,7 +67,10 @@ u32 rtl8188eu_InitPowerOn(struct adapter *adapt)
+ rtw_write16(adapt, REG_CR, 0x00); /* suggseted by zhouzhou, by page, 20111230 */
+
+ /* Enable MAC DMA/WMAC/SCHEDULE/SEC block */
+- value16 = rtw_read16(adapt, REG_CR);
++ res = rtw_read16(adapt, REG_CR, &value16);
++ if (res)
++ return _FAIL;
++
+ value16 |= (HCI_TXDMA_EN | HCI_RXDMA_EN | TXDMA_EN | RXDMA_EN
+ | PROTOCOL_EN | SCHEDULE_EN | ENSEC | CALTMR_EN);
+ /* for SDIO - Set CR bit10 to enable 32k calibration. Suggested by SD1 Gimmy. Added by tynli. 2011.08.31. */
+@@ -81,6 +86,7 @@ static void _InitInterrupt(struct adapter *Adapter)
+ {
+ u32 imr, imr_ex;
+ u8 usb_opt;
++ int res;
+
+ /* HISR write one to clear */
+ rtw_write32(Adapter, REG_HISR_88E, 0xFFFFFFFF);
+@@ -94,7 +100,9 @@ static void _InitInterrupt(struct adapter *Adapter)
+ /* REG_USB_SPECIAL_OPTION - BIT(4) */
+ /* 0; Use interrupt endpoint to upload interrupt pkt */
+ /* 1; Use bulk endpoint to upload interrupt pkt, */
+- usb_opt = rtw_read8(Adapter, REG_USB_SPECIAL_OPTION);
++ res = rtw_read8(Adapter, REG_USB_SPECIAL_OPTION, &usb_opt);
++ if (res)
++ return;
+
+ if (adapter_to_dvobj(Adapter)->pusbdev->speed == USB_SPEED_HIGH)
+ usb_opt = usb_opt | (INT_BULK_SEL);
+@@ -163,7 +171,14 @@ static void _InitNormalChipRegPriority(struct adapter *Adapter, u16 beQ,
+ u16 bkQ, u16 viQ, u16 voQ, u16 mgtQ,
+ u16 hiQ)
+ {
+- u16 value16 = (rtw_read16(Adapter, REG_TRXDMA_CTRL) & 0x7);
++ u16 value16;
++ int res;
++
++ res = rtw_read16(Adapter, REG_TRXDMA_CTRL, &value16);
++ if (res)
++ return;
++
++ value16 &= 0x7;
+
+ value16 |= _TXDMA_BEQ_MAP(beQ) | _TXDMA_BKQ_MAP(bkQ) |
+ _TXDMA_VIQ_MAP(viQ) | _TXDMA_VOQ_MAP(voQ) |
+@@ -282,8 +297,12 @@ static void _InitQueuePriority(struct adapter *Adapter)
+ static void _InitNetworkType(struct adapter *Adapter)
+ {
+ u32 value32;
++ int res;
++
++ res = rtw_read32(Adapter, REG_CR, &value32);
++ if (res)
++ return;
+
+- value32 = rtw_read32(Adapter, REG_CR);
+ /* TODO: use the other function to set network type */
+ value32 = (value32 & ~MASK_NETTYPE) | _NETTYPE(NT_LINK_AP);
+
+@@ -323,9 +342,13 @@ static void _InitAdaptiveCtrl(struct adapter *Adapter)
+ {
+ u16 value16;
+ u32 value32;
++ int res;
+
+ /* Response Rate Set */
+- value32 = rtw_read32(Adapter, REG_RRSR);
++ res = rtw_read32(Adapter, REG_RRSR, &value32);
++ if (res)
++ return;
++
+ value32 &= ~RATE_BITMAP_ALL;
+ value32 |= RATE_RRSR_CCK_ONLY_1M;
+ rtw_write32(Adapter, REG_RRSR, value32);
+@@ -363,8 +386,12 @@ static void _InitEDCA(struct adapter *Adapter)
+ static void _InitRetryFunction(struct adapter *Adapter)
+ {
+ u8 value8;
++ int res;
++
++ res = rtw_read8(Adapter, REG_FWHW_TXQ_CTRL, &value8);
++ if (res)
++ return;
+
+- value8 = rtw_read8(Adapter, REG_FWHW_TXQ_CTRL);
+ value8 |= EN_AMPDU_RTY_NEW;
+ rtw_write8(Adapter, REG_FWHW_TXQ_CTRL, value8);
+
+@@ -390,11 +417,15 @@ static void _InitRetryFunction(struct adapter *Adapter)
+ static void usb_AggSettingTxUpdate(struct adapter *Adapter)
+ {
+ u32 value32;
++ int res;
+
+ if (Adapter->registrypriv.wifi_spec)
+ return;
+
+- value32 = rtw_read32(Adapter, REG_TDECTRL);
++ res = rtw_read32(Adapter, REG_TDECTRL, &value32);
++ if (res)
++ return;
++
+ value32 = value32 & ~(BLK_DESC_NUM_MASK << BLK_DESC_NUM_SHIFT);
+ value32 |= ((USB_TXAGG_DESC_NUM & BLK_DESC_NUM_MASK) << BLK_DESC_NUM_SHIFT);
+
+@@ -423,9 +454,15 @@ usb_AggSettingRxUpdate(
+ {
+ u8 valueDMA;
+ u8 valueUSB;
++ int res;
++
++ res = rtw_read8(Adapter, REG_TRXDMA_CTRL, &valueDMA);
++ if (res)
++ return;
+
+- valueDMA = rtw_read8(Adapter, REG_TRXDMA_CTRL);
+- valueUSB = rtw_read8(Adapter, REG_USB_SPECIAL_OPTION);
++ res = rtw_read8(Adapter, REG_USB_SPECIAL_OPTION, &valueUSB);
++ if (res)
++ return;
+
+ valueDMA |= RXDMA_AGG_EN;
+ valueUSB &= ~USB_AGG_EN;
+@@ -446,9 +483,11 @@ static void InitUsbAggregationSetting(struct adapter *Adapter)
+ usb_AggSettingRxUpdate(Adapter);
+ }
+
+-static void _InitBeaconParameters(struct adapter *Adapter)
++/* FIXME: add error handling in callers */
++static int _InitBeaconParameters(struct adapter *Adapter)
+ {
+ struct hal_data_8188e *haldata = &Adapter->haldata;
++ int res;
+
+ rtw_write16(Adapter, REG_BCN_CTRL, 0x1010);
+
+@@ -461,9 +500,19 @@ static void _InitBeaconParameters(struct adapter *Adapter)
+ /* beacause test chip does not contension before sending beacon. by tynli. 2009.11.03 */
+ rtw_write16(Adapter, REG_BCNTCFG, 0x660F);
+
+- haldata->RegFwHwTxQCtrl = rtw_read8(Adapter, REG_FWHW_TXQ_CTRL + 2);
+- haldata->RegReg542 = rtw_read8(Adapter, REG_TBTT_PROHIBIT + 2);
+- haldata->RegCR_1 = rtw_read8(Adapter, REG_CR + 1);
++ res = rtw_read8(Adapter, REG_FWHW_TXQ_CTRL + 2, &haldata->RegFwHwTxQCtrl);
++ if (res)
++ return res;
++
++ res = rtw_read8(Adapter, REG_TBTT_PROHIBIT + 2, &haldata->RegReg542);
++ if (res)
++ return res;
++
++ res = rtw_read8(Adapter, REG_CR + 1, &haldata->RegCR_1);
++ if (res)
++ return res;
++
++ return 0;
+ }
+
+ static void _BeaconFunctionEnable(struct adapter *Adapter,
+@@ -484,11 +533,17 @@ static void _BBTurnOnBlock(struct adapter *Adapter)
+ static void _InitAntenna_Selection(struct adapter *Adapter)
+ {
+ struct hal_data_8188e *haldata = &Adapter->haldata;
++ int res;
++ u32 reg;
+
+ if (haldata->AntDivCfg == 0)
+ return;
+
+- rtw_write32(Adapter, REG_LEDCFG0, rtw_read32(Adapter, REG_LEDCFG0) | BIT(23));
++ res = rtw_read32(Adapter, REG_LEDCFG0, &reg);
++ if (res)
++ return;
++
++ rtw_write32(Adapter, REG_LEDCFG0, reg | BIT(23));
+ rtl8188e_PHY_SetBBReg(Adapter, rFPGA0_XAB_RFParameter, BIT(13), 0x01);
+
+ if (rtl8188e_PHY_QueryBBReg(Adapter, rFPGA0_XA_RFInterfaceOE, 0x300) == Antenna_A)
+@@ -514,9 +569,11 @@ u32 rtl8188eu_hal_init(struct adapter *Adapter)
+ u16 value16;
+ u8 txpktbuf_bndy;
+ u32 status = _SUCCESS;
++ int res;
+ struct hal_data_8188e *haldata = &Adapter->haldata;
+ struct pwrctrl_priv *pwrctrlpriv = &Adapter->pwrctrlpriv;
+ struct registry_priv *pregistrypriv = &Adapter->registrypriv;
++ u32 reg;
+
+ if (Adapter->pwrctrlpriv.bkeepfwalive) {
+ if (haldata->odmpriv.RFCalibrateInfo.bIQKInitialized) {
+@@ -614,13 +671,19 @@ u32 rtl8188eu_hal_init(struct adapter *Adapter)
+ /* Hw bug which Hw initials RxFF boundary size to a value which is larger than the real Rx buffer size in 88E. */
+ /* */
+ /* Enable MACTXEN/MACRXEN block */
+- value16 = rtw_read16(Adapter, REG_CR);
++ res = rtw_read16(Adapter, REG_CR, &value16);
++ if (res)
++ return _FAIL;
++
+ value16 |= (MACTXEN | MACRXEN);
+ rtw_write8(Adapter, REG_CR, value16);
+
+ /* Enable TX Report */
+ /* Enable Tx Report Timer */
+- value8 = rtw_read8(Adapter, REG_TX_RPT_CTRL);
++ res = rtw_read8(Adapter, REG_TX_RPT_CTRL, &value8);
++ if (res)
++ return _FAIL;
++
+ rtw_write8(Adapter, REG_TX_RPT_CTRL, (value8 | BIT(1) | BIT(0)));
+ /* Set MAX RPT MACID */
+ rtw_write8(Adapter, REG_TX_RPT_CTRL + 1, 2);/* FOR sta mode ,0: bc/mc ,1:AP */
+@@ -684,7 +747,11 @@ u32 rtl8188eu_hal_init(struct adapter *Adapter)
+ rtw_write16(Adapter, REG_TX_RPT_TIME, 0x3DF0);
+
+ /* enable tx DMA to drop the redundate data of packet */
+- rtw_write16(Adapter, REG_TXDMA_OFFSET_CHK, (rtw_read16(Adapter, REG_TXDMA_OFFSET_CHK) | DROP_DATA_EN));
++ res = rtw_read16(Adapter, REG_TXDMA_OFFSET_CHK, &value16);
++ if (res)
++ return _FAIL;
++
++ rtw_write16(Adapter, REG_TXDMA_OFFSET_CHK, (value16 | DROP_DATA_EN));
+
+ /* 2010/08/26 MH Merge from 8192CE. */
+ if (pwrctrlpriv->rf_pwrstate == rf_on) {
+@@ -704,7 +771,11 @@ u32 rtl8188eu_hal_init(struct adapter *Adapter)
+ rtw_write8(Adapter, REG_USB_HRPWM, 0);
+
+ /* ack for xmit mgmt frames. */
+- rtw_write32(Adapter, REG_FWHW_TXQ_CTRL, rtw_read32(Adapter, REG_FWHW_TXQ_CTRL) | BIT(12));
++ res = rtw_read32(Adapter, REG_FWHW_TXQ_CTRL, ®);
++ if (res)
++ return _FAIL;
++
++ rtw_write32(Adapter, REG_FWHW_TXQ_CTRL, reg | BIT(12));
+
+ exit:
+ return status;
+@@ -714,9 +785,13 @@ static void CardDisableRTL8188EU(struct adapter *Adapter)
+ {
+ u8 val8;
+ struct hal_data_8188e *haldata = &Adapter->haldata;
++ int res;
+
+ /* Stop Tx Report Timer. 0x4EC[Bit1]=b'0 */
+- val8 = rtw_read8(Adapter, REG_TX_RPT_CTRL);
++ res = rtw_read8(Adapter, REG_TX_RPT_CTRL, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_TX_RPT_CTRL, val8 & (~BIT(1)));
+
+ /* stop rx */
+@@ -727,10 +802,16 @@ static void CardDisableRTL8188EU(struct adapter *Adapter)
+
+ /* 2. 0x1F[7:0] = 0 turn off RF */
+
+- val8 = rtw_read8(Adapter, REG_MCUFWDL);
++ res = rtw_read8(Adapter, REG_MCUFWDL, &val8);
++ if (res)
++ return;
++
+ if ((val8 & RAM_DL_SEL) && Adapter->bFWReady) { /* 8051 RAM code */
+ /* Reset MCU 0x2[10]=0. */
+- val8 = rtw_read8(Adapter, REG_SYS_FUNC_EN + 1);
++ res = rtw_read8(Adapter, REG_SYS_FUNC_EN + 1, &val8);
++ if (res)
++ return;
++
+ val8 &= ~BIT(2); /* 0x2[10], FEN_CPUEN */
+ rtw_write8(Adapter, REG_SYS_FUNC_EN + 1, val8);
+ }
+@@ -740,26 +821,45 @@ static void CardDisableRTL8188EU(struct adapter *Adapter)
+
+ /* YJ,add,111212 */
+ /* Disable 32k */
+- val8 = rtw_read8(Adapter, REG_32K_CTRL);
++ res = rtw_read8(Adapter, REG_32K_CTRL, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_32K_CTRL, val8 & (~BIT(0)));
+
+ /* Card disable power action flow */
+ HalPwrSeqCmdParsing(Adapter, Rtl8188E_NIC_DISABLE_FLOW);
+
+ /* Reset MCU IO Wrapper */
+- val8 = rtw_read8(Adapter, REG_RSV_CTRL + 1);
++ res = rtw_read8(Adapter, REG_RSV_CTRL + 1, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_RSV_CTRL + 1, (val8 & (~BIT(3))));
+- val8 = rtw_read8(Adapter, REG_RSV_CTRL + 1);
++
++ res = rtw_read8(Adapter, REG_RSV_CTRL + 1, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_RSV_CTRL + 1, val8 | BIT(3));
+
+ /* YJ,test add, 111207. For Power Consumption. */
+- val8 = rtw_read8(Adapter, GPIO_IN);
++ res = rtw_read8(Adapter, GPIO_IN, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, GPIO_OUT, val8);
+ rtw_write8(Adapter, GPIO_IO_SEL, 0xFF);/* Reg0x46 */
+
+- val8 = rtw_read8(Adapter, REG_GPIO_IO_SEL);
++ res = rtw_read8(Adapter, REG_GPIO_IO_SEL, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_GPIO_IO_SEL, (val8 << 4));
+- val8 = rtw_read8(Adapter, REG_GPIO_IO_SEL + 1);
++ res = rtw_read8(Adapter, REG_GPIO_IO_SEL + 1, &val8);
++ if (res)
++ return;
++
+ rtw_write8(Adapter, REG_GPIO_IO_SEL + 1, val8 | 0x0F);/* Reg0x43 */
+ rtw_write32(Adapter, REG_BB_PAD_CTRL, 0x00080808);/* set LNA ,TRSW,EX_PA Pin to output mode */
+ haldata->bMacPwrCtrlOn = false;
+@@ -830,9 +930,13 @@ void ReadAdapterInfo8188EU(struct adapter *Adapter)
+ struct eeprom_priv *eeprom = &Adapter->eeprompriv;
+ struct led_priv *ledpriv = &Adapter->ledpriv;
+ u8 eeValue;
++ int res;
+
+ /* check system boot selection */
+- eeValue = rtw_read8(Adapter, REG_9346CR);
++ res = rtw_read8(Adapter, REG_9346CR, &eeValue);
++ if (res)
++ return;
++
+ eeprom->EepromOrEfuse = (eeValue & BOOT_FROM_EEPROM);
+ eeprom->bautoload_fail_flag = !(eeValue & EEPROM_EN);
+
+@@ -887,12 +991,21 @@ static void hw_var_set_opmode(struct adapter *Adapter, u8 *val)
+ {
+ u8 val8;
+ u8 mode = *((u8 *)val);
++ int res;
+
+ /* disable Port0 TSF update */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) | BIT(4));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, &val8);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, val8 | BIT(4));
+
+ /* set net_type */
+- val8 = rtw_read8(Adapter, MSR) & 0x0c;
++ res = rtw_read8(Adapter, MSR, &val8);
++ if (res)
++ return;
++
++ val8 &= 0x0c;
+ val8 |= mode;
+ rtw_write8(Adapter, MSR, val8);
+
+@@ -927,14 +1040,22 @@ static void hw_var_set_opmode(struct adapter *Adapter, u8 *val)
+ rtw_write8(Adapter, REG_DUAL_TSF_RST, BIT(0));
+
+ /* BIT(3) - If set 0, hw will clr bcnq when tx becon ok/fail or port 0 */
+- rtw_write8(Adapter, REG_MBID_NUM, rtw_read8(Adapter, REG_MBID_NUM) | BIT(3) | BIT(4));
++ res = rtw_read8(Adapter, REG_MBID_NUM, &val8);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_MBID_NUM, val8 | BIT(3) | BIT(4));
+
+ /* enable BCN0 Function for if1 */
+ /* don't enable update TSF0 for if1 (due to TSF update when beacon/probe rsp are received) */
+ rtw_write8(Adapter, REG_BCN_CTRL, (DIS_TSF_UDT0_NORMAL_CHIP | EN_BCN_FUNCTION | BIT(1)));
+
+ /* dis BCN1 ATIM WND if if2 is station */
+- rtw_write8(Adapter, REG_BCN_CTRL_1, rtw_read8(Adapter, REG_BCN_CTRL_1) | BIT(0));
++ res = rtw_read8(Adapter, REG_BCN_CTRL_1, &val8);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL_1, val8 | BIT(0));
+ }
+ }
+
+@@ -943,6 +1064,8 @@ void SetHwReg8188EU(struct adapter *Adapter, u8 variable, u8 *val)
+ struct hal_data_8188e *haldata = &Adapter->haldata;
+ struct dm_priv *pdmpriv = &haldata->dmpriv;
+ struct odm_dm_struct *podmpriv = &haldata->odmpriv;
++ u8 reg;
++ int res;
+
+ switch (variable) {
+ case HW_VAR_SET_OPMODE:
+@@ -970,7 +1093,11 @@ void SetHwReg8188EU(struct adapter *Adapter, u8 variable, u8 *val)
+ /* Set RRSR rate table. */
+ rtw_write8(Adapter, REG_RRSR, BrateCfg & 0xff);
+ rtw_write8(Adapter, REG_RRSR + 1, (BrateCfg >> 8) & 0xff);
+- rtw_write8(Adapter, REG_RRSR + 2, rtw_read8(Adapter, REG_RRSR + 2) & 0xf0);
++ res = rtw_read8(Adapter, REG_RRSR + 2, ®);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_RRSR + 2, reg & 0xf0);
+
+ /* Set RTS initial rate */
+ while (BrateCfg > 0x1) {
+@@ -994,13 +1121,21 @@ void SetHwReg8188EU(struct adapter *Adapter, u8 variable, u8 *val)
+ StopTxBeacon(Adapter);
+
+ /* disable related TSF function */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) & (~BIT(3)));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, ®);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, reg & (~BIT(3)));
+
+ rtw_write32(Adapter, REG_TSFTR, tsf);
+ rtw_write32(Adapter, REG_TSFTR + 4, tsf >> 32);
+
+ /* enable related TSF function */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) | BIT(3));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, ®);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, reg | BIT(3));
+
+ if (((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE) || ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE))
+ ResumeTxBeacon(Adapter);
+@@ -1009,17 +1144,27 @@ void SetHwReg8188EU(struct adapter *Adapter, u8 variable, u8 *val)
+ case HW_VAR_MLME_SITESURVEY:
+ if (*((u8 *)val)) { /* under sitesurvey */
+ /* config RCR to receive different BSSID & not to receive data frame */
+- u32 v = rtw_read32(Adapter, REG_RCR);
++ u32 v;
++
++ res = rtw_read32(Adapter, REG_RCR, &v);
++ if (res)
++ return;
++
+ v &= ~(RCR_CBSSID_BCN);
+ rtw_write32(Adapter, REG_RCR, v);
+ /* reject all data frame */
+ rtw_write16(Adapter, REG_RXFLTMAP2, 0x00);
+
+ /* disable update TSF */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) | BIT(4));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, ®);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, reg | BIT(4));
+ } else { /* sitesurvey done */
+ struct mlme_ext_priv *pmlmeext = &Adapter->mlmeextpriv;
+ struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info;
++ u32 reg32;
+
+ if ((is_client_associated_to_ap(Adapter)) ||
+ ((pmlmeinfo->state & 0x03) == WIFI_FW_ADHOC_STATE)) {
+@@ -1027,13 +1172,26 @@ void SetHwReg8188EU(struct adapter *Adapter, u8 variable, u8 *val)
+ rtw_write16(Adapter, REG_RXFLTMAP2, 0xFFFF);
+
+ /* enable update TSF */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) & (~BIT(4)));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, ®);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, reg & (~BIT(4)));
+ } else if ((pmlmeinfo->state & 0x03) == WIFI_FW_AP_STATE) {
+ rtw_write16(Adapter, REG_RXFLTMAP2, 0xFFFF);
+ /* enable update TSF */
+- rtw_write8(Adapter, REG_BCN_CTRL, rtw_read8(Adapter, REG_BCN_CTRL) & (~BIT(4)));
++ res = rtw_read8(Adapter, REG_BCN_CTRL, ®);
++ if (res)
++ return;
++
++ rtw_write8(Adapter, REG_BCN_CTRL, reg & (~BIT(4)));
+ }
+- rtw_write32(Adapter, REG_RCR, rtw_read32(Adapter, REG_RCR) | RCR_CBSSID_BCN);
++
++ res = rtw_read32(Adapter, REG_RCR, ®32);
++ if (res)
++ return;
++
++ rtw_write32(Adapter, REG_RCR, reg32 | RCR_CBSSID_BCN);
+ }
+ break;
+ case HW_VAR_SLOT_TIME:
+@@ -1190,6 +1348,8 @@ void SetBeaconRelatedRegisters8188EUsb(struct adapter *adapt)
+ struct mlme_ext_priv *pmlmeext = &adapt->mlmeextpriv;
+ struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info;
+ u32 bcn_ctrl_reg = REG_BCN_CTRL;
++ int res;
++ u8 reg;
+ /* reset TSF, enable update TSF, correcting TSF On Beacon */
+
+ /* BCN interval */
+@@ -1200,7 +1360,10 @@ void SetBeaconRelatedRegisters8188EUsb(struct adapter *adapt)
+
+ rtw_write8(adapt, REG_SLOT, 0x09);
+
+- value32 = rtw_read32(adapt, REG_TCR);
++ res = rtw_read32(adapt, REG_TCR, &value32);
++ if (res)
++ return;
++
+ value32 &= ~TSFRST;
+ rtw_write32(adapt, REG_TCR, value32);
+
+@@ -1215,7 +1378,11 @@ void SetBeaconRelatedRegisters8188EUsb(struct adapter *adapt)
+
+ ResumeTxBeacon(adapt);
+
+- rtw_write8(adapt, bcn_ctrl_reg, rtw_read8(adapt, bcn_ctrl_reg) | BIT(1));
++ res = rtw_read8(adapt, bcn_ctrl_reg, ®);
++ if (res)
++ return;
++
++ rtw_write8(adapt, bcn_ctrl_reg, reg | BIT(1));
+ }
+
+ void rtl8188eu_init_default_value(struct adapter *adapt)
+diff --git a/drivers/staging/r8188eu/hal/usb_ops_linux.c b/drivers/staging/r8188eu/hal/usb_ops_linux.c
+index d5e674542a785..c1a4d023f6279 100644
+--- a/drivers/staging/r8188eu/hal/usb_ops_linux.c
++++ b/drivers/staging/r8188eu/hal/usb_ops_linux.c
+@@ -94,40 +94,47 @@ static int usb_write(struct intf_hdl *intf, u16 value, void *data, u8 size)
+ return status;
+ }
+
+-u8 rtw_read8(struct adapter *adapter, u32 addr)
++int __must_check rtw_read8(struct adapter *adapter, u32 addr, u8 *data)
+ {
+ struct io_priv *io_priv = &adapter->iopriv;
+ struct intf_hdl *intf = &io_priv->intf;
+ u16 value = addr & 0xffff;
+- u8 data;
+
+- usb_read(intf, value, &data, 1);
+-
+- return data;
++ return usb_read(intf, value, data, 1);
+ }
+
+-u16 rtw_read16(struct adapter *adapter, u32 addr)
++int __must_check rtw_read16(struct adapter *adapter, u32 addr, u16 *data)
+ {
+ struct io_priv *io_priv = &adapter->iopriv;
+ struct intf_hdl *intf = &io_priv->intf;
+ u16 value = addr & 0xffff;
+- __le16 data;
++ __le16 le_data;
++ int res;
++
++ res = usb_read(intf, value, &le_data, 2);
++ if (res)
++ return res;
+
+- usb_read(intf, value, &data, 2);
++ *data = le16_to_cpu(le_data);
+
+- return le16_to_cpu(data);
++ return 0;
+ }
+
+-u32 rtw_read32(struct adapter *adapter, u32 addr)
++int __must_check rtw_read32(struct adapter *adapter, u32 addr, u32 *data)
+ {
+ struct io_priv *io_priv = &adapter->iopriv;
+ struct intf_hdl *intf = &io_priv->intf;
+ u16 value = addr & 0xffff;
+- __le32 data;
++ __le32 le_data;
++ int res;
++
++ res = usb_read(intf, value, &le_data, 4);
++ if (res)
++ return res;
+
+- usb_read(intf, value, &data, 4);
++ *data = le32_to_cpu(le_data);
+
+- return le32_to_cpu(data);
++ return 0;
+ }
+
+ int rtw_write8(struct adapter *adapter, u32 addr, u8 val)
+diff --git a/drivers/staging/r8188eu/include/rtw_io.h b/drivers/staging/r8188eu/include/rtw_io.h
+index 6910e2b430e24..1c6097367a67c 100644
+--- a/drivers/staging/r8188eu/include/rtw_io.h
++++ b/drivers/staging/r8188eu/include/rtw_io.h
+@@ -220,9 +220,9 @@ void unregister_intf_hdl(struct intf_hdl *pintfhdl);
+ void _rtw_attrib_read(struct adapter *adapter, u32 addr, u32 cnt, u8 *pmem);
+ void _rtw_attrib_write(struct adapter *adapter, u32 addr, u32 cnt, u8 *pmem);
+
+-u8 rtw_read8(struct adapter *adapter, u32 addr);
+-u16 rtw_read16(struct adapter *adapter, u32 addr);
+-u32 rtw_read32(struct adapter *adapter, u32 addr);
++int __must_check rtw_read8(struct adapter *adapter, u32 addr, u8 *data);
++int __must_check rtw_read16(struct adapter *adapter, u32 addr, u16 *data);
++int __must_check rtw_read32(struct adapter *adapter, u32 addr, u32 *data);
+ void _rtw_read_mem(struct adapter *adapter, u32 addr, u32 cnt, u8 *pmem);
+ u32 rtw_read_port(struct adapter *adapter, u8 *pmem);
+ void rtw_read_port_cancel(struct adapter *adapter);
+diff --git a/drivers/staging/r8188eu/os_dep/ioctl_linux.c b/drivers/staging/r8188eu/os_dep/ioctl_linux.c
+index 8dd280e2739a2..f486870965ac7 100644
+--- a/drivers/staging/r8188eu/os_dep/ioctl_linux.c
++++ b/drivers/staging/r8188eu/os_dep/ioctl_linux.c
+@@ -3126,18 +3126,29 @@ exit:
+ static void mac_reg_dump(struct adapter *padapter)
+ {
+ int i, j = 1;
++ u32 reg;
++ int res;
++
+ pr_info("\n ======= MAC REG =======\n");
+ for (i = 0x0; i < 0x300; i += 4) {
+ if (j % 4 == 1)
+ pr_info("0x%02x", i);
+- pr_info(" 0x%08x ", rtw_read32(padapter, i));
++
++ res = rtw_read32(padapter, i, ®);
++ if (!res)
++ pr_info(" 0x%08x ", reg);
++
+ if ((j++) % 4 == 0)
+ pr_info("\n");
+ }
+ for (i = 0x400; i < 0x800; i += 4) {
+ if (j % 4 == 1)
+ pr_info("0x%02x", i);
+- pr_info(" 0x%08x ", rtw_read32(padapter, i));
++
++ res = rtw_read32(padapter, i, ®);
++ if (!res)
++ pr_info(" 0x%08x ", reg);
++
+ if ((j++) % 4 == 0)
+ pr_info("\n");
+ }
+@@ -3145,13 +3156,18 @@ static void mac_reg_dump(struct adapter *padapter)
+
+ static void bb_reg_dump(struct adapter *padapter)
+ {
+- int i, j = 1;
++ int i, j = 1, res;
++ u32 reg;
++
+ pr_info("\n ======= BB REG =======\n");
+ for (i = 0x800; i < 0x1000; i += 4) {
+ if (j % 4 == 1)
+ pr_info("0x%02x", i);
+
+- pr_info(" 0x%08x ", rtw_read32(padapter, i));
++ res = rtw_read32(padapter, i, ®);
++ if (!res)
++ pr_info(" 0x%08x ", reg);
++
+ if ((j++) % 4 == 0)
+ pr_info("\n");
+ }
+@@ -3178,6 +3194,7 @@ static void rtw_set_dynamic_functions(struct adapter *adapter, u8 dm_func)
+ {
+ struct hal_data_8188e *haldata = &adapter->haldata;
+ struct odm_dm_struct *odmpriv = &haldata->odmpriv;
++ int res;
+
+ switch (dm_func) {
+ case 0:
+@@ -3193,7 +3210,9 @@ static void rtw_set_dynamic_functions(struct adapter *adapter, u8 dm_func)
+ if (!(odmpriv->SupportAbility & DYNAMIC_BB_DIG)) {
+ struct rtw_dig *digtable = &odmpriv->DM_DigTable;
+
+- digtable->CurIGValue = rtw_read8(adapter, 0xc50);
++ res = rtw_read8(adapter, 0xc50, &digtable->CurIGValue);
++ (void)res;
++ /* FIXME: return an error to caller */
+ }
+ odmpriv->SupportAbility = DYNAMIC_ALL_FUNC_ENABLE;
+ break;
+@@ -3329,8 +3348,9 @@ static int rtw_dbg_port(struct net_device *dev,
+ u16 reg = arg;
+ u16 start_value = 0;
+ u32 write_num = extra_arg;
+- int i;
++ int i, res;
+ struct xmit_frame *xmit_frame;
++ u8 val8;
+
+ xmit_frame = rtw_IOL_accquire_xmit_frame(padapter);
+ if (!xmit_frame) {
+@@ -3343,7 +3363,9 @@ static int rtw_dbg_port(struct net_device *dev,
+ if (rtl8188e_IOL_exec_cmds_sync(padapter, xmit_frame, 5000, 0) != _SUCCESS)
+ ret = -EPERM;
+
+- rtw_read8(padapter, reg);
++ /* FIXME: is this read necessary? */
++ res = rtw_read8(padapter, reg, &val8);
++ (void)res;
+ }
+ break;
+
+@@ -3352,8 +3374,8 @@ static int rtw_dbg_port(struct net_device *dev,
+ u16 reg = arg;
+ u16 start_value = 200;
+ u32 write_num = extra_arg;
+-
+- int i;
++ u16 val16;
++ int i, res;
+ struct xmit_frame *xmit_frame;
+
+ xmit_frame = rtw_IOL_accquire_xmit_frame(padapter);
+@@ -3367,7 +3389,9 @@ static int rtw_dbg_port(struct net_device *dev,
+ if (rtl8188e_IOL_exec_cmds_sync(padapter, xmit_frame, 5000, 0) != _SUCCESS)
+ ret = -EPERM;
+
+- rtw_read16(padapter, reg);
++ /* FIXME: is this read necessary? */
++ res = rtw_read16(padapter, reg, &val16);
++ (void)res;
+ }
+ break;
+ case 0x08: /* continuous write dword test */
+@@ -3390,7 +3414,8 @@ static int rtw_dbg_port(struct net_device *dev,
+ if (rtl8188e_IOL_exec_cmds_sync(padapter, xmit_frame, 5000, 0) != _SUCCESS)
+ ret = -EPERM;
+
+- rtw_read32(padapter, reg);
++ /* FIXME: is this read necessary? */
++ ret = rtw_read32(padapter, reg, &write_num);
+ }
+ break;
+ }
+diff --git a/drivers/staging/r8188eu/os_dep/os_intfs.c b/drivers/staging/r8188eu/os_dep/os_intfs.c
+index 891c85b088ca1..cac9553666e6d 100644
+--- a/drivers/staging/r8188eu/os_dep/os_intfs.c
++++ b/drivers/staging/r8188eu/os_dep/os_intfs.c
+@@ -740,19 +740,32 @@ static void rtw_fifo_cleanup(struct adapter *adapter)
+ {
+ struct pwrctrl_priv *pwrpriv = &adapter->pwrctrlpriv;
+ u8 trycnt = 100;
++ int res;
++ u32 reg;
+
+ /* pause tx */
+ rtw_write8(adapter, REG_TXPAUSE, 0xff);
+
+ /* keep sn */
+- adapter->xmitpriv.nqos_ssn = rtw_read16(adapter, REG_NQOS_SEQ);
++ /* FIXME: return an error to caller */
++ res = rtw_read16(adapter, REG_NQOS_SEQ, &adapter->xmitpriv.nqos_ssn);
++ if (res)
++ return;
+
+ if (!pwrpriv->bkeepfwalive) {
+ /* RX DMA stop */
++ res = rtw_read32(adapter, REG_RXPKT_NUM, ®);
++ if (res)
++ return;
++
+ rtw_write32(adapter, REG_RXPKT_NUM,
+- (rtw_read32(adapter, REG_RXPKT_NUM) | RW_RELEASE_EN));
++ (reg | RW_RELEASE_EN));
+ do {
+- if (!(rtw_read32(adapter, REG_RXPKT_NUM) & RXDMA_IDLE))
++ res = rtw_read32(adapter, REG_RXPKT_NUM, ®);
++ if (res)
++ continue;
++
++ if (!(reg & RXDMA_IDLE))
+ break;
+ } while (trycnt--);
+
+diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
+index e4a07a26f6939..93ba1d00335bf 100644
+--- a/drivers/thunderbolt/tmu.c
++++ b/drivers/thunderbolt/tmu.c
+@@ -359,13 +359,14 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
+ * In case of uni-directional time sync, TMU handshake is
+ * initiated by upstream router. In case of bi-directional
+ * time sync, TMU handshake is initiated by downstream router.
+- * Therefore, we change the rate to off in the respective
+- * router.
++ * We change downstream router's rate to off for both uni/bidir
++ * cases although it is needed only for the bi-directional mode.
++ * We avoid changing upstream router's mode since it might
++ * have another downstream router plugged, that is set to
++ * uni-directional mode and we don't want to change it's TMU
++ * mode.
+ */
+- if (unidirectional)
+- tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF);
+- else
+- tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
++ tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
+
+ tb_port_tmu_time_sync_disable(up);
+ ret = tb_port_tmu_time_sync_disable(down);
+diff --git a/drivers/tty/serial/ucc_uart.c b/drivers/tty/serial/ucc_uart.c
+index 6000853973c10..3cc9ef08455c2 100644
+--- a/drivers/tty/serial/ucc_uart.c
++++ b/drivers/tty/serial/ucc_uart.c
+@@ -1137,6 +1137,8 @@ static unsigned int soc_info(unsigned int *rev_h, unsigned int *rev_l)
+ /* No compatible property, so try the name. */
+ soc_string = np->name;
+
++ of_node_put(np);
++
+ /* Extract the SOC number from the "PowerPC," string */
+ if ((sscanf(soc_string, "PowerPC,%u", &soc) != 1) || !soc)
+ return 0;
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 8d91be0fd1a4e..a51ca56a0ebe7 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -2227,6 +2227,8 @@ static inline int ufshcd_hba_capabilities(struct ufs_hba *hba)
+ int err;
+
+ hba->capabilities = ufshcd_readl(hba, REG_CONTROLLER_CAPABILITIES);
++ if (hba->quirks & UFSHCD_QUIRK_BROKEN_64BIT_ADDRESS)
++ hba->capabilities &= ~MASK_64_ADDRESSING_SUPPORT;
+
+ /* nutrs and nutmrs are 0 based values */
+ hba->nutrs = (hba->capabilities & MASK_TRANSFER_REQUESTS_SLOTS) + 1;
+@@ -4290,8 +4292,13 @@ static int ufshcd_get_max_pwr_mode(struct ufs_hba *hba)
+ if (hba->max_pwr_info.is_valid)
+ return 0;
+
+- pwr_info->pwr_tx = FAST_MODE;
+- pwr_info->pwr_rx = FAST_MODE;
++ if (hba->quirks & UFSHCD_QUIRK_HIBERN_FASTAUTO) {
++ pwr_info->pwr_tx = FASTAUTO_MODE;
++ pwr_info->pwr_rx = FASTAUTO_MODE;
++ } else {
++ pwr_info->pwr_tx = FAST_MODE;
++ pwr_info->pwr_rx = FAST_MODE;
++ }
+ pwr_info->hs_rate = PA_HS_MODE_B;
+
+ /* Get the connected lane count */
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index a81d8cbd542f3..25995667c8323 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -910,9 +910,13 @@ static int exynos_ufs_phy_init(struct exynos_ufs *ufs)
+ if (ret) {
+ dev_err(hba->dev, "%s: phy init failed, ret = %d\n",
+ __func__, ret);
+- goto out_exit_phy;
++ return ret;
+ }
+
++ ret = phy_power_on(generic_phy);
++ if (ret)
++ goto out_exit_phy;
++
+ return 0;
+
+ out_exit_phy:
+@@ -1174,10 +1178,6 @@ static int exynos_ufs_init(struct ufs_hba *hba)
+ goto out;
+ }
+
+- ret = phy_power_on(ufs->phy);
+- if (ret)
+- goto phy_off;
+-
+ exynos_ufs_priv_init(hba, ufs);
+
+ if (ufs->drv_data->drv_init) {
+@@ -1195,8 +1195,6 @@ static int exynos_ufs_init(struct ufs_hba *hba)
+ exynos_ufs_config_smu(ufs);
+ return 0;
+
+-phy_off:
+- phy_power_off(ufs->phy);
+ out:
+ hba->priv = NULL;
+ return ret;
+@@ -1514,9 +1512,14 @@ static int exynos_ufs_probe(struct platform_device *pdev)
+ static int exynos_ufs_remove(struct platform_device *pdev)
+ {
+ struct ufs_hba *hba = platform_get_drvdata(pdev);
++ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+
+ pm_runtime_get_sync(&(pdev)->dev);
+ ufshcd_remove(hba);
++
++ phy_power_off(ufs->phy);
++ phy_exit(ufs->phy);
++
+ return 0;
+ }
+
+diff --git a/drivers/ufs/host/ufs-mediatek.c b/drivers/ufs/host/ufs-mediatek.c
+index beabc3ccd30b3..4582d69309d92 100644
+--- a/drivers/ufs/host/ufs-mediatek.c
++++ b/drivers/ufs/host/ufs-mediatek.c
+@@ -1026,7 +1026,6 @@ static int ufs_mtk_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op,
+ * ufshcd_suspend() re-enabling regulators while vreg is still
+ * in low-power mode.
+ */
+- ufs_mtk_vreg_set_lpm(hba, true);
+ err = ufs_mtk_mphy_power_on(hba, false);
+ if (err)
+ goto fail;
+@@ -1050,12 +1049,13 @@ static int ufs_mtk_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
+ {
+ int err;
+
++ if (hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
++ ufs_mtk_vreg_set_lpm(hba, false);
++
+ err = ufs_mtk_mphy_power_on(hba, true);
+ if (err)
+ goto fail;
+
+- ufs_mtk_vreg_set_lpm(hba, false);
+-
+ if (ufshcd_is_link_hibern8(hba)) {
+ err = ufs_mtk_link_set_hpm(hba);
+ if (err)
+@@ -1220,9 +1220,59 @@ static int ufs_mtk_remove(struct platform_device *pdev)
+ return 0;
+ }
+
++#ifdef CONFIG_PM_SLEEP
++int ufs_mtk_system_suspend(struct device *dev)
++{
++ struct ufs_hba *hba = dev_get_drvdata(dev);
++ int ret;
++
++ ret = ufshcd_system_suspend(dev);
++ if (ret)
++ return ret;
++
++ ufs_mtk_vreg_set_lpm(hba, true);
++
++ return 0;
++}
++
++int ufs_mtk_system_resume(struct device *dev)
++{
++ struct ufs_hba *hba = dev_get_drvdata(dev);
++
++ ufs_mtk_vreg_set_lpm(hba, false);
++
++ return ufshcd_system_resume(dev);
++}
++#endif
++
++int ufs_mtk_runtime_suspend(struct device *dev)
++{
++ struct ufs_hba *hba = dev_get_drvdata(dev);
++ int ret = 0;
++
++ ret = ufshcd_runtime_suspend(dev);
++ if (ret)
++ return ret;
++
++ ufs_mtk_vreg_set_lpm(hba, true);
++
++ return 0;
++}
++
++int ufs_mtk_runtime_resume(struct device *dev)
++{
++ struct ufs_hba *hba = dev_get_drvdata(dev);
++
++ ufs_mtk_vreg_set_lpm(hba, false);
++
++ return ufshcd_runtime_resume(dev);
++}
++
+ static const struct dev_pm_ops ufs_mtk_pm_ops = {
+- SET_SYSTEM_SLEEP_PM_OPS(ufshcd_system_suspend, ufshcd_system_resume)
+- SET_RUNTIME_PM_OPS(ufshcd_runtime_suspend, ufshcd_runtime_resume, NULL)
++ SET_SYSTEM_SLEEP_PM_OPS(ufs_mtk_system_suspend,
++ ufs_mtk_system_resume)
++ SET_RUNTIME_PM_OPS(ufs_mtk_runtime_suspend,
++ ufs_mtk_runtime_resume, NULL)
+ .prepare = ufshcd_suspend_prepare,
+ .complete = ufshcd_resume_complete,
+ };
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index 87cfa91a758df..d21b69997e750 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -625,9 +625,9 @@ static void cdns3_wa2_remove_old_request(struct cdns3_endpoint *priv_ep)
+ trace_cdns3_wa2(priv_ep, "removes eldest request");
+
+ kfree(priv_req->request.buf);
++ list_del_init(&priv_req->list);
+ cdns3_gadget_ep_free_request(&priv_ep->endpoint,
+ &priv_req->request);
+- list_del_init(&priv_req->list);
+ --priv_ep->wa2_counter;
+
+ if (!chain)
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index fe2a58c758610..8b15742d9e8aa 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -3594,7 +3594,8 @@ void dwc2_hsotg_core_disconnect(struct dwc2_hsotg *hsotg)
+ void dwc2_hsotg_core_connect(struct dwc2_hsotg *hsotg)
+ {
+ /* remove the soft-disconnect and let's go */
+- dwc2_clear_bit(hsotg, DCTL, DCTL_SFTDISCON);
++ if (!hsotg->role_sw || (dwc2_readl(hsotg, GOTGCTL) & GOTGCTL_BSESVLD))
++ dwc2_clear_bit(hsotg, DCTL, DCTL_SFTDISCON);
+ }
+
+ /**
+diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
+index 951934aa44541..ec500ee499eed 100644
+--- a/drivers/usb/gadget/function/uvc_queue.c
++++ b/drivers/usb/gadget/function/uvc_queue.c
+@@ -44,7 +44,8 @@ static int uvc_queue_setup(struct vb2_queue *vq,
+ {
+ struct uvc_video_queue *queue = vb2_get_drv_priv(vq);
+ struct uvc_video *video = container_of(queue, struct uvc_video, queue);
+- struct usb_composite_dev *cdev = video->uvc->func.config->cdev;
++ unsigned int req_size;
++ unsigned int nreq;
+
+ if (*nbuffers > UVC_MAX_VIDEO_BUFFERS)
+ *nbuffers = UVC_MAX_VIDEO_BUFFERS;
+@@ -53,10 +54,16 @@ static int uvc_queue_setup(struct vb2_queue *vq,
+
+ sizes[0] = video->imagesize;
+
+- if (cdev->gadget->speed < USB_SPEED_SUPER)
+- video->uvc_num_requests = 4;
+- else
+- video->uvc_num_requests = 64;
++ req_size = video->ep->maxpacket
++ * max_t(unsigned int, video->ep->maxburst, 1)
++ * (video->ep->mult);
++
++ /* We divide by two, to increase the chance to run
++ * into fewer requests for smaller framesizes.
++ */
++ nreq = DIV_ROUND_UP(DIV_ROUND_UP(sizes[0], 2), req_size);
++ nreq = clamp(nreq, 4U, 64U);
++ video->uvc_num_requests = nreq;
+
+ return 0;
+ }
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index ce421d9cc241b..c00ce0e91f5d5 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -261,7 +261,7 @@ uvc_video_complete(struct usb_ep *ep, struct usb_request *req)
+ break;
+
+ default:
+- uvcg_info(&video->uvc->func,
++ uvcg_warn(&video->uvc->func,
+ "VS request completed with status %d.\n",
+ req->status);
+ uvcg_queue_cancel(queue, 0);
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index 79990597c39f1..01c3ead7d1b42 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -362,6 +362,7 @@ ep_io (struct ep_data *epdata, void *buf, unsigned len)
+ spin_unlock_irq (&epdata->dev->lock);
+
+ DBG (epdata->dev, "endpoint gone\n");
++ wait_for_completion(&done);
+ epdata->status = -ENODEV;
+ }
+ }
+diff --git a/drivers/usb/host/ohci-ppc-of.c b/drivers/usb/host/ohci-ppc-of.c
+index 1960b8dfdba51..591f675cc9306 100644
+--- a/drivers/usb/host/ohci-ppc-of.c
++++ b/drivers/usb/host/ohci-ppc-of.c
+@@ -166,6 +166,7 @@ static int ohci_hcd_ppc_of_probe(struct platform_device *op)
+ release_mem_region(res.start, 0x4);
+ } else
+ pr_debug("%s: cannot get ehci offset from fdt\n", __FILE__);
++ of_node_put(np);
+ }
+
+ irq_dispose_mapping(irq);
+diff --git a/drivers/usb/renesas_usbhs/rza.c b/drivers/usb/renesas_usbhs/rza.c
+index 24de64edb674b..2d77edefb4b30 100644
+--- a/drivers/usb/renesas_usbhs/rza.c
++++ b/drivers/usb/renesas_usbhs/rza.c
+@@ -23,6 +23,10 @@ static int usbhs_rza1_hardware_init(struct platform_device *pdev)
+ extal_clk = of_find_node_by_name(NULL, "extal");
+ of_property_read_u32(usb_x1_clk, "clock-frequency", &freq_usb);
+ of_property_read_u32(extal_clk, "clock-frequency", &freq_extal);
++
++ of_node_put(usb_x1_clk);
++ of_node_put(extal_clk);
++
+ if (freq_usb == 0) {
+ if (freq_extal == 12000000) {
+ /* Select 12MHz XTAL */
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index 0f28658996472..3e81532c01cb8 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -33,7 +33,7 @@ MODULE_PARM_DESC(batch_mapping, "Batched mapping 1 -Enable; 0 - Disable");
+ static int max_iotlb_entries = 2048;
+ module_param(max_iotlb_entries, int, 0444);
+ MODULE_PARM_DESC(max_iotlb_entries,
+- "Maximum number of iotlb entries. 0 means unlimited. (default: 2048)");
++ "Maximum number of iotlb entries for each address space. 0 means unlimited. (default: 2048)");
+
+ #define VDPASIM_QUEUE_ALIGN PAGE_SIZE
+ #define VDPASIM_QUEUE_MAX 256
+@@ -291,7 +291,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
+ goto err_iommu;
+
+ for (i = 0; i < vdpasim->dev_attr.nas; i++)
+- vhost_iotlb_init(&vdpasim->iommu[i], 0, 0);
++ vhost_iotlb_init(&vdpasim->iommu[i], max_iotlb_entries, 0);
+
+ vdpasim->buffer = kvmalloc(dev_attr->buffer_size, GFP_KERNEL);
+ if (!vdpasim->buffer)
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
+index 42d401d439117..03a28def8eeeb 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
+@@ -34,7 +34,11 @@
+ #define VDPASIM_BLK_CAPACITY 0x40000
+ #define VDPASIM_BLK_SIZE_MAX 0x1000
+ #define VDPASIM_BLK_SEG_MAX 32
++
++/* 1 virtqueue, 1 address space, 1 virtqueue group */
+ #define VDPASIM_BLK_VQ_NUM 1
++#define VDPASIM_BLK_AS_NUM 1
++#define VDPASIM_BLK_GROUP_NUM 1
+
+ static char vdpasim_blk_id[VIRTIO_BLK_ID_BYTES] = "vdpa_blk_sim";
+
+@@ -260,6 +264,8 @@ static int vdpasim_blk_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
+ dev_attr.id = VIRTIO_ID_BLOCK;
+ dev_attr.supported_features = VDPASIM_BLK_FEATURES;
+ dev_attr.nvqs = VDPASIM_BLK_VQ_NUM;
++ dev_attr.ngroups = VDPASIM_BLK_GROUP_NUM;
++ dev_attr.nas = VDPASIM_BLK_AS_NUM;
+ dev_attr.config_size = sizeof(struct virtio_blk_config);
+ dev_attr.get_config = vdpasim_blk_get_config;
+ dev_attr.work_fn = vdpasim_blk_work;
+diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
+index 18fc0916587ec..277cd1152dd80 100644
+--- a/drivers/vfio/vfio.c
++++ b/drivers/vfio/vfio.c
+@@ -1814,6 +1814,7 @@ struct vfio_info_cap_header *vfio_info_cap_add(struct vfio_info_cap *caps,
+ buf = krealloc(caps->buf, caps->size + size, GFP_KERNEL);
+ if (!buf) {
+ kfree(caps->buf);
++ caps->buf = NULL;
+ caps->size = 0;
+ return ERR_PTR(-ENOMEM);
+ }
+diff --git a/drivers/video/fbdev/i740fb.c b/drivers/video/fbdev/i740fb.c
+index 09dd85553d4f3..7f09a0daaaa24 100644
+--- a/drivers/video/fbdev/i740fb.c
++++ b/drivers/video/fbdev/i740fb.c
+@@ -400,7 +400,7 @@ static int i740fb_decode_var(const struct fb_var_screeninfo *var,
+ u32 xres, right, hslen, left, xtotal;
+ u32 yres, lower, vslen, upper, ytotal;
+ u32 vxres, xoffset, vyres, yoffset;
+- u32 bpp, base, dacspeed24, mem;
++ u32 bpp, base, dacspeed24, mem, freq;
+ u8 r7;
+ int i;
+
+@@ -643,7 +643,12 @@ static int i740fb_decode_var(const struct fb_var_screeninfo *var,
+ par->atc[VGA_ATC_OVERSCAN] = 0;
+
+ /* Calculate VCLK that most closely matches the requested dot clock */
+- i740_calc_vclk((((u32)1e9) / var->pixclock) * (u32)(1e3), par);
++ freq = (((u32)1e9) / var->pixclock) * (u32)(1e3);
++ if (freq < I740_RFREQ_FIX) {
++ fb_dbg(info, "invalid pixclock\n");
++ freq = I740_RFREQ_FIX;
++ }
++ i740_calc_vclk(freq, par);
+
+ /* Since we program the clocks ourselves, always use VCLK2. */
+ par->misc |= 0x0C;
+diff --git a/drivers/virt/vboxguest/vboxguest_linux.c b/drivers/virt/vboxguest/vboxguest_linux.c
+index 73eb34849eaba..4ccfd30c2a304 100644
+--- a/drivers/virt/vboxguest/vboxguest_linux.c
++++ b/drivers/virt/vboxguest/vboxguest_linux.c
+@@ -356,8 +356,8 @@ static int vbg_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ goto err_vbg_core_exit;
+ }
+
+- ret = devm_request_irq(dev, pci->irq, vbg_core_isr, IRQF_SHARED,
+- DEVICE_NAME, gdev);
++ ret = request_irq(pci->irq, vbg_core_isr, IRQF_SHARED, DEVICE_NAME,
++ gdev);
+ if (ret) {
+ vbg_err("vboxguest: Error requesting irq: %d\n", ret);
+ goto err_vbg_core_exit;
+@@ -367,7 +367,7 @@ static int vbg_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ if (ret) {
+ vbg_err("vboxguest: Error misc_register %s failed: %d\n",
+ DEVICE_NAME, ret);
+- goto err_vbg_core_exit;
++ goto err_free_irq;
+ }
+
+ ret = misc_register(&gdev->misc_device_user);
+@@ -403,6 +403,8 @@ err_unregister_misc_device_user:
+ misc_deregister(&gdev->misc_device_user);
+ err_unregister_misc_device:
+ misc_deregister(&gdev->misc_device);
++err_free_irq:
++ free_irq(pci->irq, gdev);
+ err_vbg_core_exit:
+ vbg_core_exit(gdev);
+ err_disable_pcidev:
+@@ -419,6 +421,7 @@ static void vbg_pci_remove(struct pci_dev *pci)
+ vbg_gdev = NULL;
+ mutex_unlock(&vbg_gdev_mutex);
+
++ free_irq(pci->irq, gdev);
+ device_remove_file(gdev->dev, &dev_attr_host_features);
+ device_remove_file(gdev->dev, &dev_attr_host_version);
+ misc_deregister(&gdev->misc_device_user);
+diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
+index 56c77f63cd224..dd9e6f68de245 100644
+--- a/drivers/virtio/Kconfig
++++ b/drivers/virtio/Kconfig
+@@ -35,11 +35,12 @@ if VIRTIO_MENU
+
+ config VIRTIO_HARDEN_NOTIFICATION
+ bool "Harden virtio notification"
++ depends on BROKEN
+ help
+ Enable this to harden the device notifications and suppress
+ those that happen at a time where notifications are illegal.
+
+- Experimental: Note that several drivers still have bugs that
++ Experimental: Note that several drivers still have issues that
+ may cause crashes or hangs when correct handling of
+ notifications is enforced; depending on the subset of
+ drivers and devices you use, this may or may not work.
+diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
+index 597af455a522b..0792fda49a15f 100644
+--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
++++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
+@@ -128,7 +128,7 @@ static ssize_t xenbus_file_read(struct file *filp,
+ {
+ struct xenbus_file_priv *u = filp->private_data;
+ struct read_buffer *rb;
+- unsigned i;
++ ssize_t i;
+ int ret;
+
+ mutex_lock(&u->reply_mutex);
+@@ -148,7 +148,7 @@ again:
+ rb = list_entry(u->read_buffers.next, struct read_buffer, list);
+ i = 0;
+ while (i < len) {
+- unsigned sz = min((unsigned)len - i, rb->len - rb->cons);
++ size_t sz = min_t(size_t, len - i, rb->len - rb->cons);
+
+ ret = copy_to_user(ubuf + i, &rb->msg[rb->cons], sz);
+
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 5627b43d4cc24..deaed255f301e 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1640,9 +1640,11 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ div64_u64(zone_unusable * 100, bg->length));
+ trace_btrfs_reclaim_block_group(bg);
+ ret = btrfs_relocate_chunk(fs_info, bg->start);
+- if (ret)
++ if (ret) {
++ btrfs_dec_block_group_ro(bg);
+ btrfs_err(fs_info, "error relocating chunk %llu",
+ bg->start);
++ }
+
+ next:
+ btrfs_put_block_group(bg);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index a6dc827e75af0..33411baf5c7a3 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -3573,7 +3573,12 @@ int prepare_to_relocate(struct reloc_control *rc)
+ */
+ return PTR_ERR(trans);
+ }
+- return btrfs_commit_transaction(trans);
++
++ ret = btrfs_commit_transaction(trans);
++ if (ret)
++ unset_reloc_control(rc);
++
++ return ret;
+ }
+
+ static noinline_for_stack int relocate_block_group(struct reloc_control *rc)
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 3c962bfd204f6..42f02cffe06b0 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1146,7 +1146,9 @@ again:
+ extref = btrfs_lookup_inode_extref(NULL, root, path, name, namelen,
+ inode_objectid, parent_objectid, 0,
+ 0);
+- if (!IS_ERR_OR_NULL(extref)) {
++ if (IS_ERR(extref)) {
++ return PTR_ERR(extref);
++ } else if (extref) {
+ u32 item_size;
+ u32 cur_offset = 0;
+ unsigned long base;
+@@ -1457,7 +1459,7 @@ static int add_link(struct btrfs_trans_handle *trans,
+ * on the inode will not free it. We will fixup the link count later.
+ */
+ if (other_inode->i_nlink == 0)
+- inc_nlink(other_inode);
++ set_nlink(other_inode, 1);
+ add_link:
+ ret = btrfs_add_link(trans, BTRFS_I(dir), BTRFS_I(inode),
+ name, namelen, 0, ref_index);
+@@ -1600,7 +1602,7 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans,
+ * free it. We will fixup the link count later.
+ */
+ if (!ret && inode->i_nlink == 0)
+- inc_nlink(inode);
++ set_nlink(inode, 1);
+ }
+ if (ret < 0)
+ goto out;
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index ac8fd5e7f5405..2b1f22322e8fa 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -3578,24 +3578,23 @@ static void handle_cap_grant(struct inode *inode,
+ fill_inline = true;
+ }
+
+- if (ci->i_auth_cap == cap &&
+- le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
+- if (newcaps & ~extra_info->issued)
+- wake = true;
++ if (le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
++ if (ci->i_auth_cap == cap) {
++ if (newcaps & ~extra_info->issued)
++ wake = true;
++
++ if (ci->i_requested_max_size > max_size ||
++ !(le32_to_cpu(grant->wanted) & CEPH_CAP_ANY_FILE_WR)) {
++ /* re-request max_size if necessary */
++ ci->i_requested_max_size = 0;
++ wake = true;
++ }
+
+- if (ci->i_requested_max_size > max_size ||
+- !(le32_to_cpu(grant->wanted) & CEPH_CAP_ANY_FILE_WR)) {
+- /* re-request max_size if necessary */
+- ci->i_requested_max_size = 0;
+- wake = true;
++ ceph_kick_flushing_inode_caps(session, ci);
+ }
+-
+- ceph_kick_flushing_inode_caps(session, ci);
+- spin_unlock(&ci->i_ceph_lock);
+ up_read(&session->s_mdsc->snap_rwsem);
+- } else {
+- spin_unlock(&ci->i_ceph_lock);
+ }
++ spin_unlock(&ci->i_ceph_lock);
+
+ if (fill_inline)
+ ceph_fill_inline_data(inode, NULL, extra_info->inline_data,
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 33f517d549ce5..0aded10375fdd 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1220,14 +1220,17 @@ static int encode_supported_features(void **p, void *end)
+ if (count > 0) {
+ size_t i;
+ size_t size = FEATURE_BYTES(count);
++ unsigned long bit;
+
+ if (WARN_ON_ONCE(*p + 4 + size > end))
+ return -ERANGE;
+
+ ceph_encode_32(p, size);
+ memset(*p, 0, size);
+- for (i = 0; i < count; i++)
+- ((unsigned char*)(*p))[i / 8] |= BIT(feature_bits[i] % 8);
++ for (i = 0; i < count; i++) {
++ bit = feature_bits[i];
++ ((unsigned char *)(*p))[bit / 8] |= BIT(bit % 8);
++ }
+ *p += size;
+ } else {
+ if (WARN_ON_ONCE(*p + 4 > end))
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index 1140aecd82ce4..2a49e331987be 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -33,10 +33,6 @@ enum ceph_feature_type {
+ CEPHFS_FEATURE_MAX = CEPHFS_FEATURE_METRIC_COLLECT,
+ };
+
+-/*
+- * This will always have the highest feature bit value
+- * as the last element of the array.
+- */
+ #define CEPHFS_FEATURES_CLIENT_SUPPORTED { \
+ 0, 1, 2, 3, 4, 5, 6, 7, \
+ CEPHFS_FEATURE_MIMIC, \
+@@ -45,8 +41,6 @@ enum ceph_feature_type {
+ CEPHFS_FEATURE_MULTI_RECONNECT, \
+ CEPHFS_FEATURE_DELEG_INO, \
+ CEPHFS_FEATURE_METRIC_COLLECT, \
+- \
+- CEPHFS_FEATURE_MAX, \
+ }
+ #define CEPHFS_FEATURES_CLIENT_REQUIRED {}
+
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 0e84e6fcf8ab4..197f3c09d3f3e 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -742,6 +742,8 @@ cifs_close_deferred_file(struct cifsInodeInfo *cifs_inode)
+ list_for_each_entry(cfile, &cifs_inode->openFileList, flist) {
+ if (delayed_work_pending(&cfile->deferred)) {
+ if (cancel_delayed_work(&cfile->deferred)) {
++ cifs_del_deferred_close(cfile);
++
+ tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ if (tmp_list == NULL)
+ break;
+@@ -773,6 +775,8 @@ cifs_close_all_deferred_files(struct cifs_tcon *tcon)
+ cfile = list_entry(tmp, struct cifsFileInfo, tlist);
+ if (delayed_work_pending(&cfile->deferred)) {
+ if (cancel_delayed_work(&cfile->deferred)) {
++ cifs_del_deferred_close(cfile);
++
+ tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ if (tmp_list == NULL)
+ break;
+@@ -808,6 +812,8 @@ cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon, const char *path)
+ if (strstr(full_path, path)) {
+ if (delayed_work_pending(&cfile->deferred)) {
+ if (cancel_delayed_work(&cfile->deferred)) {
++ cifs_del_deferred_close(cfile);
++
+ tmp_list = kmalloc(sizeof(struct file_list), GFP_ATOMIC);
+ if (tmp_list == NULL)
+ break;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 8802995b2d3d6..aa4c1d403708f 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1145,9 +1145,7 @@ move_smb2_ea_to_cifs(char *dst, size_t dst_size,
+ size_t name_len, value_len, user_name_len;
+
+ while (src_size > 0) {
+- name = &src->ea_data[0];
+ name_len = (size_t)src->ea_name_length;
+- value = &src->ea_data[src->ea_name_length + 1];
+ value_len = (size_t)le16_to_cpu(src->ea_value_length);
+
+ if (name_len == 0)
+@@ -1159,6 +1157,9 @@ move_smb2_ea_to_cifs(char *dst, size_t dst_size,
+ goto out;
+ }
+
++ name = &src->ea_data[0];
++ value = &src->ea_data[src->ea_name_length + 1];
++
+ if (ea_name) {
+ if (ea_name_len == name_len &&
+ memcmp(ea_name, name, name_len) == 0) {
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 9e06334771a39..38e7dc2531b17 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -5928,6 +5928,15 @@ static void ext4_mb_clear_bb(handle_t *handle, struct inode *inode,
+
+ sbi = EXT4_SB(sb);
+
++ if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
++ !ext4_inode_block_valid(inode, block, count)) {
++ ext4_error(sb, "Freeing blocks in system zone - "
++ "Block = %llu, count = %lu", block, count);
++ /* err = 0. ext4_std_error should be a no op */
++ goto error_return;
++ }
++ flags |= EXT4_FREE_BLOCKS_VALIDATED;
++
+ do_more:
+ overflow = 0;
+ ext4_get_group_no_and_offset(sb, block, &block_group, &bit);
+@@ -5944,6 +5953,8 @@ do_more:
+ overflow = EXT4_C2B(sbi, bit) + count -
+ EXT4_BLOCKS_PER_GROUP(sb);
+ count -= overflow;
++ /* The range changed so it's no longer validated */
++ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ }
+ count_clusters = EXT4_NUM_B2C(sbi, count);
+ bitmap_bh = ext4_read_block_bitmap(sb, block_group);
+@@ -5958,7 +5969,8 @@ do_more:
+ goto error_return;
+ }
+
+- if (!ext4_inode_block_valid(inode, block, count)) {
++ if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
++ !ext4_inode_block_valid(inode, block, count)) {
+ ext4_error(sb, "Freeing blocks in system zone - "
+ "Block = %llu, count = %lu", block, count);
+ /* err = 0. ext4_std_error should be a no op */
+@@ -6081,6 +6093,8 @@ do_more:
+ block += count;
+ count = overflow;
+ put_bh(bitmap_bh);
++ /* The range changed so it's no longer validated */
++ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ goto do_more;
+ }
+ error_return:
+@@ -6127,6 +6141,7 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ "block = %llu, count = %lu", block, count);
+ return;
+ }
++ flags |= EXT4_FREE_BLOCKS_VALIDATED;
+
+ ext4_debug("freeing block %llu\n", block);
+ trace_ext4_free_blocks(inode, block, count, flags);
+@@ -6158,6 +6173,8 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ block -= overflow;
+ count += overflow;
+ }
++ /* The range changed so it's no longer validated */
++ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ }
+ overflow = EXT4_LBLK_COFF(sbi, count);
+ if (overflow) {
+@@ -6168,6 +6185,8 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ return;
+ } else
+ count += sbi->s_cluster_ratio - overflow;
++ /* The range changed so it's no longer validated */
++ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ }
+
+ if (!bh && (flags & EXT4_FREE_BLOCKS_FORGET)) {
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 4af441494e09b..3a31b662f6619 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3090,11 +3090,8 @@ bool ext4_empty_dir(struct inode *inode)
+ de = (struct ext4_dir_entry_2 *) (bh->b_data +
+ (offset & (sb->s_blocksize - 1)));
+ if (ext4_check_dir_entry(inode, NULL, de, bh,
+- bh->b_data, bh->b_size, offset)) {
+- offset = (offset | (sb->s_blocksize - 1)) + 1;
+- continue;
+- }
+- if (le32_to_cpu(de->inode)) {
++ bh->b_data, bh->b_size, offset) ||
++ le32_to_cpu(de->inode)) {
+ brelse(bh);
+ return false;
+ }
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index e5c2713aa11ad..cb5a64293881e 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1989,6 +1989,16 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
+ }
+ brelse(bh);
+
++ /*
++ * For bigalloc, trim the requested size to the nearest cluster
++ * boundary to avoid creating an unusable filesystem. We do this
++ * silently, instead of returning an error, to avoid breaking
++ * callers that blindly resize the filesystem to the full size of
++ * the underlying block device.
++ */
++ if (ext4_has_feature_bigalloc(sb))
++ n_blocks_count &= ~((1 << EXT4_CLUSTER_BITS(sb)) - 1);
++
+ retry:
+ o_blocks_count = ext4_blocks_count(es);
+
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 5c950298837f1..7006fa7dd5cb8 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -757,6 +757,7 @@ enum {
+ FI_ENABLE_COMPRESS, /* enable compression in "user" compression mode */
+ FI_COMPRESS_RELEASED, /* compressed blocks were released */
+ FI_ALIGNED_WRITE, /* enable aligned write */
++ FI_COW_FILE, /* indicate COW file */
+ FI_MAX, /* max flag, never be used */
+ };
+
+@@ -3208,6 +3209,11 @@ static inline bool f2fs_is_atomic_file(struct inode *inode)
+ return is_inode_flag_set(inode, FI_ATOMIC_FILE);
+ }
+
++static inline bool f2fs_is_cow_file(struct inode *inode)
++{
++ return is_inode_flag_set(inode, FI_COW_FILE);
++}
++
+ static inline bool f2fs_is_first_block_written(struct inode *inode)
+ {
+ return is_inode_flag_set(inode, FI_FIRST_BLOCK_WRITTEN);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index fc0f30738b21c..ecd833ba35fcb 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2061,7 +2061,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
+
+ set_inode_flag(inode, FI_ATOMIC_FILE);
+- set_inode_flag(fi->cow_inode, FI_ATOMIC_FILE);
++ set_inode_flag(fi->cow_inode, FI_COW_FILE);
+ clear_inode_flag(fi->cow_inode, FI_INLINE_DATA);
+ f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
+
+@@ -2108,6 +2108,31 @@ unlock_out:
+ return ret;
+ }
+
++static int f2fs_ioc_abort_atomic_write(struct file *filp)
++{
++ struct inode *inode = file_inode(filp);
++ struct user_namespace *mnt_userns = file_mnt_user_ns(filp);
++ int ret;
++
++ if (!inode_owner_or_capable(mnt_userns, inode))
++ return -EACCES;
++
++ ret = mnt_want_write_file(filp);
++ if (ret)
++ return ret;
++
++ inode_lock(inode);
++
++ if (f2fs_is_atomic_file(inode))
++ f2fs_abort_atomic_write(inode, true);
++
++ inode_unlock(inode);
++
++ mnt_drop_write_file(filp);
++ f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
++ return ret;
++}
++
+ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ {
+ struct inode *inode = file_inode(filp);
+@@ -4063,9 +4088,10 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ return f2fs_ioc_start_atomic_write(filp);
+ case F2FS_IOC_COMMIT_ATOMIC_WRITE:
+ return f2fs_ioc_commit_atomic_write(filp);
++ case F2FS_IOC_ABORT_ATOMIC_WRITE:
++ return f2fs_ioc_abort_atomic_write(filp);
+ case F2FS_IOC_START_VOLATILE_WRITE:
+ case F2FS_IOC_RELEASE_VOLATILE_WRITE:
+- case F2FS_IOC_ABORT_VOLATILE_WRITE:
+ return -EOPNOTSUPP;
+ case F2FS_IOC_SHUTDOWN:
+ return f2fs_ioc_shutdown(filp, arg);
+@@ -4734,7 +4760,7 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ case F2FS_IOC_COMMIT_ATOMIC_WRITE:
+ case F2FS_IOC_START_VOLATILE_WRITE:
+ case F2FS_IOC_RELEASE_VOLATILE_WRITE:
+- case F2FS_IOC_ABORT_VOLATILE_WRITE:
++ case F2FS_IOC_ABORT_ATOMIC_WRITE:
+ case F2FS_IOC_SHUTDOWN:
+ case FITRIM:
+ case FS_IOC_SET_ENCRYPTION_POLICY:
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index cf6f7fc83c082..02e92a72511b2 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1292,7 +1292,11 @@ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
+ dec_valid_node_count(sbi, dn->inode, !ofs);
+ goto fail;
+ }
+- f2fs_bug_on(sbi, new_ni.blk_addr != NULL_ADDR);
++ if (unlikely(new_ni.blk_addr != NULL_ADDR)) {
++ err = -EFSCORRUPTED;
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ goto fail;
++ }
+ #endif
+ new_ni.nid = dn->nid;
+ new_ni.ino = dn->inode->i_ino;
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 874c1b9c41a2a..52df19a0638b1 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -193,7 +193,7 @@ void f2fs_abort_atomic_write(struct inode *inode, bool clean)
+ if (f2fs_is_atomic_file(inode)) {
+ if (clean)
+ truncate_inode_pages_final(inode->i_mapping);
+- clear_inode_flag(fi->cow_inode, FI_ATOMIC_FILE);
++ clear_inode_flag(fi->cow_inode, FI_COW_FILE);
+ iput(fi->cow_inode);
+ fi->cow_inode = NULL;
+ clear_inode_flag(inode, FI_ATOMIC_FILE);
+@@ -3166,7 +3166,7 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
+ return CURSEG_COLD_DATA;
+ if (file_is_hot(inode) ||
+ is_inode_flag_set(inode, FI_HOT_DATA) ||
+- f2fs_is_atomic_file(inode))
++ f2fs_is_cow_file(inode))
+ return CURSEG_HOT_DATA;
+ return f2fs_rw_hint_to_seg_type(inode->i_write_hint);
+ } else {
+@@ -4362,6 +4362,12 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ return err;
+ seg_info_from_raw_sit(se, &sit);
+
++ if (se->type >= NR_PERSISTENT_LOG) {
++ f2fs_err(sbi, "Invalid segment type: %u, segno: %u",
++ se->type, start);
++ return -EFSCORRUPTED;
++ }
++
+ sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+
+ if (f2fs_block_unit_discard(sbi)) {
+@@ -4410,6 +4416,13 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ break;
+ seg_info_from_raw_sit(se, &sit);
+
++ if (se->type >= NR_PERSISTENT_LOG) {
++ f2fs_err(sbi, "Invalid segment type: %u, segno: %u",
++ se->type, start);
++ err = -EFSCORRUPTED;
++ break;
++ }
++
+ sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+
+ if (f2fs_block_unit_discard(sbi)) {
+diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c
+index 74920826d8f67..26a6d395737a6 100644
+--- a/fs/fscache/cookie.c
++++ b/fs/fscache/cookie.c
+@@ -739,6 +739,9 @@ again_locked:
+ fallthrough;
+
+ case FSCACHE_COOKIE_STATE_FAILED:
++ if (test_and_clear_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags))
++ fscache_end_cookie_access(cookie, fscache_access_invalidate_cookie_end);
++
+ if (atomic_read(&cookie->n_accesses) != 0)
+ break;
+ if (test_bit(FSCACHE_COOKIE_DO_RELINQUISH, &cookie->flags)) {
+@@ -1063,8 +1066,8 @@ void __fscache_invalidate(struct fscache_cookie *cookie,
+ return;
+
+ case FSCACHE_COOKIE_STATE_LOOKING_UP:
+- __fscache_begin_cookie_access(cookie, fscache_access_invalidate_cookie);
+- set_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags);
++ if (!test_and_set_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags))
++ __fscache_begin_cookie_access(cookie, fscache_access_invalidate_cookie);
+ fallthrough;
+ case FSCACHE_COOKIE_STATE_CREATING:
+ spin_unlock(&cookie->lock);
+diff --git a/fs/nfs/nfs4idmap.c b/fs/nfs/nfs4idmap.c
+index f331866dd4182..ec6afd3c4bca6 100644
+--- a/fs/nfs/nfs4idmap.c
++++ b/fs/nfs/nfs4idmap.c
+@@ -561,22 +561,20 @@ nfs_idmap_prepare_pipe_upcall(struct idmap *idmap,
+ return true;
+ }
+
+-static void
+-nfs_idmap_complete_pipe_upcall_locked(struct idmap *idmap, int ret)
++static void nfs_idmap_complete_pipe_upcall(struct idmap_legacy_upcalldata *data,
++ int ret)
+ {
+- struct key *authkey = idmap->idmap_upcall_data->authkey;
+-
+- kfree(idmap->idmap_upcall_data);
+- idmap->idmap_upcall_data = NULL;
+- complete_request_key(authkey, ret);
+- key_put(authkey);
++ complete_request_key(data->authkey, ret);
++ key_put(data->authkey);
++ kfree(data);
+ }
+
+-static void
+-nfs_idmap_abort_pipe_upcall(struct idmap *idmap, int ret)
++static void nfs_idmap_abort_pipe_upcall(struct idmap *idmap,
++ struct idmap_legacy_upcalldata *data,
++ int ret)
+ {
+- if (idmap->idmap_upcall_data != NULL)
+- nfs_idmap_complete_pipe_upcall_locked(idmap, ret);
++ if (cmpxchg(&idmap->idmap_upcall_data, data, NULL) == data)
++ nfs_idmap_complete_pipe_upcall(data, ret);
+ }
+
+ static int nfs_idmap_legacy_upcall(struct key *authkey, void *aux)
+@@ -613,7 +611,7 @@ static int nfs_idmap_legacy_upcall(struct key *authkey, void *aux)
+
+ ret = rpc_queue_upcall(idmap->idmap_pipe, msg);
+ if (ret < 0)
+- nfs_idmap_abort_pipe_upcall(idmap, ret);
++ nfs_idmap_abort_pipe_upcall(idmap, data, ret);
+
+ return ret;
+ out2:
+@@ -669,6 +667,7 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ struct request_key_auth *rka;
+ struct rpc_inode *rpci = RPC_I(file_inode(filp));
+ struct idmap *idmap = (struct idmap *)rpci->private;
++ struct idmap_legacy_upcalldata *data;
+ struct key *authkey;
+ struct idmap_msg im;
+ size_t namelen_in;
+@@ -678,10 +677,11 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ * will have been woken up and someone else may now have used
+ * idmap_key_cons - so after this point we may no longer touch it.
+ */
+- if (idmap->idmap_upcall_data == NULL)
++ data = xchg(&idmap->idmap_upcall_data, NULL);
++ if (data == NULL)
+ goto out_noupcall;
+
+- authkey = idmap->idmap_upcall_data->authkey;
++ authkey = data->authkey;
+ rka = get_request_key_auth(authkey);
+
+ if (mlen != sizeof(im)) {
+@@ -703,18 +703,17 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen)
+ if (namelen_in == 0 || namelen_in == IDMAP_NAMESZ) {
+ ret = -EINVAL;
+ goto out;
+-}
++ }
+
+- ret = nfs_idmap_read_and_verify_message(&im,
+- &idmap->idmap_upcall_data->idmap_msg,
+- rka->target_key, authkey);
++ ret = nfs_idmap_read_and_verify_message(&im, &data->idmap_msg,
++ rka->target_key, authkey);
+ if (ret >= 0) {
+ key_set_timeout(rka->target_key, nfs_idmap_cache_timeout);
+ ret = mlen;
+ }
+
+ out:
+- nfs_idmap_complete_pipe_upcall_locked(idmap, ret);
++ nfs_idmap_complete_pipe_upcall(data, ret);
+ out_noupcall:
+ return ret;
+ }
+@@ -728,7 +727,7 @@ idmap_pipe_destroy_msg(struct rpc_pipe_msg *msg)
+ struct idmap *idmap = data->idmap;
+
+ if (msg->errno)
+- nfs_idmap_abort_pipe_upcall(idmap, msg->errno);
++ nfs_idmap_abort_pipe_upcall(idmap, data, msg->errno);
+ }
+
+ static void
+@@ -736,8 +735,11 @@ idmap_release_pipe(struct inode *inode)
+ {
+ struct rpc_inode *rpci = RPC_I(inode);
+ struct idmap *idmap = (struct idmap *)rpci->private;
++ struct idmap_legacy_upcalldata *data;
+
+- nfs_idmap_abort_pipe_upcall(idmap, -EPIPE);
++ data = xchg(&idmap->idmap_upcall_data, NULL);
++ if (data)
++ nfs_idmap_complete_pipe_upcall(data, -EPIPE);
+ }
+
+ int nfs_map_name_to_uid(const struct nfs_server *server, const char *name, size_t namelen, kuid_t *uid)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index bb0e84a46d61a..77e5a99846d62 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -784,10 +784,9 @@ static void nfs4_slot_sequence_record_sent(struct nfs4_slot *slot,
+ if ((s32)(seqnr - slot->seq_nr_highest_sent) > 0)
+ slot->seq_nr_highest_sent = seqnr;
+ }
+-static void nfs4_slot_sequence_acked(struct nfs4_slot *slot,
+- u32 seqnr)
++static void nfs4_slot_sequence_acked(struct nfs4_slot *slot, u32 seqnr)
+ {
+- slot->seq_nr_highest_sent = seqnr;
++ nfs4_slot_sequence_record_sent(slot, seqnr);
+ slot->seq_nr_last_acked = seqnr;
+ }
+
+@@ -854,7 +853,6 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ __func__,
+ slot->slot_nr,
+ slot->seq_nr);
+- nfs4_slot_sequence_acked(slot, slot->seq_nr);
+ goto out_retry;
+ case -NFS4ERR_RETRY_UNCACHED_REP:
+ case -NFS4ERR_SEQ_FALSE_RETRY:
+@@ -3098,12 +3096,13 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ }
+
+ out:
+- if (opendata->lgp) {
+- nfs4_lgopen_release(opendata->lgp);
+- opendata->lgp = NULL;
+- }
+- if (!opendata->cancelled)
++ if (!opendata->cancelled) {
++ if (opendata->lgp) {
++ nfs4_lgopen_release(opendata->lgp);
++ opendata->lgp = NULL;
++ }
+ nfs4_sequence_free_slot(&opendata->o_res.seq_res);
++ }
+ return ret;
+ }
+
+@@ -9477,6 +9476,9 @@ static int nfs41_reclaim_complete_handle_errors(struct rpc_task *task, struct nf
+ rpc_delay(task, NFS4_POLL_RETRY_MAX);
+ fallthrough;
+ case -NFS4ERR_RETRY_UNCACHED_REP:
++ case -EACCES:
++ dprintk("%s: failed to reclaim complete error %d for server %s, retrying\n",
++ __func__, task->tk_status, clp->cl_hostname);
+ return -EAGAIN;
+ case -NFS4ERR_BADSESSION:
+ case -NFS4ERR_DEADSESSION:
+diff --git a/fs/ntfs3/fslog.c b/fs/ntfs3/fslog.c
+index 49b7df6167785..614513460b8e0 100644
+--- a/fs/ntfs3/fslog.c
++++ b/fs/ntfs3/fslog.c
+@@ -5057,7 +5057,7 @@ undo_action_next:
+ goto add_allocated_vcns;
+
+ vcn = le64_to_cpu(lrh->target_vcn);
+- vcn &= ~(log->clst_per_page - 1);
++ vcn &= ~(u64)(log->clst_per_page - 1);
+
+ add_allocated_vcns:
+ for (i = 0, vcn = le64_to_cpu(lrh->target_vcn),
+diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c
+index 3de5700a9b833..891125ca68488 100644
+--- a/fs/ntfs3/fsntfs.c
++++ b/fs/ntfs3/fsntfs.c
+@@ -831,10 +831,15 @@ int ntfs_update_mftmirr(struct ntfs_sb_info *sbi, int wait)
+ {
+ int err;
+ struct super_block *sb = sbi->sb;
+- u32 blocksize = sb->s_blocksize;
++ u32 blocksize;
+ sector_t block1, block2;
+ u32 bytes;
+
++ if (!sb)
++ return -EINVAL;
++
++ blocksize = sb->s_blocksize;
++
+ if (!(sbi->flags & NTFS_FLAGS_MFTMIRR))
+ return 0;
+
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index 6f81e3a49abfb..76ebea253fa25 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -1994,7 +1994,7 @@ static int indx_free_children(struct ntfs_index *indx, struct ntfs_inode *ni,
+ const struct NTFS_DE *e, bool trim)
+ {
+ int err;
+- struct indx_node *n;
++ struct indx_node *n = NULL;
+ struct INDEX_HDR *hdr;
+ CLST vbn = de_get_vbn(e);
+ size_t i;
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index be4ebdd8048b0..803ff4c63c318 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -430,6 +430,7 @@ end_enum:
+ } else if (fname && fname->home.low == cpu_to_le32(MFT_REC_EXTEND) &&
+ fname->home.seq == cpu_to_le16(MFT_REC_EXTEND)) {
+ /* Records in $Extend are not a files or general directories. */
++ inode->i_op = &ntfs_file_inode_operations;
+ } else {
+ err = -EINVAL;
+ goto out;
+diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c
+index 0c6de62877377..b41d7c824a50b 100644
+--- a/fs/ntfs3/super.c
++++ b/fs/ntfs3/super.c
+@@ -30,6 +30,7 @@
+ #include <linux/fs_context.h>
+ #include <linux/fs_parser.h>
+ #include <linux/log2.h>
++#include <linux/minmax.h>
+ #include <linux/module.h>
+ #include <linux/nls.h>
+ #include <linux/seq_file.h>
+@@ -390,7 +391,7 @@ static int ntfs_fs_reconfigure(struct fs_context *fc)
+ return -EINVAL;
+ }
+
+- memcpy(sbi->options, new_opts, sizeof(*new_opts));
++ swap(sbi->options, fc->fs_private);
+
+ return 0;
+ }
+@@ -900,6 +901,8 @@ static int ntfs_fill_super(struct super_block *sb, struct fs_context *fc)
+ ref.high = 0;
+
+ sbi->sb = sb;
++ sbi->options = fc->fs_private;
++ fc->fs_private = NULL;
+ sb->s_flags |= SB_NODIRATIME;
+ sb->s_magic = 0x7366746e; // "ntfs"
+ sb->s_op = &ntfs_sops;
+@@ -1262,8 +1265,6 @@ load_root:
+ goto put_inode_out;
+ }
+
+- fc->fs_private = NULL;
+-
+ return 0;
+
+ put_inode_out:
+@@ -1416,7 +1417,6 @@ static int ntfs_init_fs_context(struct fs_context *fc)
+ mutex_init(&sbi->compress.mtx_lzx);
+ #endif
+
+- sbi->options = opts;
+ fc->s_fs_info = sbi;
+ ok:
+ fc->fs_private = opts;
+diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
+index 5e0e0280e70de..1b8c89dbf6684 100644
+--- a/fs/ntfs3/xattr.c
++++ b/fs/ntfs3/xattr.c
+@@ -547,28 +547,23 @@ static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
+ {
+ const char *name;
+ size_t size, name_len;
+- void *value = NULL;
+- int err = 0;
++ void *value;
++ int err;
+ int flags;
++ umode_t mode;
+
+ if (S_ISLNK(inode->i_mode))
+ return -EOPNOTSUPP;
+
++ mode = inode->i_mode;
+ switch (type) {
+ case ACL_TYPE_ACCESS:
+ /* Do not change i_mode if we are in init_acl */
+ if (acl && !init_acl) {
+- umode_t mode;
+-
+ err = posix_acl_update_mode(mnt_userns, inode, &mode,
+ &acl);
+ if (err)
+- goto out;
+-
+- if (inode->i_mode != mode) {
+- inode->i_mode = mode;
+- mark_inode_dirty(inode);
+- }
++ return err;
+ }
+ name = XATTR_NAME_POSIX_ACL_ACCESS;
+ name_len = sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1;
+@@ -604,8 +599,13 @@ static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
+ err = ntfs_set_ea(inode, name, name_len, value, size, flags, 0);
+ if (err == -ENODATA && !size)
+ err = 0; /* Removing non existed xattr. */
+- if (!err)
++ if (!err) {
+ set_cached_acl(inode, type, acl);
++ if (inode->i_mode != mode) {
++ inode->i_mode = mode;
++ mark_inode_dirty(inode);
++ }
++ }
+
+ out:
+ kfree(value);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 1ce5c96983937..4c20961302094 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1418,11 +1418,12 @@ static int ovl_make_workdir(struct super_block *sb, struct ovl_fs *ofs,
+ */
+ err = ovl_setxattr(ofs, ofs->workdir, OVL_XATTR_OPAQUE, "0", 1);
+ if (err) {
++ pr_warn("failed to set xattr on upper\n");
+ ofs->noxattr = true;
+ if (ofs->config.index || ofs->config.metacopy) {
+ ofs->config.index = false;
+ ofs->config.metacopy = false;
+- pr_warn("upper fs does not support xattr, falling back to index=off,metacopy=off.\n");
++ pr_warn("...falling back to index=off,metacopy=off.\n");
+ }
+ /*
+ * xattr support is required for persistent st_ino.
+@@ -1430,8 +1431,10 @@ static int ovl_make_workdir(struct super_block *sb, struct ovl_fs *ofs,
+ */
+ if (ofs->config.xino == OVL_XINO_AUTO) {
+ ofs->config.xino = OVL_XINO_OFF;
+- pr_warn("upper fs does not support xattr, falling back to xino=off.\n");
++ pr_warn("...falling back to xino=off.\n");
+ }
++ if (err == -EPERM && !ofs->config.userxattr)
++ pr_info("try mounting with 'userxattr' option\n");
+ err = 0;
+ } else {
+ ovl_removexattr(ofs, ofs->workdir, OVL_XATTR_OPAQUE);
+diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
+index 3096f086b5a32..71ab4ba9c25d1 100644
+--- a/include/asm-generic/bitops/atomic.h
++++ b/include/asm-generic/bitops/atomic.h
+@@ -39,9 +39,6 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
+ unsigned long mask = BIT_MASK(nr);
+
+ p += BIT_WORD(nr);
+- if (READ_ONCE(*p) & mask)
+- return 1;
+-
+ old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
+ return !!(old & mask);
+ }
+@@ -53,9 +50,6 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
+ unsigned long mask = BIT_MASK(nr);
+
+ p += BIT_WORD(nr);
+- if (!(READ_ONCE(*p) & mask))
+- return 0;
+-
+ old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
+ return !!(old & mask);
+ }
+diff --git a/include/linux/bpfptr.h b/include/linux/bpfptr.h
+index 46e1757d06a35..79b2f78eec1a0 100644
+--- a/include/linux/bpfptr.h
++++ b/include/linux/bpfptr.h
+@@ -49,7 +49,9 @@ static inline void bpfptr_add(bpfptr_t *bpfptr, size_t val)
+ static inline int copy_from_bpfptr_offset(void *dst, bpfptr_t src,
+ size_t offset, size_t size)
+ {
+- return copy_from_sockptr_offset(dst, (sockptr_t) src, offset, size);
++ if (!bpfptr_is_kernel(src))
++ return copy_from_user(dst, src.user + offset, size);
++ return copy_from_kernel_nofault(dst, src.kernel + offset, size);
+ }
+
+ static inline int copy_from_bpfptr(void *dst, bpfptr_t src, size_t size)
+@@ -78,7 +80,9 @@ static inline void *kvmemdup_bpfptr(bpfptr_t src, size_t len)
+
+ static inline long strncpy_from_bpfptr(char *dst, bpfptr_t src, size_t count)
+ {
+- return strncpy_from_sockptr(dst, (sockptr_t) src, count);
++ if (bpfptr_is_kernel(src))
++ return strncpy_from_kernel_nofault(dst, src.kernel, count);
++ return strncpy_from_user(dst, src.user, count);
+ }
+
+ #endif /* _LINUX_BPFPTR_H */
+diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
+index 86af6f0a00a2a..ca98aeadcc804 100644
+--- a/include/linux/io-pgtable.h
++++ b/include/linux/io-pgtable.h
+@@ -74,17 +74,22 @@ struct io_pgtable_cfg {
+ * to support up to 35 bits PA where the bit32, bit33 and bit34 are
+ * encoded in the bit9, bit4 and bit5 of the PTE respectively.
+ *
++ * IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT: (ARM v7s format) MediaTek IOMMUs
++ * extend the translation table base support up to 35 bits PA, the
++ * encoding format is same with IO_PGTABLE_QUIRK_ARM_MTK_EXT.
++ *
+ * IO_PGTABLE_QUIRK_ARM_TTBR1: (ARM LPAE format) Configure the table
+ * for use in the upper half of a split address space.
+ *
+ * IO_PGTABLE_QUIRK_ARM_OUTER_WBWA: Override the outer-cacheability
+ * attributes set in the TCR for a non-coherent page-table walker.
+ */
+- #define IO_PGTABLE_QUIRK_ARM_NS BIT(0)
+- #define IO_PGTABLE_QUIRK_NO_PERMS BIT(1)
+- #define IO_PGTABLE_QUIRK_ARM_MTK_EXT BIT(3)
+- #define IO_PGTABLE_QUIRK_ARM_TTBR1 BIT(5)
+- #define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA BIT(6)
++ #define IO_PGTABLE_QUIRK_ARM_NS BIT(0)
++ #define IO_PGTABLE_QUIRK_NO_PERMS BIT(1)
++ #define IO_PGTABLE_QUIRK_ARM_MTK_EXT BIT(3)
++ #define IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT BIT(4)
++ #define IO_PGTABLE_QUIRK_ARM_TTBR1 BIT(5)
++ #define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA BIT(6)
+ unsigned long quirks;
+ unsigned long pgsize_bitmap;
+ unsigned int ias;
+diff --git a/include/linux/nmi.h b/include/linux/nmi.h
+index 750c7f395ca90..f700ff2df074e 100644
+--- a/include/linux/nmi.h
++++ b/include/linux/nmi.h
+@@ -122,6 +122,8 @@ int watchdog_nmi_probe(void);
+ int watchdog_nmi_enable(unsigned int cpu);
+ void watchdog_nmi_disable(unsigned int cpu);
+
++void lockup_detector_reconfigure(void);
++
+ /**
+ * touch_nmi_watchdog - restart NMI watchdog timeout.
+ *
+diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
+index 5860f32e39580..986c8a17ca5e7 100644
+--- a/include/linux/sunrpc/xdr.h
++++ b/include/linux/sunrpc/xdr.h
+@@ -419,8 +419,8 @@ static inline int xdr_stream_encode_item_absent(struct xdr_stream *xdr)
+ */
+ static inline __be32 *xdr_encode_bool(__be32 *p, u32 n)
+ {
+- *p = n ? xdr_one : xdr_zero;
+- return p++;
++ *p++ = n ? xdr_one : xdr_zero;
++ return p;
+ }
+
+ /**
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index 522bbf9379571..99f54d4f4bca2 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -144,7 +144,8 @@ struct rpc_xprt_ops {
+ unsigned short (*get_srcport)(struct rpc_xprt *xprt);
+ int (*buf_alloc)(struct rpc_task *task);
+ void (*buf_free)(struct rpc_task *task);
+- int (*prepare_request)(struct rpc_rqst *req);
++ int (*prepare_request)(struct rpc_rqst *req,
++ struct xdr_buf *buf);
+ int (*send_request)(struct rpc_rqst *req);
+ void (*wait_for_reply_request)(struct rpc_task *task);
+ void (*timer)(struct rpc_xprt *xprt, struct rpc_task *task);
+diff --git a/include/linux/uacce.h b/include/linux/uacce.h
+index 48e319f402751..9ce88c28b0a87 100644
+--- a/include/linux/uacce.h
++++ b/include/linux/uacce.h
+@@ -70,6 +70,7 @@ enum uacce_q_state {
+ * @wait: wait queue head
+ * @list: index into uacce queues list
+ * @qfrs: pointer of qfr regions
++ * @mutex: protects queue state
+ * @state: queue state machine
+ * @pasid: pasid associated to the mm
+ * @handle: iommu_sva handle returned by iommu_sva_bind_device()
+@@ -80,6 +81,7 @@ struct uacce_queue {
+ wait_queue_head_t wait;
+ struct list_head list;
+ struct uacce_qfile_region *qfrs[UACCE_MAX_REGION];
++ struct mutex mutex;
+ enum uacce_q_state state;
+ u32 pasid;
+ struct iommu_sva *handle;
+@@ -97,9 +99,9 @@ struct uacce_queue {
+ * @dev_id: id of the uacce device
+ * @cdev: cdev of the uacce
+ * @dev: dev of the uacce
++ * @mutex: protects uacce operation
+ * @priv: private pointer of the uacce
+ * @queues: list of queues
+- * @queues_lock: lock for queues list
+ * @inode: core vfs
+ */
+ struct uacce_device {
+@@ -113,9 +115,9 @@ struct uacce_device {
+ u32 dev_id;
+ struct cdev *cdev;
+ struct device dev;
++ struct mutex mutex;
+ void *priv;
+ struct list_head queues;
+- struct mutex queues_lock;
+ struct inode *inode;
+ };
+
+diff --git a/include/linux/usb/typec_mux.h b/include/linux/usb/typec_mux.h
+index ee57781dcf288..9292f0e078464 100644
+--- a/include/linux/usb/typec_mux.h
++++ b/include/linux/usb/typec_mux.h
+@@ -58,17 +58,13 @@ struct typec_mux_desc {
+ void *drvdata;
+ };
+
++#if IS_ENABLED(CONFIG_TYPEC)
++
+ struct typec_mux *fwnode_typec_mux_get(struct fwnode_handle *fwnode,
+ const struct typec_altmode_desc *desc);
+ void typec_mux_put(struct typec_mux *mux);
+ int typec_mux_set(struct typec_mux *mux, struct typec_mux_state *state);
+
+-static inline struct typec_mux *
+-typec_mux_get(struct device *dev, const struct typec_altmode_desc *desc)
+-{
+- return fwnode_typec_mux_get(dev_fwnode(dev), desc);
+-}
+-
+ struct typec_mux_dev *
+ typec_mux_register(struct device *parent, const struct typec_mux_desc *desc);
+ void typec_mux_unregister(struct typec_mux_dev *mux);
+@@ -76,4 +72,40 @@ void typec_mux_unregister(struct typec_mux_dev *mux);
+ void typec_mux_set_drvdata(struct typec_mux_dev *mux, void *data);
+ void *typec_mux_get_drvdata(struct typec_mux_dev *mux);
+
++#else
++
++static inline struct typec_mux *fwnode_typec_mux_get(struct fwnode_handle *fwnode,
++ const struct typec_altmode_desc *desc)
++{
++ return NULL;
++}
++
++static inline void typec_mux_put(struct typec_mux *mux) {}
++
++static inline int typec_mux_set(struct typec_mux *mux, struct typec_mux_state *state)
++{
++ return 0;
++}
++
++static inline struct typec_mux_dev *
++typec_mux_register(struct device *parent, const struct typec_mux_desc *desc)
++{
++ return ERR_PTR(-EOPNOTSUPP);
++}
++static inline void typec_mux_unregister(struct typec_mux_dev *mux) {}
++
++static inline void typec_mux_set_drvdata(struct typec_mux_dev *mux, void *data) {}
++static inline void *typec_mux_get_drvdata(struct typec_mux_dev *mux)
++{
++ return ERR_PTR(-EOPNOTSUPP);
++}
++
++#endif /* CONFIG_TYPEC */
++
++static inline struct typec_mux *
++typec_mux_get(struct device *dev, const struct typec_altmode_desc *desc)
++{
++ return fwnode_typec_mux_get(dev_fwnode(dev), desc);
++}
++
+ #endif /* __USB_TYPEC_MUX */
+diff --git a/include/net/mptcp.h b/include/net/mptcp.h
+index 4d761ad530c94..e2ff509da0198 100644
+--- a/include/net/mptcp.h
++++ b/include/net/mptcp.h
+@@ -290,4 +290,8 @@ struct mptcp_sock *bpf_mptcp_sock_from_subflow(struct sock *sk);
+ static inline struct mptcp_sock *bpf_mptcp_sock_from_subflow(struct sock *sk) { return NULL; }
+ #endif
+
++#if !IS_ENABLED(CONFIG_MPTCP)
++struct mptcp_sock { };
++#endif
++
+ #endif /* __NET_MPTCP_H */
+diff --git a/include/net/netns/conntrack.h b/include/net/netns/conntrack.h
+index 0677cd3de0344..c396a3862e808 100644
+--- a/include/net/netns/conntrack.h
++++ b/include/net/netns/conntrack.h
+@@ -95,7 +95,7 @@ struct nf_ip_net {
+
+ struct netns_ct {
+ #ifdef CONFIG_NF_CONNTRACK_EVENTS
+- bool ctnetlink_has_listener;
++ u8 ctnetlink_has_listener;
+ bool ecache_dwork_pending;
+ #endif
+ u8 sysctl_log_invalid; /* Log invalid packets */
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index ac151ecc7f19f..2428bc64cb1d6 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -105,11 +105,6 @@
+ #define REG_RESERVED_ADDR 0xffffffff
+ #define REG_RESERVED(reg) REG(reg, REG_RESERVED_ADDR)
+
+-#define for_each_stat(ocelot, stat) \
+- for ((stat) = (ocelot)->stats_layout; \
+- ((stat)->name[0] != '\0'); \
+- (stat)++)
+-
+ enum ocelot_target {
+ ANA = 1,
+ QS,
+@@ -335,7 +330,8 @@ enum ocelot_reg {
+ SYS_COUNT_RX_64,
+ SYS_COUNT_RX_65_127,
+ SYS_COUNT_RX_128_255,
+- SYS_COUNT_RX_256_1023,
++ SYS_COUNT_RX_256_511,
++ SYS_COUNT_RX_512_1023,
+ SYS_COUNT_RX_1024_1526,
+ SYS_COUNT_RX_1527_MAX,
+ SYS_COUNT_RX_PAUSE,
+@@ -351,7 +347,8 @@ enum ocelot_reg {
+ SYS_COUNT_TX_PAUSE,
+ SYS_COUNT_TX_64,
+ SYS_COUNT_TX_65_127,
+- SYS_COUNT_TX_128_511,
++ SYS_COUNT_TX_128_255,
++ SYS_COUNT_TX_256_511,
+ SYS_COUNT_TX_512_1023,
+ SYS_COUNT_TX_1024_1526,
+ SYS_COUNT_TX_1527_MAX,
+@@ -538,13 +535,108 @@ enum ocelot_ptp_pins {
+ TOD_ACC_PIN
+ };
+
++enum ocelot_stat {
++ OCELOT_STAT_RX_OCTETS,
++ OCELOT_STAT_RX_UNICAST,
++ OCELOT_STAT_RX_MULTICAST,
++ OCELOT_STAT_RX_BROADCAST,
++ OCELOT_STAT_RX_SHORTS,
++ OCELOT_STAT_RX_FRAGMENTS,
++ OCELOT_STAT_RX_JABBERS,
++ OCELOT_STAT_RX_CRC_ALIGN_ERRS,
++ OCELOT_STAT_RX_SYM_ERRS,
++ OCELOT_STAT_RX_64,
++ OCELOT_STAT_RX_65_127,
++ OCELOT_STAT_RX_128_255,
++ OCELOT_STAT_RX_256_511,
++ OCELOT_STAT_RX_512_1023,
++ OCELOT_STAT_RX_1024_1526,
++ OCELOT_STAT_RX_1527_MAX,
++ OCELOT_STAT_RX_PAUSE,
++ OCELOT_STAT_RX_CONTROL,
++ OCELOT_STAT_RX_LONGS,
++ OCELOT_STAT_RX_CLASSIFIED_DROPS,
++ OCELOT_STAT_RX_RED_PRIO_0,
++ OCELOT_STAT_RX_RED_PRIO_1,
++ OCELOT_STAT_RX_RED_PRIO_2,
++ OCELOT_STAT_RX_RED_PRIO_3,
++ OCELOT_STAT_RX_RED_PRIO_4,
++ OCELOT_STAT_RX_RED_PRIO_5,
++ OCELOT_STAT_RX_RED_PRIO_6,
++ OCELOT_STAT_RX_RED_PRIO_7,
++ OCELOT_STAT_RX_YELLOW_PRIO_0,
++ OCELOT_STAT_RX_YELLOW_PRIO_1,
++ OCELOT_STAT_RX_YELLOW_PRIO_2,
++ OCELOT_STAT_RX_YELLOW_PRIO_3,
++ OCELOT_STAT_RX_YELLOW_PRIO_4,
++ OCELOT_STAT_RX_YELLOW_PRIO_5,
++ OCELOT_STAT_RX_YELLOW_PRIO_6,
++ OCELOT_STAT_RX_YELLOW_PRIO_7,
++ OCELOT_STAT_RX_GREEN_PRIO_0,
++ OCELOT_STAT_RX_GREEN_PRIO_1,
++ OCELOT_STAT_RX_GREEN_PRIO_2,
++ OCELOT_STAT_RX_GREEN_PRIO_3,
++ OCELOT_STAT_RX_GREEN_PRIO_4,
++ OCELOT_STAT_RX_GREEN_PRIO_5,
++ OCELOT_STAT_RX_GREEN_PRIO_6,
++ OCELOT_STAT_RX_GREEN_PRIO_7,
++ OCELOT_STAT_TX_OCTETS,
++ OCELOT_STAT_TX_UNICAST,
++ OCELOT_STAT_TX_MULTICAST,
++ OCELOT_STAT_TX_BROADCAST,
++ OCELOT_STAT_TX_COLLISION,
++ OCELOT_STAT_TX_DROPS,
++ OCELOT_STAT_TX_PAUSE,
++ OCELOT_STAT_TX_64,
++ OCELOT_STAT_TX_65_127,
++ OCELOT_STAT_TX_128_255,
++ OCELOT_STAT_TX_256_511,
++ OCELOT_STAT_TX_512_1023,
++ OCELOT_STAT_TX_1024_1526,
++ OCELOT_STAT_TX_1527_MAX,
++ OCELOT_STAT_TX_YELLOW_PRIO_0,
++ OCELOT_STAT_TX_YELLOW_PRIO_1,
++ OCELOT_STAT_TX_YELLOW_PRIO_2,
++ OCELOT_STAT_TX_YELLOW_PRIO_3,
++ OCELOT_STAT_TX_YELLOW_PRIO_4,
++ OCELOT_STAT_TX_YELLOW_PRIO_5,
++ OCELOT_STAT_TX_YELLOW_PRIO_6,
++ OCELOT_STAT_TX_YELLOW_PRIO_7,
++ OCELOT_STAT_TX_GREEN_PRIO_0,
++ OCELOT_STAT_TX_GREEN_PRIO_1,
++ OCELOT_STAT_TX_GREEN_PRIO_2,
++ OCELOT_STAT_TX_GREEN_PRIO_3,
++ OCELOT_STAT_TX_GREEN_PRIO_4,
++ OCELOT_STAT_TX_GREEN_PRIO_5,
++ OCELOT_STAT_TX_GREEN_PRIO_6,
++ OCELOT_STAT_TX_GREEN_PRIO_7,
++ OCELOT_STAT_TX_AGED,
++ OCELOT_STAT_DROP_LOCAL,
++ OCELOT_STAT_DROP_TAIL,
++ OCELOT_STAT_DROP_YELLOW_PRIO_0,
++ OCELOT_STAT_DROP_YELLOW_PRIO_1,
++ OCELOT_STAT_DROP_YELLOW_PRIO_2,
++ OCELOT_STAT_DROP_YELLOW_PRIO_3,
++ OCELOT_STAT_DROP_YELLOW_PRIO_4,
++ OCELOT_STAT_DROP_YELLOW_PRIO_5,
++ OCELOT_STAT_DROP_YELLOW_PRIO_6,
++ OCELOT_STAT_DROP_YELLOW_PRIO_7,
++ OCELOT_STAT_DROP_GREEN_PRIO_0,
++ OCELOT_STAT_DROP_GREEN_PRIO_1,
++ OCELOT_STAT_DROP_GREEN_PRIO_2,
++ OCELOT_STAT_DROP_GREEN_PRIO_3,
++ OCELOT_STAT_DROP_GREEN_PRIO_4,
++ OCELOT_STAT_DROP_GREEN_PRIO_5,
++ OCELOT_STAT_DROP_GREEN_PRIO_6,
++ OCELOT_STAT_DROP_GREEN_PRIO_7,
++ OCELOT_NUM_STATS,
++};
++
+ struct ocelot_stat_layout {
+ u32 offset;
+ char name[ETH_GSTRING_LEN];
+ };
+
+-#define OCELOT_STAT_END { .name = "" }
+-
+ struct ocelot_stats_region {
+ struct list_head node;
+ u32 offset;
+@@ -707,7 +799,6 @@ struct ocelot {
+ const u32 *const *map;
+ const struct ocelot_stat_layout *stats_layout;
+ struct list_head stats_regions;
+- unsigned int num_stats;
+
+ u32 pool_size[OCELOT_SB_NUM][OCELOT_SB_POOL_NUM];
+ int packet_buffer_size;
+@@ -750,7 +841,7 @@ struct ocelot {
+ struct ocelot_psfp_list psfp;
+
+ /* Workqueue to check statistics for overflow with its lock */
+- struct mutex stats_lock;
++ spinlock_t stats_lock;
+ u64 *stats;
+ struct delayed_work stats_work;
+ struct workqueue_struct *stats_queue;
+diff --git a/include/sound/control.h b/include/sound/control.h
+index 985c51a8fb748..a1fc7e0a47d95 100644
+--- a/include/sound/control.h
++++ b/include/sound/control.h
+@@ -109,7 +109,7 @@ struct snd_ctl_file {
+ int preferred_subdevice[SND_CTL_SUBDEV_ITEMS];
+ wait_queue_head_t change_sleep;
+ spinlock_t read_lock;
+- struct fasync_struct *fasync;
++ struct snd_fasync *fasync;
+ int subscribed; /* read interface is activated */
+ struct list_head events; /* waiting events for read */
+ };
+diff --git a/include/sound/core.h b/include/sound/core.h
+index 6d4cc49584c63..39cee40ac22e0 100644
+--- a/include/sound/core.h
++++ b/include/sound/core.h
+@@ -501,4 +501,12 @@ snd_pci_quirk_lookup_id(u16 vendor, u16 device,
+ }
+ #endif
+
++/* async signal helpers */
++struct snd_fasync;
++
++int snd_fasync_helper(int fd, struct file *file, int on,
++ struct snd_fasync **fasyncp);
++void snd_kill_fasync(struct snd_fasync *fasync, int signal, int poll);
++void snd_fasync_free(struct snd_fasync *fasync);
++
+ #endif /* __SOUND_CORE_H */
+diff --git a/include/sound/pcm.h b/include/sound/pcm.h
+index 6b99310b5b889..6987110843f03 100644
+--- a/include/sound/pcm.h
++++ b/include/sound/pcm.h
+@@ -399,7 +399,7 @@ struct snd_pcm_runtime {
+ snd_pcm_uframes_t twake; /* do transfer (!poll) wakeup if non-zero */
+ wait_queue_head_t sleep; /* poll sleep */
+ wait_queue_head_t tsleep; /* transfer sleep */
+- struct fasync_struct *fasync;
++ struct snd_fasync *fasync;
+ bool stop_operating; /* sync_stop will be called */
+ struct mutex buffer_mutex; /* protect for buffer changes */
+ atomic_t buffer_accessing; /* >0: in r/w operation, <0: blocked */
+diff --git a/include/uapi/linux/atm_zatm.h b/include/uapi/linux/atm_zatm.h
+new file mode 100644
+index 0000000000000..5135027b93c1c
+--- /dev/null
++++ b/include/uapi/linux/atm_zatm.h
+@@ -0,0 +1,47 @@
++/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
++/* atm_zatm.h - Driver-specific declarations of the ZATM driver (for use by
++ driver-specific utilities) */
++
++/* Written 1995-1999 by Werner Almesberger, EPFL LRC/ICA */
++
++
++#ifndef LINUX_ATM_ZATM_H
++#define LINUX_ATM_ZATM_H
++
++/*
++ * Note: non-kernel programs including this file must also include
++ * sys/types.h for struct timeval
++ */
++
++#include <linux/atmapi.h>
++#include <linux/atmioc.h>
++
++#define ZATM_GETPOOL _IOW('a',ATMIOC_SARPRV+1,struct atmif_sioc)
++ /* get pool statistics */
++#define ZATM_GETPOOLZ _IOW('a',ATMIOC_SARPRV+2,struct atmif_sioc)
++ /* get statistics and zero */
++#define ZATM_SETPOOL _IOW('a',ATMIOC_SARPRV+3,struct atmif_sioc)
++ /* set pool parameters */
++
++struct zatm_pool_info {
++ int ref_count; /* free buffer pool usage counters */
++ int low_water,high_water; /* refill parameters */
++ int rqa_count,rqu_count; /* queue condition counters */
++ int offset,next_off; /* alignment optimizations: offset */
++ int next_cnt,next_thres; /* repetition counter and threshold */
++};
++
++struct zatm_pool_req {
++ int pool_num; /* pool number */
++ struct zatm_pool_info info; /* actual information */
++};
++
++#define ZATM_OAM_POOL 0 /* free buffer pool for OAM cells */
++#define ZATM_AAL0_POOL 1 /* free buffer pool for AAL0 cells */
++#define ZATM_AAL5_POOL_BASE 2 /* first AAL5 free buffer pool */
++#define ZATM_LAST_POOL ZATM_AAL5_POOL_BASE+10 /* max. 64 kB */
++
++#define ZATM_TIMER_HISTORY_SIZE 16 /* number of timer adjustments to
++ record; must be 2^n */
++
++#endif
+diff --git a/include/uapi/linux/f2fs.h b/include/uapi/linux/f2fs.h
+index 352a822d43709..3121d127d5aae 100644
+--- a/include/uapi/linux/f2fs.h
++++ b/include/uapi/linux/f2fs.h
+@@ -13,7 +13,7 @@
+ #define F2FS_IOC_COMMIT_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 2)
+ #define F2FS_IOC_START_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 3)
+ #define F2FS_IOC_RELEASE_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 4)
+-#define F2FS_IOC_ABORT_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 5)
++#define F2FS_IOC_ABORT_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 5)
+ #define F2FS_IOC_GARBAGE_COLLECT _IOW(F2FS_IOCTL_MAGIC, 6, __u32)
+ #define F2FS_IOC_WRITE_CHECKPOINT _IO(F2FS_IOCTL_MAGIC, 7)
+ #define F2FS_IOC_DEFRAGMENT _IOWR(F2FS_IOCTL_MAGIC, 8, \
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index a92271421718e..991aea081ec70 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -577,6 +577,18 @@ enum ufshcd_quirks {
+ * support physical host configuration.
+ */
+ UFSHCD_QUIRK_SKIP_PH_CONFIGURATION = 1 << 16,
++
++ /*
++ * This quirk needs to be enabled if the host controller has
++ * 64-bit addressing supported capability but it doesn't work.
++ */
++ UFSHCD_QUIRK_BROKEN_64BIT_ADDRESS = 1 << 17,
++
++ /*
++ * This quirk needs to be enabled if the host controller has
++ * auto-hibernate capability but it's FASTAUTO only.
++ */
++ UFSHCD_QUIRK_HIBERN_FASTAUTO = 1 << 18,
+ };
+
+ enum ufshcd_caps {
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 1d05d63e6fa5a..a50cf0bb520ab 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -649,6 +649,11 @@ static int bpf_iter_init_array_map(void *priv_data,
+ seq_info->percpu_value_buf = value_buf;
+ }
+
++ /* bpf_iter_attach_map() acquires a map uref, and the uref may be
++ * released before or in the middle of iterating map elements, so
++ * acquire an extra map uref for iterator.
++ */
++ bpf_map_inc_with_uref(map);
+ seq_info->map = map;
+ return 0;
+ }
+@@ -657,6 +662,7 @@ static void bpf_iter_fini_array_map(void *priv_data)
+ {
+ struct bpf_iter_seq_array_map_info *seq_info = priv_data;
+
++ bpf_map_put_with_uref(seq_info->map);
+ kfree(seq_info->percpu_value_buf);
+ }
+
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index 17fb69c0e0dcd..4dd5e0005afa1 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -311,12 +311,8 @@ static struct htab_elem *prealloc_lru_pop(struct bpf_htab *htab, void *key,
+ struct htab_elem *l;
+
+ if (node) {
+- u32 key_size = htab->map.key_size;
+-
+ l = container_of(node, struct htab_elem, lru_node);
+- memcpy(l->key, key, key_size);
+- check_and_init_map_value(&htab->map,
+- l->key + round_up(key_size, 8));
++ memcpy(l->key, key, htab->map.key_size);
+ return l;
+ }
+
+@@ -2064,6 +2060,7 @@ static int bpf_iter_init_hash_map(void *priv_data,
+ seq_info->percpu_value_buf = value_buf;
+ }
+
++ bpf_map_inc_with_uref(map);
+ seq_info->map = map;
+ seq_info->htab = container_of(map, struct bpf_htab, map);
+ return 0;
+@@ -2073,6 +2070,7 @@ static void bpf_iter_fini_hash_map(void *priv_data)
+ {
+ struct bpf_iter_seq_hash_map_info *seq_info = priv_data;
+
++ bpf_map_put_with_uref(seq_info->map);
+ kfree(seq_info->percpu_value_buf);
+ }
+
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 2b69306d3c6e6..82e83cfb4114a 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -5035,9 +5035,6 @@ static bool syscall_prog_is_valid_access(int off, int size,
+
+ BPF_CALL_3(bpf_sys_bpf, int, cmd, union bpf_attr *, attr, u32, attr_size)
+ {
+- struct bpf_prog * __maybe_unused prog;
+- struct bpf_tramp_run_ctx __maybe_unused run_ctx;
+-
+ switch (cmd) {
+ case BPF_MAP_CREATE:
+ case BPF_MAP_UPDATE_ELEM:
+@@ -5047,6 +5044,18 @@ BPF_CALL_3(bpf_sys_bpf, int, cmd, union bpf_attr *, attr, u32, attr_size)
+ case BPF_LINK_CREATE:
+ case BPF_RAW_TRACEPOINT_OPEN:
+ break;
++ default:
++ return -EINVAL;
++ }
++ return __sys_bpf(cmd, KERNEL_BPFPTR(attr), attr_size);
++}
++
++int kern_sys_bpf(int cmd, union bpf_attr *attr, unsigned int size)
++{
++ struct bpf_prog * __maybe_unused prog;
++ struct bpf_tramp_run_ctx __maybe_unused run_ctx;
++
++ switch (cmd) {
+ #ifdef CONFIG_BPF_JIT /* __bpf_prog_enter_sleepable used by trampoline and JIT */
+ case BPF_PROG_TEST_RUN:
+ if (attr->test.data_in || attr->test.data_out ||
+@@ -5077,11 +5086,10 @@ BPF_CALL_3(bpf_sys_bpf, int, cmd, union bpf_attr *, attr, u32, attr_size)
+ return 0;
+ #endif
+ default:
+- return -EINVAL;
++ return ____bpf_sys_bpf(cmd, attr, size);
+ }
+- return __sys_bpf(cmd, KERNEL_BPFPTR(attr), attr_size);
+ }
+-EXPORT_SYMBOL(bpf_sys_bpf);
++EXPORT_SYMBOL(kern_sys_bpf);
+
+ static const struct bpf_func_proto bpf_sys_bpf_proto = {
+ .func = bpf_sys_bpf,
+diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
+index 7d4478525c669..4c57fc89fa17b 100644
+--- a/kernel/trace/trace_eprobe.c
++++ b/kernel/trace/trace_eprobe.c
+@@ -226,6 +226,7 @@ static int trace_eprobe_tp_arg_update(struct trace_eprobe *ep, int i)
+ struct probe_arg *parg = &ep->tp.args[i];
+ struct ftrace_event_field *field;
+ struct list_head *head;
++ int ret = -ENOENT;
+
+ head = trace_get_fields(ep->event);
+ list_for_each_entry(field, head, link) {
+@@ -235,9 +236,20 @@ static int trace_eprobe_tp_arg_update(struct trace_eprobe *ep, int i)
+ return 0;
+ }
+ }
++
++ /*
++ * Argument not found on event. But allow for comm and COMM
++ * to be used to get the current->comm.
++ */
++ if (strcmp(parg->code->data, "COMM") == 0 ||
++ strcmp(parg->code->data, "comm") == 0) {
++ parg->code->op = FETCH_OP_COMM;
++ ret = 0;
++ }
++
+ kfree(parg->code->data);
+ parg->code->data = NULL;
+- return -ENOENT;
++ return ret;
+ }
+
+ static int eprobe_event_define_fields(struct trace_event_call *event_call)
+@@ -310,6 +322,27 @@ static unsigned long get_event_field(struct fetch_insn *code, void *rec)
+
+ addr = rec + field->offset;
+
++ if (is_string_field(field)) {
++ switch (field->filter_type) {
++ case FILTER_DYN_STRING:
++ val = (unsigned long)(rec + (*(unsigned int *)addr & 0xffff));
++ break;
++ case FILTER_RDYN_STRING:
++ val = (unsigned long)(addr + (*(unsigned int *)addr & 0xffff));
++ break;
++ case FILTER_STATIC_STRING:
++ val = (unsigned long)addr;
++ break;
++ case FILTER_PTR_STRING:
++ val = (unsigned long)(*(char *)addr);
++ break;
++ default:
++ WARN_ON_ONCE(1);
++ return 0;
++ }
++ return val;
++ }
++
+ switch (field->size) {
+ case 1:
+ if (field->is_signed)
+@@ -341,16 +374,38 @@ static unsigned long get_event_field(struct fetch_insn *code, void *rec)
+
+ static int get_eprobe_size(struct trace_probe *tp, void *rec)
+ {
++ struct fetch_insn *code;
+ struct probe_arg *arg;
+ int i, len, ret = 0;
+
+ for (i = 0; i < tp->nr_args; i++) {
+ arg = tp->args + i;
+- if (unlikely(arg->dynamic)) {
++ if (arg->dynamic) {
+ unsigned long val;
+
+- val = get_event_field(arg->code, rec);
+- len = process_fetch_insn_bottom(arg->code + 1, val, NULL, NULL);
++ code = arg->code;
++ retry:
++ switch (code->op) {
++ case FETCH_OP_TP_ARG:
++ val = get_event_field(code, rec);
++ break;
++ case FETCH_OP_IMM:
++ val = code->immediate;
++ break;
++ case FETCH_OP_COMM:
++ val = (unsigned long)current->comm;
++ break;
++ case FETCH_OP_DATA:
++ val = (unsigned long)code->data;
++ break;
++ case FETCH_NOP_SYMBOL: /* Ignore a place holder */
++ code++;
++ goto retry;
++ default:
++ continue;
++ }
++ code++;
++ len = process_fetch_insn_bottom(code, val, NULL, NULL);
+ if (len > 0)
+ ret += len;
+ }
+@@ -368,8 +423,28 @@ process_fetch_insn(struct fetch_insn *code, void *rec, void *dest,
+ {
+ unsigned long val;
+
+- val = get_event_field(code, rec);
+- return process_fetch_insn_bottom(code + 1, val, dest, base);
++ retry:
++ switch (code->op) {
++ case FETCH_OP_TP_ARG:
++ val = get_event_field(code, rec);
++ break;
++ case FETCH_OP_IMM:
++ val = code->immediate;
++ break;
++ case FETCH_OP_COMM:
++ val = (unsigned long)current->comm;
++ break;
++ case FETCH_OP_DATA:
++ val = (unsigned long)code->data;
++ break;
++ case FETCH_NOP_SYMBOL: /* Ignore a place holder */
++ code++;
++ goto retry;
++ default:
++ return -EILSEQ;
++ }
++ code++;
++ return process_fetch_insn_bottom(code, val, dest, base);
+ }
+ NOKPROBE_SYMBOL(process_fetch_insn)
+
+@@ -841,6 +916,10 @@ static int trace_eprobe_tp_update_arg(struct trace_eprobe *ep, const char *argv[
+ if (ep->tp.args[i].code->op == FETCH_OP_TP_ARG)
+ ret = trace_eprobe_tp_arg_update(ep, i);
+
++ /* Handle symbols "@" */
++ if (!ret)
++ ret = traceprobe_update_arg(&ep->tp.args[i]);
++
+ return ret;
+ }
+
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index a114549720d63..61e3a2620fa3c 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -157,7 +157,7 @@ static void perf_trace_event_unreg(struct perf_event *p_event)
+ int i;
+
+ if (--tp_event->perf_refcount > 0)
+- goto out;
++ return;
+
+ tp_event->class->reg(tp_event, TRACE_REG_PERF_UNREGISTER, NULL);
+
+@@ -176,8 +176,6 @@ static void perf_trace_event_unreg(struct perf_event *p_event)
+ perf_trace_buf[i] = NULL;
+ }
+ }
+-out:
+- trace_event_put_ref(tp_event);
+ }
+
+ static int perf_trace_event_open(struct perf_event *p_event)
+@@ -241,6 +239,7 @@ void perf_trace_destroy(struct perf_event *p_event)
+ mutex_lock(&event_mutex);
+ perf_trace_event_close(p_event);
+ perf_trace_event_unreg(p_event);
++ trace_event_put_ref(p_event->tp_event);
+ mutex_unlock(&event_mutex);
+ }
+
+@@ -292,6 +291,7 @@ void perf_kprobe_destroy(struct perf_event *p_event)
+ mutex_lock(&event_mutex);
+ perf_trace_event_close(p_event);
+ perf_trace_event_unreg(p_event);
++ trace_event_put_ref(p_event->tp_event);
+ mutex_unlock(&event_mutex);
+
+ destroy_local_trace_kprobe(p_event->tp_event);
+@@ -347,6 +347,7 @@ void perf_uprobe_destroy(struct perf_event *p_event)
+ mutex_lock(&event_mutex);
+ perf_trace_event_close(p_event);
+ perf_trace_event_unreg(p_event);
++ trace_event_put_ref(p_event->tp_event);
+ mutex_unlock(&event_mutex);
+ destroy_local_trace_uprobe(p_event->tp_event);
+ }
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 181f08186d32c..0356cae0cf74e 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -176,6 +176,7 @@ static int trace_define_generic_fields(void)
+
+ __generic_field(int, CPU, FILTER_CPU);
+ __generic_field(int, cpu, FILTER_CPU);
++ __generic_field(int, common_cpu, FILTER_CPU);
+ __generic_field(char *, COMM, FILTER_COMM);
+ __generic_field(char *, comm, FILTER_COMM);
+
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 80863c6508e5e..d7626c936b986 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -279,7 +279,14 @@ static int parse_probe_vars(char *arg, const struct fetch_type *t,
+ int ret = 0;
+ int len;
+
+- if (strcmp(arg, "retval") == 0) {
++ if (flags & TPARG_FL_TPOINT) {
++ if (code->data)
++ return -EFAULT;
++ code->data = kstrdup(arg, GFP_KERNEL);
++ if (!code->data)
++ return -ENOMEM;
++ code->op = FETCH_OP_TP_ARG;
++ } else if (strcmp(arg, "retval") == 0) {
+ if (flags & TPARG_FL_RETURN) {
+ code->op = FETCH_OP_RETVAL;
+ } else {
+@@ -303,7 +310,7 @@ static int parse_probe_vars(char *arg, const struct fetch_type *t,
+ }
+ } else
+ goto inval_var;
+- } else if (strcmp(arg, "comm") == 0) {
++ } else if (strcmp(arg, "comm") == 0 || strcmp(arg, "COMM") == 0) {
+ code->op = FETCH_OP_COMM;
+ #ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API
+ } else if (((flags & TPARG_FL_MASK) ==
+@@ -319,13 +326,6 @@ static int parse_probe_vars(char *arg, const struct fetch_type *t,
+ code->op = FETCH_OP_ARG;
+ code->param = (unsigned int)param - 1;
+ #endif
+- } else if (flags & TPARG_FL_TPOINT) {
+- if (code->data)
+- return -EFAULT;
+- code->data = kstrdup(arg, GFP_KERNEL);
+- if (!code->data)
+- return -ENOMEM;
+- code->op = FETCH_OP_TP_ARG;
+ } else
+ goto inval_var;
+
+@@ -380,6 +380,11 @@ parse_probe_arg(char *arg, const struct fetch_type *type,
+ break;
+
+ case '%': /* named register */
++ if (flags & TPARG_FL_TPOINT) {
++ /* eprobes do not handle registers */
++ trace_probe_log_err(offs, BAD_VAR);
++ break;
++ }
+ ret = regs_query_register_offset(arg + 1);
+ if (ret >= 0) {
+ code->op = FETCH_OP_REG;
+@@ -613,9 +618,11 @@ static int traceprobe_parse_probe_arg_body(const char *argv, ssize_t *size,
+
+ /*
+ * Since $comm and immediate string can not be dereferenced,
+- * we can find those by strcmp.
++ * we can find those by strcmp. But ignore for eprobes.
+ */
+- if (strcmp(arg, "$comm") == 0 || strncmp(arg, "\\\"", 2) == 0) {
++ if (!(flags & TPARG_FL_TPOINT) &&
++ (strcmp(arg, "$comm") == 0 || strcmp(arg, "$COMM") == 0 ||
++ strncmp(arg, "\\\"", 2) == 0)) {
+ /* The type of $comm must be "string", and not an array. */
+ if (parg->count || (t && strcmp(t, "string")))
+ goto out;
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index ecb0e8346e653..8e61f21e7e33e 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -537,7 +537,7 @@ int lockup_detector_offline_cpu(unsigned int cpu)
+ return 0;
+ }
+
+-static void lockup_detector_reconfigure(void)
++static void __lockup_detector_reconfigure(void)
+ {
+ cpus_read_lock();
+ watchdog_nmi_stop();
+@@ -557,6 +557,13 @@ static void lockup_detector_reconfigure(void)
+ __lockup_detector_cleanup();
+ }
+
++void lockup_detector_reconfigure(void)
++{
++ mutex_lock(&watchdog_mutex);
++ __lockup_detector_reconfigure();
++ mutex_unlock(&watchdog_mutex);
++}
++
+ /*
+ * Create the watchdog infrastructure and configure the detector(s).
+ */
+@@ -573,13 +580,13 @@ static __init void lockup_detector_setup(void)
+ return;
+
+ mutex_lock(&watchdog_mutex);
+- lockup_detector_reconfigure();
++ __lockup_detector_reconfigure();
+ softlockup_initialized = true;
+ mutex_unlock(&watchdog_mutex);
+ }
+
+ #else /* CONFIG_SOFTLOCKUP_DETECTOR */
+-static void lockup_detector_reconfigure(void)
++static void __lockup_detector_reconfigure(void)
+ {
+ cpus_read_lock();
+ watchdog_nmi_stop();
+@@ -587,9 +594,13 @@ static void lockup_detector_reconfigure(void)
+ watchdog_nmi_start();
+ cpus_read_unlock();
+ }
++void lockup_detector_reconfigure(void)
++{
++ __lockup_detector_reconfigure();
++}
+ static inline void lockup_detector_setup(void)
+ {
+- lockup_detector_reconfigure();
++ __lockup_detector_reconfigure();
+ }
+ #endif /* !CONFIG_SOFTLOCKUP_DETECTOR */
+
+@@ -629,7 +640,7 @@ static void proc_watchdog_update(void)
+ {
+ /* Remove impossible cpus to keep sysctl output clean. */
+ cpumask_and(&watchdog_cpumask, &watchdog_cpumask, cpu_possible_mask);
+- lockup_detector_reconfigure();
++ __lockup_detector_reconfigure();
+ }
+
+ /*
+diff --git a/lib/list_debug.c b/lib/list_debug.c
+index 9daa3fb9d1cd6..d98d43f80958b 100644
+--- a/lib/list_debug.c
++++ b/lib/list_debug.c
+@@ -20,7 +20,11 @@
+ bool __list_add_valid(struct list_head *new, struct list_head *prev,
+ struct list_head *next)
+ {
+- if (CHECK_DATA_CORRUPTION(next->prev != prev,
++ if (CHECK_DATA_CORRUPTION(prev == NULL,
++ "list_add corruption. prev is NULL.\n") ||
++ CHECK_DATA_CORRUPTION(next == NULL,
++ "list_add corruption. next is NULL.\n") ||
++ CHECK_DATA_CORRUPTION(next->prev != prev,
+ "list_add corruption. next->prev should be prev (%px), but was %px. (next=%px).\n",
+ prev, next->prev, next) ||
+ CHECK_DATA_CORRUPTION(prev->next != next,
+@@ -42,7 +46,11 @@ bool __list_del_entry_valid(struct list_head *entry)
+ prev = entry->prev;
+ next = entry->next;
+
+- if (CHECK_DATA_CORRUPTION(next == LIST_POISON1,
++ if (CHECK_DATA_CORRUPTION(next == NULL,
++ "list_del corruption, %px->next is NULL\n", entry) ||
++ CHECK_DATA_CORRUPTION(prev == NULL,
++ "list_del corruption, %px->prev is NULL\n", entry) ||
++ CHECK_DATA_CORRUPTION(next == LIST_POISON1,
+ "list_del corruption, %px->next is LIST_POISON1 (%px)\n",
+ entry, LIST_POISON1) ||
+ CHECK_DATA_CORRUPTION(prev == LIST_POISON2,
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index f5ecfdcf57b22..b670ba03a675c 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -178,7 +178,10 @@ activate_next:
+ if (!first)
+ return;
+
+- if (WARN_ON_ONCE(j1939_session_activate(first))) {
++ if (j1939_session_activate(first)) {
++ netdev_warn_once(first->priv->ndev,
++ "%s: 0x%p: Identical session is already activated.\n",
++ __func__, first);
+ first->err = -EBUSY;
+ goto activate_next;
+ } else {
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 307ee1174a6e2..d7d86c944d76d 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -260,6 +260,8 @@ static void __j1939_session_drop(struct j1939_session *session)
+
+ static void j1939_session_destroy(struct j1939_session *session)
+ {
++ struct sk_buff *skb;
++
+ if (session->transmission) {
+ if (session->err)
+ j1939_sk_errqueue(session, J1939_ERRQUEUE_TX_ABORT);
+@@ -274,7 +276,11 @@ static void j1939_session_destroy(struct j1939_session *session)
+ WARN_ON_ONCE(!list_empty(&session->sk_session_queue_entry));
+ WARN_ON_ONCE(!list_empty(&session->active_session_list_entry));
+
+- skb_queue_purge(&session->skb_queue);
++ while ((skb = skb_dequeue(&session->skb_queue)) != NULL) {
++ /* drop ref taken in j1939_session_skb_queue() */
++ skb_unref(skb);
++ kfree_skb(skb);
++ }
+ __j1939_session_drop(session);
+ j1939_priv_put(session->priv);
+ kfree(session);
+diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
+index a25ec93729b97..1b7f385643b4c 100644
+--- a/net/core/bpf_sk_storage.c
++++ b/net/core/bpf_sk_storage.c
+@@ -875,10 +875,18 @@ static int bpf_iter_init_sk_storage_map(void *priv_data,
+ {
+ struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data;
+
++ bpf_map_inc_with_uref(aux->map);
+ seq_info->map = aux->map;
+ return 0;
+ }
+
++static void bpf_iter_fini_sk_storage_map(void *priv_data)
++{
++ struct bpf_iter_seq_sk_storage_map_info *seq_info = priv_data;
++
++ bpf_map_put_with_uref(seq_info->map);
++}
++
+ static int bpf_iter_attach_map(struct bpf_prog *prog,
+ union bpf_iter_link_info *linfo,
+ struct bpf_iter_aux_info *aux)
+@@ -896,7 +904,7 @@ static int bpf_iter_attach_map(struct bpf_prog *prog,
+ if (map->map_type != BPF_MAP_TYPE_SK_STORAGE)
+ goto put_map;
+
+- if (prog->aux->max_rdonly_access > map->value_size) {
++ if (prog->aux->max_rdwr_access > map->value_size) {
+ err = -EACCES;
+ goto put_map;
+ }
+@@ -924,7 +932,7 @@ static const struct seq_operations bpf_sk_storage_map_seq_ops = {
+ static const struct bpf_iter_seq_info iter_seq_info = {
+ .seq_ops = &bpf_sk_storage_map_seq_ops,
+ .init_seq_private = bpf_iter_init_sk_storage_map,
+- .fini_seq_private = NULL,
++ .fini_seq_private = bpf_iter_fini_sk_storage_map,
+ .seq_priv_size = sizeof(struct bpf_iter_seq_sk_storage_map_info),
+ };
+
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 5cc88490f18fd..5e36723cbc21f 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -4943,7 +4943,7 @@ static int devlink_param_get(struct devlink *devlink,
+ const struct devlink_param *param,
+ struct devlink_param_gset_ctx *ctx)
+ {
+- if (!param->get)
++ if (!param->get || devlink->reload_failed)
+ return -EOPNOTSUPP;
+ return param->get(devlink, param->id, ctx);
+ }
+@@ -4952,7 +4952,7 @@ static int devlink_param_set(struct devlink *devlink,
+ const struct devlink_param *param,
+ struct devlink_param_gset_ctx *ctx)
+ {
+- if (!param->set)
++ if (!param->set || devlink->reload_failed)
+ return -EOPNOTSUPP;
+ return param->set(devlink, param->id, ctx);
+ }
+diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
+index a10335b4ba2d0..c8d137ef5980e 100644
+--- a/net/core/gen_stats.c
++++ b/net/core/gen_stats.c
+@@ -345,7 +345,7 @@ static void gnet_stats_add_queue_cpu(struct gnet_stats_queue *qstats,
+ for_each_possible_cpu(i) {
+ const struct gnet_stats_queue *qcpu = per_cpu_ptr(q, i);
+
+- qstats->qlen += qcpu->backlog;
++ qstats->qlen += qcpu->qlen;
+ qstats->backlog += qcpu->backlog;
+ qstats->drops += qcpu->drops;
+ qstats->requeues += qcpu->requeues;
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index ac45328607f77..4b5b15c684ed6 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -6070,6 +6070,7 @@ static int rtnetlink_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
+ if (kind == RTNL_KIND_DEL && (nlh->nlmsg_flags & NLM_F_BULK) &&
+ !(flags & RTNL_FLAG_BULK_DEL_SUPPORTED)) {
+ NL_SET_ERR_MSG(extack, "Bulk delete is not supported");
++ module_put(owner);
+ goto err_unlock;
+ }
+
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 81d4b4756a02d..b8ba578c5a504 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -783,13 +783,22 @@ static int sock_map_init_seq_private(void *priv_data,
+ {
+ struct sock_map_seq_info *info = priv_data;
+
++ bpf_map_inc_with_uref(aux->map);
+ info->map = aux->map;
+ return 0;
+ }
+
++static void sock_map_fini_seq_private(void *priv_data)
++{
++ struct sock_map_seq_info *info = priv_data;
++
++ bpf_map_put_with_uref(info->map);
++}
++
+ static const struct bpf_iter_seq_info sock_map_iter_seq_info = {
+ .seq_ops = &sock_map_seq_ops,
+ .init_seq_private = sock_map_init_seq_private,
++ .fini_seq_private = sock_map_fini_seq_private,
+ .seq_priv_size = sizeof(struct sock_map_seq_info),
+ };
+
+@@ -1369,18 +1378,27 @@ static const struct seq_operations sock_hash_seq_ops = {
+ };
+
+ static int sock_hash_init_seq_private(void *priv_data,
+- struct bpf_iter_aux_info *aux)
++ struct bpf_iter_aux_info *aux)
+ {
+ struct sock_hash_seq_info *info = priv_data;
+
++ bpf_map_inc_with_uref(aux->map);
+ info->map = aux->map;
+ info->htab = container_of(aux->map, struct bpf_shtab, map);
+ return 0;
+ }
+
++static void sock_hash_fini_seq_private(void *priv_data)
++{
++ struct sock_hash_seq_info *info = priv_data;
++
++ bpf_map_put_with_uref(info->map);
++}
++
+ static const struct bpf_iter_seq_info sock_hash_iter_seq_info = {
+ .seq_ops = &sock_hash_seq_ops,
+ .init_seq_private = sock_hash_init_seq_private,
++ .fini_seq_private = sock_hash_fini_seq_private,
+ .seq_priv_size = sizeof(struct sock_hash_seq_info),
+ };
+
+diff --git a/net/dsa/port.c b/net/dsa/port.c
+index 2dd76eb1621c7..a8895ee3cd600 100644
+--- a/net/dsa/port.c
++++ b/net/dsa/port.c
+@@ -145,11 +145,14 @@ int dsa_port_set_state(struct dsa_port *dp, u8 state, bool do_fast_age)
+ static void dsa_port_set_state_now(struct dsa_port *dp, u8 state,
+ bool do_fast_age)
+ {
++ struct dsa_switch *ds = dp->ds;
+ int err;
+
+ err = dsa_port_set_state(dp, state, do_fast_age);
+- if (err)
+- pr_err("DSA: failed to set STP state %u (%d)\n", state, err);
++ if (err && err != -EOPNOTSUPP) {
++ dev_err(ds->dev, "port %d failed to set STP state %u: %pe\n",
++ dp->index, state, ERR_PTR(err));
++ }
+ }
+
+ int dsa_port_set_mst_state(struct dsa_port *dp,
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 77e3f5970ce48..ec62f472aa1cb 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1311,8 +1311,7 @@ struct dst_entry *ip6_dst_lookup_tunnel(struct sk_buff *skb,
+ fl6.daddr = info->key.u.ipv6.dst;
+ fl6.saddr = info->key.u.ipv6.src;
+ prio = info->key.tos;
+- fl6.flowlabel = ip6_make_flowinfo(RT_TOS(prio),
+- info->key.label);
++ fl6.flowlabel = ip6_make_flowinfo(prio, info->key.label);
+
+ dst = ipv6_stub->ipv6_dst_lookup_flow(net, sock->sk, &fl6,
+ NULL);
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index b0dfe97ea4eed..7ad4542ecc605 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -1358,6 +1358,9 @@ static void ndisc_router_discovery(struct sk_buff *skb)
+ if (!rt && lifetime) {
+ ND_PRINTK(3, info, "RA: adding default router\n");
+
++ if (neigh)
++ neigh_release(neigh);
++
+ rt = rt6_add_dflt_router(net, &ipv6_hdr(skb)->saddr,
+ skb->dev, pref, defrtr_usr_metric);
+ if (!rt) {
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 8ffb8aabd3244..3d90fa9653ef3 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1276,6 +1276,9 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ info->limit > dfrag->data_len))
+ return 0;
+
++ if (unlikely(!__tcp_can_send(ssk)))
++ return -EAGAIN;
++
+ /* compute send limit */
+ info->mss_now = tcp_send_mss(ssk, &info->size_goal, info->flags);
+ copy = info->size_goal;
+@@ -1449,7 +1452,8 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
+ if (__mptcp_check_fallback(msk)) {
+ if (!msk->first)
+ return NULL;
+- return sk_stream_memory_free(msk->first) ? msk->first : NULL;
++ return __tcp_can_send(msk->first) &&
++ sk_stream_memory_free(msk->first) ? msk->first : NULL;
+ }
+
+ /* re-use last subflow, if the burst allow that */
+@@ -1600,6 +1604,8 @@ void __mptcp_push_pending(struct sock *sk, unsigned int flags)
+
+ ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+ if (ret <= 0) {
++ if (ret == -EAGAIN)
++ continue;
+ mptcp_push_release(ssk, &info);
+ goto out;
+ }
+@@ -2805,30 +2811,16 @@ static void __mptcp_wr_shutdown(struct sock *sk)
+
+ static void __mptcp_destroy_sock(struct sock *sk)
+ {
+- struct mptcp_subflow_context *subflow, *tmp;
+ struct mptcp_sock *msk = mptcp_sk(sk);
+- LIST_HEAD(conn_list);
+
+ pr_debug("msk=%p", msk);
+
+ might_sleep();
+
+- /* join list will be eventually flushed (with rst) at sock lock release time*/
+- list_splice_init(&msk->conn_list, &conn_list);
+-
+ mptcp_stop_timer(sk);
+ sk_stop_timer(sk, &sk->sk_timer);
+ msk->pm.status = 0;
+
+- /* clears msk->subflow, allowing the following loop to close
+- * even the initial subflow
+- */
+- mptcp_dispose_initial_subflow(msk);
+- list_for_each_entry_safe(subflow, tmp, &conn_list, node) {
+- struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+- __mptcp_close_ssk(sk, ssk, subflow, 0);
+- }
+-
+ sk->sk_prot->destroy(sk);
+
+ WARN_ON_ONCE(msk->rmem_fwd_alloc);
+@@ -2920,24 +2912,20 @@ static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)
+
+ static int mptcp_disconnect(struct sock *sk, int flags)
+ {
+- struct mptcp_subflow_context *subflow, *tmp;
+ struct mptcp_sock *msk = mptcp_sk(sk);
+
+ inet_sk_state_store(sk, TCP_CLOSE);
+
+- list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node) {
+- struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+-
+- __mptcp_close_ssk(sk, ssk, subflow, MPTCP_CF_FASTCLOSE);
+- }
+-
+ mptcp_stop_timer(sk);
+ sk_stop_timer(sk, &sk->sk_timer);
+
+ if (mptcp_sk(sk)->token)
+ mptcp_event(MPTCP_EVENT_CLOSED, mptcp_sk(sk), NULL, GFP_KERNEL);
+
+- mptcp_destroy_common(msk);
++ /* msk->subflow is still intact, the following will not free the first
++ * subflow
++ */
++ mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE);
+ msk->last_snd = NULL;
+ WRITE_ONCE(msk->flags, 0);
+ msk->cb_flags = 0;
+@@ -3087,12 +3075,17 @@ out:
+ return newsk;
+ }
+
+-void mptcp_destroy_common(struct mptcp_sock *msk)
++void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags)
+ {
++ struct mptcp_subflow_context *subflow, *tmp;
+ struct sock *sk = (struct sock *)msk;
+
+ __mptcp_clear_xmit(sk);
+
++ /* join list will be eventually flushed (with rst) at sock lock release time */
++ list_for_each_entry_safe(subflow, tmp, &msk->conn_list, node)
++ __mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow), subflow, flags);
++
+ /* move to sk_receive_queue, sk_stream_kill_queues will purge it */
+ mptcp_data_lock(sk);
+ skb_queue_splice_tail_init(&msk->receive_queue, &sk->sk_receive_queue);
+@@ -3114,7 +3107,11 @@ static void mptcp_destroy(struct sock *sk)
+ {
+ struct mptcp_sock *msk = mptcp_sk(sk);
+
+- mptcp_destroy_common(msk);
++ /* clears msk->subflow, allowing the following to close
++ * even the initial subflow
++ */
++ mptcp_dispose_initial_subflow(msk);
++ mptcp_destroy_common(msk, 0);
+ sk_sockets_allocated_dec(sk);
+ }
+
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 480c5320b86e5..092154d5bc752 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -625,16 +625,19 @@ void mptcp_info2sockaddr(const struct mptcp_addr_info *info,
+ struct sockaddr_storage *addr,
+ unsigned short family);
+
+-static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
++static inline bool __tcp_can_send(const struct sock *ssk)
+ {
+- struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
++ /* only send if our side has not closed yet */
++ return ((1 << inet_sk_state_load(ssk)) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT));
++}
+
++static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
++{
+ /* can't send if JOIN hasn't completed yet (i.e. is usable for mptcp) */
+ if (subflow->request_join && !subflow->fully_established)
+ return false;
+
+- /* only send if our side has not closed yet */
+- return ((1 << ssk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT));
++ return __tcp_can_send(mptcp_subflow_tcp_sock(subflow));
+ }
+
+ void mptcp_subflow_set_active(struct mptcp_subflow_context *subflow);
+@@ -718,7 +721,7 @@ static inline void mptcp_write_space(struct sock *sk)
+ }
+ }
+
+-void mptcp_destroy_common(struct mptcp_sock *msk);
++void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags);
+
+ #define MPTCP_TOKEN_MAX_RETRIES 4
+
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index af28f3b603899..ac41b55b0a81a 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -621,7 +621,8 @@ static void mptcp_sock_destruct(struct sock *sk)
+ sock_orphan(sk);
+ }
+
+- mptcp_destroy_common(mptcp_sk(sk));
++ /* We don't need to clear msk->subflow, as it's still NULL at this point */
++ mptcp_destroy_common(mptcp_sk(sk), 0);
+ inet_sock_destruct(sk);
+ }
+
+diff --git a/net/netfilter/nf_conntrack_ftp.c b/net/netfilter/nf_conntrack_ftp.c
+index a414274338cff..0d9332e9cf71a 100644
+--- a/net/netfilter/nf_conntrack_ftp.c
++++ b/net/netfilter/nf_conntrack_ftp.c
+@@ -34,11 +34,6 @@ MODULE_DESCRIPTION("ftp connection tracking helper");
+ MODULE_ALIAS("ip_conntrack_ftp");
+ MODULE_ALIAS_NFCT_HELPER(HELPER_NAME);
+
+-/* This is slow, but it's simple. --RR */
+-static char *ftp_buffer;
+-
+-static DEFINE_SPINLOCK(nf_ftp_lock);
+-
+ #define MAX_PORTS 8
+ static u_int16_t ports[MAX_PORTS];
+ static unsigned int ports_c;
+@@ -398,6 +393,9 @@ static int help(struct sk_buff *skb,
+ return NF_ACCEPT;
+ }
+
++ if (unlikely(skb_linearize(skb)))
++ return NF_DROP;
++
+ th = skb_header_pointer(skb, protoff, sizeof(_tcph), &_tcph);
+ if (th == NULL)
+ return NF_ACCEPT;
+@@ -411,12 +409,8 @@ static int help(struct sk_buff *skb,
+ }
+ datalen = skb->len - dataoff;
+
+- spin_lock_bh(&nf_ftp_lock);
+- fb_ptr = skb_header_pointer(skb, dataoff, datalen, ftp_buffer);
+- if (!fb_ptr) {
+- spin_unlock_bh(&nf_ftp_lock);
+- return NF_ACCEPT;
+- }
++ spin_lock_bh(&ct->lock);
++ fb_ptr = skb->data + dataoff;
+
+ ends_in_nl = (fb_ptr[datalen - 1] == '\n');
+ seq = ntohl(th->seq) + datalen;
+@@ -544,7 +538,7 @@ out_update_nl:
+ if (ends_in_nl)
+ update_nl_seq(ct, seq, ct_ftp_info, dir, skb);
+ out:
+- spin_unlock_bh(&nf_ftp_lock);
++ spin_unlock_bh(&ct->lock);
+ return ret;
+ }
+
+@@ -571,7 +565,6 @@ static const struct nf_conntrack_expect_policy ftp_exp_policy = {
+ static void __exit nf_conntrack_ftp_fini(void)
+ {
+ nf_conntrack_helpers_unregister(ftp, ports_c * 2);
+- kfree(ftp_buffer);
+ }
+
+ static int __init nf_conntrack_ftp_init(void)
+@@ -580,10 +573,6 @@ static int __init nf_conntrack_ftp_init(void)
+
+ NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_ftp_master));
+
+- ftp_buffer = kmalloc(65536, GFP_KERNEL);
+- if (!ftp_buffer)
+- return -ENOMEM;
+-
+ if (ports_c == 0)
+ ports[ports_c++] = FTP_PORT;
+
+@@ -603,7 +592,6 @@ static int __init nf_conntrack_ftp_init(void)
+ ret = nf_conntrack_helpers_register(ftp, ports_c * 2);
+ if (ret < 0) {
+ pr_err("failed to register helpers\n");
+- kfree(ftp_buffer);
+ return ret;
+ }
+
+diff --git a/net/netfilter/nf_conntrack_h323_main.c b/net/netfilter/nf_conntrack_h323_main.c
+index 2eb31ffb3d141..04479d0aab8dd 100644
+--- a/net/netfilter/nf_conntrack_h323_main.c
++++ b/net/netfilter/nf_conntrack_h323_main.c
+@@ -34,6 +34,8 @@
+ #include <net/netfilter/nf_conntrack_zones.h>
+ #include <linux/netfilter/nf_conntrack_h323.h>
+
++#define H323_MAX_SIZE 65535
++
+ /* Parameters */
+ static unsigned int default_rrq_ttl __read_mostly = 300;
+ module_param(default_rrq_ttl, uint, 0600);
+@@ -142,6 +144,9 @@ static int get_tpkt_data(struct sk_buff *skb, unsigned int protoff,
+ if (tcpdatalen <= 0) /* No TCP data */
+ goto clear_out;
+
++ if (tcpdatalen > H323_MAX_SIZE)
++ tcpdatalen = H323_MAX_SIZE;
++
+ if (*data == NULL) { /* first TPKT */
+ /* Get first TPKT pointer */
+ tpkt = skb_header_pointer(skb, tcpdataoff, tcpdatalen,
+@@ -1220,6 +1225,9 @@ static unsigned char *get_udp_data(struct sk_buff *skb, unsigned int protoff,
+ if (dataoff >= skb->len)
+ return NULL;
+ *datalen = skb->len - dataoff;
++ if (*datalen > H323_MAX_SIZE)
++ *datalen = H323_MAX_SIZE;
++
+ return skb_header_pointer(skb, dataoff, *datalen, h323_buffer);
+ }
+
+@@ -1821,7 +1829,7 @@ static int __init nf_conntrack_h323_init(void)
+
+ NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_h323_master));
+
+- h323_buffer = kmalloc(65536, GFP_KERNEL);
++ h323_buffer = kmalloc(H323_MAX_SIZE + 1, GFP_KERNEL);
+ if (!h323_buffer)
+ return -ENOMEM;
+ ret = h323_helper_init();
+diff --git a/net/netfilter/nf_conntrack_irc.c b/net/netfilter/nf_conntrack_irc.c
+index 08ee4e760a3d2..1796c456ac98b 100644
+--- a/net/netfilter/nf_conntrack_irc.c
++++ b/net/netfilter/nf_conntrack_irc.c
+@@ -39,6 +39,7 @@ unsigned int (*nf_nat_irc_hook)(struct sk_buff *skb,
+ EXPORT_SYMBOL_GPL(nf_nat_irc_hook);
+
+ #define HELPER_NAME "irc"
++#define MAX_SEARCH_SIZE 4095
+
+ MODULE_AUTHOR("Harald Welte <laforge@netfilter.org>");
+ MODULE_DESCRIPTION("IRC (DCC) connection tracking helper");
+@@ -121,6 +122,7 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ int i, ret = NF_ACCEPT;
+ char *addr_beg_p, *addr_end_p;
+ typeof(nf_nat_irc_hook) nf_nat_irc;
++ unsigned int datalen;
+
+ /* If packet is coming from IRC server */
+ if (dir == IP_CT_DIR_REPLY)
+@@ -140,8 +142,12 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ if (dataoff >= skb->len)
+ return NF_ACCEPT;
+
++ datalen = skb->len - dataoff;
++ if (datalen > MAX_SEARCH_SIZE)
++ datalen = MAX_SEARCH_SIZE;
++
+ spin_lock_bh(&irc_buffer_lock);
+- ib_ptr = skb_header_pointer(skb, dataoff, skb->len - dataoff,
++ ib_ptr = skb_header_pointer(skb, dataoff, datalen,
+ irc_buffer);
+ if (!ib_ptr) {
+ spin_unlock_bh(&irc_buffer_lock);
+@@ -149,7 +155,7 @@ static int help(struct sk_buff *skb, unsigned int protoff,
+ }
+
+ data = ib_ptr;
+- data_limit = ib_ptr + skb->len - dataoff;
++ data_limit = ib_ptr + datalen;
+
+ /* strlen("\1DCC SENT t AAAAAAAA P\1\n")=24
+ * 5+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=14 */
+@@ -251,7 +257,7 @@ static int __init nf_conntrack_irc_init(void)
+ irc_exp_policy.max_expected = max_dcc_channels;
+ irc_exp_policy.timeout = dcc_timeout;
+
+- irc_buffer = kmalloc(65536, GFP_KERNEL);
++ irc_buffer = kmalloc(MAX_SEARCH_SIZE + 1, GFP_KERNEL);
+ if (!irc_buffer)
+ return -ENOMEM;
+
+diff --git a/net/netfilter/nf_conntrack_sane.c b/net/netfilter/nf_conntrack_sane.c
+index fcb33b1d5456d..13dc421fc4f52 100644
+--- a/net/netfilter/nf_conntrack_sane.c
++++ b/net/netfilter/nf_conntrack_sane.c
+@@ -34,10 +34,6 @@ MODULE_AUTHOR("Michal Schmidt <mschmidt@redhat.com>");
+ MODULE_DESCRIPTION("SANE connection tracking helper");
+ MODULE_ALIAS_NFCT_HELPER(HELPER_NAME);
+
+-static char *sane_buffer;
+-
+-static DEFINE_SPINLOCK(nf_sane_lock);
+-
+ #define MAX_PORTS 8
+ static u_int16_t ports[MAX_PORTS];
+ static unsigned int ports_c;
+@@ -67,14 +63,16 @@ static int help(struct sk_buff *skb,
+ unsigned int dataoff, datalen;
+ const struct tcphdr *th;
+ struct tcphdr _tcph;
+- void *sb_ptr;
+ int ret = NF_ACCEPT;
+ int dir = CTINFO2DIR(ctinfo);
+ struct nf_ct_sane_master *ct_sane_info = nfct_help_data(ct);
+ struct nf_conntrack_expect *exp;
+ struct nf_conntrack_tuple *tuple;
+- struct sane_request *req;
+ struct sane_reply_net_start *reply;
++ union {
++ struct sane_request req;
++ struct sane_reply_net_start repl;
++ } buf;
+
+ /* Until there's been traffic both ways, don't look in packets. */
+ if (ctinfo != IP_CT_ESTABLISHED &&
+@@ -92,59 +90,62 @@ static int help(struct sk_buff *skb,
+ return NF_ACCEPT;
+
+ datalen = skb->len - dataoff;
+-
+- spin_lock_bh(&nf_sane_lock);
+- sb_ptr = skb_header_pointer(skb, dataoff, datalen, sane_buffer);
+- if (!sb_ptr) {
+- spin_unlock_bh(&nf_sane_lock);
+- return NF_ACCEPT;
+- }
+-
+ if (dir == IP_CT_DIR_ORIGINAL) {
++ const struct sane_request *req;
++
+ if (datalen != sizeof(struct sane_request))
+- goto out;
++ return NF_ACCEPT;
++
++ req = skb_header_pointer(skb, dataoff, datalen, &buf.req);
++ if (!req)
++ return NF_ACCEPT;
+
+- req = sb_ptr;
+ if (req->RPC_code != htonl(SANE_NET_START)) {
+ /* Not an interesting command */
+- ct_sane_info->state = SANE_STATE_NORMAL;
+- goto out;
++ WRITE_ONCE(ct_sane_info->state, SANE_STATE_NORMAL);
++ return NF_ACCEPT;
+ }
+
+ /* We're interested in the next reply */
+- ct_sane_info->state = SANE_STATE_START_REQUESTED;
+- goto out;
++ WRITE_ONCE(ct_sane_info->state, SANE_STATE_START_REQUESTED);
++ return NF_ACCEPT;
+ }
+
++ /* IP_CT_DIR_REPLY */
++
+ /* Is it a reply to an uninteresting command? */
+- if (ct_sane_info->state != SANE_STATE_START_REQUESTED)
+- goto out;
++ if (READ_ONCE(ct_sane_info->state) != SANE_STATE_START_REQUESTED)
++ return NF_ACCEPT;
+
+ /* It's a reply to SANE_NET_START. */
+- ct_sane_info->state = SANE_STATE_NORMAL;
++ WRITE_ONCE(ct_sane_info->state, SANE_STATE_NORMAL);
+
+ if (datalen < sizeof(struct sane_reply_net_start)) {
+ pr_debug("NET_START reply too short\n");
+- goto out;
++ return NF_ACCEPT;
+ }
+
+- reply = sb_ptr;
++ datalen = sizeof(struct sane_reply_net_start);
++
++ reply = skb_header_pointer(skb, dataoff, datalen, &buf.repl);
++ if (!reply)
++ return NF_ACCEPT;
++
+ if (reply->status != htonl(SANE_STATUS_SUCCESS)) {
+ /* saned refused the command */
+ pr_debug("unsuccessful SANE_STATUS = %u\n",
+ ntohl(reply->status));
+- goto out;
++ return NF_ACCEPT;
+ }
+
+ /* Invalid saned reply? Ignore it. */
+ if (reply->zero != 0)
+- goto out;
++ return NF_ACCEPT;
+
+ exp = nf_ct_expect_alloc(ct);
+ if (exp == NULL) {
+ nf_ct_helper_log(skb, ct, "cannot alloc expectation");
+- ret = NF_DROP;
+- goto out;
++ return NF_DROP;
+ }
+
+ tuple = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
+@@ -162,9 +163,6 @@ static int help(struct sk_buff *skb,
+ }
+
+ nf_ct_expect_put(exp);
+-
+-out:
+- spin_unlock_bh(&nf_sane_lock);
+ return ret;
+ }
+
+@@ -178,7 +176,6 @@ static const struct nf_conntrack_expect_policy sane_exp_policy = {
+ static void __exit nf_conntrack_sane_fini(void)
+ {
+ nf_conntrack_helpers_unregister(sane, ports_c * 2);
+- kfree(sane_buffer);
+ }
+
+ static int __init nf_conntrack_sane_init(void)
+@@ -187,10 +184,6 @@ static int __init nf_conntrack_sane_init(void)
+
+ NF_CT_HELPER_BUILD_BUG_ON(sizeof(struct nf_ct_sane_master));
+
+- sane_buffer = kmalloc(65536, GFP_KERNEL);
+- if (!sane_buffer)
+- return -ENOMEM;
+-
+ if (ports_c == 0)
+ ports[ports_c++] = SANE_PORT;
+
+@@ -210,7 +203,6 @@ static int __init nf_conntrack_sane_init(void)
+ ret = nf_conntrack_helpers_register(sane, ports_c * 2);
+ if (ret < 0) {
+ pr_err("failed to register helpers\n");
+- kfree(sane_buffer);
+ return ret;
+ }
+
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index f4d2a5f277952..4bd6e9427c918 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -889,7 +889,7 @@ static int nf_tables_dump_tables(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -1705,7 +1705,7 @@ static int nf_tables_dump_chains(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -3149,7 +3149,7 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -3907,7 +3907,7 @@ cont:
+ list_for_each_entry(i, &ctx->table->sets, list) {
+ int tmp;
+
+- if (!nft_is_active_next(ctx->net, set))
++ if (!nft_is_active_next(ctx->net, i))
+ continue;
+ if (!sscanf(i->name, name, &tmp))
+ continue;
+@@ -4133,7 +4133,7 @@ static int nf_tables_dump_sets(struct sk_buff *skb, struct netlink_callback *cb)
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (ctx->family != NFPROTO_UNSPEC &&
+@@ -4451,6 +4451,11 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ err = nf_tables_set_desc_parse(&desc, nla[NFTA_SET_DESC]);
+ if (err < 0)
+ return err;
++
++ if (desc.field_count > 1 && !(flags & NFT_SET_CONCAT))
++ return -EINVAL;
++ } else if (flags & NFT_SET_CONCAT) {
++ return -EINVAL;
+ }
+
+ if (nla[NFTA_SET_EXPR] || nla[NFTA_SET_EXPRESSIONS])
+@@ -5061,6 +5066,8 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
++ cb->seq = READ_ONCE(nft_net->base_seq);
++
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (dump_ctx->ctx.family != NFPROTO_UNSPEC &&
+ dump_ctx->ctx.family != table->family)
+@@ -5196,6 +5203,9 @@ static int nft_setelem_parse_flags(const struct nft_set *set,
+ if (!(set->flags & NFT_SET_INTERVAL) &&
+ *flags & NFT_SET_ELEM_INTERVAL_END)
+ return -EINVAL;
++ if ((*flags & (NFT_SET_ELEM_INTERVAL_END | NFT_SET_ELEM_CATCHALL)) ==
++ (NFT_SET_ELEM_INTERVAL_END | NFT_SET_ELEM_CATCHALL))
++ return -EINVAL;
+
+ return 0;
+ }
+@@ -5564,7 +5574,7 @@ int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set,
+
+ err = nft_expr_clone(expr, set->exprs[i]);
+ if (err < 0) {
+- nft_expr_destroy(ctx, expr);
++ kfree(expr);
+ goto err_expr;
+ }
+ expr_array[i] = expr;
+@@ -5796,6 +5806,24 @@ static void nft_setelem_remove(const struct net *net,
+ set->ops->remove(net, set, elem);
+ }
+
++static bool nft_setelem_valid_key_end(const struct nft_set *set,
++ struct nlattr **nla, u32 flags)
++{
++ if ((set->flags & (NFT_SET_CONCAT | NFT_SET_INTERVAL)) ==
++ (NFT_SET_CONCAT | NFT_SET_INTERVAL)) {
++ if (flags & NFT_SET_ELEM_INTERVAL_END)
++ return false;
++ if (!nla[NFTA_SET_ELEM_KEY_END] &&
++ !(flags & NFT_SET_ELEM_CATCHALL))
++ return false;
++ } else {
++ if (nla[NFTA_SET_ELEM_KEY_END])
++ return false;
++ }
++
++ return true;
++}
++
+ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ const struct nlattr *attr, u32 nlmsg_flags)
+ {
+@@ -5846,6 +5874,18 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ return -EINVAL;
+ }
+
++ if (set->flags & NFT_SET_OBJECT) {
++ if (!nla[NFTA_SET_ELEM_OBJREF] &&
++ !(flags & NFT_SET_ELEM_INTERVAL_END))
++ return -EINVAL;
++ } else {
++ if (nla[NFTA_SET_ELEM_OBJREF])
++ return -EINVAL;
++ }
++
++ if (!nft_setelem_valid_key_end(set, nla, flags))
++ return -EINVAL;
++
+ if ((flags & NFT_SET_ELEM_INTERVAL_END) &&
+ (nla[NFTA_SET_ELEM_DATA] ||
+ nla[NFTA_SET_ELEM_OBJREF] ||
+@@ -5853,6 +5893,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ nla[NFTA_SET_ELEM_EXPIRATION] ||
+ nla[NFTA_SET_ELEM_USERDATA] ||
+ nla[NFTA_SET_ELEM_EXPR] ||
++ nla[NFTA_SET_ELEM_KEY_END] ||
+ nla[NFTA_SET_ELEM_EXPRESSIONS]))
+ return -EINVAL;
+
+@@ -5983,10 +6024,6 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ }
+
+ if (nla[NFTA_SET_ELEM_OBJREF] != NULL) {
+- if (!(set->flags & NFT_SET_OBJECT)) {
+- err = -EINVAL;
+- goto err_parse_key_end;
+- }
+ obj = nft_obj_lookup(ctx->net, ctx->table,
+ nla[NFTA_SET_ELEM_OBJREF],
+ set->objtype, genmask);
+@@ -6273,6 +6310,9 @@ static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
+ if (!nla[NFTA_SET_ELEM_KEY] && !(flags & NFT_SET_ELEM_CATCHALL))
+ return -EINVAL;
+
++ if (!nft_setelem_valid_key_end(set, nla, flags))
++ return -EINVAL;
++
+ nft_set_ext_prepare(&tmpl);
+
+ if (flags != 0) {
+@@ -6887,7 +6927,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -7819,7 +7859,7 @@ static int nf_tables_dump_flowtable(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = nft_net->base_seq;
++ cb->seq = READ_ONCE(nft_net->base_seq);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -8752,6 +8792,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ struct nft_trans_elem *te;
+ struct nft_chain *chain;
+ struct nft_table *table;
++ unsigned int base_seq;
+ LIST_HEAD(adl);
+ int err;
+
+@@ -8801,9 +8842,12 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ * Bump generation counter, invalidate any dump in progress.
+ * Cannot fail after this point.
+ */
+- while (++nft_net->base_seq == 0)
++ base_seq = READ_ONCE(nft_net->base_seq);
++ while (++base_seq == 0)
+ ;
+
++ WRITE_ONCE(nft_net->base_seq, base_seq);
++
+ /* step 3. Start new generation, rules_gen_X now in use. */
+ net->nft.gencursor = nft_gencursor_next(net);
+
+@@ -9365,13 +9409,9 @@ static int nf_tables_check_loops(const struct nft_ctx *ctx,
+ break;
+ }
+ }
+-
+- cond_resched();
+ }
+
+ list_for_each_entry(set, &ctx->table->sets, list) {
+- cond_resched();
+-
+ if (!nft_is_active_next(ctx->net, set))
+ continue;
+ if (!(set->flags & NFT_SET_MAP) ||
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index 3ddce24ac76dd..cee3e4e905ec8 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -34,25 +34,23 @@ static noinline void __nft_trace_packet(struct nft_traceinfo *info,
+ nft_trace_notify(info);
+ }
+
+-static inline void nft_trace_packet(struct nft_traceinfo *info,
++static inline void nft_trace_packet(const struct nft_pktinfo *pkt,
++ struct nft_traceinfo *info,
+ const struct nft_chain *chain,
+ const struct nft_rule_dp *rule,
+ enum nft_trace_types type)
+ {
+ if (static_branch_unlikely(&nft_trace_enabled)) {
+- const struct nft_pktinfo *pkt = info->pkt;
+-
+ info->nf_trace = pkt->skb->nf_trace;
+ info->rule = rule;
+ __nft_trace_packet(info, chain, type);
+ }
+ }
+
+-static inline void nft_trace_copy_nftrace(struct nft_traceinfo *info)
++static inline void nft_trace_copy_nftrace(const struct nft_pktinfo *pkt,
++ struct nft_traceinfo *info)
+ {
+ if (static_branch_unlikely(&nft_trace_enabled)) {
+- const struct nft_pktinfo *pkt = info->pkt;
+-
+ if (info->trace)
+ info->nf_trace = pkt->skb->nf_trace;
+ }
+@@ -96,7 +94,6 @@ static noinline void __nft_trace_verdict(struct nft_traceinfo *info,
+ const struct nft_chain *chain,
+ const struct nft_regs *regs)
+ {
+- const struct nft_pktinfo *pkt = info->pkt;
+ enum nft_trace_types type;
+
+ switch (regs->verdict.code) {
+@@ -110,7 +107,9 @@ static noinline void __nft_trace_verdict(struct nft_traceinfo *info,
+ break;
+ default:
+ type = NFT_TRACETYPE_RULE;
+- info->nf_trace = pkt->skb->nf_trace;
++
++ if (info->trace)
++ info->nf_trace = info->pkt->skb->nf_trace;
+ break;
+ }
+
+@@ -271,10 +270,10 @@ next_rule:
+ switch (regs.verdict.code) {
+ case NFT_BREAK:
+ regs.verdict.code = NFT_CONTINUE;
+- nft_trace_copy_nftrace(&info);
++ nft_trace_copy_nftrace(pkt, &info);
+ continue;
+ case NFT_CONTINUE:
+- nft_trace_packet(&info, chain, rule,
++ nft_trace_packet(pkt, &info, chain, rule,
+ NFT_TRACETYPE_RULE);
+ continue;
+ }
+@@ -318,7 +317,7 @@ next_rule:
+ goto next_rule;
+ }
+
+- nft_trace_packet(&info, basechain, NULL, NFT_TRACETYPE_POLICY);
++ nft_trace_packet(pkt, &info, basechain, NULL, NFT_TRACETYPE_POLICY);
+
+ if (static_branch_unlikely(&nft_counters_enabled))
+ nft_update_chain_stats(basechain, pkt);
+diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
+index 2f7c477fc9e70..1f38bf8fcfa87 100644
+--- a/net/netfilter/nfnetlink.c
++++ b/net/netfilter/nfnetlink.c
+@@ -44,6 +44,10 @@ MODULE_DESCRIPTION("Netfilter messages via netlink socket");
+
+ static unsigned int nfnetlink_pernet_id __read_mostly;
+
++#ifdef CONFIG_NF_CONNTRACK_EVENTS
++static DEFINE_SPINLOCK(nfnl_grp_active_lock);
++#endif
++
+ struct nfnl_net {
+ struct sock *nfnl;
+ };
+@@ -654,6 +658,44 @@ static void nfnetlink_rcv(struct sk_buff *skb)
+ netlink_rcv_skb(skb, nfnetlink_rcv_msg);
+ }
+
++static void nfnetlink_bind_event(struct net *net, unsigned int group)
++{
++#ifdef CONFIG_NF_CONNTRACK_EVENTS
++ int type, group_bit;
++ u8 v;
++
++ /* All NFNLGRP_CONNTRACK_* group bits fit into u8.
++ * The other groups are not relevant and can be ignored.
++ */
++ if (group >= 8)
++ return;
++
++ type = nfnl_group2type[group];
++
++ switch (type) {
++ case NFNL_SUBSYS_CTNETLINK:
++ break;
++ case NFNL_SUBSYS_CTNETLINK_EXP:
++ break;
++ default:
++ return;
++ }
++
++ group_bit = (1 << group);
++
++ spin_lock(&nfnl_grp_active_lock);
++ v = READ_ONCE(net->ct.ctnetlink_has_listener);
++ if ((v & group_bit) == 0) {
++ v |= group_bit;
++
++ /* read concurrently without nfnl_grp_active_lock held. */
++ WRITE_ONCE(net->ct.ctnetlink_has_listener, v);
++ }
++
++ spin_unlock(&nfnl_grp_active_lock);
++#endif
++}
++
+ static int nfnetlink_bind(struct net *net, int group)
+ {
+ const struct nfnetlink_subsystem *ss;
+@@ -670,28 +712,45 @@ static int nfnetlink_bind(struct net *net, int group)
+ if (!ss)
+ request_module_nowait("nfnetlink-subsys-%d", type);
+
+-#ifdef CONFIG_NF_CONNTRACK_EVENTS
+- if (type == NFNL_SUBSYS_CTNETLINK) {
+- nfnl_lock(NFNL_SUBSYS_CTNETLINK);
+- WRITE_ONCE(net->ct.ctnetlink_has_listener, true);
+- nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
+- }
+-#endif
++ nfnetlink_bind_event(net, group);
+ return 0;
+ }
+
+ static void nfnetlink_unbind(struct net *net, int group)
+ {
+ #ifdef CONFIG_NF_CONNTRACK_EVENTS
++ int type, group_bit;
++
+ if (group <= NFNLGRP_NONE || group > NFNLGRP_MAX)
+ return;
+
+- if (nfnl_group2type[group] == NFNL_SUBSYS_CTNETLINK) {
+- nfnl_lock(NFNL_SUBSYS_CTNETLINK);
+- if (!nfnetlink_has_listeners(net, group))
+- WRITE_ONCE(net->ct.ctnetlink_has_listener, false);
+- nfnl_unlock(NFNL_SUBSYS_CTNETLINK);
++ type = nfnl_group2type[group];
++
++ switch (type) {
++ case NFNL_SUBSYS_CTNETLINK:
++ break;
++ case NFNL_SUBSYS_CTNETLINK_EXP:
++ break;
++ default:
++ return;
++ }
++
++ /* ctnetlink_has_listener is u8 */
++ if (group >= 8)
++ return;
++
++ group_bit = (1 << group);
++
++ spin_lock(&nfnl_grp_active_lock);
++ if (!nfnetlink_has_listeners(net, group)) {
++ u8 v = READ_ONCE(net->ct.ctnetlink_has_listener);
++
++ v &= ~group_bit;
++
++ /* read concurrently without nfnl_grp_active_lock held. */
++ WRITE_ONCE(net->ct.ctnetlink_has_listener, v);
+ }
++ spin_unlock(&nfnl_grp_active_lock);
+ #endif
+ }
+
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 1afca2a6c2ac1..57010927e20a8 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1174,13 +1174,17 @@ static int ctrl_dumppolicy_start(struct netlink_callback *cb)
+ op.policy,
+ op.maxattr);
+ if (err)
+- return err;
++ goto err_free_state;
+ }
+ }
+
+ if (!ctx->state)
+ return -ENODATA;
+ return 0;
++
++err_free_state:
++ netlink_policy_dump_free(ctx->state);
++ return err;
+ }
+
+ static void *ctrl_dumppolicy_prep(struct sk_buff *skb,
+diff --git a/net/netlink/policy.c b/net/netlink/policy.c
+index 8d7c900e27f4c..87e3de0fde896 100644
+--- a/net/netlink/policy.c
++++ b/net/netlink/policy.c
+@@ -144,7 +144,7 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
+
+ err = add_policy(&state, policy, maxtype);
+ if (err)
+- return err;
++ goto err_try_undo;
+
+ for (policy_idx = 0;
+ policy_idx < state->n_alloc && state->policies[policy_idx].policy;
+@@ -164,7 +164,7 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
+ policy[type].nested_policy,
+ policy[type].len);
+ if (err)
+- return err;
++ goto err_try_undo;
+ break;
+ default:
+ break;
+@@ -174,6 +174,16 @@ int netlink_policy_dump_add_policy(struct netlink_policy_dump_state **pstate,
+
+ *pstate = state;
+ return 0;
++
++err_try_undo:
++ /* Try to preserve reasonable unwind semantics - if we're starting from
++ * scratch clean up fully, otherwise record what we got and caller will.
++ */
++ if (!*pstate)
++ netlink_policy_dump_free(state);
++ else
++ *pstate = state;
++ return err;
+ }
+
+ static bool
+diff --git a/net/qrtr/mhi.c b/net/qrtr/mhi.c
+index 18196e1c8c2fd..9ced13c0627a7 100644
+--- a/net/qrtr/mhi.c
++++ b/net/qrtr/mhi.c
+@@ -78,11 +78,6 @@ static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
+ struct qrtr_mhi_dev *qdev;
+ int rc;
+
+- /* start channels */
+- rc = mhi_prepare_for_transfer_autoqueue(mhi_dev);
+- if (rc)
+- return rc;
+-
+ qdev = devm_kzalloc(&mhi_dev->dev, sizeof(*qdev), GFP_KERNEL);
+ if (!qdev)
+ return -ENOMEM;
+@@ -96,6 +91,13 @@ static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
+ if (rc)
+ return rc;
+
++ /* start channels */
++ rc = mhi_prepare_for_transfer_autoqueue(mhi_dev);
++ if (rc) {
++ qrtr_endpoint_unregister(&qdev->ep);
++ return rc;
++ }
++
+ dev_dbg(qdev->dev, "Qualcomm MHI QRTR driver probed\n");
+
+ return 0;
+diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
+index 6fdedd9dbbc28..cfbf0e129cba5 100644
+--- a/net/rds/ib_recv.c
++++ b/net/rds/ib_recv.c
+@@ -363,6 +363,7 @@ static int acquire_refill(struct rds_connection *conn)
+ static void release_refill(struct rds_connection *conn)
+ {
+ clear_bit(RDS_RECV_REFILL, &conn->c_flags);
++ smp_mb__after_atomic();
+
+ /* We don't use wait_on_bit()/wake_up_bit() because our waking is in a
+ * hot path and finding waiters is very rare. We don't want to walk
+diff --git a/net/sunrpc/auth.c b/net/sunrpc/auth.c
+index 682fcd24bf439..2324d1e58f21f 100644
+--- a/net/sunrpc/auth.c
++++ b/net/sunrpc/auth.c
+@@ -445,7 +445,7 @@ rpcauth_prune_expired(struct list_head *free, int nr_to_scan)
+ * Enforce a 60 second garbage collection moratorium
+ * Note that the cred_unused list must be time-ordered.
+ */
+- if (!time_in_range(cred->cr_expire, expired, jiffies))
++ if (time_in_range(cred->cr_expire, expired, jiffies))
+ continue;
+ if (!rpcauth_unhash_cred(cred))
+ continue;
+diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
+index 5a6b61dcdf2dc..ad8ef1fb08b4e 100644
+--- a/net/sunrpc/backchannel_rqst.c
++++ b/net/sunrpc/backchannel_rqst.c
+@@ -64,6 +64,17 @@ static void xprt_free_allocation(struct rpc_rqst *req)
+ kfree(req);
+ }
+
++static void xprt_bc_reinit_xdr_buf(struct xdr_buf *buf)
++{
++ buf->head[0].iov_len = PAGE_SIZE;
++ buf->tail[0].iov_len = 0;
++ buf->pages = NULL;
++ buf->page_len = 0;
++ buf->flags = 0;
++ buf->len = 0;
++ buf->buflen = PAGE_SIZE;
++}
++
+ static int xprt_alloc_xdr_buf(struct xdr_buf *buf, gfp_t gfp_flags)
+ {
+ struct page *page;
+@@ -292,6 +303,9 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
+ */
+ spin_lock_bh(&xprt->bc_pa_lock);
+ if (xprt_need_to_requeue(xprt)) {
++ xprt_bc_reinit_xdr_buf(&req->rq_snd_buf);
++ xprt_bc_reinit_xdr_buf(&req->rq_rcv_buf);
++ req->rq_rcv_buf.len = PAGE_SIZE;
+ list_add_tail(&req->rq_bc_pa_list, &xprt->bc_pa_list);
+ xprt->bc_alloc_count++;
+ atomic_inc(&xprt->bc_slot_count);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index b6781ada3aa8d..733f9f2260926 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1856,7 +1856,6 @@ rpc_xdr_encode(struct rpc_task *task)
+ req->rq_snd_buf.head[0].iov_len = 0;
+ xdr_init_encode(&xdr, &req->rq_snd_buf,
+ req->rq_snd_buf.head[0].iov_base, req);
+- xdr_free_bvec(&req->rq_snd_buf);
+ if (rpc_encode_header(task, &xdr))
+ return;
+
+diff --git a/net/sunrpc/sysfs.c b/net/sunrpc/sysfs.c
+index a3a2f8aeb80ea..d1a15c6d3fd9b 100644
+--- a/net/sunrpc/sysfs.c
++++ b/net/sunrpc/sysfs.c
+@@ -291,8 +291,10 @@ static ssize_t rpc_sysfs_xprt_state_change(struct kobject *kobj,
+ int offline = 0, online = 0, remove = 0;
+ struct rpc_xprt_switch *xps = rpc_sysfs_xprt_kobj_get_xprt_switch(kobj);
+
+- if (!xprt)
+- return 0;
++ if (!xprt || !xps) {
++ count = 0;
++ goto out_put;
++ }
+
+ if (!strncmp(buf, "offline", 7))
+ offline = 1;
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 86d62cffba0dd..53b024cea3b3e 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -73,7 +73,7 @@ static void xprt_init(struct rpc_xprt *xprt, struct net *net);
+ static __be32 xprt_alloc_xid(struct rpc_xprt *xprt);
+ static void xprt_destroy(struct rpc_xprt *xprt);
+ static void xprt_request_init(struct rpc_task *task);
+-static int xprt_request_prepare(struct rpc_rqst *req);
++static int xprt_request_prepare(struct rpc_rqst *req, struct xdr_buf *buf);
+
+ static DEFINE_SPINLOCK(xprt_list_lock);
+ static LIST_HEAD(xprt_list);
+@@ -1149,7 +1149,7 @@ xprt_request_enqueue_receive(struct rpc_task *task)
+ if (!xprt_request_need_enqueue_receive(task, req))
+ return 0;
+
+- ret = xprt_request_prepare(task->tk_rqstp);
++ ret = xprt_request_prepare(task->tk_rqstp, &req->rq_rcv_buf);
+ if (ret)
+ return ret;
+ spin_lock(&xprt->queue_lock);
+@@ -1179,8 +1179,11 @@ xprt_request_dequeue_receive_locked(struct rpc_task *task)
+ {
+ struct rpc_rqst *req = task->tk_rqstp;
+
+- if (test_and_clear_bit(RPC_TASK_NEED_RECV, &task->tk_runstate))
++ if (test_and_clear_bit(RPC_TASK_NEED_RECV, &task->tk_runstate)) {
+ xprt_request_rb_remove(req->rq_xprt, req);
++ xdr_free_bvec(&req->rq_rcv_buf);
++ req->rq_private_buf.bvec = NULL;
++ }
+ }
+
+ /**
+@@ -1336,8 +1339,14 @@ xprt_request_enqueue_transmit(struct rpc_task *task)
+ {
+ struct rpc_rqst *pos, *req = task->tk_rqstp;
+ struct rpc_xprt *xprt = req->rq_xprt;
++ int ret;
+
+ if (xprt_request_need_enqueue_transmit(task, req)) {
++ ret = xprt_request_prepare(task->tk_rqstp, &req->rq_snd_buf);
++ if (ret) {
++ task->tk_status = ret;
++ return;
++ }
+ req->rq_bytes_sent = 0;
+ spin_lock(&xprt->queue_lock);
+ /*
+@@ -1397,6 +1406,7 @@ xprt_request_dequeue_transmit_locked(struct rpc_task *task)
+ } else
+ list_del(&req->rq_xmit2);
+ atomic_long_dec(&req->rq_xprt->xmit_queuelen);
++ xdr_free_bvec(&req->rq_snd_buf);
+ }
+
+ /**
+@@ -1433,8 +1443,6 @@ xprt_request_dequeue_xprt(struct rpc_task *task)
+ test_bit(RPC_TASK_NEED_RECV, &task->tk_runstate) ||
+ xprt_is_pinned_rqst(req)) {
+ spin_lock(&xprt->queue_lock);
+- xprt_request_dequeue_transmit_locked(task);
+- xprt_request_dequeue_receive_locked(task);
+ while (xprt_is_pinned_rqst(req)) {
+ set_bit(RPC_TASK_MSG_PIN_WAIT, &task->tk_runstate);
+ spin_unlock(&xprt->queue_lock);
+@@ -1442,6 +1450,8 @@ xprt_request_dequeue_xprt(struct rpc_task *task)
+ spin_lock(&xprt->queue_lock);
+ clear_bit(RPC_TASK_MSG_PIN_WAIT, &task->tk_runstate);
+ }
++ xprt_request_dequeue_transmit_locked(task);
++ xprt_request_dequeue_receive_locked(task);
+ spin_unlock(&xprt->queue_lock);
+ }
+ }
+@@ -1449,18 +1459,19 @@ xprt_request_dequeue_xprt(struct rpc_task *task)
+ /**
+ * xprt_request_prepare - prepare an encoded request for transport
+ * @req: pointer to rpc_rqst
++ * @buf: pointer to send/rcv xdr_buf
+ *
+ * Calls into the transport layer to do whatever is needed to prepare
+ * the request for transmission or receive.
+ * Returns error, or zero.
+ */
+ static int
+-xprt_request_prepare(struct rpc_rqst *req)
++xprt_request_prepare(struct rpc_rqst *req, struct xdr_buf *buf)
+ {
+ struct rpc_xprt *xprt = req->rq_xprt;
+
+ if (xprt->ops->prepare_request)
+- return xprt->ops->prepare_request(req);
++ return xprt->ops->prepare_request(req, buf);
+ return 0;
+ }
+
+@@ -1961,8 +1972,6 @@ void xprt_release(struct rpc_task *task)
+ spin_unlock(&xprt->transport_lock);
+ if (req->rq_buffer)
+ xprt->ops->buf_free(task);
+- xdr_free_bvec(&req->rq_rcv_buf);
+- xdr_free_bvec(&req->rq_snd_buf);
+ if (req->rq_cred != NULL)
+ put_rpccred(req->rq_cred);
+ if (req->rq_release_snd_buf)
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index fcdd0fca408e0..95a15b74667d6 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -822,17 +822,9 @@ static int xs_stream_nospace(struct rpc_rqst *req, bool vm_wait)
+ return ret;
+ }
+
+-static int
+-xs_stream_prepare_request(struct rpc_rqst *req)
++static int xs_stream_prepare_request(struct rpc_rqst *req, struct xdr_buf *buf)
+ {
+- gfp_t gfp = rpc_task_gfp_mask();
+- int ret;
+-
+- ret = xdr_alloc_bvec(&req->rq_snd_buf, gfp);
+- if (ret < 0)
+- return ret;
+- xdr_free_bvec(&req->rq_rcv_buf);
+- return xdr_alloc_bvec(&req->rq_rcv_buf, gfp);
++ return xdr_alloc_bvec(buf, rpc_task_gfp_mask());
+ }
+
+ /*
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index f04abf662ec6c..b4ee163154a68 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1286,6 +1286,7 @@ static void vsock_connect_timeout(struct work_struct *work)
+ if (sk->sk_state == TCP_SYN_SENT &&
+ (sk->sk_shutdown != SHUTDOWN_MASK)) {
+ sk->sk_state = TCP_CLOSE;
++ sk->sk_socket->state = SS_UNCONNECTED;
+ sk->sk_err = ETIMEDOUT;
+ sk_error_report(sk);
+ vsock_transport_cancel_pkt(vsk);
+@@ -1391,7 +1392,14 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
+ * timeout fires.
+ */
+ sock_hold(sk);
+- schedule_delayed_work(&vsk->connect_work, timeout);
++
++ /* If the timeout function is already scheduled,
++ * reschedule it, then ungrab the socket refcount to
++ * keep it balanced.
++ */
++ if (mod_delayed_work(system_wq, &vsk->connect_work,
++ timeout))
++ sock_put(sk);
+
+ /* Skip ahead to preserve error code set above. */
+ goto out_wait;
+diff --git a/scripts/Makefile.gcc-plugins b/scripts/Makefile.gcc-plugins
+index 692d64a70542a..e4deaf5fa571d 100644
+--- a/scripts/Makefile.gcc-plugins
++++ b/scripts/Makefile.gcc-plugins
+@@ -4,7 +4,7 @@ gcc-plugin-$(CONFIG_GCC_PLUGIN_LATENT_ENTROPY) += latent_entropy_plugin.so
+ gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_LATENT_ENTROPY) \
+ += -DLATENT_ENTROPY_PLUGIN
+ ifdef CONFIG_GCC_PLUGIN_LATENT_ENTROPY
+- DISABLE_LATENT_ENTROPY_PLUGIN += -fplugin-arg-latent_entropy_plugin-disable
++ DISABLE_LATENT_ENTROPY_PLUGIN += -fplugin-arg-latent_entropy_plugin-disable -ULATENT_ENTROPY_PLUGIN
+ endif
+ export DISABLE_LATENT_ENTROPY_PLUGIN
+
+diff --git a/scripts/dummy-tools/gcc b/scripts/dummy-tools/gcc
+index b2483149bbe55..7db8258434355 100755
+--- a/scripts/dummy-tools/gcc
++++ b/scripts/dummy-tools/gcc
+@@ -96,12 +96,8 @@ fi
+
+ # To set GCC_PLUGINS
+ if arg_contain -print-file-name=plugin "$@"; then
+- plugin_dir=$(mktemp -d)
+-
+- mkdir -p $plugin_dir/include
+- touch $plugin_dir/include/plugin-version.h
+-
+- echo $plugin_dir
++ # Use $0 to find the in-tree dummy directory
++ echo "$(dirname "$(readlink -f "$0")")/dummy-plugin-dir"
+ exit 0
+ fi
+
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 620dc8c4c8140..c664a0a1f7d66 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -2203,13 +2203,11 @@ static void add_exported_symbols(struct buffer *buf, struct module *mod)
+ /* record CRCs for exported symbols */
+ buf_printf(buf, "\n");
+ list_for_each_entry(sym, &mod->exported_symbols, list) {
+- if (!sym->crc_valid) {
++ if (!sym->crc_valid)
+ warn("EXPORT symbol \"%s\" [%s%s] version generation failed, symbol will not be versioned.\n"
+ "Is \"%s\" prototyped in <asm/asm-prototypes.h>?\n",
+ sym->name, mod->name, mod->is_vmlinux ? "" : ".ko",
+ sym->name);
+- continue;
+- }
+
+ buf_printf(buf, "SYMBOL_CRC(%s, 0x%08x, \"%s\");\n",
+ sym->name, sym->crc, sym->is_gpl_only ? "_gpl" : "");
+diff --git a/scripts/module.lds.S b/scripts/module.lds.S
+index 1d0e1e4dc3d2a..3a3aa2354ed86 100644
+--- a/scripts/module.lds.S
++++ b/scripts/module.lds.S
+@@ -27,6 +27,8 @@ SECTIONS {
+ .ctors 0 : ALIGN(8) { *(SORT(.ctors.*)) *(.ctors) }
+ .init_array 0 : ALIGN(8) { *(SORT(.init_array.*)) *(.init_array) }
+
++ .altinstructions 0 : ALIGN(8) { KEEP(*(.altinstructions)) }
++ __bug_table 0 : ALIGN(8) { KEEP(*(__bug_table)) }
+ __jump_table 0 : ALIGN(8) { KEEP(*(__jump_table)) }
+
+ __patchable_function_entries : { *(__patchable_function_entries) }
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index 0797edb2fb3dc..d307fb1edd76e 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -401,7 +401,7 @@ static struct aa_loaddata *aa_simple_write_to_buffer(const char __user *userbuf,
+
+ data->size = copy_size;
+ if (copy_from_user(data->data, userbuf, copy_size)) {
+- kvfree(data);
++ aa_put_loaddata(data);
+ return ERR_PTR(-EFAULT);
+ }
+
+diff --git a/security/apparmor/audit.c b/security/apparmor/audit.c
+index f7e97c7e80f3d..704b0c895605a 100644
+--- a/security/apparmor/audit.c
++++ b/security/apparmor/audit.c
+@@ -137,7 +137,7 @@ int aa_audit(int type, struct aa_profile *profile, struct common_audit_data *sa,
+ }
+ if (AUDIT_MODE(profile) == AUDIT_QUIET ||
+ (type == AUDIT_APPARMOR_DENIED &&
+- AUDIT_MODE(profile) == AUDIT_QUIET))
++ AUDIT_MODE(profile) == AUDIT_QUIET_DENIED))
+ return aad(sa)->error;
+
+ if (KILL_MODE(profile) && type == AUDIT_APPARMOR_DENIED)
+diff --git a/security/apparmor/domain.c b/security/apparmor/domain.c
+index a29e69d2c3005..97721115340fc 100644
+--- a/security/apparmor/domain.c
++++ b/security/apparmor/domain.c
+@@ -466,7 +466,7 @@ restart:
+ * xattrs, or a longer match
+ */
+ candidate = profile;
+- candidate_len = profile->xmatch_len;
++ candidate_len = max(count, profile->xmatch_len);
+ candidate_xattrs = ret;
+ conflict = false;
+ }
+diff --git a/security/apparmor/include/lib.h b/security/apparmor/include/lib.h
+index e2e8df0c6f1c9..f42359f58eb58 100644
+--- a/security/apparmor/include/lib.h
++++ b/security/apparmor/include/lib.h
+@@ -22,6 +22,11 @@
+ */
+
+ #define DEBUG_ON (aa_g_debug)
++/*
++ * split individual debug cases out in preparation for finer grained
++ * debug controls in the future.
++ */
++#define AA_DEBUG_LABEL DEBUG_ON
+ #define dbg_printk(__fmt, __args...) pr_debug(__fmt, ##__args)
+ #define AA_DEBUG(fmt, args...) \
+ do { \
+diff --git a/security/apparmor/include/policy.h b/security/apparmor/include/policy.h
+index cb5ef21991b72..232d3d9566eb7 100644
+--- a/security/apparmor/include/policy.h
++++ b/security/apparmor/include/policy.h
+@@ -135,7 +135,7 @@ struct aa_profile {
+
+ const char *attach;
+ struct aa_dfa *xmatch;
+- int xmatch_len;
++ unsigned int xmatch_len;
+ enum audit_mode audit;
+ long mode;
+ u32 path_flags;
+diff --git a/security/apparmor/label.c b/security/apparmor/label.c
+index 0b0265da19267..3fca010a58296 100644
+--- a/security/apparmor/label.c
++++ b/security/apparmor/label.c
+@@ -1631,9 +1631,9 @@ int aa_label_snxprint(char *str, size_t size, struct aa_ns *ns,
+ AA_BUG(!str && size != 0);
+ AA_BUG(!label);
+
+- if (flags & FLAG_ABS_ROOT) {
++ if (AA_DEBUG_LABEL && (flags & FLAG_ABS_ROOT)) {
+ ns = root_ns;
+- len = snprintf(str, size, "=");
++ len = snprintf(str, size, "_");
+ update_for_len(total, len, size, str);
+ } else if (!ns) {
+ ns = labels_ns(label);
+@@ -1744,7 +1744,7 @@ void aa_label_xaudit(struct audit_buffer *ab, struct aa_ns *ns,
+ if (!use_label_hname(ns, label, flags) ||
+ display_mode(ns, label, flags)) {
+ len = aa_label_asxprint(&name, ns, label, flags, gfp);
+- if (len == -1) {
++ if (len < 0) {
+ AA_DEBUG("label print error");
+ return;
+ }
+@@ -1772,7 +1772,7 @@ void aa_label_seq_xprint(struct seq_file *f, struct aa_ns *ns,
+ int len;
+
+ len = aa_label_asxprint(&str, ns, label, flags, gfp);
+- if (len == -1) {
++ if (len < 0) {
+ AA_DEBUG("label print error");
+ return;
+ }
+@@ -1795,7 +1795,7 @@ void aa_label_xprintk(struct aa_ns *ns, struct aa_label *label, int flags,
+ int len;
+
+ len = aa_label_asxprint(&str, ns, label, flags, gfp);
+- if (len == -1) {
++ if (len < 0) {
+ AA_DEBUG("label print error");
+ return;
+ }
+@@ -1895,7 +1895,8 @@ struct aa_label *aa_label_strn_parse(struct aa_label *base, const char *str,
+ AA_BUG(!str);
+
+ str = skipn_spaces(str, n);
+- if (str == NULL || (*str == '=' && base != &root_ns->unconfined->label))
++ if (str == NULL || (AA_DEBUG_LABEL && *str == '_' &&
++ base != &root_ns->unconfined->label))
+ return ERR_PTR(-EINVAL);
+
+ len = label_count_strn_entries(str, end - str);
+diff --git a/security/apparmor/mount.c b/security/apparmor/mount.c
+index aa6fcfde30514..f7bb47daf2ad6 100644
+--- a/security/apparmor/mount.c
++++ b/security/apparmor/mount.c
+@@ -229,7 +229,8 @@ static const char * const mnt_info_table[] = {
+ "failed srcname match",
+ "failed type match",
+ "failed flags match",
+- "failed data match"
++ "failed data match",
++ "failed perms check"
+ };
+
+ /*
+@@ -284,8 +285,8 @@ static int do_match_mnt(struct aa_dfa *dfa, unsigned int start,
+ return 0;
+ }
+
+- /* failed at end of flags match */
+- return 4;
++ /* failed at perms check, don't confuse with flags match */
++ return 6;
+ }
+
+
+@@ -718,6 +719,7 @@ int aa_pivotroot(struct aa_label *label, const struct path *old_path,
+ aa_put_label(target);
+ goto out;
+ }
++ aa_put_label(target);
+ } else
+ /* already audited error */
+ error = PTR_ERR(target);
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 0acca6f2a93fc..9f23cdde784fd 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -746,16 +746,18 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ profile->label.flags |= FLAG_HAT;
+ if (!unpack_u32(e, &tmp, NULL))
+ goto fail;
+- if (tmp == PACKED_MODE_COMPLAIN || (e->version & FORCE_COMPLAIN_FLAG))
++ if (tmp == PACKED_MODE_COMPLAIN || (e->version & FORCE_COMPLAIN_FLAG)) {
+ profile->mode = APPARMOR_COMPLAIN;
+- else if (tmp == PACKED_MODE_ENFORCE)
++ } else if (tmp == PACKED_MODE_ENFORCE) {
+ profile->mode = APPARMOR_ENFORCE;
+- else if (tmp == PACKED_MODE_KILL)
++ } else if (tmp == PACKED_MODE_KILL) {
+ profile->mode = APPARMOR_KILL;
+- else if (tmp == PACKED_MODE_UNCONFINED)
++ } else if (tmp == PACKED_MODE_UNCONFINED) {
+ profile->mode = APPARMOR_UNCONFINED;
+- else
++ profile->label.flags |= FLAG_UNCONFINED;
++ } else {
+ goto fail;
++ }
+ if (!unpack_u32(e, &tmp, NULL))
+ goto fail;
+ if (tmp)
+diff --git a/sound/core/control.c b/sound/core/control.c
+index a25c0d64d104f..f66fe4be30d35 100644
+--- a/sound/core/control.c
++++ b/sound/core/control.c
+@@ -127,6 +127,7 @@ static int snd_ctl_release(struct inode *inode, struct file *file)
+ if (control->vd[idx].owner == ctl)
+ control->vd[idx].owner = NULL;
+ up_write(&card->controls_rwsem);
++ snd_fasync_free(ctl->fasync);
+ snd_ctl_empty_read_queue(ctl);
+ put_pid(ctl->pid);
+ kfree(ctl);
+@@ -181,7 +182,7 @@ void snd_ctl_notify(struct snd_card *card, unsigned int mask,
+ _found:
+ wake_up(&ctl->change_sleep);
+ spin_unlock(&ctl->read_lock);
+- kill_fasync(&ctl->fasync, SIGIO, POLL_IN);
++ snd_kill_fasync(ctl->fasync, SIGIO, POLL_IN);
+ }
+ read_unlock_irqrestore(&card->ctl_files_rwlock, flags);
+ }
+@@ -2002,7 +2003,7 @@ static int snd_ctl_fasync(int fd, struct file * file, int on)
+ struct snd_ctl_file *ctl;
+
+ ctl = file->private_data;
+- return fasync_helper(fd, file, on, &ctl->fasync);
++ return snd_fasync_helper(fd, file, on, &ctl->fasync);
+ }
+
+ /* return the preferred subdevice number if already assigned;
+@@ -2170,7 +2171,7 @@ static int snd_ctl_dev_disconnect(struct snd_device *device)
+ read_lock_irqsave(&card->ctl_files_rwlock, flags);
+ list_for_each_entry(ctl, &card->ctl_files, list) {
+ wake_up(&ctl->change_sleep);
+- kill_fasync(&ctl->fasync, SIGIO, POLL_ERR);
++ snd_kill_fasync(ctl->fasync, SIGIO, POLL_ERR);
+ }
+ read_unlock_irqrestore(&card->ctl_files_rwlock, flags);
+
+diff --git a/sound/core/info.c b/sound/core/info.c
+index 782fba87cc043..e952441f71403 100644
+--- a/sound/core/info.c
++++ b/sound/core/info.c
+@@ -111,9 +111,9 @@ static loff_t snd_info_entry_llseek(struct file *file, loff_t offset, int orig)
+ entry = data->entry;
+ mutex_lock(&entry->access);
+ if (entry->c.ops->llseek) {
+- offset = entry->c.ops->llseek(entry,
+- data->file_private_data,
+- file, offset, orig);
++ ret = entry->c.ops->llseek(entry,
++ data->file_private_data,
++ file, offset, orig);
+ goto out;
+ }
+
+diff --git a/sound/core/misc.c b/sound/core/misc.c
+index 50e4aaa6270d1..d32a19976a2b9 100644
+--- a/sound/core/misc.c
++++ b/sound/core/misc.c
+@@ -10,6 +10,7 @@
+ #include <linux/time.h>
+ #include <linux/slab.h>
+ #include <linux/ioport.h>
++#include <linux/fs.h>
+ #include <sound/core.h>
+
+ #ifdef CONFIG_SND_DEBUG
+@@ -145,3 +146,96 @@ snd_pci_quirk_lookup(struct pci_dev *pci, const struct snd_pci_quirk *list)
+ }
+ EXPORT_SYMBOL(snd_pci_quirk_lookup);
+ #endif
++
++/*
++ * Deferred async signal helpers
++ *
++ * Below are a few helper functions to wrap the async signal handling
++ * in the deferred work. The main purpose is to avoid the messy deadlock
++ * around tasklist_lock and co at the kill_fasync() invocation.
++ * fasync_helper() and kill_fasync() are replaced with snd_fasync_helper()
++ * and snd_kill_fasync(), respectively. In addition, snd_fasync_free() has
++ * to be called at releasing the relevant file object.
++ */
++struct snd_fasync {
++ struct fasync_struct *fasync;
++ int signal;
++ int poll;
++ int on;
++ struct list_head list;
++};
++
++static DEFINE_SPINLOCK(snd_fasync_lock);
++static LIST_HEAD(snd_fasync_list);
++
++static void snd_fasync_work_fn(struct work_struct *work)
++{
++ struct snd_fasync *fasync;
++
++ spin_lock_irq(&snd_fasync_lock);
++ while (!list_empty(&snd_fasync_list)) {
++ fasync = list_first_entry(&snd_fasync_list, struct snd_fasync, list);
++ list_del_init(&fasync->list);
++ spin_unlock_irq(&snd_fasync_lock);
++ if (fasync->on)
++ kill_fasync(&fasync->fasync, fasync->signal, fasync->poll);
++ spin_lock_irq(&snd_fasync_lock);
++ }
++ spin_unlock_irq(&snd_fasync_lock);
++}
++
++static DECLARE_WORK(snd_fasync_work, snd_fasync_work_fn);
++
++int snd_fasync_helper(int fd, struct file *file, int on,
++ struct snd_fasync **fasyncp)
++{
++ struct snd_fasync *fasync = NULL;
++
++ if (on) {
++ fasync = kzalloc(sizeof(*fasync), GFP_KERNEL);
++ if (!fasync)
++ return -ENOMEM;
++ INIT_LIST_HEAD(&fasync->list);
++ }
++
++ spin_lock_irq(&snd_fasync_lock);
++ if (*fasyncp) {
++ kfree(fasync);
++ fasync = *fasyncp;
++ } else {
++ if (!fasync) {
++ spin_unlock_irq(&snd_fasync_lock);
++ return 0;
++ }
++ *fasyncp = fasync;
++ }
++ fasync->on = on;
++ spin_unlock_irq(&snd_fasync_lock);
++ return fasync_helper(fd, file, on, &fasync->fasync);
++}
++EXPORT_SYMBOL_GPL(snd_fasync_helper);
++
++void snd_kill_fasync(struct snd_fasync *fasync, int signal, int poll)
++{
++ unsigned long flags;
++
++ if (!fasync || !fasync->on)
++ return;
++ spin_lock_irqsave(&snd_fasync_lock, flags);
++ fasync->signal = signal;
++ fasync->poll = poll;
++ list_move(&fasync->list, &snd_fasync_list);
++ schedule_work(&snd_fasync_work);
++ spin_unlock_irqrestore(&snd_fasync_lock, flags);
++}
++EXPORT_SYMBOL_GPL(snd_kill_fasync);
++
++void snd_fasync_free(struct snd_fasync *fasync)
++{
++ if (!fasync)
++ return;
++ fasync->on = 0;
++ flush_work(&snd_fasync_work);
++ kfree(fasync);
++}
++EXPORT_SYMBOL_GPL(snd_fasync_free);
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index 977d54320a5ca..c917ac84a7e58 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -1005,6 +1005,7 @@ void snd_pcm_detach_substream(struct snd_pcm_substream *substream)
+ substream->runtime = NULL;
+ }
+ mutex_destroy(&runtime->buffer_mutex);
++ snd_fasync_free(runtime->fasync);
+ kfree(runtime);
+ put_pid(substream->pid);
+ substream->pid = NULL;
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 1fc7c50ffa625..40751e5aff09f 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1822,7 +1822,7 @@ void snd_pcm_period_elapsed_under_stream_lock(struct snd_pcm_substream *substrea
+ snd_timer_interrupt(substream->timer, 1);
+ #endif
+ _end:
+- kill_fasync(&runtime->fasync, SIGIO, POLL_IN);
++ snd_kill_fasync(runtime->fasync, SIGIO, POLL_IN);
+ }
+ EXPORT_SYMBOL(snd_pcm_period_elapsed_under_stream_lock);
+
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 4adaee62ef333..16fcf57c6f030 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -3945,7 +3945,7 @@ static int snd_pcm_fasync(int fd, struct file * file, int on)
+ runtime = substream->runtime;
+ if (runtime->status->state == SNDRV_PCM_STATE_DISCONNECTED)
+ return -EBADFD;
+- return fasync_helper(fd, file, on, &runtime->fasync);
++ return snd_fasync_helper(fd, file, on, &runtime->fasync);
+ }
+
+ /*
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index b3214baa89193..e08a37c23add8 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -83,7 +83,7 @@ struct snd_timer_user {
+ unsigned int filter;
+ struct timespec64 tstamp; /* trigger tstamp */
+ wait_queue_head_t qchange_sleep;
+- struct fasync_struct *fasync;
++ struct snd_fasync *fasync;
+ struct mutex ioctl_lock;
+ };
+
+@@ -1345,7 +1345,7 @@ static void snd_timer_user_interrupt(struct snd_timer_instance *timeri,
+ }
+ __wake:
+ spin_unlock(&tu->qlock);
+- kill_fasync(&tu->fasync, SIGIO, POLL_IN);
++ snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
+ wake_up(&tu->qchange_sleep);
+ }
+
+@@ -1383,7 +1383,7 @@ static void snd_timer_user_ccallback(struct snd_timer_instance *timeri,
+ spin_lock_irqsave(&tu->qlock, flags);
+ snd_timer_user_append_to_tqueue(tu, &r1);
+ spin_unlock_irqrestore(&tu->qlock, flags);
+- kill_fasync(&tu->fasync, SIGIO, POLL_IN);
++ snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
+ wake_up(&tu->qchange_sleep);
+ }
+
+@@ -1453,7 +1453,7 @@ static void snd_timer_user_tinterrupt(struct snd_timer_instance *timeri,
+ spin_unlock(&tu->qlock);
+ if (append == 0)
+ return;
+- kill_fasync(&tu->fasync, SIGIO, POLL_IN);
++ snd_kill_fasync(tu->fasync, SIGIO, POLL_IN);
+ wake_up(&tu->qchange_sleep);
+ }
+
+@@ -1521,6 +1521,7 @@ static int snd_timer_user_release(struct inode *inode, struct file *file)
+ snd_timer_instance_free(tu->timeri);
+ }
+ mutex_unlock(&tu->ioctl_lock);
++ snd_fasync_free(tu->fasync);
+ kfree(tu->queue);
+ kfree(tu->tqueue);
+ kfree(tu);
+@@ -2135,7 +2136,7 @@ static int snd_timer_user_fasync(int fd, struct file * file, int on)
+ struct snd_timer_user *tu;
+
+ tu = file->private_data;
+- return fasync_helper(fd, file, on, &tu->fasync);
++ return snd_fasync_helper(fd, file, on, &tu->fasync);
+ }
+
+ static ssize_t snd_timer_user_read(struct file *file, char __user *buffer,
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 7579a6982f471..ccb195b7cb08a 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2935,8 +2935,7 @@ static int hda_codec_runtime_suspend(struct device *dev)
+ if (!codec->card)
+ return 0;
+
+- if (!codec->bus->jackpoll_in_suspend)
+- cancel_delayed_work_sync(&codec->jackpoll_work);
++ cancel_delayed_work_sync(&codec->jackpoll_work);
+
+ state = hda_call_codec_suspend(codec);
+ if (codec->link_down_at_suspend ||
+@@ -2944,6 +2943,11 @@ static int hda_codec_runtime_suspend(struct device *dev)
+ (state & AC_PWRST_CLK_STOP_OK)))
+ snd_hdac_codec_link_down(&codec->core);
+ snd_hda_codec_display_power(codec, false);
++
++ if (codec->bus->jackpoll_in_suspend &&
++ (dev->power.power_state.event != PM_EVENT_SUSPEND))
++ schedule_delayed_work(&codec->jackpoll_work,
++ codec->jackpoll_interval);
+ return 0;
+ }
+
+@@ -2967,6 +2971,9 @@ static int hda_codec_runtime_resume(struct device *dev)
+ #ifdef CONFIG_PM_SLEEP
+ static int hda_codec_pm_prepare(struct device *dev)
+ {
++ struct hda_codec *codec = dev_to_hda_codec(dev);
++
++ cancel_delayed_work_sync(&codec->jackpoll_work);
+ dev->power.power_state = PMSG_SUSPEND;
+ return pm_runtime_suspended(dev);
+ }
+@@ -2986,9 +2993,6 @@ static void hda_codec_pm_complete(struct device *dev)
+
+ static int hda_codec_pm_suspend(struct device *dev)
+ {
+- struct hda_codec *codec = dev_to_hda_codec(dev);
+-
+- cancel_delayed_work_sync(&codec->jackpoll_work);
+ dev->power.power_state = PMSG_SUSPEND;
+ return pm_runtime_force_suspend(dev);
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 619e6025ba97c..1ae9674fa8a3c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9234,6 +9234,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8aa3, "HP ProBook 450 G9 (MB 8AA1)", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8aa8, "HP EliteBook 640 G9 (MB 8AA6)", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8aab, "HP EliteBook 650 G9 (MB 8AA9)", ALC236_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8ad1, "HP EliteBook 840 14 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8ad2, "HP EliteBook 860 16 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -9352,6 +9354,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x7717, "Clevo NS70PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x7718, "Clevo L140PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+diff --git a/sound/soc/codecs/lpass-va-macro.c b/sound/soc/codecs/lpass-va-macro.c
+index d18b56e604330..1ea10dc70748a 100644
+--- a/sound/soc/codecs/lpass-va-macro.c
++++ b/sound/soc/codecs/lpass-va-macro.c
+@@ -199,6 +199,7 @@ struct va_macro {
+ struct clk *mclk;
+ struct clk *macro;
+ struct clk *dcodec;
++ struct clk *fsgen;
+ struct clk_hw hw;
+ struct lpass_macro *pds;
+
+@@ -467,9 +468,9 @@ static int va_macro_mclk_event(struct snd_soc_dapm_widget *w,
+
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+- return va_macro_mclk_enable(va, true);
++ return clk_prepare_enable(va->fsgen);
+ case SND_SOC_DAPM_POST_PMD:
+- return va_macro_mclk_enable(va, false);
++ clk_disable_unprepare(va->fsgen);
+ }
+
+ return 0;
+@@ -1473,6 +1474,12 @@ static int va_macro_probe(struct platform_device *pdev)
+ if (ret)
+ goto err_clkout;
+
++ va->fsgen = clk_hw_get_clk(&va->hw, "fsgen");
++ if (IS_ERR(va->fsgen)) {
++ ret = PTR_ERR(va->fsgen);
++ goto err_clkout;
++ }
++
+ ret = devm_snd_soc_register_component(dev, &va_macro_component_drv,
+ va_macro_dais,
+ ARRAY_SIZE(va_macro_dais));
+diff --git a/sound/soc/codecs/nau8821.c b/sound/soc/codecs/nau8821.c
+index ce4e7f46bb067..e078d2ffb3f67 100644
+--- a/sound/soc/codecs/nau8821.c
++++ b/sound/soc/codecs/nau8821.c
+@@ -1665,15 +1665,6 @@ static int nau8821_i2c_probe(struct i2c_client *i2c)
+ return ret;
+ }
+
+-static int nau8821_i2c_remove(struct i2c_client *i2c_client)
+-{
+- struct nau8821 *nau8821 = i2c_get_clientdata(i2c_client);
+-
+- devm_free_irq(nau8821->dev, nau8821->irq, nau8821);
+-
+- return 0;
+-}
+-
+ static const struct i2c_device_id nau8821_i2c_ids[] = {
+ { "nau8821", 0 },
+ { }
+@@ -1703,7 +1694,6 @@ static struct i2c_driver nau8821_driver = {
+ .acpi_match_table = ACPI_PTR(nau8821_acpi_match),
+ },
+ .probe_new = nau8821_i2c_probe,
+- .remove = nau8821_i2c_remove,
+ .id_table = nau8821_i2c_ids,
+ };
+ module_i2c_driver(nau8821_driver);
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index c1dbd978d5502..9ea2aca65e899 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -46,34 +46,22 @@ static void tas2770_reset(struct tas2770_priv *tas2770)
+ usleep_range(1000, 2000);
+ }
+
+-static int tas2770_set_bias_level(struct snd_soc_component *component,
+- enum snd_soc_bias_level level)
++static int tas2770_update_pwr_ctrl(struct tas2770_priv *tas2770)
+ {
+- struct tas2770_priv *tas2770 =
+- snd_soc_component_get_drvdata(component);
++ struct snd_soc_component *component = tas2770->component;
++ unsigned int val;
++ int ret;
+
+- switch (level) {
+- case SND_SOC_BIAS_ON:
+- snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_ACTIVE);
+- break;
+- case SND_SOC_BIAS_STANDBY:
+- case SND_SOC_BIAS_PREPARE:
+- snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_MUTE);
+- break;
+- case SND_SOC_BIAS_OFF:
+- snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_SHUTDOWN);
+- break;
++ if (tas2770->dac_powered)
++ val = tas2770->unmuted ?
++ TAS2770_PWR_CTRL_ACTIVE : TAS2770_PWR_CTRL_MUTE;
++ else
++ val = TAS2770_PWR_CTRL_SHUTDOWN;
+
+- default:
+- dev_err(tas2770->dev, "wrong power level setting %d\n", level);
+- return -EINVAL;
+- }
++ ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
++ TAS2770_PWR_CTRL_MASK, val);
++ if (ret < 0)
++ return ret;
+
+ return 0;
+ }
+@@ -114,9 +102,7 @@ static int tas2770_codec_resume(struct snd_soc_component *component)
+ gpiod_set_value_cansleep(tas2770->sdz_gpio, 1);
+ usleep_range(1000, 2000);
+ } else {
+- ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_ACTIVE);
++ ret = tas2770_update_pwr_ctrl(tas2770);
+ if (ret < 0)
+ return ret;
+ }
+@@ -152,24 +138,19 @@ static int tas2770_dac_event(struct snd_soc_dapm_widget *w,
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+- ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_MUTE);
++ tas2770->dac_powered = 1;
++ ret = tas2770_update_pwr_ctrl(tas2770);
+ break;
+ case SND_SOC_DAPM_PRE_PMD:
+- ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_SHUTDOWN);
++ tas2770->dac_powered = 0;
++ ret = tas2770_update_pwr_ctrl(tas2770);
+ break;
+ default:
+ dev_err(tas2770->dev, "Not supported evevt\n");
+ return -EINVAL;
+ }
+
+- if (ret < 0)
+- return ret;
+-
+- return 0;
++ return ret;
+ }
+
+ static const struct snd_kcontrol_new isense_switch =
+@@ -203,21 +184,11 @@ static const struct snd_soc_dapm_route tas2770_audio_map[] = {
+ static int tas2770_mute(struct snd_soc_dai *dai, int mute, int direction)
+ {
+ struct snd_soc_component *component = dai->component;
+- int ret;
+-
+- if (mute)
+- ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_MUTE);
+- else
+- ret = snd_soc_component_update_bits(component, TAS2770_PWR_CTRL,
+- TAS2770_PWR_CTRL_MASK,
+- TAS2770_PWR_CTRL_ACTIVE);
+-
+- if (ret < 0)
+- return ret;
++ struct tas2770_priv *tas2770 =
++ snd_soc_component_get_drvdata(component);
+
+- return 0;
++ tas2770->unmuted = !mute;
++ return tas2770_update_pwr_ctrl(tas2770);
+ }
+
+ static int tas2770_set_bitwidth(struct tas2770_priv *tas2770, int bitwidth)
+@@ -337,7 +308,7 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ struct snd_soc_component *component = dai->component;
+ struct tas2770_priv *tas2770 =
+ snd_soc_component_get_drvdata(component);
+- u8 tdm_rx_start_slot = 0, asi_cfg_1 = 0;
++ u8 tdm_rx_start_slot = 0, invert_fpol = 0, fpol_preinv = 0, asi_cfg_1 = 0;
+ int ret;
+
+ switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
+@@ -349,9 +320,15 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ }
+
+ switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
++ case SND_SOC_DAIFMT_NB_IF:
++ invert_fpol = 1;
++ fallthrough;
+ case SND_SOC_DAIFMT_NB_NF:
+ asi_cfg_1 |= TAS2770_TDM_CFG_REG1_RX_RSING;
+ break;
++ case SND_SOC_DAIFMT_IB_IF:
++ invert_fpol = 1;
++ fallthrough;
+ case SND_SOC_DAIFMT_IB_NF:
+ asi_cfg_1 |= TAS2770_TDM_CFG_REG1_RX_FALING;
+ break;
+@@ -369,15 +346,19 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ case SND_SOC_DAIFMT_I2S:
+ tdm_rx_start_slot = 1;
++ fpol_preinv = 0;
+ break;
+ case SND_SOC_DAIFMT_DSP_A:
+ tdm_rx_start_slot = 0;
++ fpol_preinv = 1;
+ break;
+ case SND_SOC_DAIFMT_DSP_B:
+ tdm_rx_start_slot = 1;
++ fpol_preinv = 1;
+ break;
+ case SND_SOC_DAIFMT_LEFT_J:
+ tdm_rx_start_slot = 0;
++ fpol_preinv = 1;
+ break;
+ default:
+ dev_err(tas2770->dev,
+@@ -391,6 +372,14 @@ static int tas2770_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ if (ret < 0)
+ return ret;
+
++ ret = snd_soc_component_update_bits(component, TAS2770_TDM_CFG_REG0,
++ TAS2770_TDM_CFG_REG0_FPOL_MASK,
++ (fpol_preinv ^ invert_fpol)
++ ? TAS2770_TDM_CFG_REG0_FPOL_RSING
++ : TAS2770_TDM_CFG_REG0_FPOL_FALING);
++ if (ret < 0)
++ return ret;
++
+ return 0;
+ }
+
+@@ -489,7 +478,7 @@ static struct snd_soc_dai_driver tas2770_dai_driver[] = {
+ .id = 0,
+ .playback = {
+ .stream_name = "ASI1 Playback",
+- .channels_min = 2,
++ .channels_min = 1,
+ .channels_max = 2,
+ .rates = TAS2770_RATES,
+ .formats = TAS2770_FORMATS,
+@@ -537,7 +526,6 @@ static const struct snd_soc_component_driver soc_component_driver_tas2770 = {
+ .probe = tas2770_codec_probe,
+ .suspend = tas2770_codec_suspend,
+ .resume = tas2770_codec_resume,
+- .set_bias_level = tas2770_set_bias_level,
+ .controls = tas2770_snd_controls,
+ .num_controls = ARRAY_SIZE(tas2770_snd_controls),
+ .dapm_widgets = tas2770_dapm_widgets,
+diff --git a/sound/soc/codecs/tas2770.h b/sound/soc/codecs/tas2770.h
+index d156666bcc552..f75f40781ab13 100644
+--- a/sound/soc/codecs/tas2770.h
++++ b/sound/soc/codecs/tas2770.h
+@@ -41,6 +41,9 @@
+ #define TAS2770_TDM_CFG_REG0_31_44_1_48KHZ 0x6
+ #define TAS2770_TDM_CFG_REG0_31_88_2_96KHZ 0x8
+ #define TAS2770_TDM_CFG_REG0_31_176_4_192KHZ 0xa
++#define TAS2770_TDM_CFG_REG0_FPOL_MASK BIT(0)
++#define TAS2770_TDM_CFG_REG0_FPOL_RSING 0
++#define TAS2770_TDM_CFG_REG0_FPOL_FALING 1
+ /* TDM Configuration Reg1 */
+ #define TAS2770_TDM_CFG_REG1 TAS2770_REG(0X0, 0x0B)
+ #define TAS2770_TDM_CFG_REG1_MASK GENMASK(5, 1)
+@@ -135,6 +138,8 @@ struct tas2770_priv {
+ struct device *dev;
+ int v_sense_slot;
+ int i_sense_slot;
++ bool dac_powered;
++ bool unmuted;
+ };
+
+ #endif /* __TAS2770__ */
+diff --git a/sound/soc/codecs/tlv320aic32x4.c b/sound/soc/codecs/tlv320aic32x4.c
+index 8f42fd7bc0539..9b082cc5ecc49 100644
+--- a/sound/soc/codecs/tlv320aic32x4.c
++++ b/sound/soc/codecs/tlv320aic32x4.c
+@@ -49,6 +49,8 @@ struct aic32x4_priv {
+ struct aic32x4_setup_data *setup;
+ struct device *dev;
+ enum aic32x4_type type;
++
++ unsigned int fmt;
+ };
+
+ static int aic32x4_reset_adc(struct snd_soc_dapm_widget *w,
+@@ -611,6 +613,7 @@ static int aic32x4_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ static int aic32x4_set_dai_fmt(struct snd_soc_dai *codec_dai, unsigned int fmt)
+ {
+ struct snd_soc_component *component = codec_dai->component;
++ struct aic32x4_priv *aic32x4 = snd_soc_component_get_drvdata(component);
+ u8 iface_reg_1 = 0;
+ u8 iface_reg_2 = 0;
+ u8 iface_reg_3 = 0;
+@@ -654,6 +657,8 @@ static int aic32x4_set_dai_fmt(struct snd_soc_dai *codec_dai, unsigned int fmt)
+ return -EINVAL;
+ }
+
++ aic32x4->fmt = fmt;
++
+ snd_soc_component_update_bits(component, AIC32X4_IFACE1,
+ AIC32X4_IFACE1_DATATYPE_MASK |
+ AIC32X4_IFACE1_MASTER_MASK, iface_reg_1);
+@@ -758,6 +763,10 @@ static int aic32x4_setup_clocks(struct snd_soc_component *component,
+ return -EINVAL;
+ }
+
++ /* PCM over I2S is always 2-channel */
++ if ((aic32x4->fmt & SND_SOC_DAIFMT_FORMAT_MASK) == SND_SOC_DAIFMT_I2S)
++ channels = 2;
++
+ madc = DIV_ROUND_UP((32 * adc_resource_class), aosr);
+ max_dosr = (AIC32X4_MAX_DOSR_FREQ / sample_rate / dosr_increment) *
+ dosr_increment;
+diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
+index 3a0997c3af2b9..cf373969bb69d 100644
+--- a/sound/soc/intel/avs/core.c
++++ b/sound/soc/intel/avs/core.c
+@@ -445,6 +445,7 @@ static int avs_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ dma_set_mask(dev, DMA_BIT_MASK(32));
+ dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+ }
++ dma_set_max_seg_size(dev, UINT_MAX);
+
+ ret = avs_hdac_bus_init_streams(bus);
+ if (ret < 0) {
+diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
+index 668f533578a69..8d36d35e6eaab 100644
+--- a/sound/soc/intel/avs/pcm.c
++++ b/sound/soc/intel/avs/pcm.c
+@@ -636,8 +636,8 @@ static ssize_t topology_name_read(struct file *file, char __user *user_buf, size
+ char buf[64];
+ size_t len;
+
+- len = snprintf(buf, sizeof(buf), "%s/%s\n", component->driver->topology_name_prefix,
+- mach->tplg_filename);
++ len = scnprintf(buf, sizeof(buf), "%s/%s\n", component->driver->topology_name_prefix,
++ mach->tplg_filename);
+
+ return simple_read_from_buffer(user_buf, count, ppos, buf, len);
+ }
+diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
+index 23d03e0f77599..d70d8255b8c76 100644
+--- a/sound/soc/intel/boards/sof_es8336.c
++++ b/sound/soc/intel/boards/sof_es8336.c
+@@ -57,28 +57,26 @@ static const struct acpi_gpio_params enable_gpio0 = { 0, 0, true };
+ static const struct acpi_gpio_params enable_gpio1 = { 1, 0, true };
+
+ static const struct acpi_gpio_mapping acpi_speakers_enable_gpio0[] = {
+- { "speakers-enable-gpios", &enable_gpio0, 1 },
++ { "speakers-enable-gpios", &enable_gpio0, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
+ { }
+ };
+
+ static const struct acpi_gpio_mapping acpi_speakers_enable_gpio1[] = {
+- { "speakers-enable-gpios", &enable_gpio1, 1 },
++ { "speakers-enable-gpios", &enable_gpio1, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
+ };
+
+ static const struct acpi_gpio_mapping acpi_enable_both_gpios[] = {
+- { "speakers-enable-gpios", &enable_gpio0, 1 },
+- { "headphone-enable-gpios", &enable_gpio1, 1 },
++ { "speakers-enable-gpios", &enable_gpio0, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
++ { "headphone-enable-gpios", &enable_gpio1, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
+ { }
+ };
+
+ static const struct acpi_gpio_mapping acpi_enable_both_gpios_rev_order[] = {
+- { "speakers-enable-gpios", &enable_gpio1, 1 },
+- { "headphone-enable-gpios", &enable_gpio0, 1 },
++ { "speakers-enable-gpios", &enable_gpio1, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
++ { "headphone-enable-gpios", &enable_gpio0, 1, ACPI_GPIO_QUIRK_ONLY_GPIOIO },
+ { }
+ };
+
+-static const struct acpi_gpio_mapping *gpio_mapping = acpi_speakers_enable_gpio0;
+-
+ static void log_quirks(struct device *dev)
+ {
+ dev_info(dev, "quirk mask %#lx\n", quirk);
+@@ -272,15 +270,6 @@ static int sof_es8336_quirk_cb(const struct dmi_system_id *id)
+ {
+ quirk = (unsigned long)id->driver_data;
+
+- if (quirk & SOF_ES8336_HEADPHONE_GPIO) {
+- if (quirk & SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK)
+- gpio_mapping = acpi_enable_both_gpios;
+- else
+- gpio_mapping = acpi_enable_both_gpios_rev_order;
+- } else if (quirk & SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK) {
+- gpio_mapping = acpi_speakers_enable_gpio1;
+- }
+-
+ return 1;
+ }
+
+@@ -529,6 +518,7 @@ static int sof_es8336_probe(struct platform_device *pdev)
+ struct acpi_device *adev;
+ struct snd_soc_dai_link *dai_links;
+ struct device *codec_dev;
++ const struct acpi_gpio_mapping *gpio_mapping;
+ unsigned int cnt = 0;
+ int dmic_be_num = 0;
+ int hdmi_num = 3;
+@@ -635,6 +625,17 @@ static int sof_es8336_probe(struct platform_device *pdev)
+ }
+
+ /* get speaker enable GPIO */
++ if (quirk & SOF_ES8336_HEADPHONE_GPIO) {
++ if (quirk & SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK)
++ gpio_mapping = acpi_enable_both_gpios;
++ else
++ gpio_mapping = acpi_enable_both_gpios_rev_order;
++ } else if (quirk & SOF_ES8336_SPEAKERS_EN_GPIO1_QUIRK) {
++ gpio_mapping = acpi_speakers_enable_gpio1;
++ } else {
++ gpio_mapping = acpi_speakers_enable_gpio0;
++ }
++
+ ret = devm_acpi_dev_add_driver_gpios(codec_dev, gpio_mapping);
+ if (ret)
+ dev_warn(codec_dev, "unable to add GPIO mapping table\n");
+diff --git a/sound/soc/intel/boards/sof_nau8825.c b/sound/soc/intel/boards/sof_nau8825.c
+index 97dcd204a2466..9b3a2ff4d9cdc 100644
+--- a/sound/soc/intel/boards/sof_nau8825.c
++++ b/sound/soc/intel/boards/sof_nau8825.c
+@@ -177,11 +177,6 @@ static int sof_card_late_probe(struct snd_soc_card *card)
+ struct sof_hdmi_pcm *pcm;
+ int err;
+
+- if (list_empty(&ctx->hdmi_pcm_list))
+- return -EINVAL;
+-
+- pcm = list_first_entry(&ctx->hdmi_pcm_list, struct sof_hdmi_pcm, head);
+-
+ if (sof_nau8825_quirk & SOF_MAX98373_SPEAKER_AMP_PRESENT) {
+ /* Disable Left and Right Spk pin after boot */
+ snd_soc_dapm_disable_pin(dapm, "Left Spk");
+@@ -191,6 +186,11 @@ static int sof_card_late_probe(struct snd_soc_card *card)
+ return err;
+ }
+
++ if (list_empty(&ctx->hdmi_pcm_list))
++ return -EINVAL;
++
++ pcm = list_first_entry(&ctx->hdmi_pcm_list, struct sof_hdmi_pcm, head);
++
+ return hda_dsp_hdmi_build_controls(card, pcm->codec_dai->component);
+ }
+
+diff --git a/sound/soc/qcom/qdsp6/q6apm-dai.c b/sound/soc/qcom/qdsp6/q6apm-dai.c
+index ee59ef36b85a6..e45210c0e25e6 100644
+--- a/sound/soc/qcom/qdsp6/q6apm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6apm-dai.c
+@@ -153,6 +153,12 @@ static int q6apm_dai_prepare(struct snd_soc_component *component,
+ q6apm_unmap_memory_regions(prtd->graph, substream->stream);
+ }
+
++ if (prtd->state) {
++ /* clear the previous setup if any */
++ q6apm_graph_stop(prtd->graph);
++ q6apm_unmap_memory_regions(prtd->graph, substream->stream);
++ }
++
+ prtd->pcm_count = snd_pcm_lib_period_bytes(substream);
+ prtd->pos = 0;
+ /* rate and channels are sent to audio driver */
+diff --git a/sound/soc/sh/rcar/ssiu.c b/sound/soc/sh/rcar/ssiu.c
+index 4b8a63e336c77..d7f4646ee029c 100644
+--- a/sound/soc/sh/rcar/ssiu.c
++++ b/sound/soc/sh/rcar/ssiu.c
+@@ -67,6 +67,8 @@ static void rsnd_ssiu_busif_err_irq_ctrl(struct rsnd_mod *mod, int enable)
+ shift = 1;
+ offset = 1;
+ break;
++ default:
++ return;
+ }
+
+ for (i = 0; i < 4; i++) {
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index a827cc3c158ae..0c1de56248427 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1318,6 +1318,9 @@ static struct snd_soc_pcm_runtime *dpcm_get_be(struct snd_soc_card *card,
+ if (!be->dai_link->no_pcm)
+ continue;
+
++ if (!snd_soc_dpcm_get_substream(be, stream))
++ continue;
++
+ for_each_rtd_dais(be, i, dai) {
+ w = snd_soc_dai_get_widget(dai, stream);
+
+diff --git a/sound/soc/sof/debug.c b/sound/soc/sof/debug.c
+index cf1271eb29b23..15c906e9fe2e2 100644
+--- a/sound/soc/sof/debug.c
++++ b/sound/soc/sof/debug.c
+@@ -252,9 +252,9 @@ static int memory_info_update(struct snd_sof_dev *sdev, char *buf, size_t buff_s
+ }
+
+ for (i = 0, len = 0; i < reply->num_elems; i++) {
+- ret = snprintf(buf + len, buff_size - len, "zone %d.%d used %#8x free %#8x\n",
+- reply->elems[i].zone, reply->elems[i].id,
+- reply->elems[i].used, reply->elems[i].free);
++ ret = scnprintf(buf + len, buff_size - len, "zone %d.%d used %#8x free %#8x\n",
++ reply->elems[i].zone, reply->elems[i].id,
++ reply->elems[i].used, reply->elems[i].free);
+ if (ret < 0)
+ goto error;
+ len += ret;
+diff --git a/sound/soc/sof/intel/cnl.c b/sound/soc/sof/intel/cnl.c
+index cd6e5f8a5eb4d..6c98f65635fcc 100644
+--- a/sound/soc/sof/intel/cnl.c
++++ b/sound/soc/sof/intel/cnl.c
+@@ -60,17 +60,23 @@ irqreturn_t cnl_ipc4_irq_thread(int irq, void *context)
+
+ if (primary & SOF_IPC4_MSG_DIR_MASK) {
+ /* Reply received */
+- struct sof_ipc4_msg *data = sdev->ipc->msg.reply_data;
++ if (likely(sdev->fw_state == SOF_FW_BOOT_COMPLETE)) {
++ struct sof_ipc4_msg *data = sdev->ipc->msg.reply_data;
+
+- data->primary = primary;
+- data->extension = extension;
++ data->primary = primary;
++ data->extension = extension;
+
+- spin_lock_irq(&sdev->ipc_lock);
++ spin_lock_irq(&sdev->ipc_lock);
+
+- snd_sof_ipc_get_reply(sdev);
+- snd_sof_ipc_reply(sdev, data->primary);
++ snd_sof_ipc_get_reply(sdev);
++ snd_sof_ipc_reply(sdev, data->primary);
+
+- spin_unlock_irq(&sdev->ipc_lock);
++ spin_unlock_irq(&sdev->ipc_lock);
++ } else {
++ dev_dbg_ratelimited(sdev->dev,
++ "IPC reply before FW_READY: %#x|%#x\n",
++ primary, extension);
++ }
+ } else {
+ /* Notification received */
+ notification_data.primary = primary;
+@@ -124,15 +130,20 @@ irqreturn_t cnl_ipc_irq_thread(int irq, void *context)
+ CNL_DSP_REG_HIPCCTL,
+ CNL_DSP_REG_HIPCCTL_DONE, 0);
+
+- spin_lock_irq(&sdev->ipc_lock);
++ if (likely(sdev->fw_state == SOF_FW_BOOT_COMPLETE)) {
++ spin_lock_irq(&sdev->ipc_lock);
+
+- /* handle immediate reply from DSP core */
+- hda_dsp_ipc_get_reply(sdev);
+- snd_sof_ipc_reply(sdev, msg);
++ /* handle immediate reply from DSP core */
++ hda_dsp_ipc_get_reply(sdev);
++ snd_sof_ipc_reply(sdev, msg);
+
+- cnl_ipc_dsp_done(sdev);
++ cnl_ipc_dsp_done(sdev);
+
+- spin_unlock_irq(&sdev->ipc_lock);
++ spin_unlock_irq(&sdev->ipc_lock);
++ } else {
++ dev_dbg_ratelimited(sdev->dev, "IPC reply before FW_READY: %#x\n",
++ msg);
++ }
+
+ ipc_irq = true;
+ }
+diff --git a/sound/soc/sof/intel/hda-ipc.c b/sound/soc/sof/intel/hda-ipc.c
+index f080112499552..65e688f749eaf 100644
+--- a/sound/soc/sof/intel/hda-ipc.c
++++ b/sound/soc/sof/intel/hda-ipc.c
+@@ -148,17 +148,23 @@ irqreturn_t hda_dsp_ipc4_irq_thread(int irq, void *context)
+
+ if (primary & SOF_IPC4_MSG_DIR_MASK) {
+ /* Reply received */
+- struct sof_ipc4_msg *data = sdev->ipc->msg.reply_data;
++ if (likely(sdev->fw_state == SOF_FW_BOOT_COMPLETE)) {
++ struct sof_ipc4_msg *data = sdev->ipc->msg.reply_data;
+
+- data->primary = primary;
+- data->extension = extension;
++ data->primary = primary;
++ data->extension = extension;
+
+- spin_lock_irq(&sdev->ipc_lock);
++ spin_lock_irq(&sdev->ipc_lock);
+
+- snd_sof_ipc_get_reply(sdev);
+- snd_sof_ipc_reply(sdev, data->primary);
++ snd_sof_ipc_get_reply(sdev);
++ snd_sof_ipc_reply(sdev, data->primary);
+
+- spin_unlock_irq(&sdev->ipc_lock);
++ spin_unlock_irq(&sdev->ipc_lock);
++ } else {
++ dev_dbg_ratelimited(sdev->dev,
++ "IPC reply before FW_READY: %#x|%#x\n",
++ primary, extension);
++ }
+ } else {
+ /* Notification received */
+
+@@ -225,16 +231,21 @@ irqreturn_t hda_dsp_ipc_irq_thread(int irq, void *context)
+ * place, the message might not yet be marked as expecting a
+ * reply.
+ */
+- spin_lock_irq(&sdev->ipc_lock);
++ if (likely(sdev->fw_state == SOF_FW_BOOT_COMPLETE)) {
++ spin_lock_irq(&sdev->ipc_lock);
+
+- /* handle immediate reply from DSP core */
+- hda_dsp_ipc_get_reply(sdev);
+- snd_sof_ipc_reply(sdev, msg);
++ /* handle immediate reply from DSP core */
++ hda_dsp_ipc_get_reply(sdev);
++ snd_sof_ipc_reply(sdev, msg);
+
+- /* set the done bit */
+- hda_dsp_ipc_dsp_done(sdev);
++ /* set the done bit */
++ hda_dsp_ipc_dsp_done(sdev);
+
+- spin_unlock_irq(&sdev->ipc_lock);
++ spin_unlock_irq(&sdev->ipc_lock);
++ } else {
++ dev_dbg_ratelimited(sdev->dev, "IPC reply before FW_READY: %#x\n",
++ msg);
++ }
+
+ ipc_irq = true;
+ }
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index bc07df1fc39f0..17f2f3a982c38 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -467,7 +467,7 @@ static void hda_dsp_dump_ext_rom_status(struct snd_sof_dev *sdev, const char *le
+ chip = get_chip_info(sdev->pdata);
+ for (i = 0; i < HDA_EXT_ROM_STATUS_SIZE; i++) {
+ value = snd_sof_dsp_read(sdev, HDA_DSP_BAR, chip->rom_status_reg + i * 0x4);
+- len += snprintf(msg + len, sizeof(msg) - len, " 0x%x", value);
++ len += scnprintf(msg + len, sizeof(msg) - len, " 0x%x", value);
+ }
+
+ dev_printk(level, sdev->dev, "extended rom status: %s", msg);
+@@ -1395,6 +1395,7 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+
+ if (mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_SSP_NUMBER &&
+ mach->mach_params.i2s_link_mask) {
++ const struct sof_intel_dsp_desc *chip = get_chip_info(sdev->pdata);
+ int ssp_num;
+
+ if (hweight_long(mach->mach_params.i2s_link_mask) > 1 &&
+@@ -1404,6 +1405,12 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+ /* fls returns 1-based results, SSPs indices are 0-based */
+ ssp_num = fls(mach->mach_params.i2s_link_mask) - 1;
+
++ if (ssp_num >= chip->ssp_count) {
++ dev_err(sdev->dev, "Invalid SSP %d, max on this platform is %d\n",
++ ssp_num, chip->ssp_count);
++ return NULL;
++ }
++
+ tplg_filename = devm_kasprintf(sdev->dev, GFP_KERNEL,
+ "%s%s%d",
+ sof_pdata->tplg_filename,
+diff --git a/sound/soc/sof/sof-client-probes.c b/sound/soc/sof/sof-client-probes.c
+index 34e6bd356e717..60e4250fac876 100644
+--- a/sound/soc/sof/sof-client-probes.c
++++ b/sound/soc/sof/sof-client-probes.c
+@@ -693,6 +693,10 @@ static int sof_probes_client_probe(struct auxiliary_device *auxdev,
+ if (!sof_probes_enabled)
+ return -ENXIO;
+
++ /* only ipc3 is supported */
++ if (sof_client_get_ipc_type(cdev) != SOF_IPC)
++ return -ENXIO;
++
+ if (!dev->platform_data) {
+ dev_err(dev, "missing platform data\n");
+ return -ENODEV;
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 0fff96a5d3ab4..d356743de2ff9 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -387,6 +387,14 @@ static const struct usb_audio_device_name usb_audio_names[] = {
+ DEVICE_NAME(0x05e1, 0x0408, "Syntek", "STK1160"),
+ DEVICE_NAME(0x05e1, 0x0480, "Hauppauge", "Woodbury"),
+
++ /* ASUS ROG Zenith II: this machine has also two devices, one for
++ * the front headphone and another for the rest
++ */
++ PROFILE_NAME(0x0b05, 0x1915, "ASUS", "Zenith II Front Headphone",
++ "Zenith-II-Front-Headphone"),
++ PROFILE_NAME(0x0b05, 0x1916, "ASUS", "Zenith II Main Audio",
++ "Zenith-II-Main-Audio"),
++
+ /* ASUS ROG Strix */
+ PROFILE_NAME(0x0b05, 0x1917,
+ "Realtek", "ALC1220-VB-DT", "Realtek-ALC1220-VB-Desktop"),
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 3c795675f048b..f4bd1e8ae4b6c 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -374,13 +374,28 @@ static const struct usbmix_name_map corsair_virtuoso_map[] = {
+ { 0 }
+ };
+
+-/* Some mobos shipped with a dummy HD-audio show the invalid GET_MIN/GET_MAX
+- * response for Input Gain Pad (id=19, control=12) and the connector status
+- * for SPDIF terminal (id=18). Skip them.
+- */
+-static const struct usbmix_name_map asus_rog_map[] = {
+- { 18, NULL }, /* OT, connector control */
+- { 19, NULL, 12 }, /* FU, Input Gain Pad */
++/* ASUS ROG Zenith II with Realtek ALC1220-VB */
++static const struct usbmix_name_map asus_zenith_ii_map[] = {
++ { 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */
++ { 16, "Speaker" }, /* OT */
++ { 22, "Speaker Playback" }, /* FU */
++ { 7, "Line" }, /* IT */
++ { 19, "Line Capture" }, /* FU */
++ { 8, "Mic" }, /* IT */
++ { 20, "Mic Capture" }, /* FU */
++ { 9, "Front Mic" }, /* IT */
++ { 21, "Front Mic Capture" }, /* FU */
++ { 17, "IEC958" }, /* OT */
++ { 23, "IEC958 Playback" }, /* FU */
++ {}
++};
++
++static const struct usbmix_connector_map asus_zenith_ii_connector_map[] = {
++ { 10, 16 }, /* (Back) Speaker */
++ { 11, 17 }, /* SPDIF */
++ { 13, 7 }, /* Line */
++ { 14, 8 }, /* Mic */
++ { 15, 9 }, /* Front Mic */
+ {}
+ };
+
+@@ -611,9 +626,10 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ .map = gigabyte_b450_map,
+ .connector_map = gigabyte_b450_connector_map,
+ },
+- { /* ASUS ROG Zenith II */
++ { /* ASUS ROG Zenith II (main audio) */
+ .id = USB_ID(0x0b05, 0x1916),
+- .map = asus_rog_map,
++ .map = asus_zenith_ii_map,
++ .connector_map = asus_zenith_ii_connector_map,
+ },
+ { /* ASUS ROG Strix */
+ .id = USB_ID(0x0b05, 0x1917),
+diff --git a/tools/build/feature/test-libcrypto.c b/tools/build/feature/test-libcrypto.c
+index a98174e0569c8..bc34a5bbb5049 100644
+--- a/tools/build/feature/test-libcrypto.c
++++ b/tools/build/feature/test-libcrypto.c
+@@ -1,16 +1,23 @@
+ // SPDX-License-Identifier: GPL-2.0
++#include <openssl/evp.h>
+ #include <openssl/sha.h>
+ #include <openssl/md5.h>
+
+ int main(void)
+ {
+- MD5_CTX context;
++ EVP_MD_CTX *mdctx;
+ unsigned char md[MD5_DIGEST_LENGTH + SHA_DIGEST_LENGTH];
+ unsigned char dat[] = "12345";
++ unsigned int digest_len;
+
+- MD5_Init(&context);
+- MD5_Update(&context, &dat[0], sizeof(dat));
+- MD5_Final(&md[0], &context);
++ mdctx = EVP_MD_CTX_new();
++ if (!mdctx)
++ return 0;
++
++ EVP_DigestInit_ex(mdctx, EVP_md5(), NULL);
++ EVP_DigestUpdate(mdctx, &dat[0], sizeof(dat));
++ EVP_DigestFinal_ex(mdctx, &md[0], &digest_len);
++ EVP_MD_CTX_free(mdctx);
+
+ SHA1(&dat[0], sizeof(dat), &md[0]);
+
+diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
+index bd6f4505e7b1e..70adf7b119b99 100644
+--- a/tools/lib/bpf/skel_internal.h
++++ b/tools/lib/bpf/skel_internal.h
+@@ -66,13 +66,13 @@ struct bpf_load_and_run_opts {
+ const char *errstr;
+ };
+
+-long bpf_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
++long kern_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
+
+ static inline int skel_sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
+ unsigned int size)
+ {
+ #ifdef __KERNEL__
+- return bpf_sys_bpf(cmd, attr, size);
++ return kern_sys_bpf(cmd, attr, size);
+ #else
+ return syscall(__NR_bpf, cmd, attr, size);
+ #endif
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index b341f8a8c7c56..31c719f99f66e 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -4096,7 +4096,8 @@ static int validate_ibt(struct objtool_file *file)
+ * These sections can reference text addresses, but not with
+ * the intent to indirect branch to them.
+ */
+- if (!strncmp(sec->name, ".discard", 8) ||
++ if ((!strncmp(sec->name, ".discard", 8) &&
++ strcmp(sec->name, ".discard.ibt_endbr_noseal")) ||
+ !strncmp(sec->name, ".debug", 6) ||
+ !strcmp(sec->name, ".altinstructions") ||
+ !strcmp(sec->name, ".ibt_endbr_seal") ||
+diff --git a/tools/perf/tests/switch-tracking.c b/tools/perf/tests/switch-tracking.c
+index 0c0c2328bf4e6..6f53bee33f7cb 100644
+--- a/tools/perf/tests/switch-tracking.c
++++ b/tools/perf/tests/switch-tracking.c
+@@ -324,6 +324,7 @@ out_free_nodes:
+ static int test__switch_tracking(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
+ {
+ const char *sched_switch = "sched:sched_switch";
++ const char *cycles = "cycles:u";
+ struct switch_tracking switch_tracking = { .tids = NULL, };
+ struct record_opts opts = {
+ .mmap_pages = UINT_MAX,
+@@ -372,12 +373,19 @@ static int test__switch_tracking(struct test_suite *test __maybe_unused, int sub
+ cpu_clocks_evsel = evlist__last(evlist);
+
+ /* Second event */
+- if (perf_pmu__has_hybrid())
+- err = parse_events(evlist, "cpu_core/cycles/u", NULL);
+- else
+- err = parse_events(evlist, "cycles:u", NULL);
++ if (perf_pmu__has_hybrid()) {
++ cycles = "cpu_core/cycles/u";
++ err = parse_events(evlist, cycles, NULL);
++ if (err) {
++ cycles = "cpu_atom/cycles/u";
++ pr_debug("Trying %s\n", cycles);
++ err = parse_events(evlist, cycles, NULL);
++ }
++ } else {
++ err = parse_events(evlist, cycles, NULL);
++ }
+ if (err) {
+- pr_debug("Failed to parse event cycles:u\n");
++ pr_debug("Failed to parse event %s\n", cycles);
+ goto out_err;
+ }
+
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 7ed2357404316..700c95eafd62a 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -2391,9 +2391,12 @@ void parse_events_error__exit(struct parse_events_error *err)
+ void parse_events_error__handle(struct parse_events_error *err, int idx,
+ char *str, char *help)
+ {
+- if (WARN(!str, "WARNING: failed to provide error string\n")) {
+- free(help);
+- return;
++ if (WARN(!str, "WARNING: failed to provide error string\n"))
++ goto out_free;
++ if (!err) {
++ /* Assume caller does not want message printed */
++ pr_debug("event syntax error: %s\n", str);
++ goto out_free;
+ }
+ switch (err->num_errors) {
+ case 0:
+@@ -2419,6 +2422,11 @@ void parse_events_error__handle(struct parse_events_error *err, int idx,
+ break;
+ }
+ err->num_errors++;
++ return;
++
++out_free:
++ free(str);
++ free(help);
+ }
+
+ #define MAX_WIDTH 1000
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index 062b5cbe67aff..dee6c527021c2 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -1775,8 +1775,10 @@ int parse_perf_probe_command(const char *cmd, struct perf_probe_event *pev)
+ if (!pev->event && pev->point.function && pev->point.line
+ && !pev->point.lazy_line && !pev->point.offset) {
+ if (asprintf(&pev->event, "%s_L%d", pev->point.function,
+- pev->point.line) < 0)
+- return -ENOMEM;
++ pev->point.line) < 0) {
++ ret = -ENOMEM;
++ goto out;
++ }
+ }
+
+ /* Copy arguments and ensure return probe has no C argument */
+diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
+index 431f2bddf6c83..1f4f72d887f91 100644
+--- a/tools/testing/cxl/test/cxl.c
++++ b/tools/testing/cxl/test/cxl.c
+@@ -466,7 +466,6 @@ static int mock_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm)
+ .end = -1,
+ };
+
+- cxld->flags = CXL_DECODER_F_ENABLE;
+ cxld->interleave_ways = min_not_zero(target_count, 1);
+ cxld->interleave_granularity = SZ_4K;
+ cxld->target_type = CXL_DECODER_EXPANDER;
+diff --git a/tools/testing/cxl/test/mock.c b/tools/testing/cxl/test/mock.c
+index f1f8c40948c5c..bce6a21df0d58 100644
+--- a/tools/testing/cxl/test/mock.c
++++ b/tools/testing/cxl/test/mock.c
+@@ -208,13 +208,15 @@ int __wrap_cxl_await_media_ready(struct cxl_dev_state *cxlds)
+ }
+ EXPORT_SYMBOL_NS_GPL(__wrap_cxl_await_media_ready, CXL);
+
+-bool __wrap_cxl_hdm_decode_init(struct cxl_dev_state *cxlds,
+- struct cxl_hdm *cxlhdm)
++int __wrap_cxl_hdm_decode_init(struct cxl_dev_state *cxlds,
++ struct cxl_hdm *cxlhdm)
+ {
+ int rc = 0, index;
+ struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
+
+- if (!ops || !ops->is_mock_dev(cxlds->dev))
++ if (ops && ops->is_mock_dev(cxlds->dev))
++ rc = 0;
++ else
+ rc = cxl_hdm_decode_init(cxlds, cxlhdm);
+ put_cxl_mock_ops(index);
+
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
+index fa928b431555c..7c02509c71d0a 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
+@@ -21,7 +21,6 @@ check_error 'p:^/bar vfs_read' # NO_GROUP_NAME
+ check_error 'p:^12345678901234567890123456789012345678901234567890123456789012345/bar vfs_read' # GROUP_TOO_LONG
+
+ check_error 'p:^foo.1/bar vfs_read' # BAD_GROUP_NAME
+-check_error 'p:foo/^ vfs_read' # NO_EVENT_NAME
+ check_error 'p:foo/^12345678901234567890123456789012345678901234567890123456789012345 vfs_read' # EVENT_TOO_LONG
+ check_error 'p:foo/^bar.1 vfs_read' # BAD_EVENT_NAME
+
+diff --git a/tools/testing/selftests/net/forwarding/custom_multipath_hash.sh b/tools/testing/selftests/net/forwarding/custom_multipath_hash.sh
+index a15d21dc035a6..56eb83d1a3bdd 100755
+--- a/tools/testing/selftests/net/forwarding/custom_multipath_hash.sh
++++ b/tools/testing/selftests/net/forwarding/custom_multipath_hash.sh
+@@ -181,37 +181,43 @@ ping_ipv6()
+
+ send_src_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_src_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+ send_src_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:4::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:4::2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B "2001:db8:4::2-2001:db8:4::fd" \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B "2001:db8:4::2-2001:db8:4::fd" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+@@ -226,13 +232,15 @@ send_flowlabel()
+
+ send_src_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:4::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:4::2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:4::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:4::2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+diff --git a/tools/testing/selftests/net/forwarding/gre_custom_multipath_hash.sh b/tools/testing/selftests/net/forwarding/gre_custom_multipath_hash.sh
+index a73f52efcb6cf..0446db9c6f748 100755
+--- a/tools/testing/selftests/net/forwarding/gre_custom_multipath_hash.sh
++++ b/tools/testing/selftests/net/forwarding/gre_custom_multipath_hash.sh
+@@ -276,37 +276,43 @@ ping_ipv6()
+
+ send_src_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_src_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+ send_src_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+@@ -321,13 +327,15 @@ send_flowlabel()
+
+ send_src_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:2::2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:2::2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+diff --git a/tools/testing/selftests/net/forwarding/ip6gre_custom_multipath_hash.sh b/tools/testing/selftests/net/forwarding/ip6gre_custom_multipath_hash.sh
+index 8fea2c2e0b25d..d40183b4eccc8 100755
+--- a/tools/testing/selftests/net/forwarding/ip6gre_custom_multipath_hash.sh
++++ b/tools/testing/selftests/net/forwarding/ip6gre_custom_multipath_hash.sh
+@@ -278,37 +278,43 @@ ping_ipv6()
+
+ send_src_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A "198.51.100.2-198.51.100.253" -B 203.0.113.2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B "203.0.113.2-203.0.113.253" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_src_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp4()
+ {
+- $MZ $h1 -q -p 64 -A 198.51.100.2 -B 203.0.113.2 \
++ ip vrf exec v$h1 $MZ $h1 -q -p 64 \
++ -A 198.51.100.2 -B 203.0.113.2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+ send_src_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A "2001:db8:1::2-2001:db8:1::fd" -B 2001:db8:2::2 \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+ send_dst_ipv6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B "2001:db8:2::2-2001:db8:2::fd" \
+ -d 1msec -c 50 -t udp "sp=20000,dp=30000"
+ }
+
+@@ -323,13 +329,15 @@ send_flowlabel()
+
+ send_src_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:2::2 \
+ -d 1msec -t udp "sp=0-32768,dp=30000"
+ }
+
+ send_dst_udp6()
+ {
+- $MZ -6 $h1 -q -p 64 -A 2001:db8:1::2 -B 2001:db8:2::2 \
++ ip vrf exec v$h1 $MZ -6 $h1 -q -p 64 \
++ -A 2001:db8:1::2 -B 2001:db8:2::2 \
+ -d 1msec -t udp "sp=20000,dp=0-32768"
+ }
+
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index e2ea6c126c99f..24d4e9cb617e4 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -553,6 +553,18 @@ static void set_nonblock(int fd, bool nonblock)
+ fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
+ }
+
++static void shut_wr(int fd)
++{
++ /* Close our write side, ev. give some time
++ * for address notification and/or checking
++ * the current status
++ */
++ if (cfg_wait)
++ usleep(cfg_wait);
++
++ shutdown(fd, SHUT_WR);
++}
++
+ static int copyfd_io_poll(int infd, int peerfd, int outfd, bool *in_closed_after_out)
+ {
+ struct pollfd fds = {
+@@ -630,14 +642,7 @@ static int copyfd_io_poll(int infd, int peerfd, int outfd, bool *in_closed_after
+ /* ... and peer also closed already */
+ break;
+
+- /* ... but we still receive.
+- * Close our write side, ev. give some time
+- * for address notification and/or checking
+- * the current status
+- */
+- if (cfg_wait)
+- usleep(cfg_wait);
+- shutdown(peerfd, SHUT_WR);
++ shut_wr(peerfd);
+ } else {
+ if (errno == EINTR)
+ continue;
+@@ -767,7 +772,7 @@ static int copyfd_io_mmap(int infd, int peerfd, int outfd,
+ if (err)
+ return err;
+
+- shutdown(peerfd, SHUT_WR);
++ shut_wr(peerfd);
+
+ err = do_recvfile(peerfd, outfd);
+ *in_closed_after_out = true;
+@@ -791,6 +796,9 @@ static int copyfd_io_sendfile(int infd, int peerfd, int outfd,
+ err = do_sendfile(infd, peerfd, size);
+ if (err)
+ return err;
++
++ shut_wr(peerfd);
++
+ err = do_recvfile(peerfd, outfd);
+ *in_closed_after_out = true;
+ }
+diff --git a/tools/tracing/rtla/Makefile b/tools/tracing/rtla/Makefile
+index 1bea2d16d4c11..b8fe10d941ce3 100644
+--- a/tools/tracing/rtla/Makefile
++++ b/tools/tracing/rtla/Makefile
+@@ -108,9 +108,9 @@ install: doc_install
+ $(INSTALL) rtla -m 755 $(DESTDIR)$(BINDIR)
+ $(STRIP) $(DESTDIR)$(BINDIR)/rtla
+ @test ! -f $(DESTDIR)$(BINDIR)/osnoise || rm $(DESTDIR)$(BINDIR)/osnoise
+- ln -s $(DESTDIR)$(BINDIR)/rtla $(DESTDIR)$(BINDIR)/osnoise
++ ln -s rtla $(DESTDIR)$(BINDIR)/osnoise
+ @test ! -f $(DESTDIR)$(BINDIR)/timerlat || rm $(DESTDIR)$(BINDIR)/timerlat
+- ln -s $(DESTDIR)$(BINDIR)/rtla $(DESTDIR)$(BINDIR)/timerlat
++ ln -s rtla $(DESTDIR)$(BINDIR)/timerlat
+
+ .PHONY: clean tarball
+ clean: doc_clean
+diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
+index 5b98f3ee58a58..0fffaeedee767 100644
+--- a/tools/vm/slabinfo.c
++++ b/tools/vm/slabinfo.c
+@@ -125,7 +125,7 @@ static void usage(void)
+ "-n|--numa Show NUMA information\n"
+ "-N|--lines=K Show the first K slabs\n"
+ "-o|--ops Show kmem_cache_ops\n"
+- "-P|--partial Sort by number of partial slabs\n"
++ "-P|--partial Sort by number of partial slabs\n"
+ "-r|--report Detailed report on single slabs\n"
+ "-s|--shrink Shrink slabs\n"
+ "-S|--Size Sort by size\n"
+@@ -1067,15 +1067,27 @@ static void sort_slabs(void)
+ for (s2 = s1 + 1; s2 < slabinfo + slabs; s2++) {
+ int result;
+
+- if (sort_size)
+- result = slab_size(s1) < slab_size(s2);
+- else if (sort_active)
+- result = slab_activity(s1) < slab_activity(s2);
+- else if (sort_loss)
+- result = slab_waste(s1) < slab_waste(s2);
+- else if (sort_partial)
+- result = s1->partial < s2->partial;
+- else
++ if (sort_size) {
++ if (slab_size(s1) == slab_size(s2))
++ result = strcasecmp(s1->name, s2->name);
++ else
++ result = slab_size(s1) < slab_size(s2);
++ } else if (sort_active) {
++ if (slab_activity(s1) == slab_activity(s2))
++ result = strcasecmp(s1->name, s2->name);
++ else
++ result = slab_activity(s1) < slab_activity(s2);
++ } else if (sort_loss) {
++ if (slab_waste(s1) == slab_waste(s2))
++ result = strcasecmp(s1->name, s2->name);
++ else
++ result = slab_waste(s1) < slab_waste(s2);
++ } else if (sort_partial) {
++ if (s1->partial == s2->partial)
++ result = strcasecmp(s1->name, s2->name);
++ else
++ result = s1->partial < s2->partial;
++ } else
+ result = strcasecmp(s1->name, s2->name);
+
+ if (show_inverted)
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 98246f3dea87c..c56861ed0e382 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1085,6 +1085,9 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ if (!kvm)
+ return ERR_PTR(-ENOMEM);
+
++ /* KVM is pinned via open("/dev/kvm"), the fd passed to this ioctl(). */
++ __module_get(kvm_chardev_ops.owner);
++
+ KVM_MMU_LOCK_INIT(kvm);
+ mmgrab(current->mm);
+ kvm->mm = current->mm;
+@@ -1170,16 +1173,6 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ preempt_notifier_inc();
+ kvm_init_pm_notifier(kvm);
+
+- /*
+- * When the fd passed to this ioctl() is opened it pins the module,
+- * but try_module_get() also prevents getting a reference if the module
+- * is in MODULE_STATE_GOING (e.g. if someone ran "rmmod --wait").
+- */
+- if (!try_module_get(kvm_chardev_ops.owner)) {
+- r = -ENODEV;
+- goto out_err;
+- }
+-
+ return kvm;
+
+ out_err:
+@@ -1201,6 +1194,7 @@ out_err_no_irq_srcu:
+ out_err_no_srcu:
+ kvm_arch_free_vm(kvm);
+ mmdrop(current->mm);
++ module_put(kvm_chardev_ops.owner);
+ return ERR_PTR(r);
+ }
+