From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id 8AE9D1382C5
	for ; Mon, 11 May 2020 22:52:57 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id BE497E0871;
	Mon, 11 May 2020 22:52:56 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id 6ACF3E0871
	for ; Mon, 11 May 2020 22:52:56 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id EDDA734FDA2
	for ; Mon, 11 May 2020 22:52:54 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id AFE711B6
	for ; Mon, 11 May 2020 22:52:53 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1589237561.43d60aa903f572b62636daf590b98f65206078ed.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.4 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1222_linux-4.4.223.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 43d60aa903f572b62636daf590b98f65206078ed
X-VCS-Branch: 4.4
Date: Mon, 11 May 2020 22:52:53 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 2a58659e-dacb-4bef-ba54-2ed6054dc5e2
X-Archives-Hash: 43318a1b5defd2d93c3cf66300f56a7b

commit:     43d60aa903f572b62636daf590b98f65206078ed
Author:     Mike Pagano gentoo org>
AuthorDate: Mon May 11 22:52:41 2020 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Mon May 11 22:52:41 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=43d60aa9

Linux patch 4.4.223

Signed-off-by: Mike Pagano gentoo.org>

 0000_README              |    4 +
 1222_linux-4.4.223.patch | 9587 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 9591 insertions(+)

diff --git a/0000_README b/0000_README
index 80253f1..ea662bf 100644
--- a/0000_README
+++ b/0000_README
@@ -931,6 +931,10 @@ Patch:  1221_linux-4.4.222.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.4.222
 
+Patch:  1222_linux-4.4.223.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.4.223
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1222_linux-4.4.223.patch b/1222_linux-4.4.223.patch new file mode 100644 index 0000000..dca4367 --- /dev/null +++ b/1222_linux-4.4.223.patch @@ -0,0 +1,9587 @@ +diff --git a/Makefile b/Makefile +index 03f34df673d9..6b88acb0b9b1 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 4 + PATCHLEVEL = 4 +-SUBLEVEL = 222 ++SUBLEVEL = 223 + EXTRAVERSION = + NAME = Blurry Fish Butt + +diff --git a/arch/alpha/kernel/pci-sysfs.c b/arch/alpha/kernel/pci-sysfs.c +index 99e8d4796c96..92c0d460815b 100644 +--- a/arch/alpha/kernel/pci-sysfs.c ++++ b/arch/alpha/kernel/pci-sysfs.c +@@ -77,10 +77,10 @@ static int pci_mmap_resource(struct kobject *kobj, + if (i >= PCI_ROM_RESOURCE) + return -ENODEV; + +- if (!__pci_mmap_fits(pdev, i, vma, sparse)) ++ if (res->flags & IORESOURCE_MEM && iomem_is_exclusive(res->start)) + return -EINVAL; + +- if (iomem_is_exclusive(res->start)) ++ if (!__pci_mmap_fits(pdev, i, vma, sparse)) + return -EINVAL; + + pcibios_resource_to_bus(pdev->bus, &bar, res); +diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile +index 30bbc3746130..25b80021a1a3 100644 +--- a/arch/arm/boot/dts/Makefile ++++ b/arch/arm/boot/dts/Makefile +@@ -166,6 +166,7 @@ dtb-$(CONFIG_MACH_KIRKWOOD) += \ + kirkwood-ds109.dtb \ + kirkwood-ds110jv10.dtb \ + kirkwood-ds111.dtb \ ++ kirkwood-ds112.dtb \ + kirkwood-ds209.dtb \ + kirkwood-ds210.dtb \ + kirkwood-ds212.dtb \ +diff --git a/arch/arm/boot/dts/kirkwood-ds112.dts b/arch/arm/boot/dts/kirkwood-ds112.dts +index bf4143c6cb8f..b84af3da8c84 100644 +--- a/arch/arm/boot/dts/kirkwood-ds112.dts ++++ b/arch/arm/boot/dts/kirkwood-ds112.dts +@@ -14,7 +14,7 @@ + #include "kirkwood-synology.dtsi" + + / { +- model = "Synology DS111"; ++ model = "Synology DS112"; + compatible = "synology,ds111", "marvell,kirkwood"; + + memory { +diff --git a/arch/arm/boot/dts/kirkwood-lswvl.dts b/arch/arm/boot/dts/kirkwood-lswvl.dts +index 09eed3cea0af..36eec7392ab4 100644 +--- a/arch/arm/boot/dts/kirkwood-lswvl.dts ++++ 
b/arch/arm/boot/dts/kirkwood-lswvl.dts +@@ -1,7 +1,8 @@ + /* + * Device Tree file for Buffalo Linkstation LS-WVL/VL + * +- * Copyright (C) 2015, rogershimizu@gmail.com ++ * Copyright (C) 2015, 2016 ++ * Roger Shimizu + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License +@@ -156,21 +157,21 @@ + button@1 { + label = "Function Button"; + linux,code = ; +- gpios = <&gpio0 45 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 13 GPIO_ACTIVE_LOW>; + }; + + button@2 { + label = "Power-on Switch"; + linux,code = ; + linux,input-type = <5>; +- gpios = <&gpio0 46 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 14 GPIO_ACTIVE_LOW>; + }; + + button@3 { + label = "Power-auto Switch"; + linux,code = ; + linux,input-type = <5>; +- gpios = <&gpio0 47 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 15 GPIO_ACTIVE_LOW>; + }; + }; + +@@ -185,38 +186,38 @@ + + led@1 { + label = "lswvl:red:alarm"; +- gpios = <&gpio0 36 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 4 GPIO_ACTIVE_HIGH>; + }; + + led@2 { + label = "lswvl:red:func"; +- gpios = <&gpio0 37 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 5 GPIO_ACTIVE_HIGH>; + }; + + led@3 { + label = "lswvl:amber:info"; +- gpios = <&gpio0 38 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 6 GPIO_ACTIVE_HIGH>; + }; + + led@4 { + label = "lswvl:blue:func"; +- gpios = <&gpio0 39 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 7 GPIO_ACTIVE_HIGH>; + }; + + led@5 { + label = "lswvl:blue:power"; +- gpios = <&gpio0 40 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 8 GPIO_ACTIVE_LOW>; + default-state = "keep"; + }; + + led@6 { + label = "lswvl:red:hdderr0"; +- gpios = <&gpio0 34 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>; + }; + + led@7 { + label = "lswvl:red:hdderr1"; +- gpios = <&gpio0 35 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 3 GPIO_ACTIVE_HIGH>; + }; + }; + +@@ -233,7 +234,7 @@ + 3250 1 + 5000 0>; + +- alarm-gpios = <&gpio0 43 GPIO_ACTIVE_HIGH>; ++ alarm-gpios = <&gpio1 11 GPIO_ACTIVE_HIGH>; + }; + + restart_poweroff { +diff --git 
a/arch/arm/boot/dts/kirkwood-lswxl.dts b/arch/arm/boot/dts/kirkwood-lswxl.dts +index f5db16a08597..b13ec20a7088 100644 +--- a/arch/arm/boot/dts/kirkwood-lswxl.dts ++++ b/arch/arm/boot/dts/kirkwood-lswxl.dts +@@ -1,7 +1,8 @@ + /* + * Device Tree file for Buffalo Linkstation LS-WXL/WSXL + * +- * Copyright (C) 2015, rogershimizu@gmail.com ++ * Copyright (C) 2015, 2016 ++ * Roger Shimizu + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License +@@ -156,21 +157,21 @@ + button@1 { + label = "Function Button"; + linux,code = ; +- gpios = <&gpio1 41 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 9 GPIO_ACTIVE_LOW>; + }; + + button@2 { + label = "Power-on Switch"; + linux,code = ; + linux,input-type = <5>; +- gpios = <&gpio1 42 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 10 GPIO_ACTIVE_LOW>; + }; + + button@3 { + label = "Power-auto Switch"; + linux,code = ; + linux,input-type = <5>; +- gpios = <&gpio1 43 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 11 GPIO_ACTIVE_LOW>; + }; + }; + +@@ -185,12 +186,12 @@ + + led@1 { + label = "lswxl:blue:func"; +- gpios = <&gpio1 36 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 4 GPIO_ACTIVE_LOW>; + }; + + led@2 { + label = "lswxl:red:alarm"; +- gpios = <&gpio1 49 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 17 GPIO_ACTIVE_LOW>; + }; + + led@3 { +@@ -200,23 +201,23 @@ + + led@4 { + label = "lswxl:blue:power"; +- gpios = <&gpio1 8 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 7 GPIO_ACTIVE_HIGH>; ++ default-state = "keep"; + }; + + led@5 { + label = "lswxl:red:func"; +- gpios = <&gpio1 5 GPIO_ACTIVE_LOW>; +- default-state = "keep"; ++ gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>; + }; + + led@6 { + label = "lswxl:red:hdderr0"; +- gpios = <&gpio1 2 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio0 8 GPIO_ACTIVE_HIGH>; + }; + + led@7 { + label = "lswxl:red:hdderr1"; +- gpios = <&gpio1 3 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 14 GPIO_ACTIVE_HIGH>; + }; + }; + +@@ -225,15 +226,15 @@ + pinctrl-0 = <&pmx_fan_low &pmx_fan_high &pmx_fan_lock>; + 
pinctrl-names = "default"; + +- gpios = <&gpio0 47 GPIO_ACTIVE_LOW +- &gpio0 48 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 16 GPIO_ACTIVE_LOW ++ &gpio1 15 GPIO_ACTIVE_LOW>; + + gpio-fan,speed-map = <0 3 + 1500 2 + 3250 1 + 5000 0>; + +- alarm-gpios = <&gpio1 49 GPIO_ACTIVE_HIGH>; ++ alarm-gpios = <&gpio1 8 GPIO_ACTIVE_HIGH>; + }; + + restart_poweroff { +@@ -256,7 +257,7 @@ + enable-active-high; + regulator-always-on; + regulator-boot-on; +- gpio = <&gpio0 37 GPIO_ACTIVE_HIGH>; ++ gpio = <&gpio1 5 GPIO_ACTIVE_HIGH>; + }; + hdd_power0: regulator@2 { + compatible = "regulator-fixed"; +diff --git a/arch/arm/boot/dts/orion5x-linkstation-lswtgl.dts b/arch/arm/boot/dts/orion5x-linkstation-lswtgl.dts +index 3daec912b4bf..aae8a7aceab7 100644 +--- a/arch/arm/boot/dts/orion5x-linkstation-lswtgl.dts ++++ b/arch/arm/boot/dts/orion5x-linkstation-lswtgl.dts +@@ -1,7 +1,8 @@ + /* + * Device Tree file for Buffalo Linkstation LS-WTGL + * +- * Copyright (C) 2015, Roger Shimizu ++ * Copyright (C) 2015, 2016 ++ * Roger Shimizu + * + * This file is dual-licensed: you can use it either under the terms + * of the GPL or the X11 license, at your option. 
Note that this dual +@@ -69,8 +70,6 @@ + + internal-regs { + pinctrl: pinctrl@10000 { +- pinctrl-0 = <&pmx_usb_power &pmx_power_hdd +- &pmx_fan_low &pmx_fan_high &pmx_fan_lock>; + pinctrl-names = "default"; + + pmx_led_power: pmx-leds { +@@ -162,6 +161,7 @@ + led@1 { + label = "lswtgl:blue:power"; + gpios = <&gpio0 0 GPIO_ACTIVE_LOW>; ++ default-state = "keep"; + }; + + led@2 { +@@ -188,7 +188,7 @@ + 3250 1 + 5000 0>; + +- alarm-gpios = <&gpio0 2 GPIO_ACTIVE_HIGH>; ++ alarm-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>; + }; + + restart_poweroff { +@@ -228,6 +228,37 @@ + }; + }; + ++&devbus_bootcs { ++ status = "okay"; ++ devbus,keep-config; ++ ++ flash@0 { ++ compatible = "jedec-flash"; ++ reg = <0 0x40000>; ++ bank-width = <1>; ++ ++ partitions { ++ compatible = "fixed-partitions"; ++ #address-cells = <1>; ++ #size-cells = <1>; ++ ++ header@0 { ++ reg = <0 0x30000>; ++ read-only; ++ }; ++ ++ uboot@30000 { ++ reg = <0x30000 0xF000>; ++ read-only; ++ }; ++ ++ uboot_env@3F000 { ++ reg = <0x3F000 0x1000>; ++ }; ++ }; ++ }; ++}; ++ + &mdio { + status = "okay"; + +diff --git a/arch/arm/boot/dts/r8a7740-armadillo800eva.dts b/arch/arm/boot/dts/r8a7740-armadillo800eva.dts +index 105d9c95de4a..5c76dcc89df5 100644 +--- a/arch/arm/boot/dts/r8a7740-armadillo800eva.dts ++++ b/arch/arm/boot/dts/r8a7740-armadillo800eva.dts +@@ -180,7 +180,7 @@ + }; + + &extal1_clk { +- clock-frequency = <25000000>; ++ clock-frequency = <24000000>; + }; + &extal2_clk { + clock-frequency = <48000000>; +diff --git a/arch/arm/mach-imx/Kconfig b/arch/arm/mach-imx/Kconfig +index 8ceda2844c4f..9aa659e4c46e 100644 +--- a/arch/arm/mach-imx/Kconfig ++++ b/arch/arm/mach-imx/Kconfig +@@ -562,6 +562,7 @@ config SOC_IMX7D + select ARM_GIC + select HAVE_IMX_ANATOP + select HAVE_IMX_MMDC ++ select HAVE_IMX_SRC + help + This enables support for Freescale i.MX7 Dual processor. 
+ +diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c +index 1bc87c29467b..fcb48eb3ecdd 100644 +--- a/arch/arm/mach-omap2/omap_hwmod.c ++++ b/arch/arm/mach-omap2/omap_hwmod.c +@@ -2207,15 +2207,15 @@ static int _idle(struct omap_hwmod *oh) + + pr_debug("omap_hwmod: %s: idling\n", oh->name); + ++ if (_are_all_hardreset_lines_asserted(oh)) ++ return 0; ++ + if (oh->_state != _HWMOD_STATE_ENABLED) { + WARN(1, "omap_hwmod: %s: idle state can only be entered from enabled state\n", + oh->name); + return -EINVAL; + } + +- if (_are_all_hardreset_lines_asserted(oh)) +- return 0; +- + if (oh->class->sysc) + _idle_sysc(oh); + _del_initiator_dep(oh, mpu_oh); +@@ -2262,6 +2262,9 @@ static int _shutdown(struct omap_hwmod *oh) + int ret, i; + u8 prev_state; + ++ if (_are_all_hardreset_lines_asserted(oh)) ++ return 0; ++ + if (oh->_state != _HWMOD_STATE_IDLE && + oh->_state != _HWMOD_STATE_ENABLED) { + WARN(1, "omap_hwmod: %s: disabled state can only be entered from idle, or enabled state\n", +@@ -2269,9 +2272,6 @@ static int _shutdown(struct omap_hwmod *oh) + return -EINVAL; + } + +- if (_are_all_hardreset_lines_asserted(oh)) +- return 0; +- + pr_debug("omap_hwmod: %s: disabling\n", oh->name); + + if (oh->class->pre_shutdown) { +diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c +index 6297140dd84f..fb413052e10a 100644 +--- a/arch/arm64/net/bpf_jit_comp.c ++++ b/arch/arm64/net/bpf_jit_comp.c +@@ -482,6 +482,7 @@ emit_cond_jmp: + case BPF_JGE: + jmp_cond = A64_COND_CS; + break; ++ case BPF_JSET: + case BPF_JNE: + jmp_cond = A64_COND_NE; + break; +diff --git a/arch/mips/boot/dts/brcm/bcm7435.dtsi b/arch/mips/boot/dts/brcm/bcm7435.dtsi +index 8b9432cc062b..27b2b8e08503 100644 +--- a/arch/mips/boot/dts/brcm/bcm7435.dtsi ++++ b/arch/mips/boot/dts/brcm/bcm7435.dtsi +@@ -7,7 +7,7 @@ + #address-cells = <1>; + #size-cells = <0>; + +- mips-hpt-frequency = <163125000>; ++ mips-hpt-frequency = <175625000>; + + cpu@0 { + compatible = 
"brcm,bmips5200"; +diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c +index ed7c4f1fc6a0..9189730bd517 100644 +--- a/arch/mips/cavium-octeon/octeon-irq.c ++++ b/arch/mips/cavium-octeon/octeon-irq.c +@@ -1220,7 +1220,7 @@ static int octeon_irq_gpio_map(struct irq_domain *d, + + line = (hw + gpiod->base_hwirq) >> 6; + bit = (hw + gpiod->base_hwirq) & 63; +- if (line > ARRAY_SIZE(octeon_irq_ciu_to_irq) || ++ if (line >= ARRAY_SIZE(octeon_irq_ciu_to_irq) || + octeon_irq_ciu_to_irq[line][bit] != 0) + return -EINVAL; + +diff --git a/arch/mips/cavium-octeon/setup.c b/arch/mips/cavium-octeon/setup.c +index cd7101fb6227..6b9c608cdff1 100644 +--- a/arch/mips/cavium-octeon/setup.c ++++ b/arch/mips/cavium-octeon/setup.c +@@ -251,6 +251,17 @@ static void octeon_crash_shutdown(struct pt_regs *regs) + default_machine_crash_shutdown(regs); + } + ++#ifdef CONFIG_SMP ++void octeon_crash_smp_send_stop(void) ++{ ++ int cpu; ++ ++ /* disable watchdogs */ ++ for_each_online_cpu(cpu) ++ cvmx_write_csr(CVMX_CIU_WDOGX(cpu_logical_map(cpu)), 0); ++} ++#endif ++ + #endif /* CONFIG_KEXEC */ + + #ifdef CONFIG_CAVIUM_RESERVE32 +@@ -864,6 +875,9 @@ void __init prom_init(void) + _machine_kexec_shutdown = octeon_shutdown; + _machine_crash_shutdown = octeon_crash_shutdown; + _machine_kexec_prepare = octeon_kexec_prepare; ++#ifdef CONFIG_SMP ++ _crash_smp_send_stop = octeon_crash_smp_send_stop; ++#endif + #endif + + octeon_user_io_init(); +diff --git a/arch/mips/cavium-octeon/smp.c b/arch/mips/cavium-octeon/smp.c +index b7fa9ae28c36..bbd34b0f8d84 100644 +--- a/arch/mips/cavium-octeon/smp.c ++++ b/arch/mips/cavium-octeon/smp.c +@@ -239,6 +239,7 @@ static int octeon_cpu_disable(void) + return -ENOTSUPP; + + set_cpu_online(cpu, false); ++ calculate_cpu_foreign_map(); + cpumask_clear_cpu(cpu, &cpu_callin_map); + octeon_fixup_irqs(); + +diff --git a/arch/mips/include/asm/elf.h b/arch/mips/include/asm/elf.h +index b01a6ff468e0..ce985cefe11c 100644 +--- 
a/arch/mips/include/asm/elf.h ++++ b/arch/mips/include/asm/elf.h +@@ -420,6 +420,7 @@ extern const char *__elf_platform; + #define ELF_ET_DYN_BASE (TASK_SIZE / 3 * 2) + #endif + ++/* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */ + #define ARCH_DLINFO \ + do { \ + NEW_AUX_ENT(AT_SYSINFO_EHDR, \ +diff --git a/arch/mips/include/asm/kexec.h b/arch/mips/include/asm/kexec.h +index b6a4d4aa548f..cfdbe66575f4 100644 +--- a/arch/mips/include/asm/kexec.h ++++ b/arch/mips/include/asm/kexec.h +@@ -45,6 +45,7 @@ extern const unsigned char kexec_smp_wait[]; + extern unsigned long secondary_kexec_args[4]; + extern void (*relocated_kexec_smp_wait) (void *); + extern atomic_t kexec_ready_to_reboot; ++extern void (*_crash_smp_send_stop)(void); + #endif + #endif + +diff --git a/arch/mips/include/asm/r4kcache.h b/arch/mips/include/asm/r4kcache.h +index 38902bf97adc..667ca3c467b7 100644 +--- a/arch/mips/include/asm/r4kcache.h ++++ b/arch/mips/include/asm/r4kcache.h +@@ -210,7 +210,11 @@ static inline void protected_writeback_dcache_line(unsigned long addr) + + static inline void protected_writeback_scache_line(unsigned long addr) + { ++#ifdef CONFIG_EVA ++ protected_cachee_op(Hit_Writeback_Inv_SD, addr); ++#else + protected_cache_op(Hit_Writeback_Inv_SD, addr); ++#endif + } + + /* +diff --git a/arch/mips/include/asm/smp.h b/arch/mips/include/asm/smp.h +index 82852dfd8dab..5ce0fcc81e87 100644 +--- a/arch/mips/include/asm/smp.h ++++ b/arch/mips/include/asm/smp.h +@@ -63,6 +63,8 @@ extern cpumask_t cpu_coherent_mask; + + extern void asmlinkage smp_bootstrap(void); + ++extern void calculate_cpu_foreign_map(void); ++ + /* + * this function sends a 'reschedule' IPI to another CPU. 
+ * it goes straight through and wastes no time serializing +diff --git a/arch/mips/include/uapi/asm/auxvec.h b/arch/mips/include/uapi/asm/auxvec.h +index c9c7195272c4..45ba259a3618 100644 +--- a/arch/mips/include/uapi/asm/auxvec.h ++++ b/arch/mips/include/uapi/asm/auxvec.h +@@ -14,4 +14,6 @@ + /* Location of VDSO image. */ + #define AT_SYSINFO_EHDR 33 + ++#define AT_VECTOR_SIZE_ARCH 1 /* entries in ARCH_DLINFO */ ++ + #endif /* __ASM_AUXVEC_H */ +diff --git a/arch/mips/kernel/bmips_vec.S b/arch/mips/kernel/bmips_vec.S +index 86495072a922..d9495f3f3fad 100644 +--- a/arch/mips/kernel/bmips_vec.S ++++ b/arch/mips/kernel/bmips_vec.S +@@ -93,7 +93,8 @@ NESTED(bmips_reset_nmi_vec, PT_SIZE, sp) + #if defined(CONFIG_CPU_BMIPS5000) + mfc0 k0, CP0_PRID + li k1, PRID_IMP_BMIPS5000 +- andi k0, 0xff00 ++ /* mask with PRID_IMP_BMIPS5000 to cover both variants */ ++ andi k0, PRID_IMP_BMIPS5000 + bne k0, k1, 1f + + /* if we're not on core 0, this must be the SMP boot signal */ +@@ -166,10 +167,12 @@ bmips_smp_entry: + 2: + #endif /* CONFIG_CPU_BMIPS4350 || CONFIG_CPU_BMIPS4380 */ + #if defined(CONFIG_CPU_BMIPS5000) +- /* set exception vector base */ ++ /* mask with PRID_IMP_BMIPS5000 to cover both variants */ + li k1, PRID_IMP_BMIPS5000 ++ andi k0, PRID_IMP_BMIPS5000 + bne k0, k1, 3f + ++ /* set exception vector base */ + la k0, ebase + lw k0, 0(k0) + mtc0 k0, $15, 1 +@@ -263,6 +266,8 @@ LEAF(bmips_enable_xks01) + #endif /* CONFIG_CPU_BMIPS4380 */ + #if defined(CONFIG_CPU_BMIPS5000) + li t1, PRID_IMP_BMIPS5000 ++ /* mask with PRID_IMP_BMIPS5000 to cover both variants */ ++ andi t2, PRID_IMP_BMIPS5000 + bne t2, t1, 2f + + mfc0 t0, $22, 5 +diff --git a/arch/mips/kernel/branch.c b/arch/mips/kernel/branch.c +index 71e8f4c0b8da..a2e9ad37ea20 100644 +--- a/arch/mips/kernel/branch.c ++++ b/arch/mips/kernel/branch.c +@@ -685,21 +685,9 @@ int __compute_return_epc_for_insn(struct pt_regs *regs, + } + lose_fpu(1); /* Save FPU state for the emulator. 
*/ + reg = insn.i_format.rt; +- bit = 0; +- switch (insn.i_format.rs) { +- case bc1eqz_op: +- /* Test bit 0 */ +- if (get_fpr32(¤t->thread.fpu.fpr[reg], 0) +- & 0x1) +- bit = 1; +- break; +- case bc1nez_op: +- /* Test bit 0 */ +- if (!(get_fpr32(¤t->thread.fpu.fpr[reg], 0) +- & 0x1)) +- bit = 1; +- break; +- } ++ bit = get_fpr32(¤t->thread.fpu.fpr[reg], 0) & 0x1; ++ if (insn.i_format.rs == bc1eqz_op) ++ bit = !bit; + own_fpu(1); + if (bit) + epc = epc + 4 + +diff --git a/arch/mips/kernel/cps-vec.S b/arch/mips/kernel/cps-vec.S +index ac81edd44563..6b724436ac04 100644 +--- a/arch/mips/kernel/cps-vec.S ++++ b/arch/mips/kernel/cps-vec.S +@@ -245,7 +245,6 @@ LEAF(excep_intex) + + .org 0x480 + LEAF(excep_ejtag) +- DUMP_EXCEP("EJTAG") + PTR_LA k0, ejtag_debug_handler + jr k0 + nop +diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c +index 6b9064499bd3..157c08c37e68 100644 +--- a/arch/mips/kernel/cpu-probe.c ++++ b/arch/mips/kernel/cpu-probe.c +@@ -1284,7 +1284,10 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu) + case PRID_IMP_BMIPS5000: + case PRID_IMP_BMIPS5200: + c->cputype = CPU_BMIPS5000; +- __cpu_name[cpu] = "Broadcom BMIPS5000"; ++ if ((c->processor_id & PRID_IMP_MASK) == PRID_IMP_BMIPS5200) ++ __cpu_name[cpu] = "Broadcom BMIPS5200"; ++ else ++ __cpu_name[cpu] = "Broadcom BMIPS5000"; + set_elf_platform(cpu, "bmips5000"); + c->options |= MIPS_CPU_ULRI; + break; +diff --git a/arch/mips/kernel/crash.c b/arch/mips/kernel/crash.c +index 93c46c9cebb7..e757f36cea6f 100644 +--- a/arch/mips/kernel/crash.c ++++ b/arch/mips/kernel/crash.c +@@ -50,9 +50,14 @@ static void crash_shutdown_secondary(void *passed_regs) + + static void crash_kexec_prepare_cpus(void) + { ++ static int cpus_stopped; + unsigned int msecs; ++ unsigned int ncpus; + +- unsigned int ncpus = num_online_cpus() - 1;/* Excluding the panic cpu */ ++ if (cpus_stopped) ++ return; ++ ++ ncpus = num_online_cpus() - 1;/* Excluding the panic cpu */ + + 
dump_send_ipi(crash_shutdown_secondary); + smp_wmb(); +@@ -67,6 +72,17 @@ static void crash_kexec_prepare_cpus(void) + cpu_relax(); + mdelay(1); + } ++ ++ cpus_stopped = 1; ++} ++ ++/* Override the weak function in kernel/panic.c */ ++void crash_smp_send_stop(void) ++{ ++ if (_crash_smp_send_stop) ++ _crash_smp_send_stop(); ++ ++ crash_kexec_prepare_cpus(); + } + + #else /* !defined(CONFIG_SMP) */ +diff --git a/arch/mips/kernel/machine_kexec.c b/arch/mips/kernel/machine_kexec.c +index 92bc066e47a3..32b567e88b02 100644 +--- a/arch/mips/kernel/machine_kexec.c ++++ b/arch/mips/kernel/machine_kexec.c +@@ -25,6 +25,7 @@ void (*_machine_crash_shutdown)(struct pt_regs *regs) = NULL; + #ifdef CONFIG_SMP + void (*relocated_kexec_smp_wait) (void *); + atomic_t kexec_ready_to_reboot = ATOMIC_INIT(0); ++void (*_crash_smp_send_stop)(void) = NULL; + #endif + + int +diff --git a/arch/mips/kernel/perf_event_mipsxx.c b/arch/mips/kernel/perf_event_mipsxx.c +index d7b8dd43147a..fcc1117a73e0 100644 +--- a/arch/mips/kernel/perf_event_mipsxx.c ++++ b/arch/mips/kernel/perf_event_mipsxx.c +@@ -825,6 +825,16 @@ static const struct mips_perf_event mipsxxcore_event_map2 + [PERF_COUNT_HW_BRANCH_MISSES] = { 0x27, CNTR_ODD, T }, + }; + ++static const struct mips_perf_event i6400_event_map[PERF_COUNT_HW_MAX] = { ++ [PERF_COUNT_HW_CPU_CYCLES] = { 0x00, CNTR_EVEN | CNTR_ODD }, ++ [PERF_COUNT_HW_INSTRUCTIONS] = { 0x01, CNTR_EVEN | CNTR_ODD }, ++ /* These only count dcache, not icache */ ++ [PERF_COUNT_HW_CACHE_REFERENCES] = { 0x45, CNTR_EVEN | CNTR_ODD }, ++ [PERF_COUNT_HW_CACHE_MISSES] = { 0x48, CNTR_EVEN | CNTR_ODD }, ++ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = { 0x15, CNTR_EVEN | CNTR_ODD }, ++ [PERF_COUNT_HW_BRANCH_MISSES] = { 0x16, CNTR_EVEN | CNTR_ODD }, ++}; ++ + static const struct mips_perf_event loongson3_event_map[PERF_COUNT_HW_MAX] = { + [PERF_COUNT_HW_CPU_CYCLES] = { 0x00, CNTR_EVEN }, + [PERF_COUNT_HW_INSTRUCTIONS] = { 0x00, CNTR_ODD }, +@@ -1015,6 +1025,46 @@ static const struct 
mips_perf_event mipsxxcore_cache_map2 + }, + }; + ++static const struct mips_perf_event i6400_cache_map ++ [PERF_COUNT_HW_CACHE_MAX] ++ [PERF_COUNT_HW_CACHE_OP_MAX] ++ [PERF_COUNT_HW_CACHE_RESULT_MAX] = { ++[C(L1D)] = { ++ [C(OP_READ)] = { ++ [C(RESULT_ACCESS)] = { 0x46, CNTR_EVEN | CNTR_ODD }, ++ [C(RESULT_MISS)] = { 0x49, CNTR_EVEN | CNTR_ODD }, ++ }, ++ [C(OP_WRITE)] = { ++ [C(RESULT_ACCESS)] = { 0x47, CNTR_EVEN | CNTR_ODD }, ++ [C(RESULT_MISS)] = { 0x4a, CNTR_EVEN | CNTR_ODD }, ++ }, ++}, ++[C(L1I)] = { ++ [C(OP_READ)] = { ++ [C(RESULT_ACCESS)] = { 0x84, CNTR_EVEN | CNTR_ODD }, ++ [C(RESULT_MISS)] = { 0x85, CNTR_EVEN | CNTR_ODD }, ++ }, ++}, ++[C(DTLB)] = { ++ /* Can't distinguish read & write */ ++ [C(OP_READ)] = { ++ [C(RESULT_ACCESS)] = { 0x40, CNTR_EVEN | CNTR_ODD }, ++ [C(RESULT_MISS)] = { 0x41, CNTR_EVEN | CNTR_ODD }, ++ }, ++ [C(OP_WRITE)] = { ++ [C(RESULT_ACCESS)] = { 0x40, CNTR_EVEN | CNTR_ODD }, ++ [C(RESULT_MISS)] = { 0x41, CNTR_EVEN | CNTR_ODD }, ++ }, ++}, ++[C(BPU)] = { ++ /* Conditional branches / mispredicted */ ++ [C(OP_READ)] = { ++ [C(RESULT_ACCESS)] = { 0x15, CNTR_EVEN | CNTR_ODD }, ++ [C(RESULT_MISS)] = { 0x16, CNTR_EVEN | CNTR_ODD }, ++ }, ++}, ++}; ++ + static const struct mips_perf_event loongson3_cache_map + [PERF_COUNT_HW_CACHE_MAX] + [PERF_COUNT_HW_CACHE_OP_MAX] +@@ -1556,7 +1606,6 @@ static const struct mips_perf_event *mipsxx_pmu_map_raw_event(u64 config) + #endif + break; + case CPU_P5600: +- case CPU_I6400: + /* 8-bit event numbers */ + raw_id = config & 0x1ff; + base_id = raw_id & 0xff; +@@ -1569,6 +1618,11 @@ static const struct mips_perf_event *mipsxx_pmu_map_raw_event(u64 config) + raw_event.range = P; + #endif + break; ++ case CPU_I6400: ++ /* 8-bit event numbers */ ++ base_id = config & 0xff; ++ raw_event.cntr_mask = CNTR_EVEN | CNTR_ODD; ++ break; + case CPU_1004K: + if (IS_BOTH_COUNTERS_1004K_EVENT(base_id)) + raw_event.cntr_mask = CNTR_EVEN | CNTR_ODD; +@@ -1720,8 +1774,8 @@ init_hw_perf_events(void) + break; + case 
CPU_I6400: + mipspmu.name = "mips/I6400"; +- mipspmu.general_event_map = &mipsxxcore_event_map2; +- mipspmu.cache_event_map = &mipsxxcore_cache_map2; ++ mipspmu.general_event_map = &i6400_event_map; ++ mipspmu.cache_event_map = &i6400_cache_map; + break; + case CPU_1004K: + mipspmu.name = "mips/1004K"; +diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c +index 9d04392f7ef0..135e22611820 100644 +--- a/arch/mips/kernel/ptrace.c ++++ b/arch/mips/kernel/ptrace.c +@@ -670,9 +670,6 @@ static const struct pt_regs_offset regoffset_table[] = { + REG_OFFSET_NAME(c0_badvaddr, cp0_badvaddr), + REG_OFFSET_NAME(c0_cause, cp0_cause), + REG_OFFSET_NAME(c0_epc, cp0_epc), +-#ifdef CONFIG_MIPS_MT_SMTC +- REG_OFFSET_NAME(c0_tcstatus, cp0_tcstatus), +-#endif + #ifdef CONFIG_CPU_CAVIUM_OCTEON + REG_OFFSET_NAME(mpl0, mpl[0]), + REG_OFFSET_NAME(mpl1, mpl[1]), +diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S +index 29b0c5f978e4..7ee8c6269b22 100644 +--- a/arch/mips/kernel/scall32-o32.S ++++ b/arch/mips/kernel/scall32-o32.S +@@ -35,7 +35,6 @@ NESTED(handle_sys, PT_SIZE, sp) + + lw t1, PT_EPC(sp) # skip syscall on return + +- subu v0, v0, __NR_O32_Linux # check syscall number + addiu t1, 4 # skip to next instruction + sw t1, PT_EPC(sp) + +@@ -89,6 +88,7 @@ loads_done: + and t0, t1 + bnez t0, syscall_trace_entry # -> yes + syscall_common: ++ subu v0, v0, __NR_O32_Linux # check syscall number + sltiu t0, v0, __NR_O32_Linux_syscalls + 1 + beqz t0, illegal_syscall + +@@ -118,24 +118,23 @@ o32_syscall_exit: + + syscall_trace_entry: + SAVE_STATIC +- move s0, v0 + move a0, sp + + /* + * syscall number is in v0 unless we called syscall(__NR_###) + * where the real syscall number is in a0 + */ +- addiu a1, v0, __NR_O32_Linux +- bnez v0, 1f /* __NR_syscall at offset 0 */ ++ move a1, v0 ++ subu t2, v0, __NR_O32_Linux ++ bnez t2, 1f /* __NR_syscall at offset 0 */ + lw a1, PT_R4(sp) + + 1: jal syscall_trace_enter + + bltz v0, 1f # seccomp failed? 
Skip syscall + +- move v0, s0 # restore syscall +- + RESTORE_STATIC ++ lw v0, PT_R2(sp) # Restore syscall (maybe modified) + lw a0, PT_R4(sp) # Restore argument registers + lw a1, PT_R5(sp) + lw a2, PT_R6(sp) +diff --git a/arch/mips/kernel/scall64-64.S b/arch/mips/kernel/scall64-64.S +index a6323a969919..01779c315bc6 100644 +--- a/arch/mips/kernel/scall64-64.S ++++ b/arch/mips/kernel/scall64-64.S +@@ -82,15 +82,14 @@ n64_syscall_exit: + + syscall_trace_entry: + SAVE_STATIC +- move s0, v0 + move a0, sp + move a1, v0 + jal syscall_trace_enter + + bltz v0, 1f # seccomp failed? Skip syscall + +- move v0, s0 + RESTORE_STATIC ++ ld v0, PT_R2(sp) # Restore syscall (maybe modified) + ld a0, PT_R4(sp) # Restore argument registers + ld a1, PT_R5(sp) + ld a2, PT_R6(sp) +diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S +index e0fdca8d3abe..0d22a5cc0b8b 100644 +--- a/arch/mips/kernel/scall64-n32.S ++++ b/arch/mips/kernel/scall64-n32.S +@@ -42,9 +42,6 @@ NESTED(handle_sysn32, PT_SIZE, sp) + #endif + beqz t0, not_n32_scall + +- dsll t0, v0, 3 # offset into table +- ld t2, (sysn32_call_table - (__NR_N32_Linux * 8))(t0) +- + sd a3, PT_R26(sp) # save a3 for syscall restarting + + li t1, _TIF_WORK_SYSCALL_ENTRY +@@ -53,6 +50,9 @@ NESTED(handle_sysn32, PT_SIZE, sp) + bnez t0, n32_syscall_trace_entry + + syscall_common: ++ dsll t0, v0, 3 # offset into table ++ ld t2, (sysn32_call_table - (__NR_N32_Linux * 8))(t0) ++ + jalr t2 # Do The Real Thing (TM) + + li t0, -EMAXERRNO - 1 # error? +@@ -71,21 +71,25 @@ syscall_common: + + n32_syscall_trace_entry: + SAVE_STATIC +- move s0, t2 + move a0, sp + move a1, v0 + jal syscall_trace_enter + + bltz v0, 1f # seccomp failed? 
Skip syscall + +- move t2, s0 + RESTORE_STATIC ++ ld v0, PT_R2(sp) # Restore syscall (maybe modified) + ld a0, PT_R4(sp) # Restore argument registers + ld a1, PT_R5(sp) + ld a2, PT_R6(sp) + ld a3, PT_R7(sp) + ld a4, PT_R8(sp) + ld a5, PT_R9(sp) ++ ++ dsubu t2, v0, __NR_N32_Linux # check (new) syscall number ++ sltiu t0, t2, __NR_N32_Linux_syscalls + 1 ++ beqz t0, not_n32_scall ++ + j syscall_common + + 1: j syscall_exit +diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S +index 4faff3e77b25..a5cc2b2823d2 100644 +--- a/arch/mips/kernel/scall64-o32.S ++++ b/arch/mips/kernel/scall64-o32.S +@@ -52,9 +52,6 @@ NESTED(handle_sys, PT_SIZE, sp) + sll a2, a2, 0 + sll a3, a3, 0 + +- dsll t0, v0, 3 # offset into table +- ld t2, (sys32_call_table - (__NR_O32_Linux * 8))(t0) +- + sd a3, PT_R26(sp) # save a3 for syscall restarting + + /* +@@ -88,6 +85,9 @@ loads_done: + bnez t0, trace_a_syscall + + syscall_common: ++ dsll t0, v0, 3 # offset into table ++ ld t2, (sys32_call_table - (__NR_O32_Linux * 8))(t0) ++ + jalr t2 # Do The Real Thing (TM) + + li t0, -EMAXERRNO - 1 # error? +@@ -112,7 +112,6 @@ trace_a_syscall: + sd a6, PT_R10(sp) + sd a7, PT_R11(sp) # For indirect syscalls + +- move s0, t2 # Save syscall pointer + move a0, sp + /* + * absolute syscall number is in v0 unless we called syscall(__NR_###) +@@ -133,8 +132,8 @@ trace_a_syscall: + + bltz v0, 1f # seccomp failed? 
Skip syscall + +- move t2, s0 + RESTORE_STATIC ++ ld v0, PT_R2(sp) # Restore syscall (maybe modified) + ld a0, PT_R4(sp) # Restore argument registers + ld a1, PT_R5(sp) + ld a2, PT_R6(sp) +@@ -143,6 +142,11 @@ trace_a_syscall: + ld a5, PT_R9(sp) + ld a6, PT_R10(sp) + ld a7, PT_R11(sp) # For indirect syscalls ++ ++ dsubu t0, v0, __NR_O32_Linux # check (new) syscall number ++ sltiu t0, t0, __NR_O32_Linux_syscalls + 1 ++ beqz t0, not_o32_scall ++ + j syscall_common + + 1: j syscall_exit +diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c +index fadc946b306d..8fa30516f39d 100644 +--- a/arch/mips/kernel/setup.c ++++ b/arch/mips/kernel/setup.c +@@ -695,7 +695,7 @@ static void __init request_crashkernel(struct resource *res) + + #define USE_PROM_CMDLINE IS_ENABLED(CONFIG_MIPS_CMDLINE_FROM_BOOTLOADER) + #define USE_DTB_CMDLINE IS_ENABLED(CONFIG_MIPS_CMDLINE_FROM_DTB) +-#define EXTEND_WITH_PROM IS_ENABLED(CONFIG_MIPS_CMDLINE_EXTEND) ++#define EXTEND_WITH_PROM IS_ENABLED(CONFIG_MIPS_CMDLINE_DTB_EXTEND) + + static void __init arch_mem_init(char **cmdline_p) + { +diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c +index a62d24169d75..a00c4699ca10 100644 +--- a/arch/mips/kernel/smp-bmips.c ++++ b/arch/mips/kernel/smp-bmips.c +@@ -362,6 +362,7 @@ static int bmips_cpu_disable(void) + pr_info("SMP: CPU%d is offline\n", cpu); + + set_cpu_online(cpu, false); ++ calculate_cpu_foreign_map(); + cpumask_clear_cpu(cpu, &cpu_callin_map); + clear_c0_status(IE_IRQ5); + +diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c +index e04c8057b882..de0b7eaca9e2 100644 +--- a/arch/mips/kernel/smp-cps.c ++++ b/arch/mips/kernel/smp-cps.c +@@ -338,6 +338,7 @@ static int cps_cpu_disable(void) + atomic_sub(1 << cpu_vpe_id(¤t_cpu_data), &core_cfg->vpe_mask); + smp_mb__after_atomic(); + set_cpu_online(cpu, false); ++ calculate_cpu_foreign_map(); + cpumask_clear_cpu(cpu, &cpu_callin_map); + + return 0; +diff --git a/arch/mips/kernel/smp.c 
b/arch/mips/kernel/smp.c +index 4af08c197177..8eff08d548fa 100644 +--- a/arch/mips/kernel/smp.c ++++ b/arch/mips/kernel/smp.c +@@ -118,7 +118,7 @@ static inline void set_cpu_core_map(int cpu) + * Calculate a new cpu_foreign_map mask whenever a + * new cpu appears or disappears. + */ +-static inline void calculate_cpu_foreign_map(void) ++void calculate_cpu_foreign_map(void) + { + int i, k, core_present; + cpumask_t temp_foreign_map; +diff --git a/arch/mips/kvm/dyntrans.c b/arch/mips/kvm/dyntrans.c +index 521121bdebff..4974bfc2c5c8 100644 +--- a/arch/mips/kvm/dyntrans.c ++++ b/arch/mips/kvm/dyntrans.c +@@ -82,7 +82,7 @@ int kvm_mips_trans_mfc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu) + + if ((rd == MIPS_CP0_ERRCTL) && (sel == 0)) { + mfc0_inst = CLEAR_TEMPLATE; +- mfc0_inst |= ((rt & 0x1f) << 16); ++ mfc0_inst |= ((rt & 0x1f) << 11); + } else { + mfc0_inst = LW_TEMPLATE; + mfc0_inst |= ((rt & 0x1f) << 16); +diff --git a/arch/mips/loongson64/loongson-3/smp.c b/arch/mips/loongson64/loongson-3/smp.c +index 509832a9836c..2525b6d38f58 100644 +--- a/arch/mips/loongson64/loongson-3/smp.c ++++ b/arch/mips/loongson64/loongson-3/smp.c +@@ -417,6 +417,7 @@ static int loongson3_cpu_disable(void) + return -EBUSY; + + set_cpu_online(cpu, false); ++ calculate_cpu_foreign_map(); + cpumask_clear_cpu(cpu, &cpu_callin_map); + local_irq_save(flags); + fixup_irqs(); +diff --git a/arch/mips/math-emu/cp1emu.c b/arch/mips/math-emu/cp1emu.c +index 011b9b9574f1..e31fde4bc25b 100644 +--- a/arch/mips/math-emu/cp1emu.c ++++ b/arch/mips/math-emu/cp1emu.c +@@ -975,9 +975,10 @@ static int cop1Emulate(struct pt_regs *xcp, struct mips_fpu_struct *ctx, + struct mm_decoded_insn dec_insn, void *__user *fault_addr) + { + unsigned long contpc = xcp->cp0_epc + dec_insn.pc_inc; +- unsigned int cond, cbit; ++ unsigned int cond, cbit, bit0; + mips_instruction ir; + int likely, pc_inc; ++ union fpureg *fpr; + u32 __user *wva; + u64 __user *dva; + u32 wval; +@@ -1189,14 +1190,14 @@ emul: + return 
SIGILL; + + cond = likely = 0; ++ fpr = &current->thread.fpu.fpr[MIPSInst_RT(ir)]; ++ bit0 = get_fpr32(fpr, 0) & 0x1; + switch (MIPSInst_RS(ir)) { + case bc1eqz_op: +- if (get_fpr32(&current->thread.fpu.fpr[MIPSInst_RT(ir)], 0) & 0x1) +- cond = 1; ++ cond = bit0 == 0; + break; + case bc1nez_op: +- if (!(get_fpr32(&current->thread.fpu.fpr[MIPSInst_RT(ir)], 0) & 0x1)) +- cond = 1; ++ cond = bit0 != 0; + break; + } + goto branch_common; +diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c +index 52e8c2026853..6c0147bd8e80 100644 +--- a/arch/mips/mm/c-r4k.c ++++ b/arch/mips/mm/c-r4k.c +@@ -447,6 +447,11 @@ static inline void local_r4k___flush_cache_all(void * args) + r4k_blast_scache(); + break; + ++ case CPU_BMIPS5000: ++ r4k_blast_scache(); ++ __sync(); ++ break; ++ + default: + r4k_blast_dcache(); + r4k_blast_icache(); +@@ -1308,6 +1313,12 @@ static void probe_pcache(void) + c->icache.flags |= MIPS_CACHE_IC_F_DC; + break; + ++ case CPU_BMIPS5000: ++ c->icache.flags |= MIPS_CACHE_IC_F_DC; ++ /* Cache aliases are handled in hardware; allow HIGHMEM */ ++ c->dcache.flags &= ~MIPS_CACHE_ALIASES; ++ break; ++ + case CPU_LOONGSON2: + /* + * LOONGSON2 has 4 way icache, but when using indexed cache op, +@@ -1745,8 +1756,6 @@ void r4k_cache_init(void) + flush_icache_range = (void *)b5k_instruction_hazard; + local_flush_icache_range = (void *)b5k_instruction_hazard; + +- /* Cache aliases are handled in hardware; allow HIGHMEM */ +- current_cpu_data.dcache.flags &= ~MIPS_CACHE_ALIASES; + + /* Optimization: an L2 flush implicitly flushes the L1 */ + current_cpu_data.options |= MIPS_CPU_INCLUSIVE_CACHES; +diff --git a/arch/mips/mm/sc-rm7k.c b/arch/mips/mm/sc-rm7k.c +index 9ac1efcfbcc7..78f900c59276 100644 +--- a/arch/mips/mm/sc-rm7k.c ++++ b/arch/mips/mm/sc-rm7k.c +@@ -161,7 +161,7 @@ static void rm7k_tc_disable(void) + local_irq_save(flags); + blast_rm7k_tcache(); + clear_c0_config(RM7K_CONF_TE); +- local_irq_save(flags); ++ local_irq_restore(flags); + } + + static void rm7k_sc_disable(void)
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c +index 63b7d6f82d24..448b4aab3a1f 100644 +--- a/arch/mips/mm/tlbex.c ++++ b/arch/mips/mm/tlbex.c +@@ -2329,9 +2329,7 @@ static void config_htw_params(void) + if (CONFIG_PGTABLE_LEVELS >= 3) + pwsize |= ilog2(PTRS_PER_PMD) << MIPS_PWSIZE_MDW_SHIFT; + +- /* If XPA has been enabled, PTEs are 64-bit in size. */ +- if (config_enabled(CONFIG_64BITS) || (read_c0_pagegrain() & PG_ELPA)) +- pwsize |= 1; ++ pwsize |= ilog2(sizeof(pte_t)/4) << MIPS_PWSIZE_PTEW_SHIFT; + + write_c0_pwsize(pwsize); + +diff --git a/arch/mips/net/bpf_jit.c b/arch/mips/net/bpf_jit.c +index c0c1e9529dbd..742daf8351b9 100644 +--- a/arch/mips/net/bpf_jit.c ++++ b/arch/mips/net/bpf_jit.c +@@ -1207,7 +1207,7 @@ void bpf_jit_compile(struct bpf_prog *fp) + + memset(&ctx, 0, sizeof(ctx)); + +- ctx.offsets = kcalloc(fp->len, sizeof(*ctx.offsets), GFP_KERNEL); ++ ctx.offsets = kcalloc(fp->len + 1, sizeof(*ctx.offsets), GFP_KERNEL); + if (ctx.offsets == NULL) + return; + +diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c +index da3c4c3f4ec8..d4936615a756 100644 +--- a/arch/powerpc/kernel/mce.c ++++ b/arch/powerpc/kernel/mce.c +@@ -92,7 +92,8 @@ void save_mce_event(struct pt_regs *regs, long handled, + mce->in_use = 1; + + mce->initiator = MCE_INITIATOR_CPU; +- if (handled) ++ /* Mark it recovered if we have handled it and MSR(RI=1). 
*/ ++ if (handled && (regs->msr & MSR_RI)) + mce->disposition = MCE_DISPOSITION_RECOVERED; + else + mce->disposition = MCE_DISPOSITION_NOT_RECOVERED; +diff --git a/arch/powerpc/kernel/pci_of_scan.c b/arch/powerpc/kernel/pci_of_scan.c +index a38d7293460d..985b5be3bcf6 100644 +--- a/arch/powerpc/kernel/pci_of_scan.c ++++ b/arch/powerpc/kernel/pci_of_scan.c +@@ -82,10 +82,16 @@ static void of_pci_parse_addrs(struct device_node *node, struct pci_dev *dev) + const __be32 *addrs; + u32 i; + int proplen; ++ bool mark_unset = false; + + addrs = of_get_property(node, "assigned-addresses", &proplen); +- if (!addrs) +- return; ++ if (!addrs || !proplen) { ++ addrs = of_get_property(node, "reg", &proplen); ++ if (!addrs || !proplen) ++ return; ++ mark_unset = true; ++ } ++ + pr_debug(" parse addresses (%d bytes) @ %p\n", proplen, addrs); + for (; proplen >= 20; proplen -= 20, addrs += 5) { + flags = pci_parse_of_flags(of_read_number(addrs, 1), 0); +@@ -110,6 +116,8 @@ static void of_pci_parse_addrs(struct device_node *node, struct pci_dev *dev) + continue; + } + res->flags = flags; ++ if (mark_unset) ++ res->flags |= IORESOURCE_UNSET; + res->name = pci_name(dev); + region.start = base; + region.end = base + size - 1; +diff --git a/arch/powerpc/kernel/tm.S b/arch/powerpc/kernel/tm.S +index 2d2860711e07..55e831238485 100644 +--- a/arch/powerpc/kernel/tm.S ++++ b/arch/powerpc/kernel/tm.S +@@ -352,8 +352,6 @@ _GLOBAL(__tm_recheckpoint) + */ + subi r7, r7, STACK_FRAME_OVERHEAD + +- SET_SCRATCH0(r1) +- + mfmsr r6 + /* R4 = original MSR to indicate whether thread used FP/Vector etc. */ + +@@ -482,6 +480,7 @@ restore_gprs: + * until we turn MSR RI back on. 
+ */ + ++ SET_SCRATCH0(r1) + ld r5, -8(r1) + ld r1, -16(r1) + +diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c +index d3787618315f..56125dbdb1e2 100644 +--- a/arch/powerpc/platforms/powernv/opal.c ++++ b/arch/powerpc/platforms/powernv/opal.c +@@ -401,6 +401,7 @@ static int opal_recover_mce(struct pt_regs *regs, + + if (!(regs->msr & MSR_RI)) { + /* If MSR_RI isn't set, we cannot recover */ ++ pr_err("Machine check interrupt unrecoverable: MSR(RI=0)\n"); + recovered = 0; + } else if (evt->disposition == MCE_DISPOSITION_RECOVERED) { + /* Platform corrected itself */ +diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c +index 4a139465f1d4..7554075414d4 100644 +--- a/arch/x86/kernel/apic/x2apic_uv_x.c ++++ b/arch/x86/kernel/apic/x2apic_uv_x.c +@@ -648,9 +648,9 @@ static __init void map_mmioh_high_uv3(int index, int min_pnode, int max_pnode) + l = li; + } + addr1 = (base << shift) + +- f * (unsigned long)(1 << m_io); ++ f * (1ULL << m_io); + addr2 = (base << shift) + +- (l + 1) * (unsigned long)(1 << m_io); ++ (l + 1) * (1ULL << m_io); + pr_info("UV: %s[%03d..%03d] NASID 0x%04x ADDR 0x%016lx - 0x%016lx\n", + id, fi, li, lnasid, addr1, addr2); + if (max_io < l) +diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c +index fbf2edc3eb35..b983d3dc4e6c 100644 +--- a/arch/x86/kernel/cpu/perf_event.c ++++ b/arch/x86/kernel/cpu/perf_event.c +@@ -1550,6 +1550,7 @@ static void __init filter_events(struct attribute **attrs) + { + struct device_attribute *d; + struct perf_pmu_events_attr *pmu_attr; ++ int offset = 0; + int i, j; + + for (i = 0; attrs[i]; i++) { +@@ -1558,7 +1559,7 @@ static void __init filter_events(struct attribute **attrs) + /* str trumps id */ + if (pmu_attr->event_str) + continue; +- if (x86_pmu.event_map(i)) ++ if (x86_pmu.event_map(i + offset)) + continue; + + for (j = i; attrs[j]; j++) +@@ -1566,6 +1567,14 @@ static void __init filter_events(struct 
attribute **attrs) + + /* Check the shifted attr. */ + i--; ++ ++ /* ++ * event_map() is index based, the attrs array is organized ++ * by increasing event index. If we shift the events, then ++ * we need to compensate for the event_map(), otherwise ++ * we are looking up the wrong event in the map ++ */ ++ offset++; + } + } + +diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c +index 618565fecb1c..1a79d451cd34 100644 +--- a/arch/x86/kernel/process_64.c ++++ b/arch/x86/kernel/process_64.c +@@ -128,7 +128,7 @@ void release_thread(struct task_struct *dead_task) + if (dead_task->mm->context.ldt) { + pr_warn("WARNING: dead process %s still has LDT? <%p/%d>\n", + dead_task->comm, +- dead_task->mm->context.ldt, ++ dead_task->mm->context.ldt->entries, + dead_task->mm->context.ldt->size); + BUG(); + } +diff --git a/block/blk-mq.c b/block/blk-mq.c +index 8649dbf06ce4..b5633501f181 100644 +--- a/block/blk-mq.c ++++ b/block/blk-mq.c +@@ -1491,7 +1491,7 @@ static struct blk_mq_tags *blk_mq_init_rq_map(struct blk_mq_tag_set *set, + int to_do; + void *p; + +- while (left < order_to_size(this_order - 1) && this_order) ++ while (this_order && left < order_to_size(this_order - 1)) + this_order--; + + do { +diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c +index e54e6170981b..a17337fd8f37 100644 +--- a/drivers/acpi/acpi_lpss.c ++++ b/drivers/acpi/acpi_lpss.c +@@ -704,8 +704,13 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb, + } + + switch (action) { +- case BUS_NOTIFY_ADD_DEVICE: ++ case BUS_NOTIFY_BOUND_DRIVER: + pdev->dev.pm_domain = &acpi_lpss_pm_domain; ++ break; ++ case BUS_NOTIFY_UNBOUND_DRIVER: ++ pdev->dev.pm_domain = NULL; ++ break; ++ case BUS_NOTIFY_ADD_DEVICE: + if (pdata->dev_desc->flags & LPSS_LTR) + return sysfs_create_group(&pdev->dev.kobj, + &lpss_attr_group); +@@ -713,7 +718,6 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb, + case BUS_NOTIFY_DEL_DEVICE: + if (pdata->dev_desc->flags & 
LPSS_LTR) + sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group); +- pdev->dev.pm_domain = NULL; + break; + default: + break; +diff --git a/drivers/ata/sata_dwc_460ex.c b/drivers/ata/sata_dwc_460ex.c +index 902034991517..7a7faca0ddcd 100644 +--- a/drivers/ata/sata_dwc_460ex.c ++++ b/drivers/ata/sata_dwc_460ex.c +@@ -924,15 +924,13 @@ static void sata_dwc_exec_command_by_tag(struct ata_port *ap, + struct ata_taskfile *tf, + u8 tag, u32 cmd_issued) + { +- unsigned long flags; + struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap); + + dev_dbg(ap->dev, "%s cmd(0x%02x): %s tag=%d\n", __func__, tf->command, + ata_get_cmd_descript(tf->command), tag); + +- spin_lock_irqsave(&ap->host->lock, flags); + hsdevp->cmd_issued[tag] = cmd_issued; +- spin_unlock_irqrestore(&ap->host->lock, flags); ++ + /* + * Clear SError before executing a new command. + * sata_dwc_scr_write and read can not be used here. Clearing the PM +diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c +index ccfd268148a8..3bf1cdee8f49 100644 +--- a/drivers/base/firmware_class.c ++++ b/drivers/base/firmware_class.c +@@ -1119,15 +1119,17 @@ static int + _request_firmware(const struct firmware **firmware_p, const char *name, + struct device *device, unsigned int opt_flags) + { +- struct firmware *fw; ++ struct firmware *fw = NULL; + long timeout; + int ret; + + if (!firmware_p) + return -EINVAL; + +- if (!name || name[0] == '\0') +- return -EINVAL; ++ if (!name || name[0] == '\0') { ++ ret = -EINVAL; ++ goto out; ++ } + + ret = _request_firmware_prepare(&fw, name, device); + if (ret <= 0) /* error or already assigned */ +diff --git a/drivers/base/isa.c b/drivers/base/isa.c +index 901d8185309e..372d10af2600 100644 +--- a/drivers/base/isa.c ++++ b/drivers/base/isa.c +@@ -180,4 +180,4 @@ static int __init isa_bus_init(void) + return error; + } + +-device_initcall(isa_bus_init); ++postcore_initcall(isa_bus_init); +diff --git a/drivers/bluetooth/btmrvl_sdio.c 
b/drivers/bluetooth/btmrvl_sdio.c +index 71ea2a3af293..9c6843af1c6b 100644 +--- a/drivers/bluetooth/btmrvl_sdio.c ++++ b/drivers/bluetooth/btmrvl_sdio.c +@@ -1112,7 +1112,8 @@ static int btmrvl_sdio_download_fw(struct btmrvl_sdio_card *card) + */ + if (btmrvl_sdio_verify_fw_download(card, pollnum)) { + BT_ERR("FW failed to be active in time!"); +- return -ETIMEDOUT; ++ ret = -ETIMEDOUT; ++ goto done; + } + + sdio_release_host(card->func); +diff --git a/drivers/char/hw_random/exynos-rng.c b/drivers/char/hw_random/exynos-rng.c +index 7ba0ae060d61..66115ef979b1 100644 +--- a/drivers/char/hw_random/exynos-rng.c ++++ b/drivers/char/hw_random/exynos-rng.c +@@ -155,6 +155,14 @@ static int exynos_rng_probe(struct platform_device *pdev) + return ret; + } + ++static int exynos_rng_remove(struct platform_device *pdev) ++{ ++ pm_runtime_dont_use_autosuspend(&pdev->dev); ++ pm_runtime_disable(&pdev->dev); ++ ++ return 0; ++} ++ + static int __maybe_unused exynos_rng_runtime_suspend(struct device *dev) + { + struct platform_device *pdev = to_platform_device(dev); +@@ -212,6 +220,7 @@ static struct platform_driver exynos_rng_driver = { + .of_match_table = exynos_rng_dt_match, + }, + .probe = exynos_rng_probe, ++ .remove = exynos_rng_remove, + }; + + module_platform_driver(exynos_rng_driver); +diff --git a/drivers/clk/clk-gpio.c b/drivers/clk/clk-gpio.c +index 335322dc403f..9bc801f3a7ba 100644 +--- a/drivers/clk/clk-gpio.c ++++ b/drivers/clk/clk-gpio.c +@@ -287,12 +287,14 @@ static void __init of_gpio_clk_setup(struct device_node *node, + const char **parent_names; + int i, num_parents; + ++ num_parents = of_clk_get_parent_count(node); ++ if (num_parents < 0) ++ return; ++ + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return; + +- num_parents = of_clk_get_parent_count(node); +- + parent_names = kcalloc(num_parents, sizeof(char *), GFP_KERNEL); + if (!parent_names) + return; +diff --git a/drivers/clk/clk-multiplier.c b/drivers/clk/clk-multiplier.c +index 
fe7806506bf3..e9fb8a111f71 100644 +--- a/drivers/clk/clk-multiplier.c ++++ b/drivers/clk/clk-multiplier.c +@@ -54,14 +54,28 @@ static unsigned long __bestmult(struct clk_hw *hw, unsigned long rate, + unsigned long *best_parent_rate, + u8 width, unsigned long flags) + { ++ struct clk_multiplier *mult = to_clk_multiplier(hw); + unsigned long orig_parent_rate = *best_parent_rate; + unsigned long parent_rate, current_rate, best_rate = ~0; + unsigned int i, bestmult = 0; ++ unsigned int maxmult = (1 << width) - 1; ++ ++ if (!(clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT)) { ++ bestmult = rate / orig_parent_rate; ++ ++ /* Make sure we don't end up with a 0 multiplier */ ++ if ((bestmult == 0) && ++ !(mult->flags & CLK_MULTIPLIER_ZERO_BYPASS)) ++ bestmult = 1; + +- if (!(clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT)) +- return rate / *best_parent_rate; ++ /* Make sure we don't overflow the multiplier */ ++ if (bestmult > maxmult) ++ bestmult = maxmult; ++ ++ return bestmult; ++ } + +- for (i = 1; i < ((1 << width) - 1); i++) { ++ for (i = 1; i < maxmult; i++) { + if (rate == orig_parent_rate * i) { + /* + * This is the best case for us if we have a +diff --git a/drivers/clk/clk-xgene.c b/drivers/clk/clk-xgene.c +index b134a8b15e2c..5fea58713293 100644 +--- a/drivers/clk/clk-xgene.c ++++ b/drivers/clk/clk-xgene.c +@@ -218,22 +218,20 @@ static int xgene_clk_enable(struct clk_hw *hw) + struct xgene_clk *pclk = to_xgene_clk(hw); + unsigned long flags = 0; + u32 data; +- phys_addr_t reg; + + if (pclk->lock) + spin_lock_irqsave(pclk->lock, flags); + + if (pclk->param.csr_reg != NULL) { + pr_debug("%s clock enabled\n", clk_hw_get_name(hw)); +- reg = __pa(pclk->param.csr_reg); + /* First enable the clock */ + data = xgene_clk_read(pclk->param.csr_reg + + pclk->param.reg_clk_offset); + data |= pclk->param.reg_clk_mask; + xgene_clk_write(data, pclk->param.csr_reg + + pclk->param.reg_clk_offset); +- pr_debug("%s clock PADDR base %pa clk offset 0x%08X mask 0x%08X value 0x%08X\n", +- 
clk_hw_get_name(hw), &reg, ++ pr_debug("%s clk offset 0x%08X mask 0x%08X value 0x%08X\n", ++ clk_hw_get_name(hw), + pclk->param.reg_clk_offset, pclk->param.reg_clk_mask, + data); + +@@ -243,8 +241,8 @@ static int xgene_clk_enable(struct clk_hw *hw) + data &= ~pclk->param.reg_csr_mask; + xgene_clk_write(data, pclk->param.csr_reg + + pclk->param.reg_csr_offset); +- pr_debug("%s CSR RESET PADDR base %pa csr offset 0x%08X mask 0x%08X value 0x%08X\n", +- clk_hw_get_name(hw), &reg, ++ pr_debug("%s csr offset 0x%08X mask 0x%08X value 0x%08X\n", ++ clk_hw_get_name(hw), + pclk->param.reg_csr_offset, pclk->param.reg_csr_mask, + data); + } +diff --git a/drivers/clk/imx/clk-pllv3.c b/drivers/clk/imx/clk-pllv3.c +index 6addf8f58b97..cbecbd584624 100644 +--- a/drivers/clk/imx/clk-pllv3.c ++++ b/drivers/clk/imx/clk-pllv3.c +@@ -76,9 +76,9 @@ static int clk_pllv3_prepare(struct clk_hw *hw) + + val = readl_relaxed(pll->base); + if (pll->powerup_set) +- val |= BM_PLL_POWER; ++ val |= pll->powerdown; + else +- val &= ~BM_PLL_POWER; ++ val &= ~pll->powerdown; + writel_relaxed(val, pll->base); + + return clk_pllv3_wait_lock(pll); +@@ -91,9 +91,9 @@ static void clk_pllv3_unprepare(struct clk_hw *hw) + + val = readl_relaxed(pll->base); + if (pll->powerup_set) +- val &= ~BM_PLL_POWER; ++ val &= ~pll->powerdown; + else +- val |= BM_PLL_POWER; ++ val |= pll->powerdown; + writel_relaxed(val, pll->base); + } + +diff --git a/drivers/clk/rockchip/clk-mmc-phase.c b/drivers/clk/rockchip/clk-mmc-phase.c +index 2b289581d570..b513a2bbfcc5 100644 +--- a/drivers/clk/rockchip/clk-mmc-phase.c ++++ b/drivers/clk/rockchip/clk-mmc-phase.c +@@ -41,8 +41,6 @@ static unsigned long rockchip_mmc_recalc(struct clk_hw *hw, + #define ROCKCHIP_MMC_DEGREE_MASK 0x3 + #define ROCKCHIP_MMC_DELAYNUM_OFFSET 2 + #define ROCKCHIP_MMC_DELAYNUM_MASK (0xff << ROCKCHIP_MMC_DELAYNUM_OFFSET) +-#define ROCKCHIP_MMC_INIT_STATE_RESET 0x1 +-#define ROCKCHIP_MMC_INIT_STATE_SHIFT 1 + + #define PSECS_PER_SEC 1000000000000LL + +@@ -183,15
+181,6 @@ struct clk *rockchip_clk_register_mmc(const char *name, + mmc_clock->reg = reg; + mmc_clock->shift = shift; + +- /* +- * Assert init_state to soft reset the CLKGEN +- * for mmc tuning phase and degree +- */ +- if (mmc_clock->shift == ROCKCHIP_MMC_INIT_STATE_SHIFT) +- writel(HIWORD_UPDATE(ROCKCHIP_MMC_INIT_STATE_RESET, +- ROCKCHIP_MMC_INIT_STATE_RESET, +- mmc_clock->shift), mmc_clock->reg); +- + clk = clk_register(NULL, &mmc_clock->hw); + if (IS_ERR(clk)) + goto err_free; +diff --git a/drivers/clk/st/clkgen-fsyn.c b/drivers/clk/st/clkgen-fsyn.c +index 576cd0354d48..ccb324d97160 100644 +--- a/drivers/clk/st/clkgen-fsyn.c ++++ b/drivers/clk/st/clkgen-fsyn.c +@@ -549,19 +549,20 @@ static int clk_fs660c32_vco_get_params(unsigned long input, + return 0; + } + +-static long quadfs_pll_fs660c32_round_rate(struct clk_hw *hw, unsigned long rate +- , unsigned long *prate) ++static long quadfs_pll_fs660c32_round_rate(struct clk_hw *hw, ++ unsigned long rate, ++ unsigned long *prate) + { + struct stm_fs params; + +- if (!clk_fs660c32_vco_get_params(*prate, rate, &params)) +- clk_fs660c32_vco_get_rate(*prate, &params, &rate); ++ if (clk_fs660c32_vco_get_params(*prate, rate, &params)) ++ return rate; + +- pr_debug("%s: %s new rate %ld [sdiv=0x%x,md=0x%x,pe=0x%x,nsdiv3=%u]\n", ++ clk_fs660c32_vco_get_rate(*prate, &params, &rate); ++ ++ pr_debug("%s: %s new rate %ld [ndiv=%u]\n", + __func__, clk_hw_get_name(hw), +- rate, (unsigned int)params.sdiv, +- (unsigned int)params.mdiv, +- (unsigned int)params.pe, (unsigned int)params.nsdiv); ++ rate, (unsigned int)params.ndiv); + + return rate; + } +diff --git a/drivers/clk/ti/dpll3xxx.c b/drivers/clk/ti/dpll3xxx.c +index 0e9119fae760..fa53bf6bd041 100644 +--- a/drivers/clk/ti/dpll3xxx.c ++++ b/drivers/clk/ti/dpll3xxx.c +@@ -437,7 +437,8 @@ int omap3_noncore_dpll_enable(struct clk_hw *hw) + + parent = clk_hw_get_parent(hw); + +- if (clk_hw_get_rate(hw) == clk_get_rate(dd->clk_bypass)) { ++ if (clk_hw_get_rate(hw) == ++
clk_hw_get_rate(__clk_get_hw(dd->clk_bypass))) { + WARN_ON(parent != __clk_get_hw(dd->clk_bypass)); + r = _omap3_noncore_dpll_bypass(clk); + } else { +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index 49aa58e617db..df651b1a7669 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -2171,10 +2171,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy, + return ret; + } + +- up_write(&policy->rwsem); + ret = __cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT); +- down_write(&policy->rwsem); +- + if (ret) { + pr_err("%s: Failed to Exit Governor: %s (%d)\n", + __func__, old_gov->name, ret); +@@ -2190,9 +2187,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy, + if (!ret) + goto out; + +- up_write(&policy->rwsem); + __cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT); +- down_write(&policy->rwsem); + } + + /* new governor failed, so re-start old one */ +diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c +index 17521fcf226f..3cca3055ebd4 100644 +--- a/drivers/dma/edma.c ++++ b/drivers/dma/edma.c +@@ -2439,7 +2439,13 @@ static struct platform_driver edma_driver = { + }, + }; + ++static int edma_tptc_probe(struct platform_device *pdev) ++{ ++ return 0; ++} ++ + static struct platform_driver edma_tptc_driver = { ++ .probe = edma_tptc_probe, + .driver = { + .name = "edma3-tptc", + .of_match_table = edma_tptc_of_ids, +diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c +index d8066ac1e764..ae09f004a33f 100644 +--- a/drivers/gpu/drm/qxl/qxl_cmd.c ++++ b/drivers/gpu/drm/qxl/qxl_cmd.c +@@ -529,8 +529,8 @@ int qxl_hw_surface_alloc(struct qxl_device *qdev, + /* no need to add a release to the fence for this surface bo, + since it is only released when we ask to destroy the surface + and it would never signal otherwise */ +- qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false); + qxl_release_fence_buffer_objects(release); ++ qxl_push_command_ring_release(qdev, release, 
QXL_CMD_SURFACE, false); + + surf->hw_surf_alloc = true; + spin_lock(&qdev->surf_id_idr_lock); +@@ -572,9 +572,8 @@ int qxl_hw_surface_dealloc(struct qxl_device *qdev, + cmd->surface_id = id; + qxl_release_unmap(qdev, release, &cmd->release_info); + +- qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false); +- + qxl_release_fence_buffer_objects(release); ++ qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false); + + return 0; + } +diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c +index 5edebf495c07..0d6cc396cc16 100644 +--- a/drivers/gpu/drm/qxl/qxl_display.c ++++ b/drivers/gpu/drm/qxl/qxl_display.c +@@ -292,8 +292,8 @@ qxl_hide_cursor(struct qxl_device *qdev) + cmd->type = QXL_CURSOR_HIDE; + qxl_release_unmap(qdev, release, &cmd->release_info); + +- qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); + qxl_release_fence_buffer_objects(release); ++ qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); + return 0; + } + +@@ -390,8 +390,8 @@ static int qxl_crtc_cursor_set2(struct drm_crtc *crtc, + cmd->u.set.visible = 1; + qxl_release_unmap(qdev, release, &cmd->release_info); + +- qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); + qxl_release_fence_buffer_objects(release); ++ qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); + + /* finish with the userspace bo */ + ret = qxl_bo_reserve(user_bo, false); +@@ -450,8 +450,8 @@ static int qxl_crtc_cursor_move(struct drm_crtc *crtc, + cmd->u.position.y = qcrtc->cur_y + qcrtc->hot_spot_y; + qxl_release_unmap(qdev, release, &cmd->release_info); + +- qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); + qxl_release_fence_buffer_objects(release); ++ qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); + + return 0; + } +diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c +index 6e6c76080d6a..47da124b7ebf 100644 +--- 
a/drivers/gpu/drm/qxl/qxl_draw.c ++++ b/drivers/gpu/drm/qxl/qxl_draw.c +@@ -245,8 +245,8 @@ void qxl_draw_opaque_fb(const struct qxl_fb_image *qxl_fb_image, + qxl_bo_physical_address(qdev, dimage->bo, 0); + qxl_release_unmap(qdev, release, &drawable->release_info); + +- qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); + qxl_release_fence_buffer_objects(release); ++ qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); + + out_free_palette: + if (palette_bo) +@@ -352,9 +352,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, + goto out_release_backoff; + + rects = drawable_set_clipping(qdev, drawable, num_clips, clips_bo); +- if (!rects) ++ if (!rects) { ++ ret = -EINVAL; + goto out_release_backoff; +- ++ } + drawable = (struct qxl_drawable *)qxl_release_map(qdev, release); + + drawable->clip.type = SPICE_CLIP_TYPE_RECTS; +@@ -385,8 +386,8 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, + } + qxl_bo_kunmap(clips_bo); + +- qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); + qxl_release_fence_buffer_objects(release); ++ qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); + + out_release_backoff: + if (ret) +@@ -436,8 +437,8 @@ void qxl_draw_copyarea(struct qxl_device *qdev, + drawable->u.copy_bits.src_pos.y = sy; + qxl_release_unmap(qdev, release, &drawable->release_info); + +- qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); + qxl_release_fence_buffer_objects(release); ++ qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); + + out_free_release: + if (ret) +@@ -480,8 +481,8 @@ void qxl_draw_fill(struct qxl_draw_fill *qxl_draw_fill_rec) + + qxl_release_unmap(qdev, release, &drawable->release_info); + +- qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); + qxl_release_fence_buffer_objects(release); ++ qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); + + out_free_release: + if (ret) +diff --git a/drivers/gpu/drm/qxl/qxl_ioctl.c 
b/drivers/gpu/drm/qxl/qxl_ioctl.c +index 7c2e78201ead..4d852449a7d1 100644 +--- a/drivers/gpu/drm/qxl/qxl_ioctl.c ++++ b/drivers/gpu/drm/qxl/qxl_ioctl.c +@@ -257,11 +257,8 @@ static int qxl_process_single_command(struct qxl_device *qdev, + apply_surf_reloc(qdev, &reloc_info[i]); + } + ++ qxl_release_fence_buffer_objects(release); + ret = qxl_push_command_ring_release(qdev, release, cmd->type, true); +- if (ret) +- qxl_release_backoff_reserve_list(release); +- else +- qxl_release_fence_buffer_objects(release); + + out_free_bos: + out_free_release: +diff --git a/drivers/hv/hv_utils_transport.c b/drivers/hv/hv_utils_transport.c +index 1505ee6e6605..24b2766a6d34 100644 +--- a/drivers/hv/hv_utils_transport.c ++++ b/drivers/hv/hv_utils_transport.c +@@ -80,11 +80,10 @@ static ssize_t hvt_op_write(struct file *file, const char __user *buf, + + hvt = container_of(file->f_op, struct hvutil_transport, fops); + +- inmsg = kzalloc(count, GFP_KERNEL); +- if (copy_from_user(inmsg, buf, count)) { +- kfree(inmsg); +- return -EFAULT; +- } ++ inmsg = memdup_user(buf, count); ++ if (IS_ERR(inmsg)) ++ return PTR_ERR(inmsg); ++ + if (hvt->on_msg(inmsg, count)) + return -EFAULT; + kfree(inmsg); +diff --git a/drivers/iio/adc/ad7793.c b/drivers/iio/adc/ad7793.c +index 91d34ed756ea..fe0c5a155e21 100644 +--- a/drivers/iio/adc/ad7793.c ++++ b/drivers/iio/adc/ad7793.c +@@ -579,7 +579,7 @@ static const struct iio_info ad7797_info = { + .read_raw = &ad7793_read_raw, + .write_raw = &ad7793_write_raw, + .write_raw_get_fmt = &ad7793_write_raw_get_fmt, +- .attrs = &ad7793_attribute_group, ++ .attrs = &ad7797_attribute_group, + .validate_trigger = ad_sd_validate_trigger, + .driver_module = THIS_MODULE, + }; +diff --git a/drivers/infiniband/hw/cxgb3/cxio_hal.c b/drivers/infiniband/hw/cxgb3/cxio_hal.c +index de1c61b417d6..ada2e5009c86 100644 +--- a/drivers/infiniband/hw/cxgb3/cxio_hal.c ++++ b/drivers/infiniband/hw/cxgb3/cxio_hal.c +@@ -327,7 +327,7 @@ int cxio_destroy_cq(struct cxio_rdev *rdev_p, 
struct t3_cq *cq) + kfree(cq->sw_queue); + dma_free_coherent(&(rdev_p->rnic_info.pdev->dev), + (1UL << (cq->size_log2)) +- * sizeof(struct t3_cqe), cq->queue, ++ * sizeof(struct t3_cqe) + 1, cq->queue, + dma_unmap_addr(cq, mapping)); + cxio_hal_put_cqid(rdev_p->rscp, cq->cqid); + return err; +diff --git a/drivers/infiniband/hw/mlx4/ah.c b/drivers/infiniband/hw/mlx4/ah.c +index fc21bdbb8b32..005ea5524e09 100644 +--- a/drivers/infiniband/hw/mlx4/ah.c ++++ b/drivers/infiniband/hw/mlx4/ah.c +@@ -107,6 +107,7 @@ static struct ib_ah *create_iboe_ah(struct ib_pd *pd, struct ib_ah_attr *ah_attr + return ERR_PTR(ret); + ah->av.eth.gid_index = ret; + ah->av.eth.vlan = cpu_to_be16(vlan_tag); ++ ah->av.eth.hop_limit = ah_attr->grh.hop_limit; + if (ah_attr->static_rate) { + ah->av.eth.stat_rate = ah_attr->static_rate + MLX4_STAT_RATE_OFFSET; + while (ah->av.eth.stat_rate > IB_RATE_2_5_GBPS + MLX4_STAT_RATE_OFFSET && +diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c +index dbd5adc62c3f..1b731ab63ede 100644 +--- a/drivers/infiniband/hw/mlx5/main.c ++++ b/drivers/infiniband/hw/mlx5/main.c +@@ -905,7 +905,7 @@ static ssize_t show_fw_ver(struct device *device, struct device_attribute *attr, + { + struct mlx5_ib_dev *dev = + container_of(device, struct mlx5_ib_dev, ib_dev.dev); +- return sprintf(buf, "%d.%d.%d\n", fw_rev_maj(dev->mdev), ++ return sprintf(buf, "%d.%d.%04d\n", fw_rev_maj(dev->mdev), + fw_rev_min(dev->mdev), fw_rev_sub(dev->mdev)); + } + +diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c +index eac5f5eff8d2..b8e71187c700 100644 +--- a/drivers/infiniband/hw/mlx5/qp.c ++++ b/drivers/infiniband/hw/mlx5/qp.c +@@ -271,8 +271,10 @@ static int sq_overhead(enum ib_qp_type qp_type) + /* fall through */ + case IB_QPT_RC: + size += sizeof(struct mlx5_wqe_ctrl_seg) + +- sizeof(struct mlx5_wqe_atomic_seg) + +- sizeof(struct mlx5_wqe_raddr_seg); ++ max(sizeof(struct mlx5_wqe_atomic_seg) + ++ sizeof(struct 
mlx5_wqe_raddr_seg), ++ sizeof(struct mlx5_wqe_umr_ctrl_seg) + ++ sizeof(struct mlx5_mkey_seg)); + break; + + case IB_QPT_XRC_TGT: +@@ -280,9 +282,9 @@ static int sq_overhead(enum ib_qp_type qp_type) + + case IB_QPT_UC: + size += sizeof(struct mlx5_wqe_ctrl_seg) + +- sizeof(struct mlx5_wqe_raddr_seg) + +- sizeof(struct mlx5_wqe_umr_ctrl_seg) + +- sizeof(struct mlx5_mkey_seg); ++ max(sizeof(struct mlx5_wqe_raddr_seg), ++ sizeof(struct mlx5_wqe_umr_ctrl_seg) + ++ sizeof(struct mlx5_mkey_seg)); + break; + + case IB_QPT_UD: +diff --git a/drivers/input/keyboard/gpio_keys.c b/drivers/input/keyboard/gpio_keys.c +index bef317ff7352..b9f01bd1b7ef 100644 +--- a/drivers/input/keyboard/gpio_keys.c ++++ b/drivers/input/keyboard/gpio_keys.c +@@ -96,13 +96,29 @@ struct gpio_keys_drvdata { + * Return value of this function can be used to allocate bitmap + * large enough to hold all bits for given type. + */ +-static inline int get_n_events_by_type(int type) ++static int get_n_events_by_type(int type) + { + BUG_ON(type != EV_SW && type != EV_KEY); + + return (type == EV_KEY) ? KEY_CNT : SW_CNT; + } + ++/** ++ * get_bm_events_by_type() - returns bitmap of supported events per @type ++ * @input: input device from which bitmap is retrieved ++ * @type: type of button (%EV_KEY, %EV_SW) ++ * ++ * Return value of this function can be used to allocate bitmap ++ * large enough to hold all bits for given type. ++ */ ++static const unsigned long *get_bm_events_by_type(struct input_dev *dev, ++ int type) ++{ ++ BUG_ON(type != EV_SW && type != EV_KEY); ++ ++ return (type == EV_KEY) ? 
dev->keybit : dev->swbit; ++} ++ + /** + * gpio_keys_disable_button() - disables given GPIO button + * @bdata: button data for button to be disabled +@@ -213,6 +229,7 @@ static ssize_t gpio_keys_attr_store_helper(struct gpio_keys_drvdata *ddata, + const char *buf, unsigned int type) + { + int n_events = get_n_events_by_type(type); ++ const unsigned long *bitmap = get_bm_events_by_type(ddata->input, type); + unsigned long *bits; + ssize_t error; + int i; +@@ -226,6 +243,11 @@ static ssize_t gpio_keys_attr_store_helper(struct gpio_keys_drvdata *ddata, + goto out; + + /* First validate */ ++ if (!bitmap_subset(bits, bitmap, n_events)) { ++ error = -EINVAL; ++ goto out; ++ } ++ + for (i = 0; i < ddata->pdata->nbuttons; i++) { + struct gpio_button_data *bdata = &ddata->data[i]; + +@@ -239,11 +261,6 @@ static ssize_t gpio_keys_attr_store_helper(struct gpio_keys_drvdata *ddata, + } + } + +- if (i == ddata->pdata->nbuttons) { +- error = -EINVAL; +- goto out; +- } +- + mutex_lock(&ddata->disable_lock); + + for (i = 0; i < ddata->pdata->nbuttons; i++) { +diff --git a/drivers/input/touchscreen/edt-ft5x06.c b/drivers/input/touchscreen/edt-ft5x06.c +index a9d97d577a7e..5e535d0bbe16 100644 +--- a/drivers/input/touchscreen/edt-ft5x06.c ++++ b/drivers/input/touchscreen/edt-ft5x06.c +@@ -822,16 +822,22 @@ static void edt_ft5x06_ts_get_defaults(struct device *dev, + int error; + + error = device_property_read_u32(dev, "threshold", &val); +- if (!error) +- reg_addr->reg_threshold = val; ++ if (!error) { ++ edt_ft5x06_register_write(tsdata, reg_addr->reg_threshold, val); ++ tsdata->threshold = val; ++ } + + error = device_property_read_u32(dev, "gain", &val); +- if (!error) +- reg_addr->reg_gain = val; ++ if (!error) { ++ edt_ft5x06_register_write(tsdata, reg_addr->reg_gain, val); ++ tsdata->gain = val; ++ } + + error = device_property_read_u32(dev, "offset", &val); +- if (!error) +- reg_addr->reg_offset = val; ++ if (!error) { ++ edt_ft5x06_register_write(tsdata, 
reg_addr->reg_offset, val); ++ tsdata->offset = val; ++ } + } + + static void +diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c +index 347a3c17f73a..087a092a6e6e 100644 +--- a/drivers/iommu/dma-iommu.c ++++ b/drivers/iommu/dma-iommu.c +@@ -152,12 +152,15 @@ int dma_direction_to_prot(enum dma_data_direction dir, bool coherent) + } + } + +-static struct iova *__alloc_iova(struct iova_domain *iovad, size_t size, ++static struct iova *__alloc_iova(struct iommu_domain *domain, size_t size, + dma_addr_t dma_limit) + { ++ struct iova_domain *iovad = domain->iova_cookie; + unsigned long shift = iova_shift(iovad); + unsigned long length = iova_align(iovad, size) >> shift; + ++ if (domain->geometry.force_aperture) ++ dma_limit = min(dma_limit, domain->geometry.aperture_end); + /* + * Enforce size-alignment to be safe - there could perhaps be an + * attribute to control this per-device, or at least per-domain... +@@ -297,7 +300,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, + if (!pages) + return NULL; + +- iova = __alloc_iova(iovad, size, dev->coherent_dma_mask); ++ iova = __alloc_iova(domain, size, dev->coherent_dma_mask); + if (!iova) + goto out_free_pages; + +@@ -369,7 +372,7 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, + phys_addr_t phys = page_to_phys(page) + offset; + size_t iova_off = iova_offset(iovad, phys); + size_t len = iova_align(iovad, size + iova_off); +- struct iova *iova = __alloc_iova(iovad, len, dma_get_mask(dev)); ++ struct iova *iova = __alloc_iova(domain, len, dma_get_mask(dev)); + + if (!iova) + return DMA_ERROR_CODE; +@@ -483,7 +486,7 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, + prev = s; + } + +- iova = __alloc_iova(iovad, iova_len, dma_get_mask(dev)); ++ iova = __alloc_iova(domain, iova_len, dma_get_mask(dev)); + if (!iova) + goto out_restore_sg; + +diff --git a/drivers/md/dm.c b/drivers/md/dm.c +index c4d4cd38a58f..ff017d148323 100644 +--- a/drivers/md/dm.c 
++++ b/drivers/md/dm.c +@@ -2192,7 +2192,7 @@ static void dm_request_fn(struct request_queue *q) + goto out; + + delay_and_out: +- blk_delay_queue(q, HZ / 100); ++ blk_delay_queue(q, 10); + out: + dm_put_live_table(md, srcu_idx); + } +diff --git a/drivers/media/pci/cx23885/cx23885-av.c b/drivers/media/pci/cx23885/cx23885-av.c +index 877dad89107e..e7d4406f9abd 100644 +--- a/drivers/media/pci/cx23885/cx23885-av.c ++++ b/drivers/media/pci/cx23885/cx23885-av.c +@@ -24,7 +24,7 @@ void cx23885_av_work_handler(struct work_struct *work) + { + struct cx23885_dev *dev = + container_of(work, struct cx23885_dev, cx25840_work); +- bool handled; ++ bool handled = false; + + v4l2_subdev_call(dev->sd_cx25840, core, interrupt_service_routine, + PCI_MSK_AV_CORE, &handled); +diff --git a/drivers/media/platform/am437x/am437x-vpfe.c b/drivers/media/platform/am437x/am437x-vpfe.c +index 36add3c463f7..1256af0dde1d 100644 +--- a/drivers/media/platform/am437x/am437x-vpfe.c ++++ b/drivers/media/platform/am437x/am437x-vpfe.c +@@ -1047,7 +1047,7 @@ static int vpfe_get_ccdc_image_format(struct vpfe_device *vpfe, + static int vpfe_config_ccdc_image_format(struct vpfe_device *vpfe) + { + enum ccdc_frmfmt frm_fmt = CCDC_FRMFMT_INTERLACED; +- int ret; ++ int ret = 0; + + vpfe_dbg(2, vpfe, "vpfe_config_ccdc_image_format\n"); + +diff --git a/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c +index a43404cad3e3..ed307488ccbd 100644 +--- a/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c ++++ b/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c +@@ -49,7 +49,7 @@ MODULE_FIRMWARE(FIRMWARE_MEMDMA); + #define PID_TABLE_SIZE 1024 + #define POLL_MSECS 50 + +-static int load_c8sectpfe_fw_step1(struct c8sectpfei *fei); ++static int load_c8sectpfe_fw(struct c8sectpfei *fei); + + #define TS_PKT_SIZE 188 + #define HEADER_SIZE (4) +@@ -143,6 +143,7 @@ static int c8sectpfe_start_feed(struct dvb_demux_feed *dvbdmxfeed) + struct channel_info 
*channel; + u32 tmp; + unsigned long *bitmap; ++ int ret; + + switch (dvbdmxfeed->type) { + case DMX_TYPE_TS: +@@ -171,8 +172,9 @@ static int c8sectpfe_start_feed(struct dvb_demux_feed *dvbdmxfeed) + } + + if (!atomic_read(&fei->fw_loaded)) { +- dev_err(fei->dev, "%s: c8sectpfe fw not loaded\n", __func__); +- return -EINVAL; ++ ret = load_c8sectpfe_fw(fei); ++ if (ret) ++ return ret; + } + + mutex_lock(&fei->lock); +@@ -267,8 +269,9 @@ static int c8sectpfe_stop_feed(struct dvb_demux_feed *dvbdmxfeed) + unsigned long *bitmap; + + if (!atomic_read(&fei->fw_loaded)) { +- dev_err(fei->dev, "%s: c8sectpfe fw not loaded\n", __func__); +- return -EINVAL; ++ ret = load_c8sectpfe_fw(fei); ++ if (ret) ++ return ret; + } + + mutex_lock(&fei->lock); +@@ -882,13 +885,6 @@ static int c8sectpfe_probe(struct platform_device *pdev) + goto err_clk_disable; + } + +- /* ensure all other init has been done before requesting firmware */ +- ret = load_c8sectpfe_fw_step1(fei); +- if (ret) { +- dev_err(dev, "Couldn't load slim core firmware\n"); +- goto err_clk_disable; +- } +- + c8sectpfe_debugfs_init(fei); + + return 0; +@@ -1093,15 +1089,14 @@ static void load_dmem_segment(struct c8sectpfei *fei, Elf32_Phdr *phdr, + phdr->p_memsz - phdr->p_filesz); + } + +-static int load_slim_core_fw(const struct firmware *fw, void *context) ++static int load_slim_core_fw(const struct firmware *fw, struct c8sectpfei *fei) + { +- struct c8sectpfei *fei = context; + Elf32_Ehdr *ehdr; + Elf32_Phdr *phdr; + u8 __iomem *dst; + int err = 0, i; + +- if (!fw || !context) ++ if (!fw || !fei) + return -EINVAL; + + ehdr = (Elf32_Ehdr *)fw->data; +@@ -1153,29 +1148,35 @@ static int load_slim_core_fw(const struct firmware *fw, void *context) + return err; + } + +-static void load_c8sectpfe_fw_cb(const struct firmware *fw, void *context) ++static int load_c8sectpfe_fw(struct c8sectpfei *fei) + { +- struct c8sectpfei *fei = context; ++ const struct firmware *fw; + int err; + ++ dev_info(fei->dev, "Loading firmware: 
%s\n", FIRMWARE_MEMDMA); ++ ++ err = request_firmware(&fw, FIRMWARE_MEMDMA, fei->dev); ++ if (err) ++ return err; ++ + err = c8sectpfe_elf_sanity_check(fei, fw); + if (err) { + dev_err(fei->dev, "c8sectpfe_elf_sanity_check failed err=(%d)\n" + , err); +- goto err; ++ return err; + } + +- err = load_slim_core_fw(fw, context); ++ err = load_slim_core_fw(fw, fei); + if (err) { + dev_err(fei->dev, "load_slim_core_fw failed err=(%d)\n", err); +- goto err; ++ return err; + } + + /* now the firmware is loaded configure the input blocks */ + err = configure_channels(fei); + if (err) { + dev_err(fei->dev, "configure_channels failed err=(%d)\n", err); +- goto err; ++ return err; + } + + /* +@@ -1188,28 +1189,6 @@ static void load_c8sectpfe_fw_cb(const struct firmware *fw, void *context) + writel(0x1, fei->io + DMA_CPU_RUN); + + atomic_set(&fei->fw_loaded, 1); +-err: +- complete_all(&fei->fw_ack); +-} +- +-static int load_c8sectpfe_fw_step1(struct c8sectpfei *fei) +-{ +- int err; +- +- dev_info(fei->dev, "Loading firmware: %s\n", FIRMWARE_MEMDMA); +- +- init_completion(&fei->fw_ack); +- atomic_set(&fei->fw_loaded, 0); +- +- err = request_firmware_nowait(THIS_MODULE, FW_ACTION_HOTPLUG, +- FIRMWARE_MEMDMA, fei->dev, GFP_KERNEL, fei, +- load_c8sectpfe_fw_cb); +- +- if (err) { +- dev_err(fei->dev, "request_firmware_nowait err: %d.\n", err); +- complete_all(&fei->fw_ack); +- return err; +- } + + return 0; + } +diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c +index 3f0f71adabb4..ea1008cf14a3 100644 +--- a/drivers/media/rc/rc-main.c ++++ b/drivers/media/rc/rc-main.c +@@ -61,7 +61,7 @@ struct rc_map *rc_map_get(const char *name) + struct rc_map_list *map; + + map = seek_rc_map(name); +-#ifdef MODULE ++#ifdef CONFIG_MODULES + if (!map) { + int rc = request_module("%s", name); + if (rc < 0) { +diff --git a/drivers/memory/tegra/tegra124.c b/drivers/memory/tegra/tegra124.c +index 234e74f97a4b..9f68a56f2727 100644 +--- a/drivers/memory/tegra/tegra124.c ++++ 
b/drivers/memory/tegra/tegra124.c +@@ -1007,6 +1007,7 @@ static const struct tegra_smmu_soc tegra124_smmu_soc = { + .num_swgroups = ARRAY_SIZE(tegra124_swgroups), + .supports_round_robin_arbitration = true, + .supports_request_limit = true, ++ .num_tlb_lines = 32, + .num_asids = 128, + }; + +diff --git a/drivers/mfd/lp8788-irq.c b/drivers/mfd/lp8788-irq.c +index c7a9825aa4ce..792d51bae20f 100644 +--- a/drivers/mfd/lp8788-irq.c ++++ b/drivers/mfd/lp8788-irq.c +@@ -112,7 +112,7 @@ static irqreturn_t lp8788_irq_handler(int irq, void *ptr) + struct lp8788_irq_data *irqd = ptr; + struct lp8788 *lp = irqd->lp; + u8 status[NUM_REGS], addr, mask; +- bool handled; ++ bool handled = false; + int i; + + if (lp8788_read_multi_bytes(lp, LP8788_INT_1, status, NUM_REGS)) +diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c +index 81c3f75b7330..8f9c26b77089 100644 +--- a/drivers/misc/cxl/fault.c ++++ b/drivers/misc/cxl/fault.c +@@ -152,7 +152,7 @@ static void cxl_handle_page_fault(struct cxl_context *ctx, + access = _PAGE_PRESENT; + if (dsisr & CXL_PSL_DSISR_An_S) + access |= _PAGE_RW; +- if ((!ctx->kernel) || ~(dar & (1ULL << 63))) ++ if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID)) + access |= _PAGE_USER; + + if (dsisr & DSISR_NOHPTE) +diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c +index 07592e428755..c4c4dfbde0dc 100644 +--- a/drivers/mmc/card/block.c ++++ b/drivers/mmc/card/block.c +@@ -668,8 +668,10 @@ static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev, + } + + md = mmc_blk_get(bdev->bd_disk); +- if (!md) ++ if (!md) { ++ err = -EINVAL; + goto cmd_err; ++ } + + card = md->queue.card; + if (IS_ERR(card)) { +diff --git a/drivers/mmc/core/debugfs.c b/drivers/mmc/core/debugfs.c +index 705586dcd9fa..9382a57a5aa4 100644 +--- a/drivers/mmc/core/debugfs.c ++++ b/drivers/mmc/core/debugfs.c +@@ -170,7 +170,7 @@ static int mmc_ios_show(struct seq_file *s, void *data) + str = "invalid"; + break; + } +- seq_printf(s, "signal voltage:\t%u 
(%s)\n", ios->chip_select, str); ++ seq_printf(s, "signal voltage:\t%u (%s)\n", ios->signal_voltage, str); + + switch (ios->drv_type) { + case MMC_SET_DRIVER_TYPE_A: +diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c +index 54ba1abb5460..ed9af65e583e 100644 +--- a/drivers/mmc/core/sd.c ++++ b/drivers/mmc/core/sd.c +@@ -337,6 +337,7 @@ static int mmc_read_switch(struct mmc_card *card) + card->sw_caps.sd3_bus_mode = status[13]; + /* Driver Strengths supported by the card */ + card->sw_caps.sd3_drv_type = status[9]; ++ card->sw_caps.sd3_curr_limit = status[7] | status[6] << 8; + } + + out: +@@ -553,14 +554,25 @@ static int sd_set_current_limit(struct mmc_card *card, u8 *status) + * when we set current limit to 200ma, the card will draw 200ma, and + * when we set current limit to 400/600/800ma, the card will draw its + * maximum 300ma from the host. ++ * ++ * The above is incorrect: if we try to set a current limit that is ++ * not supported by the card, the card can rightfully error out the ++ * attempt, and remain at the default current limit. This results ++ * in a 300mA card being limited to 200mA even though the host ++ * supports 800mA. Failures seen with SanDisk 8GB UHS cards with ++ * an iMX6 host. 
--rmk + */ +- if (max_current >= 800) ++ if (max_current >= 800 && ++ card->sw_caps.sd3_curr_limit & SD_MAX_CURRENT_800) + current_limit = SD_SET_CURRENT_LIMIT_800; +- else if (max_current >= 600) ++ else if (max_current >= 600 && ++ card->sw_caps.sd3_curr_limit & SD_MAX_CURRENT_600) + current_limit = SD_SET_CURRENT_LIMIT_600; +- else if (max_current >= 400) ++ else if (max_current >= 400 && ++ card->sw_caps.sd3_curr_limit & SD_MAX_CURRENT_400) + current_limit = SD_SET_CURRENT_LIMIT_400; +- else if (max_current >= 200) ++ else if (max_current >= 200 && ++ card->sw_caps.sd3_curr_limit & SD_MAX_CURRENT_200) + current_limit = SD_SET_CURRENT_LIMIT_200; + + if (current_limit != SD_SET_CURRENT_NO_CHANGE) { +diff --git a/drivers/mmc/host/dw_mmc-rockchip.c b/drivers/mmc/host/dw_mmc-rockchip.c +index 9becebeeccd1..b2c482da5dd7 100644 +--- a/drivers/mmc/host/dw_mmc-rockchip.c ++++ b/drivers/mmc/host/dw_mmc-rockchip.c +@@ -78,6 +78,70 @@ static void dw_mci_rk3288_set_ios(struct dw_mci *host, struct mmc_ios *ios) + /* Make sure we use phases which we can enumerate with */ + if (!IS_ERR(priv->sample_clk)) + clk_set_phase(priv->sample_clk, priv->default_sample_phase); ++ ++ /* ++ * Set the drive phase offset based on speed mode to achieve hold times. ++ * ++ * NOTE: this is _not_ a value that is dynamically tuned and is also ++ * _not_ a value that will vary from board to board. It is a value ++ * that could vary between different SoC models if they had massively ++ * different output clock delays inside their dw_mmc IP block (delay_o), ++ * but since it's OK to overshoot a little we don't need to do complex ++ * calculations and can pick values that will just work for everyone. ++ * ++ * When picking values we'll stick with picking 0/90/180/270 since ++ * those can be made very accurately on all known Rockchip SoCs. ++ * ++ * Note that these values match values from the DesignWare Databook ++ * tables for the most part except for SDR12 and "ID mode". 
For those ++ * two modes the databook calculations assume a clock in of 50MHz. As ++ * seen above, we always use a clock in rate that is exactly the ++ * card's input clock (times RK3288_CLKGEN_DIV, but that gets divided ++ * back out before the controller sees it). ++ * ++ * From measurement of a single device, it appears that delay_o is ++ * about .5 ns. Since we try to leave a bit of margin, it's expected ++ * that numbers here will be fine even with much larger delay_o ++ * (the 1.4 ns assumed by the DesignWare Databook would result in the ++ * same results, for instance). ++ */ ++ if (!IS_ERR(priv->drv_clk)) { ++ int phase; ++ ++ /* ++ * In almost all cases a 90 degree phase offset will provide ++ * sufficient hold times across all valid input clock rates ++ * assuming delay_o is not absurd for a given SoC. We'll use ++ * that as a default. ++ */ ++ phase = 90; ++ ++ switch (ios->timing) { ++ case MMC_TIMING_MMC_DDR52: ++ /* ++ * Since clock in rate with MMC_DDR52 is doubled when ++ * bus width is 8 we need to double the phase offset ++ * to get the same timings. ++ */ ++ if (ios->bus_width == MMC_BUS_WIDTH_8) ++ phase = 180; ++ break; ++ case MMC_TIMING_UHS_SDR104: ++ case MMC_TIMING_MMC_HS200: ++ /* ++ * In the case of 150 MHz clock (typical max for ++ * Rockchip SoCs), 90 degree offset will add a delay ++ * of 1.67 ns. That will meet min hold time of .8 ns ++ * as long as clock output delay is < .87 ns. On ++ * SoCs measured this seems to be OK, but it doesn't ++ * hurt to give margin here, so we use 180. 
++ */ ++ phase = 180; ++ break; ++ } ++ ++ clk_set_phase(priv->drv_clk, phase); ++ } + } + + #define NUM_PHASES 360 +diff --git a/drivers/mmc/host/moxart-mmc.c b/drivers/mmc/host/moxart-mmc.c +index 79905ce895ad..bbad309679cf 100644 +--- a/drivers/mmc/host/moxart-mmc.c ++++ b/drivers/mmc/host/moxart-mmc.c +@@ -257,7 +257,7 @@ static void moxart_dma_complete(void *param) + static void moxart_transfer_dma(struct mmc_data *data, struct moxart_host *host) + { + u32 len, dir_data, dir_slave; +- unsigned long dma_time; ++ long dma_time; + struct dma_async_tx_descriptor *desc = NULL; + struct dma_chan *dma_chan; + +@@ -397,7 +397,8 @@ static void moxart_prepare_data(struct moxart_host *host) + static void moxart_request(struct mmc_host *mmc, struct mmc_request *mrq) + { + struct moxart_host *host = mmc_priv(mmc); +- unsigned long pio_time, flags; ++ long pio_time; ++ unsigned long flags; + u32 status; + + spin_lock_irqsave(&host->lock, flags); +diff --git a/drivers/mmc/host/sdhci-pxav3.c b/drivers/mmc/host/sdhci-pxav3.c +index f5edf9d3a18a..0535827b02ee 100644 +--- a/drivers/mmc/host/sdhci-pxav3.c ++++ b/drivers/mmc/host/sdhci-pxav3.c +@@ -307,8 +307,30 @@ static void pxav3_set_uhs_signaling(struct sdhci_host *host, unsigned int uhs) + __func__, uhs, ctrl_2); + } + ++static void pxav3_set_power(struct sdhci_host *host, unsigned char mode, ++ unsigned short vdd) ++{ ++ struct mmc_host *mmc = host->mmc; ++ u8 pwr = host->pwr; ++ ++ sdhci_set_power(host, mode, vdd); ++ ++ if (host->pwr == pwr) ++ return; ++ ++ if (host->pwr == 0) ++ vdd = 0; ++ ++ if (!IS_ERR(mmc->supply.vmmc)) { ++ spin_unlock_irq(&host->lock); ++ mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd); ++ spin_lock_irq(&host->lock); ++ } ++} ++ + static const struct sdhci_ops pxav3_sdhci_ops = { + .set_clock = sdhci_set_clock, ++ .set_power = pxav3_set_power, + .platform_send_init_74_clocks = pxav3_gen_init_74_clocks, + .get_max_clock = sdhci_pltfm_clk_get_max_clock, + .set_bus_width = sdhci_set_bus_width, +diff 
--git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c +index bf309a8a66a1..417cfaa85dd9 100644 +--- a/drivers/mmc/host/sdhci.c ++++ b/drivers/mmc/host/sdhci.c +@@ -1284,24 +1284,25 @@ clock_set: + } + EXPORT_SYMBOL_GPL(sdhci_set_clock); + +-static void sdhci_set_power(struct sdhci_host *host, unsigned char mode, +- unsigned short vdd) ++static void sdhci_set_power_reg(struct sdhci_host *host, unsigned char mode, ++ unsigned short vdd) + { + struct mmc_host *mmc = host->mmc; +- u8 pwr = 0; + +- if (!IS_ERR(mmc->supply.vmmc)) { +- spin_unlock_irq(&host->lock); +- mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd); +- spin_lock_irq(&host->lock); ++ spin_unlock_irq(&host->lock); ++ mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd); ++ spin_lock_irq(&host->lock); + +- if (mode != MMC_POWER_OFF) +- sdhci_writeb(host, SDHCI_POWER_ON, SDHCI_POWER_CONTROL); +- else +- sdhci_writeb(host, 0, SDHCI_POWER_CONTROL); ++ if (mode != MMC_POWER_OFF) ++ sdhci_writeb(host, SDHCI_POWER_ON, SDHCI_POWER_CONTROL); ++ else ++ sdhci_writeb(host, 0, SDHCI_POWER_CONTROL); ++} + +- return; +- } ++void sdhci_set_power(struct sdhci_host *host, unsigned char mode, ++ unsigned short vdd) ++{ ++ u8 pwr = 0; + + if (mode != MMC_POWER_OFF) { + switch (1 << vdd) { +@@ -1332,7 +1333,6 @@ static void sdhci_set_power(struct sdhci_host *host, unsigned char mode, + sdhci_writeb(host, 0, SDHCI_POWER_CONTROL); + if (host->quirks2 & SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON) + sdhci_runtime_pm_bus_off(host); +- vdd = 0; + } else { + /* + * Spec says that we should clear the power reg before setting +@@ -1364,6 +1364,20 @@ static void sdhci_set_power(struct sdhci_host *host, unsigned char mode, + mdelay(10); + } + } ++EXPORT_SYMBOL_GPL(sdhci_set_power); ++ ++static void __sdhci_set_power(struct sdhci_host *host, unsigned char mode, ++ unsigned short vdd) ++{ ++ struct mmc_host *mmc = host->mmc; ++ ++ if (host->ops->set_power) ++ host->ops->set_power(host, mode, vdd); ++ else if (!IS_ERR(mmc->supply.vmmc)) ++ 
sdhci_set_power_reg(host, mode, vdd); ++ else ++ sdhci_set_power(host, mode, vdd); ++} + + /*****************************************************************************\ + * * +@@ -1512,7 +1526,7 @@ static void sdhci_do_set_ios(struct sdhci_host *host, struct mmc_ios *ios) + } + } + +- sdhci_set_power(host, ios->power_mode, ios->vdd); ++ __sdhci_set_power(host, ios->power_mode, ios->vdd); + + if (host->ops->platform_send_init_74_clocks) + host->ops->platform_send_init_74_clocks(host, ios->power_mode); +diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h +index 0115e9907bf8..033d72b5bbd5 100644 +--- a/drivers/mmc/host/sdhci.h ++++ b/drivers/mmc/host/sdhci.h +@@ -529,6 +529,8 @@ struct sdhci_ops { + #endif + + void (*set_clock)(struct sdhci_host *host, unsigned int clock); ++ void (*set_power)(struct sdhci_host *host, unsigned char mode, ++ unsigned short vdd); + + int (*enable_dma)(struct sdhci_host *host); + unsigned int (*get_max_clock)(struct sdhci_host *host); +@@ -660,6 +662,8 @@ static inline bool sdhci_sdio_irq_enabled(struct sdhci_host *host) + } + + void sdhci_set_clock(struct sdhci_host *host, unsigned int clock); ++void sdhci_set_power(struct sdhci_host *host, unsigned char mode, ++ unsigned short vdd); + void sdhci_set_bus_width(struct sdhci_host *host, int width); + void sdhci_reset(struct sdhci_host *host, u8 mask); + void sdhci_set_uhs_signaling(struct sdhci_host *host, unsigned timing); +diff --git a/drivers/mtd/nand/denali.c b/drivers/mtd/nand/denali.c +index 67eb2be0db87..9a5035cac129 100644 +--- a/drivers/mtd/nand/denali.c ++++ b/drivers/mtd/nand/denali.c +@@ -1622,9 +1622,16 @@ EXPORT_SYMBOL(denali_init); + /* driver exit point */ + void denali_remove(struct denali_nand_info *denali) + { ++ /* ++ * Pre-compute DMA buffer size to avoid any problems in case ++ * nand_release() ever changes in a way that mtd->writesize and ++ * mtd->oobsize are not reliable after this call. 
++ */ ++ int bufsize = denali->mtd.writesize + denali->mtd.oobsize; ++ ++ nand_release(&denali->mtd); + denali_irq_cleanup(denali->irq, denali); +- dma_unmap_single(denali->dev, denali->buf.dma_buf, +- denali->mtd.writesize + denali->mtd.oobsize, ++ dma_unmap_single(denali->dev, denali->buf.dma_buf, bufsize, + DMA_BIDIRECTIONAL); + } + EXPORT_SYMBOL(denali_remove); +diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c +index 399c627b15cc..22ebdf4d8cc4 100644 +--- a/drivers/net/bonding/bond_3ad.c ++++ b/drivers/net/bonding/bond_3ad.c +@@ -100,11 +100,14 @@ enum ad_link_speed_type { + #define MAC_ADDRESS_EQUAL(A, B) \ + ether_addr_equal_64bits((const u8 *)A, (const u8 *)B) + +-static struct mac_addr null_mac_addr = { { 0, 0, 0, 0, 0, 0 } }; ++static const u8 null_mac_addr[ETH_ALEN + 2] __long_aligned = { ++ 0, 0, 0, 0, 0, 0 ++}; + static u16 ad_ticks_per_sec; + static const int ad_delta_in_ticks = (AD_TIMER_INTERVAL * HZ) / 1000; + +-static const u8 lacpdu_mcast_addr[ETH_ALEN] = MULTICAST_LACPDU_ADDR; ++static const u8 lacpdu_mcast_addr[ETH_ALEN + 2] __long_aligned = ++ MULTICAST_LACPDU_ADDR; + + /* ================= main 802.3ad protocol functions ================== */ + static int ad_lacpdu_send(struct port *port); +@@ -1701,7 +1704,7 @@ static void ad_clear_agg(struct aggregator *aggregator) + aggregator->is_individual = false; + aggregator->actor_admin_aggregator_key = 0; + aggregator->actor_oper_aggregator_key = 0; +- aggregator->partner_system = null_mac_addr; ++ eth_zero_addr(aggregator->partner_system.mac_addr_value); + aggregator->partner_system_priority = 0; + aggregator->partner_oper_aggregator_key = 0; + aggregator->receive_state = 0; +@@ -1723,7 +1726,7 @@ static void ad_initialize_agg(struct aggregator *aggregator) + if (aggregator) { + ad_clear_agg(aggregator); + +- aggregator->aggregator_mac_address = null_mac_addr; ++ eth_zero_addr(aggregator->aggregator_mac_address.mac_addr_value); + aggregator->aggregator_identifier = 0; + 
aggregator->slave = NULL; + } +diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c +index 41bd9186d383..295d86ba63d3 100644 +--- a/drivers/net/bonding/bond_alb.c ++++ b/drivers/net/bonding/bond_alb.c +@@ -42,13 +42,10 @@ + + + +-#ifndef __long_aligned +-#define __long_aligned __attribute__((aligned((sizeof(long))))) +-#endif +-static const u8 mac_bcast[ETH_ALEN] __long_aligned = { ++static const u8 mac_bcast[ETH_ALEN + 2] __long_aligned = { + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff + }; +-static const u8 mac_v6_allmcast[ETH_ALEN] __long_aligned = { ++static const u8 mac_v6_allmcast[ETH_ALEN + 2] __long_aligned = { + 0x33, 0x33, 0x00, 0x00, 0x00, 0x01 + }; + static const int alb_delta_in_ticks = HZ / ALB_TIMER_TICKS_PER_SEC; +diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c +index b8df0f5e8c25..3f320f470345 100644 +--- a/drivers/net/bonding/bond_netlink.c ++++ b/drivers/net/bonding/bond_netlink.c +@@ -628,8 +628,7 @@ static int bond_fill_info(struct sk_buff *skb, + goto nla_put_failure; + + if (nla_put(skb, IFLA_BOND_AD_ACTOR_SYSTEM, +- sizeof(bond->params.ad_actor_system), +- &bond->params.ad_actor_system)) ++ ETH_ALEN, &bond->params.ad_actor_system)) + goto nla_put_failure; + } + if (!bond_3ad_get_active_agg_info(bond, &info)) { +diff --git a/drivers/net/dsa/mv88e6xxx.c b/drivers/net/dsa/mv88e6xxx.c +index e2414f2d7ba9..68ef738a7689 100644 +--- a/drivers/net/dsa/mv88e6xxx.c ++++ b/drivers/net/dsa/mv88e6xxx.c +@@ -2064,9 +2064,9 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port) + * the other bits clear. 
+ */ + reg = 1 << port; +- /* Disable learning for DSA and CPU ports */ +- if (dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port)) +- reg = PORT_ASSOC_VECTOR_LOCKED_PORT; ++ /* Disable learning for CPU port */ ++ if (dsa_is_cpu_port(ds, port)) ++ reg = 0; + + ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_ASSOC_VECTOR, reg); + if (ret) +@@ -2150,7 +2150,8 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port) + * database, and allow every port to egress frames on all other ports. + */ + reg = BIT(ps->num_ports) - 1; /* all ports */ +- ret = _mv88e6xxx_port_vlan_map_set(ds, port, reg & ~port); ++ reg &= ~BIT(port); /* except itself */ ++ ret = _mv88e6xxx_port_vlan_map_set(ds, port, reg); + if (ret) + goto abort; + +diff --git a/drivers/net/ethernet/agere/et131x.c b/drivers/net/ethernet/agere/et131x.c +index e0f3d197e7f2..8ff10fd70b02 100644 +--- a/drivers/net/ethernet/agere/et131x.c ++++ b/drivers/net/ethernet/agere/et131x.c +@@ -3854,7 +3854,7 @@ static void et131x_tx_timeout(struct net_device *netdev) + unsigned long flags; + + /* If the device is closed, ignore the timeout */ +- if (~(adapter->flags & FMP_ADAPTER_INTERRUPT_IN_USE)) ++ if (!(adapter->flags & FMP_ADAPTER_INTERRUPT_IN_USE)) + return; + + /* Any nonrecoverable hardware error? 
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c +index 3cb99ce7325b..94f06c35ad9c 100644 +--- a/drivers/net/ethernet/broadcom/bcmsysport.c ++++ b/drivers/net/ethernet/broadcom/bcmsysport.c +@@ -396,7 +396,7 @@ static void bcm_sysport_get_stats(struct net_device *dev, + else + p = (char *)priv; + p += s->stat_offset; +- data[i] = *(u32 *)p; ++ data[i] = *(unsigned long *)p; + } + } + +@@ -526,7 +526,8 @@ static struct sk_buff *bcm_sysport_rx_refill(struct bcm_sysport_priv *priv, + dma_addr_t mapping; + + /* Allocate a new SKB for a new packet */ +- skb = netdev_alloc_skb(priv->netdev, RX_BUF_LENGTH); ++ skb = __netdev_alloc_skb(priv->netdev, RX_BUF_LENGTH, ++ GFP_ATOMIC | __GFP_NOWARN); + if (!skb) { + priv->mib.alloc_rx_buff_failed++; + netif_err(priv, rx_err, ndev, "SKB alloc failed\n"); +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +index d91953eabfeb..a3949c1a0c23 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +@@ -4250,6 +4250,10 @@ static void bnxt_del_napi(struct bnxt *bp) + napi_hash_del(&bnapi->napi); + netif_napi_del(&bnapi->napi); + } ++ /* We called napi_hash_del() before netif_napi_del(), we need ++ * to respect an RCU grace period before freeing napi structures. 
++ */ ++ synchronize_net(); + } + + static void bnxt_init_napi(struct bnxt *bp) +@@ -4306,9 +4310,7 @@ static void bnxt_tx_disable(struct bnxt *bp) + bnapi = bp->bnapi[i]; + txr = &bnapi->tx_ring; + txq = netdev_get_tx_queue(bp->dev, i); +- __netif_tx_lock(txq, smp_processor_id()); + txr->dev_state = BNXT_DEV_STATE_CLOSING; +- __netif_tx_unlock(txq); + } + } + /* Stop all TX queues */ +diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c +index 34fae5576b60..bae8df951780 100644 +--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c ++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c +@@ -927,7 +927,11 @@ static void bcmgenet_get_ethtool_stats(struct net_device *dev, + else + p = (char *)priv; + p += s->stat_offset; +- data[i] = *(u32 *)p; ++ if (sizeof(unsigned long) != sizeof(u32) && ++ s->stat_sizeof == sizeof(unsigned long)) ++ data[i] = *(unsigned long *)p; ++ else ++ data[i] = *(u32 *)p; + } + } + +@@ -1346,7 +1350,7 @@ static int bcmgenet_xmit_single(struct net_device *dev, + + tx_cb_ptr->skb = skb; + +- skb_len = skb_headlen(skb) < ETH_ZLEN ? 
ETH_ZLEN : skb_headlen(skb); ++ skb_len = skb_headlen(skb); + + mapping = dma_map_single(kdev, skb->data, skb_len, DMA_TO_DEVICE); + ret = dma_mapping_error(kdev, mapping); +@@ -1575,7 +1579,8 @@ static struct sk_buff *bcmgenet_rx_refill(struct bcmgenet_priv *priv, + dma_addr_t mapping; + + /* Allocate a new Rx skb */ +- skb = netdev_alloc_skb(priv->dev, priv->rx_buf_len + SKB_ALIGNMENT); ++ skb = __netdev_alloc_skb(priv->dev, priv->rx_buf_len + SKB_ALIGNMENT, ++ GFP_ATOMIC | __GFP_NOWARN); + if (!skb) { + priv->mib.alloc_rx_buff_failed++; + netif_err(priv, rx_err, priv->dev, +diff --git a/drivers/net/ethernet/brocade/bna/bnad_ethtool.c b/drivers/net/ethernet/brocade/bna/bnad_ethtool.c +index 18672ad773fb..856b7abe4b8a 100644 +--- a/drivers/net/ethernet/brocade/bna/bnad_ethtool.c ++++ b/drivers/net/ethernet/brocade/bna/bnad_ethtool.c +@@ -31,7 +31,7 @@ + #define BNAD_NUM_TXF_COUNTERS 12 + #define BNAD_NUM_RXF_COUNTERS 10 + #define BNAD_NUM_CQ_COUNTERS (3 + 5) +-#define BNAD_NUM_RXQ_COUNTERS 6 ++#define BNAD_NUM_RXQ_COUNTERS 7 + #define BNAD_NUM_TXQ_COUNTERS 5 + + #define BNAD_ETHTOOL_STATS_NUM \ +@@ -658,6 +658,8 @@ bnad_get_strings(struct net_device *netdev, u32 stringset, u8 *string) + string += ETH_GSTRING_LEN; + sprintf(string, "rxq%d_allocbuf_failed", q_num); + string += ETH_GSTRING_LEN; ++ sprintf(string, "rxq%d_mapbuf_failed", q_num); ++ string += ETH_GSTRING_LEN; + sprintf(string, "rxq%d_producer_index", q_num); + string += ETH_GSTRING_LEN; + sprintf(string, "rxq%d_consumer_index", q_num); +@@ -678,6 +680,9 @@ bnad_get_strings(struct net_device *netdev, u32 stringset, u8 *string) + sprintf(string, "rxq%d_allocbuf_failed", + q_num); + string += ETH_GSTRING_LEN; ++ sprintf(string, "rxq%d_mapbuf_failed", ++ q_num); ++ string += ETH_GSTRING_LEN; + sprintf(string, "rxq%d_producer_index", + q_num); + string += ETH_GSTRING_LEN; +diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c +index 75bdb6aad352..78803e7de360 100644 +--- 
a/drivers/net/ethernet/cadence/macb.c ++++ b/drivers/net/ethernet/cadence/macb.c +@@ -1104,7 +1104,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id) + macb_writel(bp, NCR, ctrl | MACB_BIT(RE)); + + if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) +- macb_writel(bp, ISR, MACB_BIT(RXUBR)); ++ queue_writel(queue, ISR, MACB_BIT(RXUBR)); + } + + if (status & MACB_BIT(ISR_ROVR)) { +@@ -2904,7 +2904,7 @@ static int macb_probe(struct platform_device *pdev) + dev->irq = platform_get_irq(pdev, 0); + if (dev->irq < 0) { + err = dev->irq; +- goto err_disable_clocks; ++ goto err_out_free_netdev; + } + + mac = of_get_mac_address(np); +diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c +index b7b93e7a643d..d579f4770ee3 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c ++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c +@@ -1165,7 +1165,7 @@ out_free: dev_kfree_skb_any(skb); + + /* Discard the packet if the length is greater than mtu */ + max_pkt_len = ETH_HLEN + dev->mtu; +- if (skb_vlan_tag_present(skb)) ++ if (skb_vlan_tagged(skb)) + max_pkt_len += VLAN_HLEN; + if (!skb_shinfo(skb)->gso_size && (unlikely(skb->len > max_pkt_len))) + goto out_free; +diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c +index ec8ffd7eae33..735bcdeaa7de 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c ++++ b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c +@@ -1188,7 +1188,7 @@ int t4vf_eth_xmit(struct sk_buff *skb, struct net_device *dev) + + /* Discard the packet if the length is greater than mtu */ + max_pkt_len = ETH_HLEN + dev->mtu; +- if (skb_vlan_tag_present(skb)) ++ if (skb_vlan_tagged(skb)) + max_pkt_len += VLAN_HLEN; + if (!skb_shinfo(skb)->gso_size && (unlikely(skb->len > max_pkt_len))) + goto out_free; +diff --git a/drivers/net/ethernet/cirrus/ep93xx_eth.c b/drivers/net/ethernet/cirrus/ep93xx_eth.c +index 796ee362ad70..24f69034f52c 100644 +--- 
a/drivers/net/ethernet/cirrus/ep93xx_eth.c ++++ b/drivers/net/ethernet/cirrus/ep93xx_eth.c +@@ -468,6 +468,9 @@ static void ep93xx_free_buffers(struct ep93xx_priv *ep) + struct device *dev = ep->dev->dev.parent; + int i; + ++ if (!ep->descs) ++ return; ++ + for (i = 0; i < RX_QUEUE_ENTRIES; i++) { + dma_addr_t d; + +@@ -490,6 +493,7 @@ static void ep93xx_free_buffers(struct ep93xx_priv *ep) + + dma_free_coherent(dev, sizeof(struct ep93xx_descs), ep->descs, + ep->descs_dma_addr); ++ ep->descs = NULL; + } + + static int ep93xx_alloc_buffers(struct ep93xx_priv *ep) +diff --git a/drivers/net/ethernet/emulex/benet/be.h b/drivers/net/ethernet/emulex/benet/be.h +index 6ee78c203eca..e5fb5cf5401b 100644 +--- a/drivers/net/ethernet/emulex/benet/be.h ++++ b/drivers/net/ethernet/emulex/benet/be.h +@@ -531,6 +531,7 @@ struct be_adapter { + + struct delayed_work be_err_detection_work; + u8 err_flags; ++ bool pcicfg_mapped; /* pcicfg obtained via pci_iomap() */ + u32 flags; + u32 cmd_privileges; + /* Ethtool knobs and info */ +diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c +index 7524a33b7032..7cd39324106d 100644 +--- a/drivers/net/ethernet/emulex/benet/be_main.c ++++ b/drivers/net/ethernet/emulex/benet/be_main.c +@@ -5526,6 +5526,8 @@ static void be_unmap_pci_bars(struct be_adapter *adapter) + pci_iounmap(adapter->pdev, adapter->csr); + if (adapter->db) + pci_iounmap(adapter->pdev, adapter->db); ++ if (adapter->pcicfg && adapter->pcicfg_mapped) ++ pci_iounmap(adapter->pdev, adapter->pcicfg); + } + + static int db_bar(struct be_adapter *adapter) +@@ -5577,8 +5579,10 @@ static int be_map_pci_bars(struct be_adapter *adapter) + if (!addr) + goto pci_map_err; + adapter->pcicfg = addr; ++ adapter->pcicfg_mapped = true; + } else { + adapter->pcicfg = adapter->db + SRIOV_VF_PCICFG_OFFSET; ++ adapter->pcicfg_mapped = false; + } + } + +diff --git a/drivers/net/ethernet/ethoc.c b/drivers/net/ethernet/ethoc.c +index 
52f2230062e7..4d80d9f85c5d 100644 +--- a/drivers/net/ethernet/ethoc.c ++++ b/drivers/net/ethernet/ethoc.c +@@ -1088,7 +1088,7 @@ static int ethoc_probe(struct platform_device *pdev) + if (!priv->iobase) { + dev_err(&pdev->dev, "cannot remap I/O memory space\n"); + ret = -ENXIO; +- goto error; ++ goto free; + } + + if (netdev->mem_end) { +@@ -1097,7 +1097,7 @@ static int ethoc_probe(struct platform_device *pdev) + if (!priv->membase) { + dev_err(&pdev->dev, "cannot remap memory space\n"); + ret = -ENXIO; +- goto error; ++ goto free; + } + } else { + /* Allocate buffer memory */ +@@ -1108,7 +1108,7 @@ static int ethoc_probe(struct platform_device *pdev) + dev_err(&pdev->dev, "cannot allocate %dB buffer\n", + buffer_size); + ret = -ENOMEM; +- goto error; ++ goto free; + } + netdev->mem_end = netdev->mem_start + buffer_size; + priv->dma_alloc = buffer_size; +@@ -1122,7 +1122,7 @@ static int ethoc_probe(struct platform_device *pdev) + 128, (netdev->mem_end - netdev->mem_start + 1) / ETHOC_BUFSIZ); + if (num_bd < 4) { + ret = -ENODEV; +- goto error; ++ goto free; + } + priv->num_bd = num_bd; + /* num_tx must be a power of two */ +@@ -1135,7 +1135,7 @@ static int ethoc_probe(struct platform_device *pdev) + priv->vma = devm_kzalloc(&pdev->dev, num_bd*sizeof(void *), GFP_KERNEL); + if (!priv->vma) { + ret = -ENOMEM; +- goto error; ++ goto free; + } + + /* Allow the platform setup code to pass in a MAC address. 
*/ +diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.c b/drivers/net/ethernet/hisilicon/hns/hnae.c +index 3ce41efe8a94..4dd57e6bb3f9 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hnae.c ++++ b/drivers/net/ethernet/hisilicon/hns/hnae.c +@@ -331,8 +331,10 @@ struct hnae_handle *hnae_get_handle(struct device *owner_dev, + return ERR_PTR(-ENODEV); + + handle = dev->ops->get_handle(dev, port_id); +- if (IS_ERR(handle)) ++ if (IS_ERR(handle)) { ++ put_device(&dev->cls_dev); + return handle; ++ } + + handle->dev = dev; + handle->owner_dev = owner_dev; +@@ -355,6 +357,8 @@ out_when_init_queue: + for (j = i - 1; j >= 0; j--) + hnae_fini_queue(handle->qs[j]); + ++ put_device(&dev->cls_dev); ++ + return ERR_PTR(-ENOMEM); + } + EXPORT_SYMBOL(hnae_get_handle); +@@ -376,6 +380,8 @@ void hnae_put_handle(struct hnae_handle *h) + dev->ops->put_handle(h); + + module_put(dev->owner); ++ ++ put_device(&dev->cls_dev); + } + EXPORT_SYMBOL(hnae_put_handle); + +diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c +index fdbba588c6db..efe84ca20da7 100644 +--- a/drivers/net/ethernet/ibm/ehea/ehea_main.c ++++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c +@@ -1169,16 +1169,15 @@ static void ehea_parse_eqe(struct ehea_adapter *adapter, u64 eqe) + ec = EHEA_BMASK_GET(NEQE_EVENT_CODE, eqe); + portnum = EHEA_BMASK_GET(NEQE_PORTNUM, eqe); + port = ehea_get_port(adapter, portnum); ++ if (!port) { ++ netdev_err(NULL, "unknown portnum %x\n", portnum); ++ return; ++ } + dev = port->netdev; + + switch (ec) { + case EHEA_EC_PORTSTATE_CHG: /* port state change */ + +- if (!port) { +- netdev_err(dev, "unknown portnum %x\n", portnum); +- break; +- } +- + if (EHEA_BMASK_GET(NEQE_PORT_UP, eqe)) { + if (!netif_carrier_ok(dev)) { + ret = ehea_sense_port_attr(port); +diff --git a/drivers/net/ethernet/intel/i40e/i40e_hmc.c b/drivers/net/ethernet/intel/i40e/i40e_hmc.c +index 5ebe12d56ebf..a7c7b1d9b7c8 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_hmc.c ++++ 
b/drivers/net/ethernet/intel/i40e/i40e_hmc.c +@@ -49,7 +49,7 @@ i40e_status i40e_add_sd_table_entry(struct i40e_hw *hw, + struct i40e_hmc_sd_entry *sd_entry; + bool dma_mem_alloc_done = false; + struct i40e_dma_mem mem; +- i40e_status ret_code; ++ i40e_status ret_code = I40E_SUCCESS; + u64 alloc_len; + + if (NULL == hmc_info->sd_table.sd_entry) { +diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c +index 82f080a5ed5c..7fe6c2cf1c62 100644 +--- a/drivers/net/ethernet/marvell/mv643xx_eth.c ++++ b/drivers/net/ethernet/marvell/mv643xx_eth.c +@@ -762,10 +762,10 @@ txq_put_data_tso(struct net_device *dev, struct tx_queue *txq, + + if (length <= 8 && (uintptr_t)data & 0x7) { + /* Copy unaligned small data fragment to TSO header data area */ +- memcpy(txq->tso_hdrs + txq->tx_curr_desc * TSO_HEADER_SIZE, ++ memcpy(txq->tso_hdrs + tx_index * TSO_HEADER_SIZE, + data, length); + desc->buf_ptr = txq->tso_hdrs_dma +- + txq->tx_curr_desc * TSO_HEADER_SIZE; ++ + tx_index * TSO_HEADER_SIZE; + } else { + /* Alignment is okay, map buffer and hand off to hardware */ + txq->tx_desc_mapping[tx_index] = DESC_DMA_MAP_SINGLE; +diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c +index 1c300259d70a..575da945f151 100644 +--- a/drivers/net/ethernet/marvell/mvneta.c ++++ b/drivers/net/ethernet/marvell/mvneta.c +@@ -3058,26 +3058,25 @@ static void mvneta_ethtool_update_stats(struct mvneta_port *pp) + const struct mvneta_statistic *s; + void __iomem *base = pp->base; + u32 high, low, val; ++ u64 val64; + int i; + + for (i = 0, s = mvneta_statistics; + s < mvneta_statistics + ARRAY_SIZE(mvneta_statistics); + s++, i++) { +- val = 0; +- + switch (s->type) { + case T_REG_32: + val = readl_relaxed(base + s->offset); ++ pp->ethtool_stats[i] += val; + break; + case T_REG_64: + /* Docs say to read low 32-bit then high */ + low = readl_relaxed(base + s->offset); + high = readl_relaxed(base + s->offset + 4); +- val = 
(u64)high << 32 | low; ++ val64 = (u64)high << 32 | low; ++ pp->ethtool_stats[i] += val64; + break; + } +- +- pp->ethtool_stats[i] += val; + } + } + +@@ -3406,7 +3405,7 @@ static int mvneta_probe(struct platform_device *pdev) + dev->features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO; + dev->hw_features |= dev->features; + dev->vlan_features |= dev->features; +- dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE; ++ dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; + dev->gso_max_segs = MVNETA_MAX_TSO_SEGS; + + err = register_netdev(dev); +diff --git a/drivers/net/ethernet/marvell/mvpp2.c b/drivers/net/ethernet/marvell/mvpp2.c +index 03f0d20aa08b..b508345aeca6 100644 +--- a/drivers/net/ethernet/marvell/mvpp2.c ++++ b/drivers/net/ethernet/marvell/mvpp2.c +@@ -3305,7 +3305,7 @@ static void mvpp2_cls_init(struct mvpp2 *priv) + mvpp2_write(priv, MVPP2_CLS_MODE_REG, MVPP2_CLS_MODE_ACTIVE_MASK); + + /* Clear classifier flow table */ +- memset(&fe.data, 0, MVPP2_CLS_FLOWS_TBL_DATA_WORDS); ++ memset(&fe.data, 0, sizeof(fe.data)); + for (index = 0; index < MVPP2_CLS_FLOWS_TBL_SIZE; index++) { + fe.index = index; + mvpp2_cls_flow_write(priv, &fe); +diff --git a/drivers/net/ethernet/mellanox/mlx4/catas.c b/drivers/net/ethernet/mellanox/mlx4/catas.c +index e203d0c4e5a3..53daa6ca5d83 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/catas.c ++++ b/drivers/net/ethernet/mellanox/mlx4/catas.c +@@ -182,10 +182,17 @@ void mlx4_enter_error_state(struct mlx4_dev_persistent *persist) + err = mlx4_reset_slave(dev); + else + err = mlx4_reset_master(dev); +- BUG_ON(err != 0); + ++ if (!err) { ++ mlx4_err(dev, "device was reset successfully\n"); ++ } else { ++ /* EEH could have disabled the PCI channel during reset. That's ++ * recoverable and the PCI error flow will handle it. 
++ */ ++ if (!pci_channel_offline(dev->persist->pdev)) ++ BUG_ON(1); ++ } + dev->persist->state |= MLX4_DEVICE_STATE_INTERNAL_ERROR; +- mlx4_err(dev, "device was reset successfully\n"); + mutex_unlock(&persist->device_state_mutex); + + /* At that step HW was already reset, now notify clients */ +diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c +index 9e104dcfa9dd..dc1cb6fc5b4e 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/cmd.c ++++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c +@@ -2451,6 +2451,7 @@ err_comm_admin: + kfree(priv->mfunc.master.slave_state); + err_comm: + iounmap(priv->mfunc.comm); ++ priv->mfunc.comm = NULL; + err_vhcr: + dma_free_coherent(&dev->persist->pdev->dev, PAGE_SIZE, + priv->mfunc.vhcr, +@@ -2518,6 +2519,13 @@ void mlx4_report_internal_err_comm_event(struct mlx4_dev *dev) + int slave; + u32 slave_read; + ++ /* If the comm channel has not yet been initialized, ++ * skip reporting the internal error event to all ++ * the communication channels. ++ */ ++ if (!priv->mfunc.comm) ++ return; ++ + /* Report an internal error event to all + * communication channels. 
+ */ +@@ -2552,6 +2560,7 @@ void mlx4_multi_func_cleanup(struct mlx4_dev *dev) + } + + iounmap(priv->mfunc.comm); ++ priv->mfunc.comm = NULL; + } + + void mlx4_cmd_cleanup(struct mlx4_dev *dev, int cleanup_mask) +diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +index 232191417b93..7d61a5de9d5a 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c ++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +@@ -424,14 +424,18 @@ static int mlx4_en_vlan_rx_add_vid(struct net_device *dev, + mutex_lock(&mdev->state_lock); + if (mdev->device_up && priv->port_up) { + err = mlx4_SET_VLAN_FLTR(mdev->dev, priv); +- if (err) ++ if (err) { + en_err(priv, "Failed configuring VLAN filter\n"); ++ goto out; ++ } + } +- if (mlx4_register_vlan(mdev->dev, priv->port, vid, &idx)) +- en_dbg(HW, priv, "failed adding vlan %d\n", vid); +- mutex_unlock(&mdev->state_lock); ++ err = mlx4_register_vlan(mdev->dev, priv->port, vid, &idx); ++ if (err) ++ en_dbg(HW, priv, "Failed adding vlan %d\n", vid); + +- return 0; ++out: ++ mutex_unlock(&mdev->state_lock); ++ return err; + } + + static int mlx4_en_vlan_rx_kill_vid(struct net_device *dev, +@@ -439,7 +443,7 @@ static int mlx4_en_vlan_rx_kill_vid(struct net_device *dev, + { + struct mlx4_en_priv *priv = netdev_priv(dev); + struct mlx4_en_dev *mdev = priv->mdev; +- int err; ++ int err = 0; + + en_dbg(HW, priv, "Killing VID:%d\n", vid); + +@@ -456,7 +460,7 @@ static int mlx4_en_vlan_rx_kill_vid(struct net_device *dev, + } + mutex_unlock(&mdev->state_lock); + +- return 0; ++ return err; + } + + static void mlx4_en_u64_to_mac(unsigned char dst_mac[ETH_ALEN + 2], u64 src_mac) +@@ -1716,6 +1720,16 @@ int mlx4_en_start_port(struct net_device *dev) + vxlan_get_rx_port(dev); + #endif + priv->port_up = true; ++ ++ /* Process all completions if exist to prevent ++ * the queues freezing if they are full ++ */ ++ for (i = 0; i < priv->rx_ring_num; i++) { ++ local_bh_disable(); ++ 
napi_schedule(&priv->rx_cq[i]->napi); ++ local_bh_enable(); ++ } ++ + netif_tx_start_all_queues(dev); + netif_device_attach(dev); + +diff --git a/drivers/net/ethernet/mellanox/mlx4/en_port.c b/drivers/net/ethernet/mellanox/mlx4/en_port.c +index 3904b5fc0b7c..96fc35a9bb05 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/en_port.c ++++ b/drivers/net/ethernet/mellanox/mlx4/en_port.c +@@ -164,7 +164,7 @@ int mlx4_en_DUMP_ETH_STATS(struct mlx4_en_dev *mdev, u8 port, u8 reset) + return PTR_ERR(mailbox); + err = mlx4_cmd_box(mdev->dev, 0, mailbox->dma, in_mod, 0, + MLX4_CMD_DUMP_ETH_STATS, MLX4_CMD_TIME_CLASS_B, +- MLX4_CMD_WRAPPED); ++ MLX4_CMD_NATIVE); + if (err) + goto out; + +@@ -325,7 +325,7 @@ int mlx4_en_DUMP_ETH_STATS(struct mlx4_en_dev *mdev, u8 port, u8 reset) + err = mlx4_cmd_box(mdev->dev, 0, mailbox->dma, + in_mod | MLX4_DUMP_ETH_STATS_FLOW_CONTROL, + 0, MLX4_CMD_DUMP_ETH_STATS, +- MLX4_CMD_TIME_CLASS_B, MLX4_CMD_WRAPPED); ++ MLX4_CMD_TIME_CLASS_B, MLX4_CMD_NATIVE); + if (err) + goto out; + } +diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c +index 033f99d2f15c..5ac6e62f7dcc 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/fw.c ++++ b/drivers/net/ethernet/mellanox/mlx4/fw.c +@@ -610,8 +610,7 @@ int mlx4_QUERY_FUNC_CAP(struct mlx4_dev *dev, u8 gen_or_port, + MLX4_GET(func_cap->phys_port_id, outbox, + QUERY_FUNC_CAP_PHYS_PORT_ID); + +- MLX4_GET(field, outbox, QUERY_FUNC_CAP_FLAGS0_OFFSET); +- func_cap->flags |= (field & QUERY_FUNC_CAP_PHV_BIT); ++ MLX4_GET(func_cap->flags0, outbox, QUERY_FUNC_CAP_FLAGS0_OFFSET); + + /* All other resources are allocated by the master, but we still report + * 'num' and 'reserved' capabilities as follows: +@@ -2840,7 +2839,7 @@ int get_phv_bit(struct mlx4_dev *dev, u8 port, int *phv) + memset(&func_cap, 0, sizeof(func_cap)); + err = mlx4_QUERY_FUNC_CAP(dev, port, &func_cap); + if (!err) +- *phv = func_cap.flags & QUERY_FUNC_CAP_PHV_BIT; ++ *phv = func_cap.flags0 & 
QUERY_FUNC_CAP_PHV_BIT; + return err; + } + EXPORT_SYMBOL(get_phv_bit); +diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.h b/drivers/net/ethernet/mellanox/mlx4/fw.h +index 08de5555c2f4..074631be342b 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/fw.h ++++ b/drivers/net/ethernet/mellanox/mlx4/fw.h +@@ -150,7 +150,7 @@ struct mlx4_func_cap { + u32 qp1_proxy_qpn; + u32 reserved_lkey; + u8 physical_port; +- u8 port_flags; ++ u8 flags0; + u8 flags1; + u64 phys_port_id; + u32 extra_flags; +diff --git a/drivers/net/ethernet/mellanox/mlx4/intf.c b/drivers/net/ethernet/mellanox/mlx4/intf.c +index 1a134e08f010..cfc2a7632201 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/intf.c ++++ b/drivers/net/ethernet/mellanox/mlx4/intf.c +@@ -217,6 +217,9 @@ void mlx4_unregister_device(struct mlx4_dev *dev) + struct mlx4_priv *priv = mlx4_priv(dev); + struct mlx4_interface *intf; + ++ if (!(dev->persist->interface_state & MLX4_INTERFACE_STATE_UP)) ++ return; ++ + mlx4_stop_catas_poll(dev); + if (dev->persist->interface_state & MLX4_INTERFACE_STATE_DELETION && + mlx4_is_slave(dev)) { +diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c +index a7d3144c2388..f8ac0e69d14b 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/main.c ++++ b/drivers/net/ethernet/mellanox/mlx4/main.c +@@ -3854,45 +3854,53 @@ static pci_ers_result_t mlx4_pci_slot_reset(struct pci_dev *pdev) + { + struct mlx4_dev_persistent *persist = pci_get_drvdata(pdev); + struct mlx4_dev *dev = persist->dev; +- struct mlx4_priv *priv = mlx4_priv(dev); +- int ret; +- int nvfs[MLX4_MAX_PORTS + 1] = {0, 0, 0}; +- int total_vfs; ++ int err; + + mlx4_err(dev, "mlx4_pci_slot_reset was called\n"); +- ret = pci_enable_device(pdev); +- if (ret) { +- mlx4_err(dev, "Can not re-enable device, ret=%d\n", ret); ++ err = pci_enable_device(pdev); ++ if (err) { ++ mlx4_err(dev, "Can not re-enable device, err=%d\n", err); + return PCI_ERS_RESULT_DISCONNECT; + } + + pci_set_master(pdev); + 
pci_restore_state(pdev); + pci_save_state(pdev); ++ return PCI_ERS_RESULT_RECOVERED; ++} ++ ++static void mlx4_pci_resume(struct pci_dev *pdev) ++{ ++ struct mlx4_dev_persistent *persist = pci_get_drvdata(pdev); ++ struct mlx4_dev *dev = persist->dev; ++ struct mlx4_priv *priv = mlx4_priv(dev); ++ int nvfs[MLX4_MAX_PORTS + 1] = {0, 0, 0}; ++ int total_vfs; ++ int err; + ++ mlx4_err(dev, "%s was called\n", __func__); + total_vfs = dev->persist->num_vfs; + memcpy(nvfs, dev->persist->nvfs, sizeof(dev->persist->nvfs)); + + mutex_lock(&persist->interface_state_mutex); + if (!(persist->interface_state & MLX4_INTERFACE_STATE_UP)) { +- ret = mlx4_load_one(pdev, priv->pci_dev_data, total_vfs, nvfs, ++ err = mlx4_load_one(pdev, priv->pci_dev_data, total_vfs, nvfs, + priv, 1); +- if (ret) { +- mlx4_err(dev, "%s: mlx4_load_one failed, ret=%d\n", +- __func__, ret); ++ if (err) { ++ mlx4_err(dev, "%s: mlx4_load_one failed, err=%d\n", ++ __func__, err); + goto end; + } + +- ret = restore_current_port_types(dev, dev->persist-> ++ err = restore_current_port_types(dev, dev->persist-> + curr_port_type, dev->persist-> + curr_port_poss_type); +- if (ret) +- mlx4_err(dev, "could not restore original port types (%d)\n", ret); ++ if (err) ++ mlx4_err(dev, "could not restore original port types (%d)\n", err); + } + end: + mutex_unlock(&persist->interface_state_mutex); + +- return ret ? 
PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; + } + + static void mlx4_shutdown(struct pci_dev *pdev) +@@ -3909,6 +3917,7 @@ static void mlx4_shutdown(struct pci_dev *pdev) + static const struct pci_error_handlers mlx4_err_handler = { + .error_detected = mlx4_pci_err_detected, + .slot_reset = mlx4_pci_slot_reset, ++ .resume = mlx4_pci_resume, + }; + + static struct pci_driver mlx4_driver = { +diff --git a/drivers/net/ethernet/mellanox/mlx4/mcg.c b/drivers/net/ethernet/mellanox/mlx4/mcg.c +index 3bf63de3a725..15c8f53f2497 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/mcg.c ++++ b/drivers/net/ethernet/mellanox/mlx4/mcg.c +@@ -1109,7 +1109,7 @@ int mlx4_qp_attach_common(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], + struct mlx4_cmd_mailbox *mailbox; + struct mlx4_mgm *mgm; + u32 members_count; +- int index, prev; ++ int index = -1, prev; + int link = 0; + int i; + int err; +@@ -1188,7 +1188,7 @@ int mlx4_qp_attach_common(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16], + goto out; + + out: +- if (prot == MLX4_PROT_ETH) { ++ if (prot == MLX4_PROT_ETH && index != -1) { + /* manage the steering entry for promisc mode */ + if (new_entry) + err = new_steering_entry(dev, port, steer, +@@ -1464,7 +1464,12 @@ EXPORT_SYMBOL_GPL(mlx4_multicast_detach); + int mlx4_flow_steer_promisc_add(struct mlx4_dev *dev, u8 port, + u32 qpn, enum mlx4_net_trans_promisc_mode mode) + { +- struct mlx4_net_trans_rule rule; ++ struct mlx4_net_trans_rule rule = { ++ .queue_mode = MLX4_NET_TRANS_Q_FIFO, ++ .exclusive = 0, ++ .allow_loopback = 1, ++ }; ++ + u64 *regid_p; + + switch (mode) { +diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4.h b/drivers/net/ethernet/mellanox/mlx4/mlx4.h +index db40387ffaf6..b0af462f5e04 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/mlx4.h ++++ b/drivers/net/ethernet/mellanox/mlx4/mlx4.h +@@ -143,9 +143,10 @@ enum mlx4_resource { + RES_MTT, + RES_MAC, + RES_VLAN, +- RES_EQ, ++ RES_NPORT_ID, + RES_COUNTER, + RES_FS_RULE, ++ RES_EQ, + 
MLX4_NUM_OF_RESOURCE_TYPE + }; + +@@ -1312,8 +1313,6 @@ int mlx4_SET_VLAN_FLTR_wrapper(struct mlx4_dev *dev, int slave, + struct mlx4_cmd_info *cmd); + int mlx4_common_set_vlan_fltr(struct mlx4_dev *dev, int function, + int port, void *buf); +-int mlx4_common_dump_eth_stats(struct mlx4_dev *dev, int slave, u32 in_mod, +- struct mlx4_cmd_mailbox *outbox); + int mlx4_DUMP_ETH_STATS_wrapper(struct mlx4_dev *dev, int slave, + struct mlx4_vhcr *vhcr, + struct mlx4_cmd_mailbox *inbox, +diff --git a/drivers/net/ethernet/mellanox/mlx4/port.c b/drivers/net/ethernet/mellanox/mlx4/port.c +index a9c4818448f9..d764081ef675 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/port.c ++++ b/drivers/net/ethernet/mellanox/mlx4/port.c +@@ -1155,24 +1155,13 @@ int mlx4_SET_VLAN_FLTR_wrapper(struct mlx4_dev *dev, int slave, + return err; + } + +-int mlx4_common_dump_eth_stats(struct mlx4_dev *dev, int slave, +- u32 in_mod, struct mlx4_cmd_mailbox *outbox) +-{ +- return mlx4_cmd_box(dev, 0, outbox->dma, in_mod, 0, +- MLX4_CMD_DUMP_ETH_STATS, MLX4_CMD_TIME_CLASS_B, +- MLX4_CMD_NATIVE); +-} +- + int mlx4_DUMP_ETH_STATS_wrapper(struct mlx4_dev *dev, int slave, + struct mlx4_vhcr *vhcr, + struct mlx4_cmd_mailbox *inbox, + struct mlx4_cmd_mailbox *outbox, + struct mlx4_cmd_info *cmd) + { +- if (slave != dev->caps.function) +- return 0; +- return mlx4_common_dump_eth_stats(dev, slave, +- vhcr->in_modifier, outbox); ++ return 0; + } + + int mlx4_get_slave_from_roce_gid(struct mlx4_dev *dev, int port, u8 *gid, +diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c +index 170a49a6803e..6466edfc833b 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c ++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c +@@ -918,11 +918,13 @@ static int handle_existing_counter(struct mlx4_dev *dev, u8 slave, int port, + + spin_lock_irq(mlx4_tlock(dev)); + r = find_res(dev, counter_index, RES_COUNTER); +- if (!r || r->owner != 
slave) ++ if (!r || r->owner != slave) { + ret = -EINVAL; +- counter = container_of(r, struct res_counter, com); +- if (!counter->port) +- counter->port = port; ++ } else { ++ counter = container_of(r, struct res_counter, com); ++ if (!counter->port) ++ counter->port = port; ++ } + + spin_unlock_irq(mlx4_tlock(dev)); + return ret; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +index 9ac14df0ca3b..9b8599c2aca8 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +@@ -634,11 +634,36 @@ static void free_msg(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *msg); + static void mlx5_free_cmd_msg(struct mlx5_core_dev *dev, + struct mlx5_cmd_msg *msg); + ++static u16 msg_to_opcode(struct mlx5_cmd_msg *in) ++{ ++ struct mlx5_inbox_hdr *hdr = (struct mlx5_inbox_hdr *)(in->first.data); ++ ++ return be16_to_cpu(hdr->opcode); ++} ++ ++static void cb_timeout_handler(struct work_struct *work) ++{ ++ struct delayed_work *dwork = container_of(work, struct delayed_work, ++ work); ++ struct mlx5_cmd_work_ent *ent = container_of(dwork, ++ struct mlx5_cmd_work_ent, ++ cb_timeout_work); ++ struct mlx5_core_dev *dev = container_of(ent->cmd, struct mlx5_core_dev, ++ cmd); ++ ++ ent->ret = -ETIMEDOUT; ++ mlx5_core_warn(dev, "%s(0x%x) timeout. 
Will cause a leak of a command resource\n", ++ mlx5_command_str(msg_to_opcode(ent->in)), ++ msg_to_opcode(ent->in)); ++ mlx5_cmd_comp_handler(dev, 1UL << ent->idx); ++} ++ + static void cmd_work_handler(struct work_struct *work) + { + struct mlx5_cmd_work_ent *ent = container_of(work, struct mlx5_cmd_work_ent, work); + struct mlx5_cmd *cmd = ent->cmd; + struct mlx5_core_dev *dev = container_of(cmd, struct mlx5_core_dev, cmd); ++ unsigned long cb_timeout = msecs_to_jiffies(MLX5_CMD_TIMEOUT_MSEC); + struct mlx5_cmd_layout *lay; + struct semaphore *sem; + unsigned long flags; +@@ -691,6 +716,9 @@ static void cmd_work_handler(struct work_struct *work) + ent->ts1 = ktime_get_ns(); + cmd_mode = cmd->mode; + ++ if (ent->callback) ++ schedule_delayed_work(&ent->cb_timeout_work, cb_timeout); ++ + /* ring doorbell after the descriptor is valid */ + mlx5_core_dbg(dev, "writing 0x%x to command doorbell\n", 1 << ent->idx); + wmb(); +@@ -735,13 +763,6 @@ static const char *deliv_status_to_str(u8 status) + } + } + +-static u16 msg_to_opcode(struct mlx5_cmd_msg *in) +-{ +- struct mlx5_inbox_hdr *hdr = (struct mlx5_inbox_hdr *)(in->first.data); +- +- return be16_to_cpu(hdr->opcode); +-} +- + static int wait_func(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent) + { + unsigned long timeout = msecs_to_jiffies(MLX5_CMD_TIMEOUT_MSEC); +@@ -750,13 +771,13 @@ static int wait_func(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent) + + if (cmd->mode == CMD_MODE_POLLING) { + wait_for_completion(&ent->done); +- err = ent->ret; +- } else { +- if (!wait_for_completion_timeout(&ent->done, timeout)) +- err = -ETIMEDOUT; +- else +- err = 0; ++ } else if (!wait_for_completion_timeout(&ent->done, timeout)) { ++ ent->ret = -ETIMEDOUT; ++ mlx5_cmd_comp_handler(dev, 1UL << ent->idx); + } ++ ++ err = ent->ret; ++ + if (err == -ETIMEDOUT) { + mlx5_core_warn(dev, "%s(0x%x) timeout. 
Will cause a leak of a command resource\n", + mlx5_command_str(msg_to_opcode(ent->in)), +@@ -808,6 +829,7 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in, + if (!callback) + init_completion(&ent->done); + ++ INIT_DELAYED_WORK(&ent->cb_timeout_work, cb_timeout_handler); + INIT_WORK(&ent->work, cmd_work_handler); + if (page_queue) { + cmd_work_handler(&ent->work); +@@ -817,28 +839,26 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in, + goto out_free; + } + +- if (!callback) { +- err = wait_func(dev, ent); +- if (err == -ETIMEDOUT) +- goto out; +- +- ds = ent->ts2 - ent->ts1; +- op = be16_to_cpu(((struct mlx5_inbox_hdr *)in->first.data)->opcode); +- if (op < ARRAY_SIZE(cmd->stats)) { +- stats = &cmd->stats[op]; +- spin_lock_irq(&stats->lock); +- stats->sum += ds; +- ++stats->n; +- spin_unlock_irq(&stats->lock); +- } +- mlx5_core_dbg_mask(dev, 1 << MLX5_CMD_TIME, +- "fw exec time for %s is %lld nsec\n", +- mlx5_command_str(op), ds); +- *status = ent->status; +- free_cmd(ent); +- } ++ if (callback) ++ goto out; + +- return err; ++ err = wait_func(dev, ent); ++ if (err == -ETIMEDOUT) ++ goto out_free; ++ ++ ds = ent->ts2 - ent->ts1; ++ op = be16_to_cpu(((struct mlx5_inbox_hdr *)in->first.data)->opcode); ++ if (op < ARRAY_SIZE(cmd->stats)) { ++ stats = &cmd->stats[op]; ++ spin_lock_irq(&stats->lock); ++ stats->sum += ds; ++ ++stats->n; ++ spin_unlock_irq(&stats->lock); ++ } ++ mlx5_core_dbg_mask(dev, 1 << MLX5_CMD_TIME, ++ "fw exec time for %s is %lld nsec\n", ++ mlx5_command_str(op), ds); ++ *status = ent->status; + + out_free: + free_cmd(ent); +@@ -1230,41 +1250,30 @@ err_dbg: + return err; + } + +-void mlx5_cmd_use_events(struct mlx5_core_dev *dev) ++static void mlx5_cmd_change_mod(struct mlx5_core_dev *dev, int mode) + { + struct mlx5_cmd *cmd = &dev->cmd; + int i; + + for (i = 0; i < cmd->max_reg_cmds; i++) + down(&cmd->sem); +- + down(&cmd->pages_sem); + +- flush_workqueue(cmd->wq); +- +- cmd->mode = 
CMD_MODE_EVENTS; ++ cmd->mode = mode; + + up(&cmd->pages_sem); + for (i = 0; i < cmd->max_reg_cmds; i++) + up(&cmd->sem); + } + +-void mlx5_cmd_use_polling(struct mlx5_core_dev *dev) ++void mlx5_cmd_use_events(struct mlx5_core_dev *dev) + { +- struct mlx5_cmd *cmd = &dev->cmd; +- int i; +- +- for (i = 0; i < cmd->max_reg_cmds; i++) +- down(&cmd->sem); +- +- down(&cmd->pages_sem); +- +- flush_workqueue(cmd->wq); +- cmd->mode = CMD_MODE_POLLING; ++ mlx5_cmd_change_mod(dev, CMD_MODE_EVENTS); ++} + +- up(&cmd->pages_sem); +- for (i = 0; i < cmd->max_reg_cmds; i++) +- up(&cmd->sem); ++void mlx5_cmd_use_polling(struct mlx5_core_dev *dev) ++{ ++ mlx5_cmd_change_mod(dev, CMD_MODE_POLLING); + } + + static void free_msg(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *msg) +@@ -1300,6 +1309,8 @@ void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec) + struct semaphore *sem; + + ent = cmd->ent_arr[i]; ++ if (ent->callback) ++ cancel_delayed_work(&ent->cb_timeout_work); + if (ent->page_queue) + sem = &cmd->pages_sem; + else +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h +index 7a716733d9ca..717a381bcfd1 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h +@@ -543,7 +543,7 @@ enum mlx5e_link_mode { + MLX5E_100GBASE_KR4 = 22, + MLX5E_100GBASE_LR4 = 23, + MLX5E_100BASE_TX = 24, +- MLX5E_100BASE_T = 25, ++ MLX5E_1000BASE_T = 25, + MLX5E_10GBASE_T = 26, + MLX5E_25GBASE_CR = 27, + MLX5E_25GBASE_KR = 28, +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +index c1dd75fe935f..392fa74f1952 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +@@ -138,10 +138,10 @@ static const struct { + [MLX5E_100BASE_TX] = { + .speed = 100, + }, +- [MLX5E_100BASE_T] = { +- .supported = SUPPORTED_100baseT_Full, +- .advertised = 
ADVERTISED_100baseT_Full, +- .speed = 100, ++ [MLX5E_1000BASE_T] = { ++ .supported = SUPPORTED_1000baseT_Full, ++ .advertised = ADVERTISED_1000baseT_Full, ++ .speed = 1000, + }, + [MLX5E_10GBASE_T] = { + .supported = SUPPORTED_10000baseT_Full, +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +index 1341b1d3c421..7e2026429d26 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +@@ -124,7 +124,7 @@ static inline u16 mlx5e_get_inline_hdr_size(struct mlx5e_sq *sq, + * headers and occur before the data gather. + * Therefore these headers must be copied into the WQE + */ +-#define MLX5E_MIN_INLINE ETH_HLEN ++#define MLX5E_MIN_INLINE (ETH_HLEN + VLAN_HLEN) + + if (bf) { + u16 ihs = skb_headlen(skb); +@@ -136,7 +136,7 @@ static inline u16 mlx5e_get_inline_hdr_size(struct mlx5e_sq *sq, + return skb_headlen(skb); + } + +- return MLX5E_MIN_INLINE; ++ return max(skb_network_offset(skb), MLX5E_MIN_INLINE); + } + + static inline void mlx5e_insert_vlan(void *start, struct sk_buff *skb, u16 ihs) +@@ -290,7 +290,8 @@ static netdev_tx_t mlx5e_sq_xmit(struct mlx5e_sq *sq, struct sk_buff *skb) + while ((sq->pc & wq->sz_m1) > sq->edge) + mlx5e_send_nop(sq, false); + +- sq->bf_budget = bf ? 
sq->bf_budget - 1 : 0; ++ if (bf) ++ sq->bf_budget--; + + sq->stats.packets++; + return NETDEV_TX_OK; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c +index f5deb642d0d6..94594d47cae6 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c +@@ -108,15 +108,21 @@ static int in_fatal(struct mlx5_core_dev *dev) + + void mlx5_enter_error_state(struct mlx5_core_dev *dev) + { ++ mutex_lock(&dev->intf_state_mutex); + if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) +- return; ++ goto unlock; + + mlx5_core_err(dev, "start\n"); +- if (pci_channel_offline(dev->pdev) || in_fatal(dev)) ++ if (pci_channel_offline(dev->pdev) || in_fatal(dev)) { + dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR; ++ trigger_cmd_completions(dev); ++ } + + mlx5_core_event(dev, MLX5_DEV_EVENT_SYS_ERROR, 0); + mlx5_core_err(dev, "end\n"); ++ ++unlock: ++ mutex_unlock(&dev->intf_state_mutex); + } + + static void mlx5_handle_bad_state(struct mlx5_core_dev *dev) +@@ -245,7 +251,6 @@ static void poll_health(unsigned long data) + u32 count; + + if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) { +- trigger_cmd_completions(dev); + mod_timer(&health->timer, get_next_poll_jiffies()); + return; + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c +index 35bcc6dbada9..bf4447581072 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c +@@ -1276,15 +1276,43 @@ static pci_ers_result_t mlx5_pci_err_detected(struct pci_dev *pdev, + dev_info(&pdev->dev, "%s was called\n", __func__); + mlx5_enter_error_state(dev); + mlx5_unload_one(dev, priv); ++ pci_save_state(pdev); + mlx5_pci_disable_device(dev); + return state == pci_channel_io_perm_failure ? 
+ PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET; + } + ++/* wait for the device to show vital signs by waiting ++ * for the health counter to start counting. ++ */ ++static int wait_vital(struct pci_dev *pdev) ++{ ++ struct mlx5_core_dev *dev = pci_get_drvdata(pdev); ++ struct mlx5_core_health *health = &dev->priv.health; ++ const int niter = 100; ++ u32 last_count = 0; ++ u32 count; ++ int i; ++ ++ for (i = 0; i < niter; i++) { ++ count = ioread32be(health->health_counter); ++ if (count && count != 0xffffffff) { ++ if (last_count && last_count != count) { ++ dev_info(&pdev->dev, "Counter value 0x%x after %d iterations\n", count, i); ++ return 0; ++ } ++ last_count = count; ++ } ++ msleep(50); ++ } ++ ++ return -ETIMEDOUT; ++} ++ + static pci_ers_result_t mlx5_pci_slot_reset(struct pci_dev *pdev) + { + struct mlx5_core_dev *dev = pci_get_drvdata(pdev); +- int err = 0; ++ int err; + + dev_info(&pdev->dev, "%s was called\n", __func__); + +@@ -1294,11 +1322,16 @@ static pci_ers_result_t mlx5_pci_slot_reset(struct pci_dev *pdev) + , __func__, err); + return PCI_ERS_RESULT_DISCONNECT; + } ++ + pci_set_master(pdev); +- pci_set_power_state(pdev, PCI_D0); + pci_restore_state(pdev); + +- return err ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; ++ if (wait_vital(pdev)) { ++ dev_err(&pdev->dev, "%s: wait_vital timed out\n", __func__); ++ return PCI_ERS_RESULT_DISCONNECT; ++ } ++ ++ return PCI_ERS_RESULT_RECOVERED; + } + + void mlx5_disable_device(struct mlx5_core_dev *dev) +@@ -1306,48 +1339,6 @@ void mlx5_disable_device(struct mlx5_core_dev *dev) + mlx5_pci_err_detected(dev->pdev, 0); + } + +-/* wait for the device to show vital signs. 
For now we check +- * that we can read the device ID and that the health buffer +- * shows a non zero value which is different than 0xffffffff +- */ +-static void wait_vital(struct pci_dev *pdev) +-{ +- struct mlx5_core_dev *dev = pci_get_drvdata(pdev); +- struct mlx5_core_health *health = &dev->priv.health; +- const int niter = 100; +- u32 count; +- u16 did; +- int i; +- +- /* Wait for firmware to be ready after reset */ +- msleep(1000); +- for (i = 0; i < niter; i++) { +- if (pci_read_config_word(pdev, 2, &did)) { +- dev_warn(&pdev->dev, "failed reading config word\n"); +- break; +- } +- if (did == pdev->device) { +- dev_info(&pdev->dev, "device ID correctly read after %d iterations\n", i); +- break; +- } +- msleep(50); +- } +- if (i == niter) +- dev_warn(&pdev->dev, "%s-%d: could not read device ID\n", __func__, __LINE__); +- +- for (i = 0; i < niter; i++) { +- count = ioread32be(health->health_counter); +- if (count && count != 0xffffffff) { +- dev_info(&pdev->dev, "Counter value 0x%x after %d iterations\n", count, i); +- break; +- } +- msleep(50); +- } +- +- if (i == niter) +- dev_warn(&pdev->dev, "%s-%d: could not read device ID\n", __func__, __LINE__); +-} +- + static void mlx5_pci_resume(struct pci_dev *pdev) + { + struct mlx5_core_dev *dev = pci_get_drvdata(pdev); +@@ -1356,9 +1347,6 @@ static void mlx5_pci_resume(struct pci_dev *pdev) + + dev_info(&pdev->dev, "%s was called\n", __func__); + +- pci_save_state(pdev); +- wait_vital(pdev); +- + err = mlx5_load_one(dev, priv); + if (err) + dev_err(&pdev->dev, "%s: mlx5_load_one failed with error code: %d\n" +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c +index 4d3377b12657..9a6bd830d104 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c +@@ -243,6 +243,7 @@ static void free_4k(struct mlx5_core_dev *dev, u64 addr) + static int alloc_system_page(struct mlx5_core_dev 
*dev, u16 func_id) + { + struct page *page; ++ u64 zero_addr = 1; + u64 addr; + int err; + int nid = dev_to_node(&dev->pdev->dev); +@@ -252,26 +253,35 @@ static int alloc_system_page(struct mlx5_core_dev *dev, u16 func_id) + mlx5_core_warn(dev, "failed to allocate page\n"); + return -ENOMEM; + } ++map: + addr = dma_map_page(&dev->pdev->dev, page, 0, + PAGE_SIZE, DMA_BIDIRECTIONAL); + if (dma_mapping_error(&dev->pdev->dev, addr)) { + mlx5_core_warn(dev, "failed dma mapping page\n"); + err = -ENOMEM; +- goto out_alloc; ++ goto err_mapping; + } ++ ++ /* Firmware doesn't support page with physical address 0 */ ++ if (addr == 0) { ++ zero_addr = addr; ++ goto map; ++ } ++ + err = insert_page(dev, addr, page, func_id); + if (err) { + mlx5_core_err(dev, "failed to track allocated page\n"); +- goto out_mapping; ++ dma_unmap_page(&dev->pdev->dev, addr, PAGE_SIZE, ++ DMA_BIDIRECTIONAL); + } + +- return 0; +- +-out_mapping: +- dma_unmap_page(&dev->pdev->dev, addr, PAGE_SIZE, DMA_BIDIRECTIONAL); ++err_mapping: ++ if (err) ++ __free_page(page); + +-out_alloc: +- __free_page(page); ++ if (zero_addr == 0) ++ dma_unmap_page(&dev->pdev->dev, zero_addr, PAGE_SIZE, ++ DMA_BIDIRECTIONAL); + + return err; + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qp.c b/drivers/net/ethernet/mellanox/mlx5/core/qp.c +index 30e2ba3f5f16..a4f8c1b99f71 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/qp.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/qp.c +@@ -393,7 +393,7 @@ int mlx5_core_xrcd_alloc(struct mlx5_core_dev *dev, u32 *xrcdn) + if (out.hdr.status) + err = mlx5_cmd_status_to_err(&out.hdr); + else +- *xrcdn = be32_to_cpu(out.xrcdn); ++ *xrcdn = be32_to_cpu(out.xrcdn) & 0xffffff; + + return err; + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wq.c b/drivers/net/ethernet/mellanox/mlx5/core/wq.c +index ce21ee5b2357..821a087c7ae2 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/wq.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/wq.c +@@ -75,14 +75,14 @@ int 
mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param, + + err = mlx5_db_alloc_node(mdev, &wq_ctrl->db, param->db_numa_node); + if (err) { +- mlx5_core_warn(mdev, "mlx5_db_alloc() failed, %d\n", err); ++ mlx5_core_warn(mdev, "mlx5_db_alloc_node() failed, %d\n", err); + return err; + } + + err = mlx5_buf_alloc_node(mdev, mlx5_wq_cyc_get_byte_size(wq), + &wq_ctrl->buf, param->buf_numa_node); + if (err) { +- mlx5_core_warn(mdev, "mlx5_buf_alloc() failed, %d\n", err); ++ mlx5_core_warn(mdev, "mlx5_buf_alloc_node() failed, %d\n", err); + goto err_db_free; + } + +@@ -111,14 +111,14 @@ int mlx5_cqwq_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param, + + err = mlx5_db_alloc_node(mdev, &wq_ctrl->db, param->db_numa_node); + if (err) { +- mlx5_core_warn(mdev, "mlx5_db_alloc() failed, %d\n", err); ++ mlx5_core_warn(mdev, "mlx5_db_alloc_node() failed, %d\n", err); + return err; + } + + err = mlx5_buf_alloc_node(mdev, mlx5_cqwq_get_byte_size(wq), + &wq_ctrl->buf, param->buf_numa_node); + if (err) { +- mlx5_core_warn(mdev, "mlx5_buf_alloc() failed, %d\n", err); ++ mlx5_core_warn(mdev, "mlx5_buf_alloc_node() failed, %d\n", err); + goto err_db_free; + } + +@@ -148,13 +148,14 @@ int mlx5_wq_ll_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param, + + err = mlx5_db_alloc_node(mdev, &wq_ctrl->db, param->db_numa_node); + if (err) { +- mlx5_core_warn(mdev, "mlx5_db_alloc() failed, %d\n", err); ++ mlx5_core_warn(mdev, "mlx5_db_alloc_node() failed, %d\n", err); + return err; + } + +- err = mlx5_buf_alloc(mdev, mlx5_wq_ll_get_byte_size(wq), &wq_ctrl->buf); ++ err = mlx5_buf_alloc_node(mdev, mlx5_wq_ll_get_byte_size(wq), ++ &wq_ctrl->buf, param->buf_numa_node); + if (err) { +- mlx5_core_warn(mdev, "mlx5_buf_alloc() failed, %d\n", err); ++ mlx5_core_warn(mdev, "mlx5_buf_alloc_node() failed, %d\n", err); + goto err_db_free; + } + +diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c +index 
de69e719dc9d..75a590d1ae40 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c +@@ -215,7 +215,7 @@ mlxsw_pci_queue_elem_info_producer_get(struct mlxsw_pci_queue *q) + { + int index = q->producer_counter & (q->count - 1); + +- if ((q->producer_counter - q->consumer_counter) == q->count) ++ if ((u16) (q->producer_counter - q->consumer_counter) == q->count) + return NULL; + return mlxsw_pci_queue_elem_info_get(q, index); + } +diff --git a/drivers/net/ethernet/mellanox/mlxsw/port.h b/drivers/net/ethernet/mellanox/mlxsw/port.h +index 726f5435b32f..ae65b9940aed 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/port.h ++++ b/drivers/net/ethernet/mellanox/mlxsw/port.h +@@ -49,7 +49,7 @@ + #define MLXSW_PORT_MID 0xd000 + + #define MLXSW_PORT_MAX_PHY_PORTS 0x40 +-#define MLXSW_PORT_MAX_PORTS MLXSW_PORT_MAX_PHY_PORTS ++#define MLXSW_PORT_MAX_PORTS (MLXSW_PORT_MAX_PHY_PORTS + 1) + + #define MLXSW_PORT_DEVID_BITS_OFFSET 10 + #define MLXSW_PORT_PHY_BITS_OFFSET 4 +diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +index cb165c2d4803..b23f508de811 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +@@ -399,7 +399,11 @@ static netdev_tx_t mlxsw_sp_port_xmit(struct sk_buff *skb, + } + + mlxsw_sp_txhdr_construct(skb, &tx_info); +- len = skb->len; ++ /* TX header is consumed by HW on the way so we shouldn't count its ++ * bytes as being sent. ++ */ ++ len = skb->len - MLXSW_TXHDR_LEN; ++ + /* Due to a race we might fail here because of a full queue. In that + * unlikely case we simply drop the packet. 
+ */ +@@ -1100,7 +1104,8 @@ static int mlxsw_sp_port_get_settings(struct net_device *dev, + + cmd->supported = mlxsw_sp_from_ptys_supported_port(eth_proto_cap) | + mlxsw_sp_from_ptys_supported_link(eth_proto_cap) | +- SUPPORTED_Pause | SUPPORTED_Asym_Pause; ++ SUPPORTED_Pause | SUPPORTED_Asym_Pause | ++ SUPPORTED_Autoneg; + cmd->advertising = mlxsw_sp_from_ptys_advert_link(eth_proto_admin); + mlxsw_sp_from_ptys_speed_duplex(netif_carrier_ok(dev), + eth_proto_oper, cmd); +@@ -1256,7 +1261,7 @@ static int mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port) + /* Each packet needs to have a Tx header (metadata) on top all other + * headers. + */ +- dev->hard_header_len += MLXSW_TXHDR_LEN; ++ dev->needed_headroom = MLXSW_TXHDR_LEN; + + err = mlxsw_sp_port_module_check(mlxsw_sp_port, &usable); + if (err) { +diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c +index d4c4c2b5156c..a1df4227ed9d 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c +@@ -87,14 +87,14 @@ static int mlxsw_sp_port_stp_state_set(struct mlxsw_sp_port *mlxsw_sp_port, + int err; + + switch (state) { +- case BR_STATE_DISABLED: /* fall-through */ + case BR_STATE_FORWARDING: + spms_state = MLXSW_REG_SPMS_STATE_FORWARDING; + break; +- case BR_STATE_LISTENING: /* fall-through */ + case BR_STATE_LEARNING: + spms_state = MLXSW_REG_SPMS_STATE_LEARNING; + break; ++ case BR_STATE_LISTENING: /* fall-through */ ++ case BR_STATE_DISABLED: /* fall-through */ + case BR_STATE_BLOCKING: + spms_state = MLXSW_REG_SPMS_STATE_DISCARDING; + break; +diff --git a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c +index fb2d9a82ce3d..4d7b3edf6662 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c +@@ -993,7 +993,7 @@ static int 
mlxsw_sx_port_create(struct mlxsw_sx *mlxsw_sx, u8 local_port) + /* Each packet needs to have a Tx header (metadata) on top all other + * headers. + */ +- dev->hard_header_len += MLXSW_TXHDR_LEN; ++ dev->needed_headroom = MLXSW_TXHDR_LEN; + + err = mlxsw_sx_port_module_check(mlxsw_sx_port, &usable); + if (err) { +@@ -1074,6 +1074,7 @@ err_port_stp_state_set: + err_port_admin_status_set: + err_port_mtu_set: + err_port_speed_set: ++ mlxsw_sx_port_swid_set(mlxsw_sx_port, MLXSW_PORT_SWID_DISABLED_PORT); + err_port_swid_set: + err_port_system_port_mapping_set: + port_not_usable: +diff --git a/drivers/net/ethernet/micrel/ks8842.c b/drivers/net/ethernet/micrel/ks8842.c +index 09d2e16fd6b0..cb0102dd7f70 100644 +--- a/drivers/net/ethernet/micrel/ks8842.c ++++ b/drivers/net/ethernet/micrel/ks8842.c +@@ -561,8 +561,8 @@ static int __ks8842_start_new_rx_dma(struct net_device *netdev) + sg_init_table(sg, 1); + sg_dma_address(sg) = dma_map_single(adapter->dev, + ctl->skb->data, DMA_BUFFER_SIZE, DMA_FROM_DEVICE); +- err = dma_mapping_error(adapter->dev, sg_dma_address(sg)); +- if (unlikely(err)) { ++ if (dma_mapping_error(adapter->dev, sg_dma_address(sg))) { ++ err = -ENOMEM; + sg_dma_address(sg) = 0; + goto out; + } +@@ -572,8 +572,10 @@ static int __ks8842_start_new_rx_dma(struct net_device *netdev) + ctl->adesc = dmaengine_prep_slave_sg(ctl->chan, + sg, 1, DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT); + +- if (!ctl->adesc) ++ if (!ctl->adesc) { ++ err = -ENOMEM; + goto out; ++ } + + ctl->adesc->callback_param = netdev; + ctl->adesc->callback = ks8842_dma_rx_cb; +@@ -584,7 +586,7 @@ static int __ks8842_start_new_rx_dma(struct net_device *netdev) + goto out; + } + +- return err; ++ return 0; + out: + if (sg_dma_address(sg)) + dma_unmap_single(adapter->dev, sg_dma_address(sg), +diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c +index a10c928bbd6b..f1dde59c9fa6 100644 +--- a/drivers/net/ethernet/moxa/moxart_ether.c ++++ 
b/drivers/net/ethernet/moxa/moxart_ether.c +@@ -460,9 +460,9 @@ static int moxart_mac_probe(struct platform_device *pdev) + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + ndev->base_addr = res->start; + priv->base = devm_ioremap_resource(p_dev, res); +- ret = IS_ERR(priv->base); +- if (ret) { ++ if (IS_ERR(priv->base)) { + dev_err(p_dev, "devm_ioremap_resource failed\n"); ++ ret = PTR_ERR(priv->base); + goto init_fail; + } + +diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c +index 8b63c9d183a2..c677b69bbb0b 100644 +--- a/drivers/net/ethernet/qlogic/qede/qede_main.c ++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c +@@ -400,7 +400,7 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb, + u8 xmit_type; + u16 idx; + u16 hlen; +- bool data_split; ++ bool data_split = false; + + /* Get tx-queue context and netdev index */ + txq_index = skb_get_queue_mapping(skb); +diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c +index 98042a3701b5..621eac53ab01 100644 +--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c ++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_io.c +@@ -2220,7 +2220,7 @@ void qlcnic_83xx_process_rcv_ring_diag(struct qlcnic_host_sds_ring *sds_ring) + if (!opcode) + return; + +- ring = QLCNIC_FETCH_RING_ID(qlcnic_83xx_hndl(sts_data[0])); ++ ring = QLCNIC_FETCH_RING_ID(sts_data[0]); + qlcnic_83xx_process_rcv_diag(adapter, ring, sts_data); + desc = &sds_ring->desc_head[consumer]; + desc->status_desc_data[0] = cpu_to_le64(STATUS_OWNER_PHANTOM); +diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c +index e5ea8e972b91..5174e0bd75d1 100644 +--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c ++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c +@@ -1419,6 +1419,7 @@ void qlcnic_83xx_get_minidump_template(struct qlcnic_adapter *adapter) + struct qlcnic_fw_dump 
*fw_dump = &ahw->fw_dump; + struct pci_dev *pdev = adapter->pdev; + bool extended = false; ++ int ret; + + prev_version = adapter->fw_version; + current_version = qlcnic_83xx_get_fw_version(adapter); +@@ -1429,8 +1430,11 @@ void qlcnic_83xx_get_minidump_template(struct qlcnic_adapter *adapter) + if (qlcnic_83xx_md_check_extended_dump_capability(adapter)) + extended = !qlcnic_83xx_extend_md_capab(adapter); + +- if (!qlcnic_fw_cmd_get_minidump_temp(adapter)) +- dev_info(&pdev->dev, "Supports FW dump capability\n"); ++ ret = qlcnic_fw_cmd_get_minidump_temp(adapter); ++ if (ret) ++ return; ++ ++ dev_info(&pdev->dev, "Supports FW dump capability\n"); + + /* Once we have minidump template with extended iSCSI dump + * capability, update the minidump capture mask to 0x1f as +diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c +index fedfd94699cb..5b6320f9c935 100644 +--- a/drivers/net/ethernet/renesas/ravb_main.c ++++ b/drivers/net/ethernet/renesas/ravb_main.c +@@ -1528,6 +1528,8 @@ static int ravb_close(struct net_device *ndev) + priv->phydev = NULL; + } + ++ if (priv->chip_id == RCAR_GEN3) ++ free_irq(priv->emac_irq, ndev); + free_irq(ndev->irq, ndev); + + napi_disable(&priv->napi[RAVB_NC]); +diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c +index 6dcd436e6e32..e289cb47e6ab 100644 +--- a/drivers/net/ethernet/sfc/ef10.c ++++ b/drivers/net/ethernet/sfc/ef10.c +@@ -1304,13 +1304,14 @@ static void efx_ef10_get_stat_mask(struct efx_nic *efx, unsigned long *mask) + } + + #if BITS_PER_LONG == 64 ++ BUILD_BUG_ON(BITS_TO_LONGS(EF10_STAT_COUNT) != 2); + mask[0] = raw_mask[0]; + mask[1] = raw_mask[1]; + #else ++ BUILD_BUG_ON(BITS_TO_LONGS(EF10_STAT_COUNT) != 3); + mask[0] = raw_mask[0] & 0xffffffff; + mask[1] = raw_mask[0] >> 32; + mask[2] = raw_mask[1] & 0xffffffff; +- mask[3] = raw_mask[1] >> 32; + #endif + } + +diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c +index 
a3c42a376741..167ccb27e2f1 100644 +--- a/drivers/net/ethernet/sfc/efx.c ++++ b/drivers/net/ethernet/sfc/efx.c +@@ -479,6 +479,9 @@ efx_copy_channel(const struct efx_channel *old_channel) + *channel = *old_channel; + + channel->napi_dev = NULL; ++ INIT_HLIST_NODE(&channel->napi_str.napi_hash_node); ++ channel->napi_str.napi_id = 0; ++ channel->napi_str.state = 0; + memset(&channel->eventq, 0, sizeof(channel->eventq)); + + for (j = 0; j < EFX_TXQ_TYPES; j++) { +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c +index b1e5f24708c9..05e46a82cdb1 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c +@@ -53,7 +53,17 @@ static int dwmac_generic_probe(struct platform_device *pdev) + return ret; + } + +- return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); ++ ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); ++ if (ret) ++ goto err_exit; ++ ++ return 0; ++ ++err_exit: ++ if (plat_dat->exit) ++ plat_dat->exit(pdev, plat_dat->bsp_priv); ++ ++ return ret; + } + + static const struct of_device_id dwmac_generic_match[] = { +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c +index 68a58333bd74..f2f24f99d086 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c +@@ -600,7 +600,16 @@ static int rk_gmac_probe(struct platform_device *pdev) + if (ret) + return ret; + +- return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); ++ ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); ++ if (ret) ++ goto err_gmac_exit; ++ ++ return 0; ++ ++err_gmac_exit: ++ rk_gmac_exit(pdev, plat_dat->bsp_priv); ++ ++ return ret; + } + + static const struct of_device_id rk_gmac_dwmac_match[] = { +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c +index 
58c05acc2aab..a1ce018bf844 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c +@@ -365,7 +365,16 @@ static int sti_dwmac_probe(struct platform_device *pdev) + if (ret) + return ret; + +- return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); ++ ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); ++ if (ret) ++ goto err_dwmac_exit; ++ ++ return 0; ++ ++err_dwmac_exit: ++ sti_dwmac_exit(pdev, plat_dat->bsp_priv); ++ ++ return ret; + } + + static const struct sti_dwmac_of_data stih4xx_dwmac_data = { +diff --git a/drivers/net/ethernet/ti/cpsw-phy-sel.c b/drivers/net/ethernet/ti/cpsw-phy-sel.c +index e9cc61e1ec74..7b0dfdced517 100644 +--- a/drivers/net/ethernet/ti/cpsw-phy-sel.c ++++ b/drivers/net/ethernet/ti/cpsw-phy-sel.c +@@ -154,9 +154,12 @@ void cpsw_phy_sel(struct device *dev, phy_interface_t phy_mode, int slave) + } + + dev = bus_find_device(&platform_bus_type, NULL, node, match); ++ of_node_put(node); + priv = dev_get_drvdata(dev); + + priv->cpsw_phy_sel(priv, phy_mode, slave); ++ ++ put_device(dev); + } + EXPORT_SYMBOL_GPL(cpsw_phy_sel); + +diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c +index 9a9cb6b11e4c..6ee0bd72d89b 100644 +--- a/drivers/net/ethernet/ti/cpsw.c ++++ b/drivers/net/ethernet/ti/cpsw.c +@@ -2060,7 +2060,11 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data, + slave_data->phy_node = of_parse_phandle(slave_node, + "phy-handle", 0); + parp = of_get_property(slave_node, "phy_id", &lenp); +- if (of_phy_is_fixed_link(slave_node)) { ++ if (slave_data->phy_node) { ++ dev_dbg(&pdev->dev, ++ "slave[%d] using phy-handle=\"%s\"\n", ++ i, slave_data->phy_node->full_name); ++ } else if (of_phy_is_fixed_link(slave_node)) { + struct device_node *phy_node; + struct phy_device *phy_dev; + +@@ -2097,7 +2101,9 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data, + PHY_ID_FMT, mdio->name, phyid); + put_device(&mdio->dev); + } else { +- 
dev_err(&pdev->dev, "No slave[%d] phy_id or fixed-link property\n", i); ++ dev_err(&pdev->dev, ++ "No slave[%d] phy_id, phy-handle, or fixed-link property\n", ++ i); + goto no_phy_slave; + } + slave_data->phy_if = of_get_phy_mode(slave_node); +@@ -2526,12 +2532,14 @@ static int cpsw_probe(struct platform_device *pdev) + ret = cpsw_probe_dual_emac(pdev, priv); + if (ret) { + cpsw_err(priv, probe, "error probe slave 2 emac interface\n"); +- goto clean_ale_ret; ++ goto clean_unregister_netdev_ret; + } + } + + return 0; + ++clean_unregister_netdev_ret: ++ unregister_netdev(ndev); + clean_ale_ret: + cpsw_ale_destroy(priv->ale); + clean_dma_ret: +diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c +index 8ecb24186b7f..e4c4747bdf32 100644 +--- a/drivers/net/ethernet/ti/davinci_emac.c ++++ b/drivers/net/ethernet/ti/davinci_emac.c +@@ -1512,7 +1512,10 @@ static int emac_devioctl(struct net_device *ndev, struct ifreq *ifrq, int cmd) + + /* TODO: Add phy read and write and private statistics get feature */ + +- return phy_mii_ioctl(priv->phydev, ifrq, cmd); ++ if (priv->phydev) ++ return phy_mii_ioctl(priv->phydev, ifrq, cmd); ++ else ++ return -EOPNOTSUPP; + } + + static int match_first_device(struct device *dev, void *data) +@@ -1885,8 +1888,6 @@ davinci_emac_of_get_pdata(struct platform_device *pdev, struct emac_priv *priv) + pdata->hw_ram_addr = auxdata->hw_ram_addr; + } + +- pdev->dev.platform_data = pdata; +- + return pdata; + } + +diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c +index 7f1a57bb2ab1..44870fc37f54 100644 +--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c ++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c +@@ -1602,9 +1602,9 @@ static int axienet_probe(struct platform_device *pdev) + + /* Find the DMA node, map the DMA registers, and decode the DMA IRQs */ + np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0); +- if 
(IS_ERR(np)) { ++ if (!np) { + dev_err(&pdev->dev, "could not find DMA node\n"); +- ret = PTR_ERR(np); ++ ret = -ENODEV; + goto free_netdev; + } + ret = of_address_to_resource(np, 0, &dmares); +diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c +index f0961cbaf87e..1988bc00de3c 100644 +--- a/drivers/net/geneve.c ++++ b/drivers/net/geneve.c +@@ -1340,6 +1340,7 @@ struct net_device *geneve_dev_create_fb(struct net *net, const char *name, + { + struct nlattr *tb[IFLA_MAX + 1]; + struct net_device *dev; ++ LIST_HEAD(list_kill); + int err; + + memset(tb, 0, sizeof(tb)); +@@ -1350,8 +1351,10 @@ struct net_device *geneve_dev_create_fb(struct net *net, const char *name, + + err = geneve_configure(net, dev, &geneve_remote_unspec, + 0, 0, 0, htons(dst_port), true); +- if (err) +- goto err; ++ if (err) { ++ free_netdev(dev); ++ return ERR_PTR(err); ++ } + + /* openvswitch users expect packet sizes to be unrestricted, + * so set the largest MTU we can. +@@ -1360,10 +1363,15 @@ struct net_device *geneve_dev_create_fb(struct net *net, const char *name, + if (err) + goto err; + ++ err = rtnl_configure_link(dev, NULL); ++ if (err < 0) ++ goto err; ++ + return dev; + + err: +- free_netdev(dev); ++ geneve_dellink(dev, &list_kill); ++ unregister_netdevice_many(&list_kill); + return ERR_PTR(err); + } + EXPORT_SYMBOL_GPL(geneve_dev_create_fb); +diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c +index d5d4d109ee10..0c4e1ef80355 100644 +--- a/drivers/net/macvlan.c ++++ b/drivers/net/macvlan.c +@@ -305,6 +305,8 @@ static void macvlan_process_broadcast(struct work_struct *w) + + rcu_read_unlock(); + ++ if (src) ++ dev_put(src->dev); + kfree_skb(skb); + + cond_resched(); +@@ -312,6 +314,7 @@ static void macvlan_process_broadcast(struct work_struct *w) + } + + static void macvlan_broadcast_enqueue(struct macvlan_port *port, ++ const struct macvlan_dev *src, + struct sk_buff *skb) + { + struct sk_buff *nskb; +@@ -321,8 +324,12 @@ static void macvlan_broadcast_enqueue(struct 
macvlan_port *port, + if (!nskb) + goto err; + ++ MACVLAN_SKB_CB(nskb)->src = src; ++ + spin_lock(&port->bc_queue.lock); + if (skb_queue_len(&port->bc_queue) < MACVLAN_BC_QUEUE_LEN) { ++ if (src) ++ dev_hold(src->dev); + __skb_queue_tail(&port->bc_queue, nskb); + err = 0; + } +@@ -432,8 +439,7 @@ static rx_handler_result_t macvlan_handle_frame(struct sk_buff **pskb) + goto out; + } + +- MACVLAN_SKB_CB(skb)->src = src; +- macvlan_broadcast_enqueue(port, skb); ++ macvlan_broadcast_enqueue(port, src, skb); + + return RX_HANDLER_PASS; + } +diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c +index ed96fdefd8e5..3a76ca395103 100644 +--- a/drivers/net/macvtap.c ++++ b/drivers/net/macvtap.c +@@ -373,7 +373,7 @@ static rx_handler_result_t macvtap_handle_frame(struct sk_buff **pskb) + goto wake_up; + } + +- kfree_skb(skb); ++ consume_skb(skb); + while (segs) { + struct sk_buff *nskb = segs->next; + +diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c +index 37333d38b576..f88e7cc813ef 100644 +--- a/drivers/net/phy/at803x.c ++++ b/drivers/net/phy/at803x.c +@@ -198,7 +198,7 @@ static int at803x_probe(struct phy_device *phydev) + if (!priv) + return -ENOMEM; + +- gpiod_reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); ++ gpiod_reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW); + if (IS_ERR(gpiod_reset)) + return PTR_ERR(gpiod_reset); + +@@ -274,10 +274,10 @@ static void at803x_link_change_notify(struct phy_device *phydev) + + at803x_context_save(phydev, &context); + +- gpiod_set_value(priv->gpiod_reset, 0); +- msleep(1); + gpiod_set_value(priv->gpiod_reset, 1); + msleep(1); ++ gpiod_set_value(priv->gpiod_reset, 0); ++ msleep(1); + + at803x_context_restore(phydev, &context); + +diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c +index bffa70e46202..b7bc27a89454 100644 +--- a/drivers/net/phy/bcm7xxx.c ++++ b/drivers/net/phy/bcm7xxx.c +@@ -270,7 +270,7 @@ static int bcm7xxx_config_init(struct phy_device *phydev) + 
phy_write(phydev, MII_BCM7XXX_100TX_FALSE_CAR, 0x7555); + + /* reset shadow mode 2 */ +- ret = phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, MII_BCM7XXX_SHD_MODE_2, 0); ++ ret = phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, 0, MII_BCM7XXX_SHD_MODE_2); + if (ret < 0) + return ret; + +diff --git a/drivers/net/phy/mdio-sun4i.c b/drivers/net/phy/mdio-sun4i.c +index afd76e07088b..0e8dd446e8c1 100644 +--- a/drivers/net/phy/mdio-sun4i.c ++++ b/drivers/net/phy/mdio-sun4i.c +@@ -134,6 +134,7 @@ static int sun4i_mdio_probe(struct platform_device *pdev) + } + + dev_info(&pdev->dev, "no regulator found\n"); ++ data->regulator = NULL; + } else { + ret = regulator_enable(data->regulator); + if (ret) +@@ -149,7 +150,8 @@ static int sun4i_mdio_probe(struct platform_device *pdev) + return 0; + + err_out_disable_regulator: +- regulator_disable(data->regulator); ++ if (data->regulator) ++ regulator_disable(data->regulator); + err_out_free_mdiobus: + mdiobus_free(bus); + return ret; +diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c +index ba84fc3637b1..4eba646789c3 100644 +--- a/drivers/net/phy/micrel.c ++++ b/drivers/net/phy/micrel.c +@@ -482,9 +482,17 @@ static int ksz9031_config_init(struct phy_device *phydev) + "txd2-skew-ps", "txd3-skew-ps" + }; + static const char *control_skews[2] = {"txen-skew-ps", "rxdv-skew-ps"}; ++ const struct device *dev_walker; + +- if (!of_node && dev->parent->of_node) +- of_node = dev->parent->of_node; ++ /* The Micrel driver has a deprecated option to place phy OF ++ * properties in the MAC node. Walk up the tree of devices to ++ * find a device with an OF node. 
++ */ ++ dev_walker = &phydev->dev; ++ do { ++ of_node = dev_walker->of_node; ++ dev_walker = dev_walker->parent; ++ } while (!of_node && dev_walker); + + if (of_node) { + ksz9031_of_load_skew_values(phydev, of_node, +diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c +index 7d2cf015c5e7..e1c17ab5c2d5 100644 +--- a/drivers/net/phy/phy.c ++++ b/drivers/net/phy/phy.c +@@ -699,25 +699,29 @@ void phy_change(struct work_struct *work) + struct phy_device *phydev = + container_of(work, struct phy_device, phy_queue); + +- if (phydev->drv->did_interrupt && +- !phydev->drv->did_interrupt(phydev)) +- goto ignore; ++ if (phy_interrupt_is_valid(phydev)) { ++ if (phydev->drv->did_interrupt && ++ !phydev->drv->did_interrupt(phydev)) ++ goto ignore; + +- if (phy_disable_interrupts(phydev)) +- goto phy_err; ++ if (phy_disable_interrupts(phydev)) ++ goto phy_err; ++ } + + mutex_lock(&phydev->lock); + if ((PHY_RUNNING == phydev->state) || (PHY_NOLINK == phydev->state)) + phydev->state = PHY_CHANGELINK; + mutex_unlock(&phydev->lock); + +- atomic_dec(&phydev->irq_disable); +- enable_irq(phydev->irq); ++ if (phy_interrupt_is_valid(phydev)) { ++ atomic_dec(&phydev->irq_disable); ++ enable_irq(phydev->irq); + +- /* Reenable interrupts */ +- if (PHY_HALTED != phydev->state && +- phy_config_interrupt(phydev, PHY_INTERRUPT_ENABLED)) +- goto irq_enable_err; ++ /* Reenable interrupts */ ++ if (PHY_HALTED != phydev->state && ++ phy_config_interrupt(phydev, PHY_INTERRUPT_ENABLED)) ++ goto irq_enable_err; ++ } + + /* reschedule state queue work to run as soon as possible */ + cancel_delayed_work_sync(&phydev->state_queue); +@@ -912,10 +916,10 @@ void phy_state_machine(struct work_struct *work) + phydev->adjust_link(phydev->attached_dev); + break; + case PHY_RUNNING: +- /* Only register a CHANGE if we are polling or ignoring +- * interrupts and link changed since latest checking. ++ /* Only register a CHANGE if we are polling and link changed ++ * since latest checking. 
+ */ +- if (!phy_interrupt_is_valid(phydev)) { ++ if (phydev->irq == PHY_POLL) { + old_link = phydev->link; + err = phy_read_status(phydev); + if (err) +@@ -1015,15 +1019,21 @@ void phy_state_machine(struct work_struct *work) + dev_dbg(&phydev->dev, "PHY state change %s -> %s\n", + phy_state_to_str(old_state), phy_state_to_str(phydev->state)); + +- queue_delayed_work(system_power_efficient_wq, &phydev->state_queue, +- PHY_STATE_TIME * HZ); ++ /* Only re-schedule a PHY state machine change if we are polling the ++ * PHY, if PHY_IGNORE_INTERRUPT is set, then we will be moving ++ * between states from phy_mac_interrupt() ++ */ ++ if (phydev->irq == PHY_POLL) ++ queue_delayed_work(system_power_efficient_wq, &phydev->state_queue, ++ PHY_STATE_TIME * HZ); + } + + void phy_mac_interrupt(struct phy_device *phydev, int new_link) + { +- cancel_work_sync(&phydev->phy_queue); + phydev->link = new_link; +- schedule_work(&phydev->phy_queue); ++ ++ /* Trigger a state machine change */ ++ queue_work(system_power_efficient_wq, &phydev->phy_queue); + } + EXPORT_SYMBOL(phy_mac_interrupt); + +diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c +index b15eceb8b442..3b2b853ee3d3 100644 +--- a/drivers/net/phy/phy_device.c ++++ b/drivers/net/phy/phy_device.c +@@ -522,6 +522,7 @@ struct phy_device *phy_connect(struct net_device *dev, const char *bus_id, + phydev = to_phy_device(d); + + rc = phy_connect_direct(dev, phydev, handler, interface); ++ put_device(d); + if (rc) + return ERR_PTR(rc); + +@@ -721,6 +722,7 @@ struct phy_device *phy_attach(struct net_device *dev, const char *bus_id, + phydev = to_phy_device(d); + + rc = phy_attach_direct(dev, phydev, phydev->dev_flags, interface); ++ put_device(d); + if (rc) + return ERR_PTR(rc); + +diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c +index d3d59122a357..27fd5640a273 100644 +--- a/drivers/net/vrf.c ++++ b/drivers/net/vrf.c +@@ -71,41 +71,6 @@ struct pcpu_dstats { + struct u64_stats_sync syncp; + }; + +-static struct 
dst_entry *vrf_ip_check(struct dst_entry *dst, u32 cookie) +-{ +- return dst; +-} +- +-static int vrf_ip_local_out(struct net *net, struct sock *sk, struct sk_buff *skb) +-{ +- return ip_local_out(net, sk, skb); +-} +- +-static unsigned int vrf_v4_mtu(const struct dst_entry *dst) +-{ +- /* TO-DO: return max ethernet size? */ +- return dst->dev->mtu; +-} +- +-static void vrf_dst_destroy(struct dst_entry *dst) +-{ +- /* our dst lives forever - or until the device is closed */ +-} +- +-static unsigned int vrf_default_advmss(const struct dst_entry *dst) +-{ +- return 65535 - 40; +-} +- +-static struct dst_ops vrf_dst_ops = { +- .family = AF_INET, +- .local_out = vrf_ip_local_out, +- .check = vrf_ip_check, +- .mtu = vrf_v4_mtu, +- .destroy = vrf_dst_destroy, +- .default_advmss = vrf_default_advmss, +-}; +- + /* neighbor handling is done with actual device; do not want + * to flip skb->dev for those ndisc packets. This really fails + * for multiple next protocols (e.g., NEXTHDR_HOP). But it is +@@ -363,46 +328,6 @@ static netdev_tx_t vrf_xmit(struct sk_buff *skb, struct net_device *dev) + } + + #if IS_ENABLED(CONFIG_IPV6) +-static struct dst_entry *vrf_ip6_check(struct dst_entry *dst, u32 cookie) +-{ +- return dst; +-} +- +-static struct dst_ops vrf_dst_ops6 = { +- .family = AF_INET6, +- .local_out = ip6_local_out, +- .check = vrf_ip6_check, +- .mtu = vrf_v4_mtu, +- .destroy = vrf_dst_destroy, +- .default_advmss = vrf_default_advmss, +-}; +- +-static int init_dst_ops6_kmem_cachep(void) +-{ +- vrf_dst_ops6.kmem_cachep = kmem_cache_create("vrf_ip6_dst_cache", +- sizeof(struct rt6_info), +- 0, +- SLAB_HWCACHE_ALIGN, +- NULL); +- +- if (!vrf_dst_ops6.kmem_cachep) +- return -ENOMEM; +- +- return 0; +-} +- +-static void free_dst_ops6_kmem_cachep(void) +-{ +- kmem_cache_destroy(vrf_dst_ops6.kmem_cachep); +-} +- +-static int vrf_input6(struct sk_buff *skb) +-{ +- skb->dev->stats.rx_errors++; +- kfree_skb(skb); +- return 0; +-} +- + /* modelled after ip6_finish_output2 */ + 
static int vrf_finish_output6(struct net *net, struct sock *sk, + struct sk_buff *skb) +@@ -445,67 +370,34 @@ static int vrf_output6(struct net *net, struct sock *sk, struct sk_buff *skb) + !(IP6CB(skb)->flags & IP6SKB_REROUTED)); + } + +-static void vrf_rt6_destroy(struct net_vrf *vrf) ++static void vrf_rt6_release(struct net_vrf *vrf) + { +- dst_destroy(&vrf->rt6->dst); +- free_percpu(vrf->rt6->rt6i_pcpu); ++ dst_release(&vrf->rt6->dst); + vrf->rt6 = NULL; + } + + static int vrf_rt6_create(struct net_device *dev) + { + struct net_vrf *vrf = netdev_priv(dev); +- struct dst_entry *dst; ++ struct net *net = dev_net(dev); + struct rt6_info *rt6; +- int cpu; + int rc = -ENOMEM; + +- rt6 = dst_alloc(&vrf_dst_ops6, dev, 0, +- DST_OBSOLETE_NONE, +- (DST_HOST | DST_NOPOLICY | DST_NOXFRM)); ++ rt6 = ip6_dst_alloc(net, dev, ++ DST_HOST | DST_NOPOLICY | DST_NOXFRM | DST_NOCACHE); + if (!rt6) + goto out; + +- dst = &rt6->dst; +- +- rt6->rt6i_pcpu = alloc_percpu_gfp(struct rt6_info *, GFP_KERNEL); +- if (!rt6->rt6i_pcpu) { +- dst_destroy(dst); +- goto out; +- } +- for_each_possible_cpu(cpu) { +- struct rt6_info **p = per_cpu_ptr(rt6->rt6i_pcpu, cpu); +- *p = NULL; +- } +- +- memset(dst + 1, 0, sizeof(*rt6) - sizeof(*dst)); +- +- INIT_LIST_HEAD(&rt6->rt6i_siblings); +- INIT_LIST_HEAD(&rt6->rt6i_uncached); +- +- rt6->dst.input = vrf_input6; + rt6->dst.output = vrf_output6; +- +- rt6->rt6i_table = fib6_get_table(dev_net(dev), vrf->tb_id); +- +- atomic_set(&rt6->dst.__refcnt, 2); +- ++ rt6->rt6i_table = fib6_get_table(net, vrf->tb_id); ++ dst_hold(&rt6->dst); + vrf->rt6 = rt6; + rc = 0; + out: + return rc; + } + #else +-static int init_dst_ops6_kmem_cachep(void) +-{ +- return 0; +-} +- +-static void free_dst_ops6_kmem_cachep(void) +-{ +-} +- +-static void vrf_rt6_destroy(struct net_vrf *vrf) ++static void vrf_rt6_release(struct net_vrf *vrf) + { + } + +@@ -577,11 +469,11 @@ static int vrf_output(struct net *net, struct sock *sk, struct sk_buff *skb) + !(IPCB(skb)->flags & 
IPSKB_REROUTED)); + } + +-static void vrf_rtable_destroy(struct net_vrf *vrf) ++static void vrf_rtable_release(struct net_vrf *vrf) + { + struct dst_entry *dst = (struct dst_entry *)vrf->rth; + +- dst_destroy(dst); ++ dst_release(dst); + vrf->rth = NULL; + } + +@@ -590,22 +482,10 @@ static struct rtable *vrf_rtable_create(struct net_device *dev) + struct net_vrf *vrf = netdev_priv(dev); + struct rtable *rth; + +- rth = dst_alloc(&vrf_dst_ops, dev, 2, +- DST_OBSOLETE_NONE, +- (DST_HOST | DST_NOPOLICY | DST_NOXFRM)); ++ rth = rt_dst_alloc(dev, 0, RTN_UNICAST, 1, 1, 0); + if (rth) { + rth->dst.output = vrf_output; +- rth->rt_genid = rt_genid_ipv4(dev_net(dev)); +- rth->rt_flags = 0; +- rth->rt_type = RTN_UNICAST; +- rth->rt_is_input = 0; +- rth->rt_iif = 0; +- rth->rt_pmtu = 0; +- rth->rt_gateway = 0; +- rth->rt_uses_gateway = 0; + rth->rt_table_id = vrf->tb_id; +- INIT_LIST_HEAD(&rth->rt_uncached); +- rth->rt_uncached_list = NULL; + } + + return rth; +@@ -739,8 +619,8 @@ static void vrf_dev_uninit(struct net_device *dev) + // struct list_head *head = &queue->all_slaves; + // struct slave *slave, *next; + +- vrf_rtable_destroy(vrf); +- vrf_rt6_destroy(vrf); ++ vrf_rtable_release(vrf); ++ vrf_rt6_release(vrf); + + // list_for_each_entry_safe(slave, next, head, list) + // vrf_del_slave(dev, slave->dev); +@@ -772,7 +652,7 @@ static int vrf_dev_init(struct net_device *dev) + return 0; + + out_rth: +- vrf_rtable_destroy(vrf); ++ vrf_rtable_release(vrf); + out_stats: + free_percpu(dev->dstats); + dev->dstats = NULL; +@@ -805,7 +685,7 @@ static struct rtable *vrf_get_rtable(const struct net_device *dev, + struct net_vrf *vrf = netdev_priv(dev); + + rth = vrf->rth; +- atomic_inc(&rth->dst.__refcnt); ++ dst_hold(&rth->dst); + } + + return rth; +@@ -856,7 +736,7 @@ static struct dst_entry *vrf_get_rt6_dst(const struct net_device *dev, + struct net_vrf *vrf = netdev_priv(dev); + + rt = vrf->rt6; +- atomic_inc(&rt->dst.__refcnt); ++ dst_hold(&rt->dst); + } + + return (struct 
dst_entry *)rt; +@@ -1003,19 +883,6 @@ static int __init vrf_init_module(void) + { + int rc; + +- vrf_dst_ops.kmem_cachep = +- kmem_cache_create("vrf_ip_dst_cache", +- sizeof(struct rtable), 0, +- SLAB_HWCACHE_ALIGN, +- NULL); +- +- if (!vrf_dst_ops.kmem_cachep) +- return -ENOMEM; +- +- rc = init_dst_ops6_kmem_cachep(); +- if (rc != 0) +- goto error2; +- + register_netdevice_notifier(&vrf_notifier_block); + + rc = rtnl_link_register(&vrf_link_ops); +@@ -1026,22 +893,10 @@ static int __init vrf_init_module(void) + + error: + unregister_netdevice_notifier(&vrf_notifier_block); +- free_dst_ops6_kmem_cachep(); +-error2: +- kmem_cache_destroy(vrf_dst_ops.kmem_cachep); + return rc; + } + +-static void __exit vrf_cleanup_module(void) +-{ +- rtnl_link_unregister(&vrf_link_ops); +- unregister_netdevice_notifier(&vrf_notifier_block); +- kmem_cache_destroy(vrf_dst_ops.kmem_cachep); +- free_dst_ops6_kmem_cachep(); +-} +- + module_init(vrf_init_module); +-module_exit(vrf_cleanup_module); + MODULE_AUTHOR("Shrijeet Mukherjee, David Ahern"); + MODULE_DESCRIPTION("Device driver to instantiate VRF domains"); + MODULE_LICENSE("GPL"); +diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c +index d294949005bd..752f44a0e3af 100644 +--- a/drivers/net/vxlan.c ++++ b/drivers/net/vxlan.c +@@ -2054,7 +2054,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, + } + + /* Bypass encapsulation if the destination is local */ +- if (rt->rt_flags & RTCF_LOCAL && ++ if (!info && rt->rt_flags & RTCF_LOCAL && + !(rt->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))) { + struct vxlan_dev *dst_vxlan; + +@@ -2112,7 +2112,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, + + /* Bypass encapsulation if the destination is local */ + rt6i_flags = ((struct rt6_info *)ndst)->rt6i_flags; +- if (rt6i_flags & RTF_LOCAL && ++ if (!info && rt6i_flags & RTF_LOCAL && + !(rt6i_flags & (RTCF_BROADCAST | RTCF_MULTICAST))) { + struct vxlan_dev *dst_vxlan; + +@@ -2927,30 
+2927,6 @@ static int vxlan_dev_configure(struct net *src_net, struct net_device *dev, + return 0; + } + +-struct net_device *vxlan_dev_create(struct net *net, const char *name, +- u8 name_assign_type, struct vxlan_config *conf) +-{ +- struct nlattr *tb[IFLA_MAX+1]; +- struct net_device *dev; +- int err; +- +- memset(&tb, 0, sizeof(tb)); +- +- dev = rtnl_create_link(net, name, name_assign_type, +- &vxlan_link_ops, tb); +- if (IS_ERR(dev)) +- return dev; +- +- err = vxlan_dev_configure(net, dev, conf); +- if (err < 0) { +- free_netdev(dev); +- return ERR_PTR(err); +- } +- +- return dev; +-} +-EXPORT_SYMBOL_GPL(vxlan_dev_create); +- + static int vxlan_newlink(struct net *src_net, struct net_device *dev, + struct nlattr *tb[], struct nlattr *data[]) + { +@@ -3218,6 +3194,40 @@ static struct rtnl_link_ops vxlan_link_ops __read_mostly = { + .get_link_net = vxlan_get_link_net, + }; + ++struct net_device *vxlan_dev_create(struct net *net, const char *name, ++ u8 name_assign_type, ++ struct vxlan_config *conf) ++{ ++ struct nlattr *tb[IFLA_MAX + 1]; ++ struct net_device *dev; ++ int err; ++ ++ memset(&tb, 0, sizeof(tb)); ++ ++ dev = rtnl_create_link(net, name, name_assign_type, ++ &vxlan_link_ops, tb); ++ if (IS_ERR(dev)) ++ return dev; ++ ++ err = vxlan_dev_configure(net, dev, conf); ++ if (err < 0) { ++ free_netdev(dev); ++ return ERR_PTR(err); ++ } ++ ++ err = rtnl_configure_link(dev, NULL); ++ if (err < 0) { ++ LIST_HEAD(list_kill); ++ ++ vxlan_dellink(dev, &list_kill); ++ unregister_netdevice_many(&list_kill); ++ return ERR_PTR(err); ++ } ++ ++ return dev; ++} ++EXPORT_SYMBOL_GPL(vxlan_dev_create); ++ + static void vxlan_handle_lowerdev_unregister(struct vxlan_net *vn, + struct net_device *dev) + { +diff --git a/drivers/net/wimax/i2400m/usb-fw.c b/drivers/net/wimax/i2400m/usb-fw.c +index e74664b84925..4e4167976acf 100644 +--- a/drivers/net/wimax/i2400m/usb-fw.c ++++ b/drivers/net/wimax/i2400m/usb-fw.c +@@ -354,6 +354,7 @@ out: + 
usb_autopm_put_interface(i2400mu->usb_iface); + d_fnend(8, dev, "(i2400m %p ack %p size %zu) = %ld\n", + i2400m, ack, ack_size, (long) result); ++ usb_put_urb(&notif_urb); + return result; + + error_exceeded: +diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c +index 0c23768aa1ec..ef9fb9ddde5e 100644 +--- a/drivers/net/wireless/ath/ath10k/core.c ++++ b/drivers/net/wireless/ath/ath10k/core.c +@@ -1805,7 +1805,7 @@ static int ath10k_core_probe_fw(struct ath10k *ar) + if (ret && ret != -EOPNOTSUPP) { + ath10k_err(ar, "failed to get board id from otp for qca99x0: %d\n", + ret); +- return ret; ++ goto err_free_firmware_files; + } + + ret = ath10k_core_fetch_board_file(ar); +diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c +index 2294709ee8b0..fd85f996c554 100644 +--- a/drivers/net/wireless/ath/ath9k/htc_hst.c ++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c +@@ -414,7 +414,7 @@ void ath9k_htc_rx_msg(struct htc_target *htc_handle, + return; + } + +- if (epid >= ENDPOINT_MAX) { ++ if (epid < 0 || epid >= ENDPOINT_MAX) { + if (pipe_id != USB_REG_IN_PIPE) + dev_kfree_skb_any(skb); + else +diff --git a/drivers/net/wireless/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/brcm80211/brcmfmac/cfg80211.c +index 231c0ba6acb9..1992aae137cd 100644 +--- a/drivers/net/wireless/brcm80211/brcmfmac/cfg80211.c ++++ b/drivers/net/wireless/brcm80211/brcmfmac/cfg80211.c +@@ -2419,12 +2419,14 @@ brcmf_cfg80211_get_station(struct wiphy *wiphy, struct net_device *ndev, + const u8 *mac, struct station_info *sinfo) + { + struct brcmf_if *ifp = netdev_priv(ndev); ++ struct brcmf_scb_val_le scb_val; + s32 err = 0; + struct brcmf_sta_info_le sta_info_le; + u32 sta_flags; + u32 is_tdls_peer; + s32 total_rssi; + s32 count_rssi; ++ int rssi; + u32 i; + + brcmf_dbg(TRACE, "Enter, MAC %pM\n", mac); +@@ -2505,6 +2507,20 @@ brcmf_cfg80211_get_station(struct wiphy *wiphy, struct net_device *ndev, + sinfo->filled |= 
BIT(NL80211_STA_INFO_SIGNAL); + total_rssi /= count_rssi; + sinfo->signal = total_rssi; ++ } else if (test_bit(BRCMF_VIF_STATUS_CONNECTED, ++ &ifp->vif->sme_state)) { ++ memset(&scb_val, 0, sizeof(scb_val)); ++ err = brcmf_fil_cmd_data_get(ifp, BRCMF_C_GET_RSSI, ++ &scb_val, sizeof(scb_val)); ++ if (err) { ++ brcmf_err("Could not get rssi (%d)\n", err); ++ goto done; ++ } else { ++ rssi = le32_to_cpu(scb_val.val); ++ sinfo->filled |= BIT(NL80211_STA_INFO_SIGNAL); ++ sinfo->signal = rssi; ++ brcmf_dbg(CONN, "RSSI %d dBm\n", rssi); ++ } + } + } + done: +diff --git a/drivers/net/wireless/brcm80211/brcmfmac/fwsignal.c b/drivers/net/wireless/brcm80211/brcmfmac/fwsignal.c +index 086cac3f86d6..7b120d841aed 100644 +--- a/drivers/net/wireless/brcm80211/brcmfmac/fwsignal.c ++++ b/drivers/net/wireless/brcm80211/brcmfmac/fwsignal.c +@@ -2262,10 +2262,22 @@ void brcmf_fws_bustxfail(struct brcmf_fws_info *fws, struct sk_buff *skb) + void brcmf_fws_bus_blocked(struct brcmf_pub *drvr, bool flow_blocked) + { + struct brcmf_fws_info *fws = drvr->fws; ++ struct brcmf_if *ifp; ++ int i; + +- fws->bus_flow_blocked = flow_blocked; +- if (!flow_blocked) +- brcmf_fws_schedule_deq(fws); +- else +- fws->stats.bus_flow_block++; ++ if (fws->avoid_queueing) { ++ for (i = 0; i < BRCMF_MAX_IFS; i++) { ++ ifp = drvr->iflist[i]; ++ if (!ifp || !ifp->ndev) ++ continue; ++ brcmf_txflowblock_if(ifp, BRCMF_NETIF_STOP_REASON_FLOW, ++ flow_blocked); ++ } ++ } else { ++ fws->bus_flow_blocked = flow_blocked; ++ if (!flow_blocked) ++ brcmf_fws_schedule_deq(fws); ++ else ++ fws->stats.bus_flow_block++; ++ } + } +diff --git a/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c +index 6f7138cea555..f944f356d9c5 100644 +--- a/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c ++++ b/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c +@@ -1155,6 +1155,8 @@ brcmf_msgbuf_process_rx_complete(struct brcmf_msgbuf *msgbuf, void *buf) + brcmu_pkt_buf_free_skb(skb); + return; 
+ } ++ ++ skb->protocol = eth_type_trans(skb, ifp->ndev); + brcmf_netif_rx(ifp, skb); + } + +diff --git a/drivers/net/wireless/iwlwifi/iwl-7000.c b/drivers/net/wireless/iwlwifi/iwl-7000.c +index d9a4aee246a6..c7e34bb486c9 100644 +--- a/drivers/net/wireless/iwlwifi/iwl-7000.c ++++ b/drivers/net/wireless/iwlwifi/iwl-7000.c +@@ -70,7 +70,7 @@ + + /* Highest firmware API version supported */ + #define IWL7260_UCODE_API_MAX 17 +-#define IWL7265_UCODE_API_MAX 19 ++#define IWL7265_UCODE_API_MAX 17 + #define IWL7265D_UCODE_API_MAX 19 + + /* Oldest version we won't warn about */ +diff --git a/drivers/net/wireless/mwifiex/pcie.h b/drivers/net/wireless/mwifiex/pcie.h +index 48e549c3b285..347ba45f1f2a 100644 +--- a/drivers/net/wireless/mwifiex/pcie.h ++++ b/drivers/net/wireless/mwifiex/pcie.h +@@ -210,17 +210,17 @@ static const struct mwifiex_pcie_card_reg mwifiex_reg_8997 = { + .cmdrsp_addr_lo = PCIE_SCRATCH_4_REG, + .cmdrsp_addr_hi = PCIE_SCRATCH_5_REG, + .tx_rdptr = 0xC1A4, +- .tx_wrptr = 0xC1A8, +- .rx_rdptr = 0xC1A8, ++ .tx_wrptr = 0xC174, ++ .rx_rdptr = 0xC174, + .rx_wrptr = 0xC1A4, + .evt_rdptr = PCIE_SCRATCH_10_REG, + .evt_wrptr = PCIE_SCRATCH_11_REG, + .drv_rdy = PCIE_SCRATCH_12_REG, + .tx_start_ptr = 16, + .tx_mask = 0x0FFF0000, +- .tx_wrap_mask = 0x01FF0000, ++ .tx_wrap_mask = 0x1FFF0000, + .rx_mask = 0x00000FFF, +- .rx_wrap_mask = 0x000001FF, ++ .rx_wrap_mask = 0x00001FFF, + .tx_rollover_ind = BIT(28), + .rx_rollover_ind = BIT(12), + .evt_rollover_ind = MWIFIEX_BD_FLAG_EVT_ROLLOVER_IND, +@@ -342,6 +342,7 @@ mwifiex_pcie_txbd_empty(struct pcie_service_card *card, u32 rdptr) + return 1; + break; + case PCIE_DEVICE_ID_MARVELL_88W8897: ++ case PCIE_DEVICE_ID_MARVELL_88W8997: + if (((card->txbd_wrptr & reg->tx_mask) == + (rdptr & reg->tx_mask)) && + ((card->txbd_wrptr & reg->tx_rollover_ind) == +diff --git a/drivers/net/wireless/mwifiex/sta_event.c b/drivers/net/wireless/mwifiex/sta_event.c +index ff3ee9dfbbd5..23bae87d4d3d 100644 +--- 
a/drivers/net/wireless/mwifiex/sta_event.c ++++ b/drivers/net/wireless/mwifiex/sta_event.c +@@ -607,11 +607,13 @@ int mwifiex_process_sta_event(struct mwifiex_private *priv) + + case EVENT_PS_AWAKE: + mwifiex_dbg(adapter, EVENT, "info: EVENT: AWAKE\n"); +- if (!adapter->pps_uapsd_mode && priv->port_open && ++ if (!adapter->pps_uapsd_mode && ++ (priv->port_open || ++ (priv->bss_mode == NL80211_IFTYPE_ADHOC)) && + priv->media_connected && adapter->sleep_period.period) { +- adapter->pps_uapsd_mode = true; +- mwifiex_dbg(adapter, EVENT, +- "event: PPS/UAPSD mode activated\n"); ++ adapter->pps_uapsd_mode = true; ++ mwifiex_dbg(adapter, EVENT, ++ "event: PPS/UAPSD mode activated\n"); + } + adapter->tx_lock_flag = false; + if (adapter->pps_uapsd_mode && adapter->gen_null_pkt) { +diff --git a/drivers/net/wireless/mwifiex/wmm.c b/drivers/net/wireless/mwifiex/wmm.c +index 3a2ecb6cf1c3..cad399221b61 100644 +--- a/drivers/net/wireless/mwifiex/wmm.c ++++ b/drivers/net/wireless/mwifiex/wmm.c +@@ -475,7 +475,8 @@ mwifiex_wmm_lists_empty(struct mwifiex_adapter *adapter) + priv = adapter->priv[i]; + if (!priv) + continue; +- if (!priv->port_open) ++ if (!priv->port_open && ++ (priv->bss_mode != NL80211_IFTYPE_ADHOC)) + continue; + if (adapter->if_ops.is_port_ready && + !adapter->if_ops.is_port_ready(priv)) +@@ -1109,7 +1110,8 @@ mwifiex_wmm_get_highest_priolist_ptr(struct mwifiex_adapter *adapter, + + priv_tmp = adapter->bss_prio_tbl[j].bss_prio_cur->priv; + +- if (!priv_tmp->port_open || ++ if (((priv_tmp->bss_mode != NL80211_IFTYPE_ADHOC) && ++ !priv_tmp->port_open) || + (atomic_read(&priv_tmp->wmm.tx_pkts_queued) == 0)) + continue; + +diff --git a/drivers/of/of_mdio.c b/drivers/of/of_mdio.c +index a87a868fed64..2b1ccb806249 100644 +--- a/drivers/of/of_mdio.c ++++ b/drivers/of/of_mdio.c +@@ -334,8 +334,11 @@ int of_phy_register_fixed_link(struct device_node *np) + status.link = 1; + status.duplex = of_property_read_bool(fixed_link_node, + "full-duplex"); +- if 
(of_property_read_u32(fixed_link_node, "speed", &status.speed)) ++ if (of_property_read_u32(fixed_link_node, "speed", ++ &status.speed)) { ++ of_node_put(fixed_link_node); + return -EINVAL; ++ } + status.pause = of_property_read_bool(fixed_link_node, "pause"); + status.asym_pause = of_property_read_bool(fixed_link_node, + "asym-pause"); +diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c +index 6ac6618c1c10..ac9c1172c84a 100644 +--- a/drivers/pci/pci-sysfs.c ++++ b/drivers/pci/pci-sysfs.c +@@ -1027,6 +1027,9 @@ static int pci_mmap_resource(struct kobject *kobj, struct bin_attribute *attr, + if (i >= PCI_ROM_RESOURCE) + return -ENODEV; + ++ if (res->flags & IORESOURCE_MEM && iomem_is_exclusive(res->start)) ++ return -EINVAL; ++ + if (!pci_mmap_fits(pdev, i, vma, PCI_MMAP_SYSFS)) { + WARN(1, "process \"%s\" tried to map 0x%08lx bytes at page 0x%08lx on %s BAR %d (start 0x%16Lx, size 0x%16Lx)\n", + current->comm, vma->vm_end-vma->vm_start, vma->vm_pgoff, +@@ -1043,10 +1046,6 @@ static int pci_mmap_resource(struct kobject *kobj, struct bin_attribute *attr, + pci_resource_to_user(pdev, i, res, &start, &end); + vma->vm_pgoff += start >> PAGE_SHIFT; + mmap_type = res->flags & IORESOURCE_MEM ? 
pci_mmap_mem : pci_mmap_io; +- +- if (res->flags & IORESOURCE_MEM && iomem_is_exclusive(start)) +- return -EINVAL; +- + return pci_mmap_page_range(pdev, vma, mmap_type, write_combine); + } + +diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c +index 17dd8fe12b54..4ae15edde037 100644 +--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c ++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c +@@ -795,7 +795,7 @@ static int bcm2835_pctl_dt_node_to_map(struct pinctrl_dev *pctldev, + return 0; + + out: +- kfree(maps); ++ bcm2835_pctl_dt_free_map(pctldev, maps, num_pins * maps_per_pin); + return err; + } + +diff --git a/drivers/pinctrl/pinctrl-tegra.c b/drivers/pinctrl/pinctrl-tegra.c +index a30e967d75c2..d3d1dceaec2d 100644 +--- a/drivers/pinctrl/pinctrl-tegra.c ++++ b/drivers/pinctrl/pinctrl-tegra.c +@@ -418,7 +418,7 @@ static int tegra_pinconf_reg(struct tegra_pmx *pmx, + return -ENOTSUPP; + } + +- if (*reg < 0 || *bit > 31) { ++ if (*reg < 0 || *bit < 0) { + if (report_err) { + const char *prop = "unknown"; + int i; +diff --git a/drivers/power/bq27xxx_battery.c b/drivers/power/bq27xxx_battery.c +index 6c3a447f378b..286122df3e01 100644 +--- a/drivers/power/bq27xxx_battery.c ++++ b/drivers/power/bq27xxx_battery.c +@@ -198,10 +198,10 @@ static u8 bq27500_regs[] = { + INVALID_REG_ADDR, /* TTECP - NA */ + 0x0c, /* NAC */ + 0x12, /* LMD(FCC) */ +- 0x1e, /* CYCT */ ++ 0x2a, /* CYCT */ + INVALID_REG_ADDR, /* AE - NA */ +- 0x20, /* SOC(RSOC) */ +- 0x2e, /* DCAP(ILMD) */ ++ 0x2c, /* SOC(RSOC) */ ++ 0x3c, /* DCAP(ILMD) */ + INVALID_REG_ADDR, /* AP - NA */ + }; + +@@ -242,7 +242,7 @@ static u8 bq27541_regs[] = { + INVALID_REG_ADDR, /* AE - NA */ + 0x2c, /* SOC(RSOC) */ + 0x3c, /* DCAP */ +- 0x76, /* AP */ ++ 0x24, /* AP */ + }; + + static u8 bq27545_regs[] = { +@@ -471,7 +471,10 @@ static int bq27xxx_battery_read_soc(struct bq27xxx_device_info *di) + { + int soc; + +- soc = bq27xxx_read(di, BQ27XXX_REG_SOC, false); ++ if (di->chip == BQ27000 || di->chip == 
BQ27010) ++ soc = bq27xxx_read(di, BQ27XXX_REG_SOC, true); ++ else ++ soc = bq27xxx_read(di, BQ27XXX_REG_SOC, false); + + if (soc < 0) + dev_dbg(di->dev, "error reading State-of-Charge\n"); +@@ -536,7 +539,10 @@ static int bq27xxx_battery_read_dcap(struct bq27xxx_device_info *di) + { + int dcap; + +- dcap = bq27xxx_read(di, BQ27XXX_REG_DCAP, false); ++ if (di->chip == BQ27000 || di->chip == BQ27010) ++ dcap = bq27xxx_read(di, BQ27XXX_REG_DCAP, true); ++ else ++ dcap = bq27xxx_read(di, BQ27XXX_REG_DCAP, false); + + if (dcap < 0) { + dev_dbg(di->dev, "error reading initial last measured discharge\n"); +@@ -544,7 +550,7 @@ static int bq27xxx_battery_read_dcap(struct bq27xxx_device_info *di) + } + + if (di->chip == BQ27000 || di->chip == BQ27010) +- dcap *= BQ27XXX_CURRENT_CONSTANT / BQ27XXX_RS; ++ dcap = (dcap << 8) * BQ27XXX_CURRENT_CONSTANT / BQ27XXX_RS; + else + dcap *= 1000; + +diff --git a/drivers/power/ipaq_micro_battery.c b/drivers/power/ipaq_micro_battery.c +index f03014ea1dc4..65e9921c5a11 100644 +--- a/drivers/power/ipaq_micro_battery.c ++++ b/drivers/power/ipaq_micro_battery.c +@@ -261,7 +261,7 @@ static int micro_batt_probe(struct platform_device *pdev) + return 0; + + ac_err: +- power_supply_unregister(micro_ac_power); ++ power_supply_unregister(micro_batt_power); + batt_err: + cancel_delayed_work_sync(&mb->update); + destroy_workqueue(mb->wq); +diff --git a/drivers/power/test_power.c b/drivers/power/test_power.c +index 83c42ea88f2b..57246cdbd042 100644 +--- a/drivers/power/test_power.c ++++ b/drivers/power/test_power.c +@@ -301,6 +301,8 @@ static int map_get_value(struct battery_property_map *map, const char *key, + buf[MAX_KEYLENGTH-1] = '\0'; + + cr = strnlen(buf, MAX_KEYLENGTH) - 1; ++ if (cr < 0) ++ return def_val; + if (buf[cr] == '\n') + buf[cr] = '\0'; + +diff --git a/drivers/power/tps65217_charger.c b/drivers/power/tps65217_charger.c +index 040a40b4b173..4c56e54af6ac 100644 +--- a/drivers/power/tps65217_charger.c ++++ 
b/drivers/power/tps65217_charger.c +@@ -197,6 +197,7 @@ static int tps65217_charger_probe(struct platform_device *pdev) + { + struct tps65217 *tps = dev_get_drvdata(pdev->dev.parent); + struct tps65217_charger *charger; ++ struct power_supply_config cfg = {}; + int ret; + + dev_dbg(&pdev->dev, "%s\n", __func__); +@@ -209,9 +210,12 @@ static int tps65217_charger_probe(struct platform_device *pdev) + charger->tps = tps; + charger->dev = &pdev->dev; + ++ cfg.of_node = pdev->dev.of_node; ++ cfg.drv_data = charger; ++ + charger->ac = devm_power_supply_register(&pdev->dev, + &tps65217_charger_desc, +- NULL); ++ &cfg); + if (IS_ERR(charger->ac)) { + dev_err(&pdev->dev, "failed: power supply register\n"); + return PTR_ERR(charger->ac); +diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c +index f9b8c44677eb..fbf16603c494 100644 +--- a/drivers/regulator/core.c ++++ b/drivers/regulator/core.c +@@ -1057,18 +1057,18 @@ static int set_machine_constraints(struct regulator_dev *rdev, + + ret = machine_constraints_voltage(rdev, rdev->constraints); + if (ret != 0) +- goto out; ++ return ret; + + ret = machine_constraints_current(rdev, rdev->constraints); + if (ret != 0) +- goto out; ++ return ret; + + if (rdev->constraints->ilim_uA && ops->set_input_current_limit) { + ret = ops->set_input_current_limit(rdev, + rdev->constraints->ilim_uA); + if (ret < 0) { + rdev_err(rdev, "failed to set input limit\n"); +- goto out; ++ return ret; + } + } + +@@ -1077,21 +1077,20 @@ static int set_machine_constraints(struct regulator_dev *rdev, + ret = suspend_prepare(rdev, rdev->constraints->initial_state); + if (ret < 0) { + rdev_err(rdev, "failed to set suspend state\n"); +- goto out; ++ return ret; + } + } + + if (rdev->constraints->initial_mode) { + if (!ops->set_mode) { + rdev_err(rdev, "no set_mode operation\n"); +- ret = -EINVAL; +- goto out; ++ return -EINVAL; + } + + ret = ops->set_mode(rdev, rdev->constraints->initial_mode); + if (ret < 0) { + rdev_err(rdev, "failed to set 
initial mode: %d\n", ret); +- goto out; ++ return ret; + } + } + +@@ -1102,7 +1101,7 @@ static int set_machine_constraints(struct regulator_dev *rdev, + ret = _regulator_do_enable(rdev); + if (ret < 0 && ret != -EINVAL) { + rdev_err(rdev, "failed to enable\n"); +- goto out; ++ return ret; + } + } + +@@ -1111,7 +1110,7 @@ static int set_machine_constraints(struct regulator_dev *rdev, + ret = ops->set_ramp_delay(rdev, rdev->constraints->ramp_delay); + if (ret < 0) { + rdev_err(rdev, "failed to set ramp_delay\n"); +- goto out; ++ return ret; + } + } + +@@ -1119,7 +1118,7 @@ static int set_machine_constraints(struct regulator_dev *rdev, + ret = ops->set_pull_down(rdev); + if (ret < 0) { + rdev_err(rdev, "failed to set pull down\n"); +- goto out; ++ return ret; + } + } + +@@ -1127,7 +1126,7 @@ static int set_machine_constraints(struct regulator_dev *rdev, + ret = ops->set_soft_start(rdev); + if (ret < 0) { + rdev_err(rdev, "failed to set soft start\n"); +- goto out; ++ return ret; + } + } + +@@ -1136,16 +1135,12 @@ static int set_machine_constraints(struct regulator_dev *rdev, + ret = ops->set_over_current_protection(rdev); + if (ret < 0) { + rdev_err(rdev, "failed to set over current protection\n"); +- goto out; ++ return ret; + } + } + + print_constraints(rdev); + return 0; +-out: +- kfree(rdev->constraints); +- rdev->constraints = NULL; +- return ret; + } + + /** +@@ -3983,7 +3978,7 @@ unset_supplies: + + scrub: + regulator_ena_gpio_free(rdev); +- kfree(rdev->constraints); ++ + wash: + device_unregister(&rdev->dev); + /* device core frees rdev */ +diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c +index 5eaf14c15590..59cb3af9a318 100644 +--- a/drivers/scsi/cxgbi/libcxgbi.c ++++ b/drivers/scsi/cxgbi/libcxgbi.c +@@ -692,6 +692,7 @@ static struct rt6_info *find_route_ipv6(const struct in6_addr *saddr, + { + struct flowi6 fl; + ++ memset(&fl, 0, sizeof(fl)); + if (saddr) + memcpy(&fl.saddr, saddr, sizeof(struct in6_addr)); + if (daddr) +diff 
--git a/drivers/staging/media/lirc/lirc_imon.c b/drivers/staging/media/lirc/lirc_imon.c +index 534b8103ae80..ff1926ca1f96 100644 +--- a/drivers/staging/media/lirc/lirc_imon.c ++++ b/drivers/staging/media/lirc/lirc_imon.c +@@ -885,12 +885,14 @@ static int imon_probe(struct usb_interface *interface, + vendor, product, ifnum, usbdev->bus->busnum, usbdev->devnum); + + /* Everything went fine. Just unlock and return retval (with is 0) */ ++ mutex_unlock(&context->ctx_lock); + goto driver_unlock; + + unregister_lirc: + lirc_unregister_driver(driver->minor); + + free_tx_urb: ++ mutex_unlock(&context->ctx_lock); + usb_free_urb(tx_urb); + + free_rx_urb: +diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c +index 0f6bc6b8e4c6..1e0d2a33787e 100644 +--- a/drivers/staging/rtl8192u/r8192U_core.c ++++ b/drivers/staging/rtl8192u/r8192U_core.c +@@ -1050,7 +1050,7 @@ static void rtl8192_hard_data_xmit(struct sk_buff *skb, struct net_device *dev, + + spin_lock_irqsave(&priv->tx_lock, flags); + +- memcpy((unsigned char *)(skb->cb), &dev, sizeof(dev)); ++ *(struct net_device **)(skb->cb) = dev; + tcb_desc->bTxEnableFwCalcDur = 1; + skb_push(skb, priv->ieee80211->tx_headroom); + ret = rtl8192_tx(dev, skb); +@@ -1092,7 +1092,7 @@ static int rtl8192_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) + static void rtl8192_tx_isr(struct urb *tx_urb) + { + struct sk_buff *skb = (struct sk_buff *)tx_urb->context; +- struct net_device *dev = (struct net_device *)(skb->cb); ++ struct net_device *dev = *(struct net_device **)(skb->cb); + struct r8192_priv *priv = NULL; + cb_desc *tcb_desc = (cb_desc *)(skb->cb + MAX_DEV_ADDR_SIZE); + u8 queue_index = tcb_desc->queue_index; +diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c +index b9b9ffde4c7a..d2ceefe4a076 100644 +--- a/drivers/target/target_core_configfs.c ++++ b/drivers/target/target_core_configfs.c +@@ -1980,14 +1980,14 @@ static ssize_t 
target_dev_lba_map_store(struct config_item *item, + struct se_device *dev = to_device(item); + struct t10_alua_lba_map *lba_map = NULL; + struct list_head lba_list; +- char *map_entries, *ptr; ++ char *map_entries, *orig, *ptr; + char state; + int pg_num = -1, pg; + int ret = 0, num = 0, pg_id, alua_state; + unsigned long start_lba = -1, end_lba = -1; + unsigned long segment_size = -1, segment_mult = -1; + +- map_entries = kstrdup(page, GFP_KERNEL); ++ orig = map_entries = kstrdup(page, GFP_KERNEL); + if (!map_entries) + return -ENOMEM; + +@@ -2085,7 +2085,7 @@ out: + } else + core_alua_set_lba_map(dev, &lba_list, + segment_size, segment_mult); +- kfree(map_entries); ++ kfree(orig); + return count; + } + +diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c +index 1a4df5005aec..5fbb2d56565d 100644 +--- a/drivers/tty/serial/msm_serial.c ++++ b/drivers/tty/serial/msm_serial.c +@@ -872,37 +872,72 @@ struct msm_baud_map { + }; + + static const struct msm_baud_map * +-msm_find_best_baud(struct uart_port *port, unsigned int baud) ++msm_find_best_baud(struct uart_port *port, unsigned int baud, ++ unsigned long *rate) + { +- unsigned int i, divisor; +- const struct msm_baud_map *entry; ++ struct msm_port *msm_port = UART_TO_MSM(port); ++ unsigned int divisor, result; ++ unsigned long target, old, best_rate = 0, diff, best_diff = ULONG_MAX; ++ const struct msm_baud_map *entry, *end, *best; + static const struct msm_baud_map table[] = { +- { 1536, 0x00, 1 }, +- { 768, 0x11, 1 }, +- { 384, 0x22, 1 }, +- { 192, 0x33, 1 }, +- { 96, 0x44, 1 }, +- { 48, 0x55, 1 }, +- { 32, 0x66, 1 }, +- { 24, 0x77, 1 }, +- { 16, 0x88, 1 }, +- { 12, 0x99, 6 }, +- { 8, 0xaa, 6 }, +- { 6, 0xbb, 6 }, +- { 4, 0xcc, 6 }, +- { 3, 0xdd, 8 }, +- { 2, 0xee, 16 }, + { 1, 0xff, 31 }, +- { 0, 0xff, 31 }, ++ { 2, 0xee, 16 }, ++ { 3, 0xdd, 8 }, ++ { 4, 0xcc, 6 }, ++ { 6, 0xbb, 6 }, ++ { 8, 0xaa, 6 }, ++ { 12, 0x99, 6 }, ++ { 16, 0x88, 1 }, ++ { 24, 0x77, 1 }, ++ { 32, 0x66, 1 }, ++ { 
48, 0x55, 1 }, ++ { 96, 0x44, 1 }, ++ { 192, 0x33, 1 }, ++ { 384, 0x22, 1 }, ++ { 768, 0x11, 1 }, ++ { 1536, 0x00, 1 }, + }; + +- divisor = uart_get_divisor(port, baud); ++ best = table; /* Default to smallest divider */ ++ target = clk_round_rate(msm_port->clk, 16 * baud); ++ divisor = DIV_ROUND_CLOSEST(target, 16 * baud); ++ ++ end = table + ARRAY_SIZE(table); ++ entry = table; ++ while (entry < end) { ++ if (entry->divisor <= divisor) { ++ result = target / entry->divisor / 16; ++ diff = abs(result - baud); ++ ++ /* Keep track of best entry */ ++ if (diff < best_diff) { ++ best_diff = diff; ++ best = entry; ++ best_rate = target; ++ } + +- for (i = 0, entry = table; i < ARRAY_SIZE(table); i++, entry++) +- if (entry->divisor <= divisor) +- break; ++ if (result == baud) ++ break; ++ } else if (entry->divisor > divisor) { ++ old = target; ++ target = clk_round_rate(msm_port->clk, old + 1); ++ /* ++ * The rate didn't get any faster so we can't do ++ * better at dividing it down ++ */ ++ if (target == old) ++ break; ++ ++ /* Start the divisor search over at this new rate */ ++ entry = table; ++ divisor = DIV_ROUND_CLOSEST(target, 16 * baud); ++ continue; ++ } ++ entry++; ++ } + +- return entry; /* Default to smallest divider */ ++ *rate = best_rate; ++ return best; + } + + static int msm_set_baud_rate(struct uart_port *port, unsigned int baud, +@@ -911,22 +946,20 @@ static int msm_set_baud_rate(struct uart_port *port, unsigned int baud, + unsigned int rxstale, watermark, mask; + struct msm_port *msm_port = UART_TO_MSM(port); + const struct msm_baud_map *entry; +- unsigned long flags; +- +- entry = msm_find_best_baud(port, baud); +- +- msm_write(port, entry->code, UART_CSR); +- +- if (baud > 460800) +- port->uartclk = baud * 16; ++ unsigned long flags, rate; + + flags = *saved_flags; + spin_unlock_irqrestore(&port->lock, flags); + +- clk_set_rate(msm_port->clk, port->uartclk); ++ entry = msm_find_best_baud(port, baud, &rate); ++ clk_set_rate(msm_port->clk, rate); ++ 
baud = rate / 16 / entry->divisor; + + spin_lock_irqsave(&port->lock, flags); + *saved_flags = flags; ++ port->uartclk = rate; ++ ++ msm_write(port, entry->code, UART_CSR); + + /* RX stale watermark */ + rxstale = entry->rxstale; +diff --git a/drivers/tty/serial/samsung.c b/drivers/tty/serial/samsung.c +index 12bac2cbae4b..8d485f82443e 100644 +--- a/drivers/tty/serial/samsung.c ++++ b/drivers/tty/serial/samsung.c +@@ -1841,8 +1841,6 @@ static int s3c24xx_serial_probe(struct platform_device *pdev) + ourport->min_dma_size = max_t(int, ourport->port.fifosize, + dma_get_cache_alignment()); + +- probe_index++; +- + dbg("%s: initialising port %p...\n", __func__, ourport); + + ret = s3c24xx_serial_init_port(ourport, pdev); +@@ -1872,6 +1870,8 @@ static int s3c24xx_serial_probe(struct platform_device *pdev) + if (ret < 0) + dev_err(&pdev->dev, "failed to add cpufreq notifier\n"); + ++ probe_index++; ++ + return 0; + } + +diff --git a/drivers/usb/gadget/function/f_acm.c b/drivers/usb/gadget/function/f_acm.c +index 67e474b13fca..670a89f197cd 100644 +--- a/drivers/usb/gadget/function/f_acm.c ++++ b/drivers/usb/gadget/function/f_acm.c +@@ -779,10 +779,10 @@ static ssize_t f_acm_port_num_show(struct config_item *item, char *page) + return sprintf(page, "%u\n", to_f_serial_opts(item)->port_num); + } + +-CONFIGFS_ATTR_RO(f_acm_port_, num); ++CONFIGFS_ATTR_RO(f_acm_, port_num); + + static struct configfs_attribute *acm_attrs[] = { +- &f_acm_port_attr_num, ++ &f_acm_attr_port_num, + NULL, + }; + +diff --git a/drivers/usb/gadget/udc/pch_udc.c b/drivers/usb/gadget/udc/pch_udc.c +index 2806457b4748..3fd603494e86 100644 +--- a/drivers/usb/gadget/udc/pch_udc.c ++++ b/drivers/usb/gadget/udc/pch_udc.c +@@ -1488,11 +1488,11 @@ static void complete_req(struct pch_udc_ep *ep, struct pch_udc_request *req, + req->dma_mapped = 0; + } + ep->halted = 1; +- spin_lock(&dev->lock); ++ spin_unlock(&dev->lock); + if (!ep->in) + pch_udc_ep_clear_rrdy(ep); + usb_gadget_giveback_request(&ep->ep, 
&req->req); +- spin_unlock(&dev->lock); ++ spin_lock(&dev->lock); + ep->halted = halted; + } + +@@ -1731,14 +1731,12 @@ static int pch_udc_pcd_ep_enable(struct usb_ep *usbep, + static int pch_udc_pcd_ep_disable(struct usb_ep *usbep) + { + struct pch_udc_ep *ep; +- struct pch_udc_dev *dev; + unsigned long iflags; + + if (!usbep) + return -EINVAL; + + ep = container_of(usbep, struct pch_udc_ep, ep); +- dev = ep->dev; + if ((usbep->name == ep0_string) || !ep->ep.desc) + return -EINVAL; + +@@ -1769,12 +1767,10 @@ static struct usb_request *pch_udc_alloc_request(struct usb_ep *usbep, + struct pch_udc_request *req; + struct pch_udc_ep *ep; + struct pch_udc_data_dma_desc *dma_desc; +- struct pch_udc_dev *dev; + + if (!usbep) + return NULL; + ep = container_of(usbep, struct pch_udc_ep, ep); +- dev = ep->dev; + req = kzalloc(sizeof *req, gfp); + if (!req) + return NULL; +@@ -1947,12 +1943,10 @@ static int pch_udc_pcd_dequeue(struct usb_ep *usbep, + { + struct pch_udc_ep *ep; + struct pch_udc_request *req; +- struct pch_udc_dev *dev; + unsigned long flags; + int ret = -EINVAL; + + ep = container_of(usbep, struct pch_udc_ep, ep); +- dev = ep->dev; + if (!usbep || !usbreq || (!ep->ep.desc && ep->num)) + return ret; + req = container_of(usbreq, struct pch_udc_request, req); +@@ -1984,14 +1978,12 @@ static int pch_udc_pcd_dequeue(struct usb_ep *usbep, + static int pch_udc_pcd_set_halt(struct usb_ep *usbep, int halt) + { + struct pch_udc_ep *ep; +- struct pch_udc_dev *dev; + unsigned long iflags; + int ret; + + if (!usbep) + return -EINVAL; + ep = container_of(usbep, struct pch_udc_ep, ep); +- dev = ep->dev; + if (!ep->ep.desc && !ep->num) + return -EINVAL; + if (!ep->dev->driver || (ep->dev->gadget.speed == USB_SPEED_UNKNOWN)) +@@ -2029,14 +2021,12 @@ static int pch_udc_pcd_set_halt(struct usb_ep *usbep, int halt) + static int pch_udc_pcd_set_wedge(struct usb_ep *usbep) + { + struct pch_udc_ep *ep; +- struct pch_udc_dev *dev; + unsigned long iflags; + int ret; + + if (!usbep) + 
return -EINVAL; + ep = container_of(usbep, struct pch_udc_ep, ep); +- dev = ep->dev; + if (!ep->ep.desc && !ep->num) + return -EINVAL; + if (!ep->dev->driver || (ep->dev->gadget.speed == USB_SPEED_UNKNOWN)) +@@ -2593,9 +2583,9 @@ static void pch_udc_svc_ur_interrupt(struct pch_udc_dev *dev) + empty_req_queue(ep); + } + if (dev->driver) { +- spin_lock(&dev->lock); +- usb_gadget_udc_reset(&dev->gadget, dev->driver); + spin_unlock(&dev->lock); ++ usb_gadget_udc_reset(&dev->gadget, dev->driver); ++ spin_lock(&dev->lock); + } + } + +@@ -2646,7 +2636,7 @@ static void pch_udc_svc_enum_interrupt(struct pch_udc_dev *dev) + static void pch_udc_svc_intf_interrupt(struct pch_udc_dev *dev) + { + u32 reg, dev_stat = 0; +- int i, ret; ++ int i; + + dev_stat = pch_udc_read_device_status(dev); + dev->cfg_data.cur_intf = (dev_stat & UDC_DEVSTS_INTF_MASK) >> +@@ -2674,9 +2664,9 @@ static void pch_udc_svc_intf_interrupt(struct pch_udc_dev *dev) + dev->ep[i].halted = 0; + } + dev->stall = 0; +- spin_lock(&dev->lock); +- ret = dev->driver->setup(&dev->gadget, &dev->setup_data); + spin_unlock(&dev->lock); ++ dev->driver->setup(&dev->gadget, &dev->setup_data); ++ spin_lock(&dev->lock); + } + + /** +@@ -2686,7 +2676,7 @@ static void pch_udc_svc_intf_interrupt(struct pch_udc_dev *dev) + */ + static void pch_udc_svc_cfg_interrupt(struct pch_udc_dev *dev) + { +- int i, ret; ++ int i; + u32 reg, dev_stat = 0; + + dev_stat = pch_udc_read_device_status(dev); +@@ -2711,9 +2701,9 @@ static void pch_udc_svc_cfg_interrupt(struct pch_udc_dev *dev) + dev->stall = 0; + + /* call gadget zero with setup data received */ +- spin_lock(&dev->lock); +- ret = dev->driver->setup(&dev->gadget, &dev->setup_data); + spin_unlock(&dev->lock); ++ dev->driver->setup(&dev->gadget, &dev->setup_data); ++ spin_lock(&dev->lock); + } + + /** +diff --git a/drivers/usb/gadget/udc/udc-core.c b/drivers/usb/gadget/udc/udc-core.c +index 89f7cd66f5e6..a6a1678cb927 100644 +--- a/drivers/usb/gadget/udc/udc-core.c ++++ 
b/drivers/usb/gadget/udc/udc-core.c +@@ -97,7 +97,7 @@ void usb_gadget_unmap_request(struct usb_gadget *gadget, + return; + + if (req->num_mapped_sgs) { +- dma_unmap_sg(gadget->dev.parent, req->sg, req->num_mapped_sgs, ++ dma_unmap_sg(gadget->dev.parent, req->sg, req->num_sgs, + is_in ? DMA_TO_DEVICE : DMA_FROM_DEVICE); + + req->num_mapped_sgs = 0; +diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c +index ad5929fbceb1..98a12be76c9c 100644 +--- a/drivers/vfio/pci/vfio_pci_config.c ++++ b/drivers/vfio/pci/vfio_pci_config.c +@@ -698,7 +698,8 @@ static int vfio_vpd_config_write(struct vfio_pci_device *vdev, int pos, + if (pci_write_vpd(pdev, addr & ~PCI_VPD_ADDR_F, 4, &data) != 4) + return count; + } else { +- if (pci_read_vpd(pdev, addr, 4, &data) != 4) ++ data = 0; ++ if (pci_read_vpd(pdev, addr, 4, &data) < 0) + return count; + *pdata = cpu_to_le32(data); + } +diff --git a/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c b/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c +index da5356f48d0b..d4030d0c38e9 100644 +--- a/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c ++++ b/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c +@@ -110,7 +110,7 @@ int vfio_platform_amdxgbe_reset(struct vfio_platform_device *vdev) + usleep_range(10, 15); + + count = 2000; +- while (count-- && (ioread32(xgmac_regs->ioaddr + DMA_MR) & 1)) ++ while (--count && (ioread32(xgmac_regs->ioaddr + DMA_MR) & 1)) + usleep_range(500, 600); + + if (!count) +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c +index 774728143b63..de63cb9bc64b 100644 +--- a/fs/btrfs/disk-io.c ++++ b/fs/btrfs/disk-io.c +@@ -1750,7 +1750,7 @@ static int cleaner_kthread(void *arg) + */ + btrfs_delete_unused_bgs(root->fs_info); + sleep: +- if (!try_to_freeze() && !again) { ++ if (!again) { + set_current_state(TASK_INTERRUPTIBLE); + if (!kthread_should_stop()) + schedule(); +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c +index 34ffc125763f..3bb731b2156c 100644 +--- 
a/fs/btrfs/extent-tree.c ++++ b/fs/btrfs/extent-tree.c +@@ -10688,7 +10688,7 @@ int btrfs_init_space_info(struct btrfs_fs_info *fs_info) + + disk_super = fs_info->super_copy; + if (!btrfs_super_root(disk_super)) +- return 1; ++ return -EINVAL; + + features = btrfs_super_incompat_flags(disk_super); + if (features & BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS) +diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c +index cf104bbe30a1..c9793ce0d336 100644 +--- a/fs/cifs/connect.c ++++ b/fs/cifs/connect.c +@@ -338,8 +338,10 @@ static int reconn_set_ipaddr(struct TCP_Server_Info *server) + return rc; + } + ++ spin_lock(&cifs_tcp_ses_lock); + rc = cifs_convert_address((struct sockaddr *)&server->dstaddr, ipaddr, + strlen(ipaddr)); ++ spin_unlock(&cifs_tcp_ses_lock); + kfree(ipaddr); + + return !rc ? -1 : 0; +diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c +index 8744bd773823..dec23fb358ec 100644 +--- a/fs/gfs2/file.c ++++ b/fs/gfs2/file.c +@@ -1035,7 +1035,10 @@ static int do_flock(struct file *file, int cmd, struct file_lock *fl) + if (fl_gh->gh_state == state) + goto out; + locks_lock_file_wait(file, +- &(struct file_lock){.fl_type = F_UNLCK}); ++ &(struct file_lock) { ++ .fl_type = F_UNLCK, ++ .fl_flags = FL_FLOCK ++ }); + gfs2_glock_dq(fl_gh); + gfs2_holder_reinit(state, flags, fl_gh); + } else { +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 08207001d475..0308b5689638 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -6054,6 +6054,7 @@ static int nfs41_lock_expired(struct nfs4_state *state, struct file_lock *reques + static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock *request) + { + struct nfs_inode *nfsi = NFS_I(state->inode); ++ struct nfs4_state_owner *sp = state->owner; + unsigned char fl_flags = request->fl_flags; + int status = -ENOLCK; + +@@ -6068,6 +6069,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock + status = do_vfs_lock(state->inode, request); + if (status < 0) + goto out; ++ 
mutex_lock(&sp->so_delegreturn_mutex); + down_read(&nfsi->rwsem); + if (test_bit(NFS_DELEGATED_STATE, &state->flags)) { + /* Yes: cache locks! */ +@@ -6075,9 +6077,11 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock + request->fl_flags = fl_flags & ~FL_SLEEP; + status = do_vfs_lock(state->inode, request); + up_read(&nfsi->rwsem); ++ mutex_unlock(&sp->so_delegreturn_mutex); + goto out; + } + up_read(&nfsi->rwsem); ++ mutex_unlock(&sp->so_delegreturn_mutex); + status = _nfs4_do_setlk(state, cmd, request, NFS_LOCK_NEW); + out: + request->fl_flags = fl_flags; +diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h +index 5d8ffa3e6f8c..c1cde3577551 100644 +--- a/include/asm-generic/preempt.h ++++ b/include/asm-generic/preempt.h +@@ -7,10 +7,10 @@ + + static __always_inline int preempt_count(void) + { +- return current_thread_info()->preempt_count; ++ return READ_ONCE(current_thread_info()->preempt_count); + } + +-static __always_inline int *preempt_count_ptr(void) ++static __always_inline volatile int *preempt_count_ptr(void) + { + return ¤t_thread_info()->preempt_count; + } +diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h +index e684a9ba98a3..a0e12cf8919c 100644 +--- a/include/linux/cpufreq.h ++++ b/include/linux/cpufreq.h +@@ -100,10 +100,6 @@ struct cpufreq_policy { + * - Any routine that will write to the policy structure and/or may take away + * the policy altogether (eg. CPU hotplug), will hold this lock in write + * mode before doing so. 
+- * +- * Additional rules: +- * - Lock should not be held across +- * __cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT); + */ + struct rw_semaphore rwsem; + +diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h +index 149a7a6687e9..e7a278ca1fde 100644 +--- a/include/linux/ieee80211.h ++++ b/include/linux/ieee80211.h +@@ -606,6 +606,15 @@ static inline bool ieee80211_is_qos_nullfunc(__le16 fc) + cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_NULLFUNC); + } + ++/** ++ * ieee80211_is_any_nullfunc - check if frame is regular or QoS nullfunc frame ++ * @fc: frame control bytes in little-endian byteorder ++ */ ++static inline bool ieee80211_is_any_nullfunc(__le16 fc) ++{ ++ return (ieee80211_is_nullfunc(fc) || ieee80211_is_qos_nullfunc(fc)); ++} ++ + /** + * ieee80211_is_bufferable_mmpdu - check if frame is bufferable MMPDU + * @fc: frame control field in little-endian byteorder +diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h +index 412aa988c6ad..06cc39623d13 100644 +--- a/include/linux/mlx5/driver.h ++++ b/include/linux/mlx5/driver.h +@@ -54,7 +54,7 @@ enum { + /* one minute for the sake of bringup. 
Generally, commands must always + * complete and we may need to increase this timeout value + */ +- MLX5_CMD_TIMEOUT_MSEC = 7200 * 1000, ++ MLX5_CMD_TIMEOUT_MSEC = 60 * 1000, + MLX5_CMD_WQ_MAX_NAME = 32, + }; + +@@ -566,6 +566,7 @@ struct mlx5_cmd_work_ent { + void *uout; + int uout_size; + mlx5_cmd_cbk_t callback; ++ struct delayed_work cb_timeout_work; + void *context; + int idx; + struct completion done; +diff --git a/include/linux/mlx5/qp.h b/include/linux/mlx5/qp.h +index a8786d27ab81..489fc317746a 100644 +--- a/include/linux/mlx5/qp.h ++++ b/include/linux/mlx5/qp.h +@@ -539,6 +539,7 @@ struct mlx5_modify_qp_mbox_in { + __be32 optparam; + u8 rsvd1[4]; + struct mlx5_qp_context ctx; ++ u8 rsvd2[16]; + }; + + struct mlx5_modify_qp_mbox_out { +diff --git a/include/linux/mtd/nand.h b/include/linux/mtd/nand.h +index 5a9d1d4c2487..93fc37200793 100644 +--- a/include/linux/mtd/nand.h ++++ b/include/linux/mtd/nand.h +@@ -276,7 +276,7 @@ struct nand_onfi_params { + __le16 t_r; + __le16 t_ccs; + __le16 src_sync_timing_mode; +- __le16 src_ssync_features; ++ u8 src_ssync_features; + __le16 clk_pin_capacitance_typ; + __le16 io_pin_capacitance_typ; + __le16 input_pin_capacitance_typ; +@@ -284,7 +284,7 @@ struct nand_onfi_params { + u8 driver_strength_support; + __le16 t_int_r; + __le16 t_ald; +- u8 reserved4[7]; ++ u8 reserved4[8]; + + /* vendor */ + __le16 vendor_revision; +diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h +index d999e503ba8a..c1a42027ee0e 100644 +--- a/include/linux/netdevice.h ++++ b/include/linux/netdevice.h +@@ -2013,7 +2013,10 @@ struct napi_gro_cb { + /* Number of gro_receive callbacks this packet already went through */ + u8 recursion_counter:4; + +- /* 3 bit hole */ ++ /* Used in GRE, set in fou/gue_gro_receive */ ++ u8 is_fou:1; ++ ++ /* 2 bit hole */ + + /* used to support CHECKSUM_COMPLETE for tunneling protocols */ + __wsum csum; +diff --git a/include/linux/sunrpc/msg_prot.h b/include/linux/sunrpc/msg_prot.h +index 
807371357160..59cbf16eaeb5 100644 +--- a/include/linux/sunrpc/msg_prot.h ++++ b/include/linux/sunrpc/msg_prot.h +@@ -158,9 +158,9 @@ typedef __be32 rpc_fraghdr; + + /* + * Note that RFC 1833 does not put any size restrictions on the +- * netid string, but all currently defined netid's fit in 4 bytes. ++ * netid string, but all currently defined netid's fit in 5 bytes. + */ +-#define RPCBIND_MAXNETIDLEN (4u) ++#define RPCBIND_MAXNETIDLEN (5u) + + /* + * Universal addresses are introduced in RFC 1833 and further spelled +diff --git a/include/net/bonding.h b/include/net/bonding.h +index d5abd3a80896..6fbfc21b27b1 100644 +--- a/include/net/bonding.h ++++ b/include/net/bonding.h +@@ -34,6 +34,9 @@ + + #define BOND_DEFAULT_MIIMON 100 + ++#ifndef __long_aligned ++#define __long_aligned __attribute__((aligned((sizeof(long))))) ++#endif + /* + * Less bad way to call ioctl from within the kernel; this needs to be + * done some other way to get the call out of interrupt context. +@@ -138,7 +141,9 @@ struct bond_params { + struct reciprocal_value reciprocal_packets_per_slave; + u16 ad_actor_sys_prio; + u16 ad_user_port_key; +- u8 ad_actor_system[ETH_ALEN]; ++ ++ /* 2 bytes of padding : see ether_addr_equal_64bits() */ ++ u8 ad_actor_system[ETH_ALEN + 2]; + }; + + struct bond_parm_tbl { +diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h +index fa5e703a14ed..a6bcb18ac4c3 100644 +--- a/include/net/ip6_fib.h ++++ b/include/net/ip6_fib.h +@@ -258,6 +258,8 @@ struct fib6_table { + rwlock_t tb6_lock; + struct fib6_node tb6_root; + struct inet_peer_base tb6_peers; ++ unsigned int flags; ++#define RT6_TABLE_HAS_DFLT_ROUTER BIT(0) + }; + + #define RT6_TABLE_UNSPEC RT_TABLE_UNSPEC +diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h +index df6474c37ca0..8d0a9b1fc39a 100644 +--- a/include/net/ip6_route.h ++++ b/include/net/ip6_route.h +@@ -103,6 +103,9 @@ void fib6_force_start_gc(struct net *net); + struct rt6_info *addrconf_dst_alloc(struct inet6_dev *idev, + const 
struct in6_addr *addr, bool anycast); + ++struct rt6_info *ip6_dst_alloc(struct net *net, struct net_device *dev, ++ int flags); ++ + /* + * support functions for ND + * +diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h +index f6ff83b2ac87..b8dfab88c877 100644 +--- a/include/net/ip_fib.h ++++ b/include/net/ip_fib.h +@@ -112,6 +112,7 @@ struct fib_info { + unsigned char fib_scope; + unsigned char fib_type; + __be32 fib_prefsrc; ++ u32 fib_tb_id; + u32 fib_priority; + struct dst_metrics *fib_metrics; + #define fib_mtu fib_metrics->metrics[RTAX_MTU-1] +@@ -320,7 +321,7 @@ void fib_flush_external(struct net *net); + /* Exported by fib_semantics.c */ + int ip_fib_check_default(__be32 gw, struct net_device *dev); + int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force); +-int fib_sync_down_addr(struct net *net, __be32 local); ++int fib_sync_down_addr(struct net_device *dev, __be32 local); + int fib_sync_up(struct net_device *dev, unsigned int nh_flags); + void fib_sync_mtu(struct net_device *dev, u32 orig_mtu); + +diff --git a/include/net/route.h b/include/net/route.h +index d2a92d94ff72..6be55d00a200 100644 +--- a/include/net/route.h ++++ b/include/net/route.h +@@ -210,6 +210,9 @@ unsigned int inet_addr_type_dev_table(struct net *net, + void ip_rt_multicast_event(struct in_device *); + int ip_rt_ioctl(struct net *, unsigned int cmd, void __user *arg); + void ip_rt_get_source(u8 *src, struct sk_buff *skb, struct rtable *rt); ++struct rtable *rt_dst_alloc(struct net_device *dev, ++ unsigned int flags, u16 type, ++ bool nopolicy, bool noxfrm, bool will_cache); + + struct in_ifaddr; + void fib_add_ifaddr(struct in_ifaddr *); +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h +index ccd2a964dad7..d236ce450da3 100644 +--- a/include/net/sch_generic.h ++++ b/include/net/sch_generic.h +@@ -674,9 +674,11 @@ static inline struct sk_buff *qdisc_peek_dequeued(struct Qdisc *sch) + /* we can reuse ->gso_skb because peek isn't called for 
root qdiscs */ + if (!sch->gso_skb) { + sch->gso_skb = sch->dequeue(sch); +- if (sch->gso_skb) ++ if (sch->gso_skb) { + /* it's still part of the queue */ ++ qdisc_qstats_backlog_inc(sch, sch->gso_skb); + sch->q.qlen++; ++ } + } + + return sch->gso_skb; +@@ -689,6 +691,7 @@ static inline struct sk_buff *qdisc_dequeue_peeked(struct Qdisc *sch) + + if (skb) { + sch->gso_skb = NULL; ++ qdisc_qstats_backlog_dec(sch, skb); + sch->q.qlen--; + } else { + skb = sch->dequeue(sch); +diff --git a/include/net/sock.h b/include/net/sock.h +index de4434284a34..be5ec94020f1 100644 +--- a/include/net/sock.h ++++ b/include/net/sock.h +@@ -1204,11 +1204,13 @@ static inline void memcg_memory_allocated_add(struct cg_proto *prot, + unsigned long amt, + int *parent_status) + { +- page_counter_charge(&prot->memory_allocated, amt); ++ struct page_counter *counter; ++ ++ if (page_counter_try_charge(&prot->memory_allocated, amt, &counter)) ++ return; + +- if (page_counter_read(&prot->memory_allocated) > +- prot->memory_allocated.limit) +- *parent_status = OVER_LIMIT; ++ page_counter_charge(&prot->memory_allocated, amt); ++ *parent_status = OVER_LIMIT; + } + + static inline void memcg_memory_allocated_sub(struct cg_proto *prot, +@@ -1651,7 +1653,13 @@ static inline void sock_put(struct sock *sk) + */ + void sock_gen_put(struct sock *sk); + +-int sk_receive_skb(struct sock *sk, struct sk_buff *skb, const int nested); ++int __sk_receive_skb(struct sock *sk, struct sk_buff *skb, const int nested, ++ unsigned int trim_cap); ++static inline int sk_receive_skb(struct sock *sk, struct sk_buff *skb, ++ const int nested) ++{ ++ return __sk_receive_skb(sk, skb, nested, 1); ++} + + static inline void sk_tx_queue_set(struct sock *sk, int tx_queue) + { +diff --git a/include/net/xfrm.h b/include/net/xfrm.h +index 631614856afc..89685c7bc7c0 100644 +--- a/include/net/xfrm.h ++++ b/include/net/xfrm.h +@@ -1551,8 +1551,10 @@ int xfrm4_tunnel_deregister(struct xfrm_tunnel *handler, unsigned short family); + 
void xfrm4_local_error(struct sk_buff *skb, u32 mtu); + int xfrm6_extract_header(struct sk_buff *skb); + int xfrm6_extract_input(struct xfrm_state *x, struct sk_buff *skb); +-int xfrm6_rcv_spi(struct sk_buff *skb, int nexthdr, __be32 spi); ++int xfrm6_rcv_spi(struct sk_buff *skb, int nexthdr, __be32 spi, ++ struct ip6_tnl *t); + int xfrm6_transport_finish(struct sk_buff *skb, int async); ++int xfrm6_rcv_tnl(struct sk_buff *skb, struct ip6_tnl *t); + int xfrm6_rcv(struct sk_buff *skb); + int xfrm6_input_addr(struct sk_buff *skb, xfrm_address_t *daddr, + xfrm_address_t *saddr, u8 proto); +diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c +index fd3fd8d17ef5..01431ef8cf07 100644 +--- a/kernel/bpf/syscall.c ++++ b/kernel/bpf/syscall.c +@@ -152,7 +152,7 @@ static int map_create(union bpf_attr *attr) + + err = bpf_map_charge_memlock(map); + if (err) +- goto free_map; ++ goto free_map_nouncharge; + + err = bpf_map_new_fd(map); + if (err < 0) +@@ -162,6 +162,8 @@ static int map_create(union bpf_attr *attr) + return err; + + free_map: ++ bpf_map_uncharge_memlock(map); ++free_map_nouncharge: + map->ops->map_free(map); + return err; + } +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index b42d2b8b283e..0daf4a40a985 100644 +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -2394,28 +2394,22 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se) + + #ifdef CONFIG_FAIR_GROUP_SCHED + # ifdef CONFIG_SMP +-static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq) ++static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg) + { +- long tg_weight; ++ long tg_weight, load, shares; + + /* +- * Use this CPU's real-time load instead of the last load contribution +- * as the updating of the contribution is delayed, and we will use the +- * the real-time load to calc the share. See update_tg_load_avg(). 
++ * This really should be: cfs_rq->avg.load_avg, but instead we use ++ * cfs_rq->load.weight, which is its upper bound. This helps ramp up ++ * the shares for small weight interactive tasks. + */ +- tg_weight = atomic_long_read(&tg->load_avg); +- tg_weight -= cfs_rq->tg_load_avg_contrib; +- tg_weight += cfs_rq->load.weight; +- +- return tg_weight; +-} ++ load = scale_load_down(cfs_rq->load.weight); + +-static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg) +-{ +- long tg_weight, load, shares; ++ tg_weight = atomic_long_read(&tg->load_avg); + +- tg_weight = calc_tg_weight(tg, cfs_rq); +- load = cfs_rq->load.weight; ++ /* Ensure tg_weight >= load */ ++ tg_weight -= cfs_rq->tg_load_avg_contrib; ++ tg_weight += load; + + shares = (tg->shares * load); + if (tg_weight) +@@ -2434,6 +2428,7 @@ static inline long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg) + return tg->shares; + } + # endif /* CONFIG_SMP */ ++ + static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, + unsigned long weight) + { +diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c +index 3dd40c736067..a71bdad638d5 100644 +--- a/kernel/trace/bpf_trace.c ++++ b/kernel/trace/bpf_trace.c +@@ -206,6 +206,10 @@ static u64 bpf_perf_event_read(u64 r1, u64 index, u64 r3, u64 r4, u64 r5) + event->pmu->count) + return -EINVAL; + ++ if (unlikely(event->attr.type != PERF_TYPE_HARDWARE && ++ event->attr.type != PERF_TYPE_RAW)) ++ return -EINVAL; ++ + /* + * we don't know if the function is run successfully by the + * return value. 
It can be judged in other places, such as +diff --git a/lib/mpi/longlong.h b/lib/mpi/longlong.h +index d2ecf0a09180..f1f31c754b3e 100644 +--- a/lib/mpi/longlong.h ++++ b/lib/mpi/longlong.h +@@ -756,22 +756,22 @@ do { \ + do { \ + if (__builtin_constant_p(bh) && (bh) == 0) \ + __asm__ ("{a%I4|add%I4c} %1,%3,%4\n\t{aze|addze} %0,%2" \ +- : "=r" ((USItype)(sh)), \ +- "=&r" ((USItype)(sl)) \ ++ : "=r" (sh), \ ++ "=&r" (sl) \ + : "%r" ((USItype)(ah)), \ + "%r" ((USItype)(al)), \ + "rI" ((USItype)(bl))); \ + else if (__builtin_constant_p(bh) && (bh) == ~(USItype) 0) \ + __asm__ ("{a%I4|add%I4c} %1,%3,%4\n\t{ame|addme} %0,%2" \ +- : "=r" ((USItype)(sh)), \ +- "=&r" ((USItype)(sl)) \ ++ : "=r" (sh), \ ++ "=&r" (sl) \ + : "%r" ((USItype)(ah)), \ + "%r" ((USItype)(al)), \ + "rI" ((USItype)(bl))); \ + else \ + __asm__ ("{a%I5|add%I5c} %1,%4,%5\n\t{ae|adde} %0,%2,%3" \ +- : "=r" ((USItype)(sh)), \ +- "=&r" ((USItype)(sl)) \ ++ : "=r" (sh), \ ++ "=&r" (sl) \ + : "%r" ((USItype)(ah)), \ + "r" ((USItype)(bh)), \ + "%r" ((USItype)(al)), \ +@@ -781,36 +781,36 @@ do { \ + do { \ + if (__builtin_constant_p(ah) && (ah) == 0) \ + __asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{sfze|subfze} %0,%2" \ +- : "=r" ((USItype)(sh)), \ +- "=&r" ((USItype)(sl)) \ ++ : "=r" (sh), \ ++ "=&r" (sl) \ + : "r" ((USItype)(bh)), \ + "rI" ((USItype)(al)), \ + "r" ((USItype)(bl))); \ + else if (__builtin_constant_p(ah) && (ah) == ~(USItype) 0) \ + __asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{sfme|subfme} %0,%2" \ +- : "=r" ((USItype)(sh)), \ +- "=&r" ((USItype)(sl)) \ ++ : "=r" (sh), \ ++ "=&r" (sl) \ + : "r" ((USItype)(bh)), \ + "rI" ((USItype)(al)), \ + "r" ((USItype)(bl))); \ + else if (__builtin_constant_p(bh) && (bh) == 0) \ + __asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{ame|addme} %0,%2" \ +- : "=r" ((USItype)(sh)), \ +- "=&r" ((USItype)(sl)) \ ++ : "=r" (sh), \ ++ "=&r" (sl) \ + : "r" ((USItype)(ah)), \ + "rI" ((USItype)(al)), \ + "r" ((USItype)(bl))); \ + else if (__builtin_constant_p(bh) && (bh) == ~(USItype) 0) \ 
+ __asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{aze|addze} %0,%2" \ +- : "=r" ((USItype)(sh)), \ +- "=&r" ((USItype)(sl)) \ ++ : "=r" (sh), \ ++ "=&r" (sl) \ + : "r" ((USItype)(ah)), \ + "rI" ((USItype)(al)), \ + "r" ((USItype)(bl))); \ + else \ + __asm__ ("{sf%I4|subf%I4c} %1,%5,%4\n\t{sfe|subfe} %0,%3,%2" \ +- : "=r" ((USItype)(sh)), \ +- "=&r" ((USItype)(sl)) \ ++ : "=r" (sh), \ ++ "=&r" (sl) \ + : "r" ((USItype)(ah)), \ + "r" ((USItype)(bh)), \ + "rI" ((USItype)(al)), \ +@@ -821,7 +821,7 @@ do { \ + do { \ + USItype __m0 = (m0), __m1 = (m1); \ + __asm__ ("mulhwu %0,%1,%2" \ +- : "=r" ((USItype) ph) \ ++ : "=r" (ph) \ + : "%r" (__m0), \ + "r" (__m1)); \ + (pl) = __m0 * __m1; \ +diff --git a/net/batman-adv/main.c b/net/batman-adv/main.c +index 2bdbaff3279b..88cea5154113 100644 +--- a/net/batman-adv/main.c ++++ b/net/batman-adv/main.c +@@ -747,7 +747,7 @@ static u16 batadv_tvlv_container_list_size(struct batadv_priv *bat_priv) + static void batadv_tvlv_container_remove(struct batadv_priv *bat_priv, + struct batadv_tvlv_container *tvlv) + { +- lockdep_assert_held(&bat_priv->tvlv.handler_list_lock); ++ lockdep_assert_held(&bat_priv->tvlv.container_list_lock); + + if (!tvlv) + return; +diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c +index 67ee7c83a28d..06f366d234ff 100644 +--- a/net/batman-adv/translation-table.c ++++ b/net/batman-adv/translation-table.c +@@ -614,8 +614,10 @@ bool batadv_tt_local_add(struct net_device *soft_iface, const u8 *addr, + + /* increase the refcounter of the related vlan */ + vlan = batadv_softif_vlan_get(bat_priv, vid); +- if (WARN(!vlan, "adding TT local entry %pM to non-existent VLAN %d", +- addr, BATADV_PRINT_VID(vid))) { ++ if (!vlan) { ++ net_ratelimited_function(batadv_info, soft_iface, ++ "adding TT local entry %pM to non-existent VLAN %d\n", ++ addr, BATADV_PRINT_VID(vid)); + kfree(tt_local); + tt_local = NULL; + goto out; +diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c +index 
09442e0f7f67..1aa1d3d4979f 100644 +--- a/net/bridge/br_fdb.c ++++ b/net/bridge/br_fdb.c +@@ -266,7 +266,7 @@ void br_fdb_change_mac_address(struct net_bridge *br, const u8 *newaddr) + + /* If old entry was unassociated with any port, then delete it. */ + f = __br_fdb_get(br, br->dev->dev_addr, 0); +- if (f && f->is_local && !f->dst) ++ if (f && f->is_local && !f->dst && !f->added_by_user) + fdb_delete_local(br, NULL, f); + + fdb_insert(br, NULL, newaddr, 0); +@@ -281,7 +281,7 @@ void br_fdb_change_mac_address(struct net_bridge *br, const u8 *newaddr) + if (!br_vlan_should_use(v)) + continue; + f = __br_fdb_get(br, br->dev->dev_addr, v->vid); +- if (f && f->is_local && !f->dst) ++ if (f && f->is_local && !f->dst && !f->added_by_user) + fdb_delete_local(br, NULL, f); + fdb_insert(br, NULL, newaddr, v->vid); + } +@@ -758,20 +758,25 @@ out: + } + + /* Update (create or replace) forwarding database entry */ +-static int fdb_add_entry(struct net_bridge_port *source, const __u8 *addr, +- __u16 state, __u16 flags, __u16 vid) ++static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source, ++ const __u8 *addr, __u16 state, __u16 flags, __u16 vid) + { +- struct net_bridge *br = source->br; + struct hlist_head *head = &br->hash[br_mac_hash(addr, vid)]; + struct net_bridge_fdb_entry *fdb; + bool modified = false; + + /* If the port cannot learn allow only local and static entries */ +- if (!(state & NUD_PERMANENT) && !(state & NUD_NOARP) && ++ if (source && !(state & NUD_PERMANENT) && !(state & NUD_NOARP) && + !(source->state == BR_STATE_LEARNING || + source->state == BR_STATE_FORWARDING)) + return -EPERM; + ++ if (!source && !(state & NUD_PERMANENT)) { ++ pr_info("bridge: RTM_NEWNEIGH %s without NUD_PERMANENT\n", ++ br->dev->name); ++ return -EINVAL; ++ } ++ + fdb = fdb_find(head, addr, vid); + if (fdb == NULL) { + if (!(flags & NLM_F_CREATE)) +@@ -826,22 +831,28 @@ static int fdb_add_entry(struct net_bridge_port *source, const __u8 *addr, + return 0; + } + 
+-static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge_port *p, +- const unsigned char *addr, u16 nlh_flags, u16 vid) ++static int __br_fdb_add(struct ndmsg *ndm, struct net_bridge *br, ++ struct net_bridge_port *p, const unsigned char *addr, ++ u16 nlh_flags, u16 vid) + { + int err = 0; + + if (ndm->ndm_flags & NTF_USE) { ++ if (!p) { ++ pr_info("bridge: RTM_NEWNEIGH %s with NTF_USE is not supported\n", ++ br->dev->name); ++ return -EINVAL; ++ } + local_bh_disable(); + rcu_read_lock(); +- br_fdb_update(p->br, p, addr, vid, true); ++ br_fdb_update(br, p, addr, vid, true); + rcu_read_unlock(); + local_bh_enable(); + } else { +- spin_lock_bh(&p->br->hash_lock); +- err = fdb_add_entry(p, addr, ndm->ndm_state, ++ spin_lock_bh(&br->hash_lock); ++ err = fdb_add_entry(br, p, addr, ndm->ndm_state, + nlh_flags, vid); +- spin_unlock_bh(&p->br->hash_lock); ++ spin_unlock_bh(&br->hash_lock); + } + + return err; +@@ -878,6 +889,7 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], + dev->name); + return -EINVAL; + } ++ br = p->br; + vg = nbp_vlan_group(p); + } + +@@ -889,15 +901,9 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], + } + + /* VID was specified, so use it. 
*/ +- if (dev->priv_flags & IFF_EBRIDGE) +- err = br_fdb_insert(br, NULL, addr, vid); +- else +- err = __br_fdb_add(ndm, p, addr, nlh_flags, vid); ++ err = __br_fdb_add(ndm, br, p, addr, nlh_flags, vid); + } else { +- if (dev->priv_flags & IFF_EBRIDGE) +- err = br_fdb_insert(br, NULL, addr, 0); +- else +- err = __br_fdb_add(ndm, p, addr, nlh_flags, 0); ++ err = __br_fdb_add(ndm, br, p, addr, nlh_flags, 0); + if (err || !vg || !vg->num_vlans) + goto out; + +@@ -908,11 +914,7 @@ int br_fdb_add(struct ndmsg *ndm, struct nlattr *tb[], + list_for_each_entry(v, &vg->vlan_list, vlist) { + if (!br_vlan_should_use(v)) + continue; +- if (dev->priv_flags & IFF_EBRIDGE) +- err = br_fdb_insert(br, NULL, addr, v->vid); +- else +- err = __br_fdb_add(ndm, p, addr, nlh_flags, +- v->vid); ++ err = __br_fdb_add(ndm, br, p, addr, nlh_flags, v->vid); + if (err) + goto out; + } +diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c +index e24754a0e052..920b7c0f1e2d 100644 +--- a/net/bridge/br_input.c ++++ b/net/bridge/br_input.c +@@ -78,13 +78,10 @@ static void br_do_proxy_arp(struct sk_buff *skb, struct net_bridge *br, + + BR_INPUT_SKB_CB(skb)->proxyarp_replied = false; + +- if (dev->flags & IFF_NOARP) ++ if ((dev->flags & IFF_NOARP) || ++ !pskb_may_pull(skb, arp_hdr_len(dev))) + return; + +- if (!pskb_may_pull(skb, arp_hdr_len(dev))) { +- dev->stats.tx_dropped++; +- return; +- } + parp = arp_hdr(skb); + + if (parp->ar_pro != htons(ETH_P_IP) || +diff --git a/net/core/dev.c b/net/core/dev.c +index 108c32903a74..a1043225c0c0 100644 +--- a/net/core/dev.c ++++ b/net/core/dev.c +@@ -4320,6 +4320,7 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff + NAPI_GRO_CB(skb)->free = 0; + NAPI_GRO_CB(skb)->encap_mark = 0; + NAPI_GRO_CB(skb)->recursion_counter = 0; ++ NAPI_GRO_CB(skb)->is_fou = 0; + NAPI_GRO_CB(skb)->gro_remcsum_start = 0; + + /* Setup for GRO checksum validation */ +diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c +index 
496bfcb787e7..189c3f2326f9 100644 +--- a/net/core/flow_dissector.c ++++ b/net/core/flow_dissector.c +@@ -178,15 +178,16 @@ ip: + + ip_proto = iph->protocol; + +- if (!dissector_uses_key(flow_dissector, +- FLOW_DISSECTOR_KEY_IPV4_ADDRS)) +- break; ++ if (dissector_uses_key(flow_dissector, ++ FLOW_DISSECTOR_KEY_IPV4_ADDRS)) { ++ key_addrs = skb_flow_dissector_target(flow_dissector, ++ FLOW_DISSECTOR_KEY_IPV4_ADDRS, ++ target_container); + +- key_addrs = skb_flow_dissector_target(flow_dissector, +- FLOW_DISSECTOR_KEY_IPV4_ADDRS, target_container); +- memcpy(&key_addrs->v4addrs, &iph->saddr, +- sizeof(key_addrs->v4addrs)); +- key_control->addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS; ++ memcpy(&key_addrs->v4addrs, &iph->saddr, ++ sizeof(key_addrs->v4addrs)); ++ key_control->addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS; ++ } + + if (ip_is_fragment(iph)) { + key_control->flags |= FLOW_DIS_IS_FRAGMENT; +diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c +index a9da58204afa..6f32d3086c7a 100644 +--- a/net/core/rtnetlink.c ++++ b/net/core/rtnetlink.c +@@ -253,6 +253,7 @@ int rtnl_unregister(int protocol, int msgtype) + + rtnl_msg_handlers[protocol][msgindex].doit = NULL; + rtnl_msg_handlers[protocol][msgindex].dumpit = NULL; ++ rtnl_msg_handlers[protocol][msgindex].calcit = NULL; + + return 0; + } +diff --git a/net/core/skbuff.c b/net/core/skbuff.c +index 2f63a90065e6..4e944fe98627 100644 +--- a/net/core/skbuff.c ++++ b/net/core/skbuff.c +@@ -4451,9 +4451,8 @@ int skb_vlan_pop(struct sk_buff *skb) + if (likely(skb_vlan_tag_present(skb))) { + skb->vlan_tci = 0; + } else { +- if (unlikely((skb->protocol != htons(ETH_P_8021Q) && +- skb->protocol != htons(ETH_P_8021AD)) || +- skb->len < VLAN_ETH_HLEN)) ++ if (unlikely(skb->protocol != htons(ETH_P_8021Q) && ++ skb->protocol != htons(ETH_P_8021AD))) + return 0; + + err = __skb_vlan_pop(skb, &vlan_tci); +@@ -4461,9 +4460,8 @@ int skb_vlan_pop(struct sk_buff *skb) + return err; + } + /* move next vlan tag to hw accel tag */ +- if 
(likely((skb->protocol != htons(ETH_P_8021Q) && +- skb->protocol != htons(ETH_P_8021AD)) || +- skb->len < VLAN_ETH_HLEN)) ++ if (likely(skb->protocol != htons(ETH_P_8021Q) && ++ skb->protocol != htons(ETH_P_8021AD))) + return 0; + + vlan_proto = skb->protocol; +diff --git a/net/core/sock.c b/net/core/sock.c +index 0f4c15fcd87d..60b19c3bb0f7 100644 +--- a/net/core/sock.c ++++ b/net/core/sock.c +@@ -484,11 +484,12 @@ int sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) + } + EXPORT_SYMBOL(sock_queue_rcv_skb); + +-int sk_receive_skb(struct sock *sk, struct sk_buff *skb, const int nested) ++int __sk_receive_skb(struct sock *sk, struct sk_buff *skb, ++ const int nested, unsigned int trim_cap) + { + int rc = NET_RX_SUCCESS; + +- if (sk_filter(sk, skb)) ++ if (sk_filter_trim_cap(sk, skb, trim_cap)) + goto discard_and_relse; + + skb->dev = NULL; +@@ -524,7 +525,7 @@ discard_and_relse: + kfree_skb(skb); + goto out; + } +-EXPORT_SYMBOL(sk_receive_skb); ++EXPORT_SYMBOL(__sk_receive_skb); + + struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie) + { +diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c +index ef4c44d46293..3f51280374f0 100644 +--- a/net/dccp/ipv4.c ++++ b/net/dccp/ipv4.c +@@ -868,7 +868,7 @@ lookup: + goto discard_and_relse; + nf_reset(skb); + +- return sk_receive_skb(sk, skb, 1); ++ return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4); + + no_dccp_socket: + if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) +diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c +index d2caa4d69159..10eabd1a60aa 100644 +--- a/net/dccp/ipv6.c ++++ b/net/dccp/ipv6.c +@@ -741,7 +741,7 @@ lookup: + if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb)) + goto discard_and_relse; + +- return sk_receive_skb(sk, skb, 1) ? -1 : 0; ++ return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4) ? 
-1 : 0; + + no_dccp_socket: + if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) +diff --git a/net/dsa/slave.c b/net/dsa/slave.c +index 4256ac95a141..061c3939f93b 100644 +--- a/net/dsa/slave.c ++++ b/net/dsa/slave.c +@@ -1031,7 +1031,7 @@ static int dsa_slave_phy_setup(struct dsa_slave_priv *p, + p->phy_interface = mode; + + phy_dn = of_parse_phandle(port_dn, "phy-handle", 0); +- if (of_phy_is_fixed_link(port_dn)) { ++ if (!phy_dn && of_phy_is_fixed_link(port_dn)) { + /* In the case of a fixed PHY, the DT node associated + * to the fixed PHY is the Port DT node + */ +@@ -1041,7 +1041,7 @@ static int dsa_slave_phy_setup(struct dsa_slave_priv *p, + return ret; + } + phy_is_fixed = true; +- phy_dn = port_dn; ++ phy_dn = of_node_get(port_dn); + } + + if (ds->drv->get_phy_flags) +@@ -1060,6 +1060,7 @@ static int dsa_slave_phy_setup(struct dsa_slave_priv *p, + ret = dsa_slave_phy_connect(p, slave_dev, phy_id); + if (ret) { + netdev_err(slave_dev, "failed to connect to phy%d: %d\n", phy_id, ret); ++ of_node_put(phy_dn); + return ret; + } + } else { +@@ -1068,6 +1069,8 @@ static int dsa_slave_phy_setup(struct dsa_slave_priv *p, + phy_flags, + p->phy_interface); + } ++ ++ of_node_put(phy_dn); + } + + if (p->phy && phy_is_fixed) +diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c +index 03ccacff3c3d..b062f4c41306 100644 +--- a/net/ipv4/devinet.c ++++ b/net/ipv4/devinet.c +@@ -1814,7 +1814,7 @@ void inet_netconf_notify_devconf(struct net *net, int type, int ifindex, + struct sk_buff *skb; + int err = -ENOBUFS; + +- skb = nlmsg_new(inet_netconf_msgsize_devconf(type), GFP_ATOMIC); ++ skb = nlmsg_new(inet_netconf_msgsize_devconf(type), GFP_KERNEL); + if (!skb) + goto errout; + +@@ -1826,7 +1826,7 @@ void inet_netconf_notify_devconf(struct net *net, int type, int ifindex, + kfree_skb(skb); + goto errout; + } +- rtnl_notify(skb, net, 0, RTNLGRP_IPV4_NETCONF, NULL, GFP_ATOMIC); ++ rtnl_notify(skb, net, 0, RTNLGRP_IPV4_NETCONF, NULL, GFP_KERNEL); + return; + errout: + if (err < 
0) +@@ -1883,7 +1883,7 @@ static int inet_netconf_get_devconf(struct sk_buff *in_skb, + } + + err = -ENOBUFS; +- skb = nlmsg_new(inet_netconf_msgsize_devconf(-1), GFP_ATOMIC); ++ skb = nlmsg_new(inet_netconf_msgsize_devconf(-1), GFP_KERNEL); + if (!skb) + goto errout; + +@@ -2007,16 +2007,16 @@ static void inet_forward_change(struct net *net) + + for_each_netdev(net, dev) { + struct in_device *in_dev; ++ + if (on) + dev_disable_lro(dev); +- rcu_read_lock(); +- in_dev = __in_dev_get_rcu(dev); ++ ++ in_dev = __in_dev_get_rtnl(dev); + if (in_dev) { + IN_DEV_CONF_SET(in_dev, FORWARDING, on); + inet_netconf_notify_devconf(net, NETCONFA_FORWARDING, + dev->ifindex, &in_dev->cnf); + } +- rcu_read_unlock(); + } + } + +diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c +index 1f7b47ca2243..7d98aaa3bcad 100644 +--- a/net/ipv4/fib_frontend.c ++++ b/net/ipv4/fib_frontend.c +@@ -509,6 +509,7 @@ static int rtentry_to_fib_config(struct net *net, int cmd, struct rtentry *rt, + if (!dev) + return -ENODEV; + cfg->fc_oif = dev->ifindex; ++ cfg->fc_table = l3mdev_fib_table(dev); + if (colon) { + struct in_ifaddr *ifa; + struct in_device *in_dev = __in_dev_get_rtnl(dev); +@@ -1034,7 +1035,7 @@ no_promotions: + * First of all, we scan fib_info list searching + * for stray nexthop entries, then ignite fib_flush. + */ +- if (fib_sync_down_addr(dev_net(dev), ifa->ifa_local)) ++ if (fib_sync_down_addr(dev, ifa->ifa_local)) + fib_flush(dev_net(dev)); + } + } +diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c +index 3109b9bb95d2..498d5a929d6f 100644 +--- a/net/ipv4/fib_semantics.c ++++ b/net/ipv4/fib_semantics.c +@@ -1069,6 +1069,7 @@ struct fib_info *fib_create_info(struct fib_config *cfg) + fi->fib_priority = cfg->fc_priority; + fi->fib_prefsrc = cfg->fc_prefsrc; + fi->fib_type = cfg->fc_type; ++ fi->fib_tb_id = cfg->fc_table; + + fi->fib_nhs = nhs; + change_nexthops(fi) { +@@ -1352,18 +1353,21 @@ nla_put_failure: + * referring to it. 
+ * - device went down -> we must shutdown all nexthops going via it. + */ +-int fib_sync_down_addr(struct net *net, __be32 local) ++int fib_sync_down_addr(struct net_device *dev, __be32 local) + { + int ret = 0; + unsigned int hash = fib_laddr_hashfn(local); + struct hlist_head *head = &fib_info_laddrhash[hash]; ++ int tb_id = l3mdev_fib_table(dev) ? : RT_TABLE_MAIN; ++ struct net *net = dev_net(dev); + struct fib_info *fi; + + if (!fib_info_laddrhash || local == 0) + return 0; + + hlist_for_each_entry(fi, head, fib_lhash) { +- if (!net_eq(fi->fib_net, net)) ++ if (!net_eq(fi->fib_net, net) || ++ fi->fib_tb_id != tb_id) + continue; + if (fi->fib_prefsrc == local) { + fi->fib_flags |= RTNH_F_DEAD; +diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c +index 0d87639deb27..09b01b888583 100644 +--- a/net/ipv4/fib_trie.c ++++ b/net/ipv4/fib_trie.c +@@ -1714,8 +1714,10 @@ struct fib_table *fib_trie_unmerge(struct fib_table *oldtb) + local_l = fib_find_node(lt, &local_tp, l->key); + + if (fib_insert_alias(lt, local_tp, local_l, new_fa, +- NULL, l->key)) ++ NULL, l->key)) { ++ kmem_cache_free(fn_alias_kmem, new_fa); + goto out; ++ } + } + + /* stop loop if key wrapped back to 0 */ +diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c +index b5a137338e50..7ac370505e44 100644 +--- a/net/ipv4/fou.c ++++ b/net/ipv4/fou.c +@@ -205,6 +205,9 @@ static struct sk_buff **fou_gro_receive(struct sk_buff **head, + */ + NAPI_GRO_CB(skb)->encap_mark = 0; + ++ /* Flag this frame as already having an outer encap header */ ++ NAPI_GRO_CB(skb)->is_fou = 1; ++ + rcu_read_lock(); + offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads; + ops = rcu_dereference(offloads[proto]); +@@ -372,6 +375,9 @@ static struct sk_buff **gue_gro_receive(struct sk_buff **head, + */ + NAPI_GRO_CB(skb)->encap_mark = 0; + ++ /* Flag this frame as already having an outer encap header */ ++ NAPI_GRO_CB(skb)->is_fou = 1; ++ + rcu_read_lock(); + offloads = NAPI_GRO_CB(skb)->is_ipv6 ? 
inet6_offloads : inet_offloads; + ops = rcu_dereference(offloads[guehdr->proto_ctype]); +diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c +index 79ae0d7becbf..d9268af2ea44 100644 +--- a/net/ipv4/gre_offload.c ++++ b/net/ipv4/gre_offload.c +@@ -151,6 +151,14 @@ static struct sk_buff **gre_gro_receive(struct sk_buff **head, + if ((greh->flags & ~(GRE_KEY|GRE_CSUM)) != 0) + goto out; + ++ /* We can only support GRE_CSUM if we can track the location of ++ * the GRE header. In the case of FOU/GUE we cannot because the ++ * outer UDP header displaces the GRE header leaving us in a state ++ * of limbo. ++ */ ++ if ((greh->flags & GRE_CSUM) && NAPI_GRO_CB(skb)->is_fou) ++ goto out; ++ + type = greh->protocol; + + rcu_read_lock(); +diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c +index 031945bead36..9a9f49b55abd 100644 +--- a/net/ipv4/icmp.c ++++ b/net/ipv4/icmp.c +@@ -478,7 +478,7 @@ static struct rtable *icmp_route_lookup(struct net *net, + fl4->flowi4_proto = IPPROTO_ICMP; + fl4->fl4_icmp_type = type; + fl4->fl4_icmp_code = code; +- fl4->flowi4_oif = l3mdev_master_ifindex(skb_in->dev); ++ fl4->flowi4_oif = l3mdev_master_ifindex(skb_dst(skb_in)->dev); + + security_skb_classify_flow(skb_in, flowi4_to_flowi(fl4)); + rt = __ip_route_output_key_hash(net, fl4, +@@ -503,7 +503,7 @@ static struct rtable *icmp_route_lookup(struct net *net, + if (err) + goto relookup_failed; + +- if (inet_addr_type_dev_table(net, skb_in->dev, ++ if (inet_addr_type_dev_table(net, skb_dst(skb_in)->dev, + fl4_dec.saddr) == RTN_LOCAL) { + rt2 = __ip_route_output_key(net, &fl4_dec); + if (IS_ERR(rt2)) +diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c +index 3e4184088082..e5448570d648 100644 +--- a/net/ipv4/ip_gre.c ++++ b/net/ipv4/ip_gre.c +@@ -520,7 +520,8 @@ static struct rtable *gre_get_rt(struct sk_buff *skb, + return ip_route_output_key(net, fl); + } + +-static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev) ++static void gre_fb_xmit(struct sk_buff *skb, struct 
net_device *dev, ++ __be16 proto) + { + struct ip_tunnel_info *tun_info; + const struct ip_tunnel_key *key; +@@ -563,7 +564,7 @@ static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev) + } + + flags = tun_info->key.tun_flags & (TUNNEL_CSUM | TUNNEL_KEY); +- build_header(skb, tunnel_hlen, flags, htons(ETH_P_TEB), ++ build_header(skb, tunnel_hlen, flags, proto, + tunnel_id_to_key(tun_info->key.tun_id), 0); + + df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0; +@@ -605,7 +606,7 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb, + const struct iphdr *tnl_params; + + if (tunnel->collect_md) { +- gre_fb_xmit(skb, dev); ++ gre_fb_xmit(skb, dev, skb->protocol); + return NETDEV_TX_OK; + } + +@@ -649,7 +650,7 @@ static netdev_tx_t gre_tap_xmit(struct sk_buff *skb, + struct ip_tunnel *tunnel = netdev_priv(dev); + + if (tunnel->collect_md) { +- gre_fb_xmit(skb, dev); ++ gre_fb_xmit(skb, dev, htons(ETH_P_TEB)); + return NETDEV_TX_OK; + } + +@@ -851,9 +852,16 @@ static void __gre_tunnel_init(struct net_device *dev) + dev->hw_features |= GRE_FEATURES; + + if (!(tunnel->parms.o_flags & TUNNEL_SEQ)) { +- /* TCP offload with GRE SEQ is not supported. */ +- dev->features |= NETIF_F_GSO_SOFTWARE; +- dev->hw_features |= NETIF_F_GSO_SOFTWARE; ++ /* TCP offload with GRE SEQ is not supported, nor ++ * can we support 2 levels of outer headers requiring ++ * an update. 
++ */ ++ if (!(tunnel->parms.o_flags & TUNNEL_CSUM) || ++ (tunnel->encap.type == TUNNEL_ENCAP_NONE)) { ++ dev->features |= NETIF_F_GSO_SOFTWARE; ++ dev->hw_features |= NETIF_F_GSO_SOFTWARE; ++ } ++ + /* Can use a lockless transmit, unless we generate + * output sequences + */ +@@ -875,7 +883,7 @@ static int ipgre_tunnel_init(struct net_device *dev) + netif_keep_dst(dev); + dev->addr_len = 4; + +- if (iph->daddr) { ++ if (iph->daddr && !tunnel->collect_md) { + #ifdef CONFIG_NET_IPGRE_BROADCAST + if (ipv4_is_multicast(iph->daddr)) { + if (!iph->saddr) +@@ -884,8 +892,9 @@ static int ipgre_tunnel_init(struct net_device *dev) + dev->header_ops = &ipgre_header_ops; + } + #endif +- } else ++ } else if (!tunnel->collect_md) { + dev->header_ops = &ipgre_header_ops; ++ } + + return ip_tunnel_init(dev); + } +@@ -928,6 +937,11 @@ static int ipgre_tunnel_validate(struct nlattr *tb[], struct nlattr *data[]) + if (flags & (GRE_VERSION|GRE_ROUTING)) + return -EINVAL; + ++ if (data[IFLA_GRE_COLLECT_METADATA] && ++ data[IFLA_GRE_ENCAP_TYPE] && ++ nla_get_u16(data[IFLA_GRE_ENCAP_TYPE]) != TUNNEL_ENCAP_NONE) ++ return -EINVAL; ++ + return 0; + } + +@@ -1230,6 +1244,7 @@ struct net_device *gretap_fb_dev_create(struct net *net, const char *name, + { + struct nlattr *tb[IFLA_MAX + 1]; + struct net_device *dev; ++ LIST_HEAD(list_kill); + struct ip_tunnel *t; + int err; + +@@ -1245,8 +1260,10 @@ struct net_device *gretap_fb_dev_create(struct net *net, const char *name, + t->collect_md = true; + + err = ipgre_newlink(net, dev, tb, NULL); +- if (err < 0) +- goto out; ++ if (err < 0) { ++ free_netdev(dev); ++ return ERR_PTR(err); ++ } + + /* openvswitch users expect packet sizes to be unrestricted, + * so set the largest MTU we can. 
+@@ -1255,9 +1272,14 @@ struct net_device *gretap_fb_dev_create(struct net *net, const char *name, + if (err) + goto out; + ++ err = rtnl_configure_link(dev, NULL); ++ if (err < 0) ++ goto out; ++ + return dev; + out: +- free_netdev(dev); ++ ip_tunnel_dellink(dev, &list_kill); ++ unregister_netdevice_many(&list_kill); + return ERR_PTR(err); + } + EXPORT_SYMBOL_GPL(gretap_fb_dev_create); +diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c +index 1ea36bf778e6..9a7b60d6c670 100644 +--- a/net/ipv4/ip_sockglue.c ++++ b/net/ipv4/ip_sockglue.c +@@ -279,9 +279,12 @@ int ip_cmsg_send(struct net *net, struct msghdr *msg, struct ipcm_cookie *ipc, + ipc->ttl = val; + break; + case IP_TOS: +- if (cmsg->cmsg_len != CMSG_LEN(sizeof(int))) ++ if (cmsg->cmsg_len == CMSG_LEN(sizeof(int))) ++ val = *(int *)CMSG_DATA(cmsg); ++ else if (cmsg->cmsg_len == CMSG_LEN(sizeof(u8))) ++ val = *(u8 *)CMSG_DATA(cmsg); ++ else + return -EINVAL; +- val = *(int *)CMSG_DATA(cmsg); + if (val < 0 || val > 255) + return -EINVAL; + ipc->tos = val; +diff --git a/net/ipv4/netfilter/nft_dup_ipv4.c b/net/ipv4/netfilter/nft_dup_ipv4.c +index bf855e64fc45..0c01a270bf9f 100644 +--- a/net/ipv4/netfilter/nft_dup_ipv4.c ++++ b/net/ipv4/netfilter/nft_dup_ipv4.c +@@ -28,7 +28,7 @@ static void nft_dup_ipv4_eval(const struct nft_expr *expr, + struct in_addr gw = { + .s_addr = (__force __be32)regs->data[priv->sreg_addr], + }; +- int oif = regs->data[priv->sreg_dev]; ++ int oif = priv->sreg_dev ? 
regs->data[priv->sreg_dev] : -1; + + nf_dup_ipv4(pkt->net, pkt->skb, pkt->hook, &gw, oif); + } +@@ -59,7 +59,9 @@ static int nft_dup_ipv4_dump(struct sk_buff *skb, const struct nft_expr *expr) + { + struct nft_dup_ipv4 *priv = nft_expr_priv(expr); + +- if (nft_dump_register(skb, NFTA_DUP_SREG_ADDR, priv->sreg_addr) || ++ if (nft_dump_register(skb, NFTA_DUP_SREG_ADDR, priv->sreg_addr)) ++ goto nla_put_failure; ++ if (priv->sreg_dev && + nft_dump_register(skb, NFTA_DUP_SREG_DEV, priv->sreg_dev)) + goto nla_put_failure; + +diff --git a/net/ipv4/route.c b/net/ipv4/route.c +index 74ae703c6909..29a87fadf01b 100644 +--- a/net/ipv4/route.c ++++ b/net/ipv4/route.c +@@ -477,12 +477,18 @@ u32 ip_idents_reserve(u32 hash, int segs) + atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ; + u32 old = ACCESS_ONCE(*p_tstamp); + u32 now = (u32)jiffies; +- u32 delta = 0; ++ u32 new, delta = 0; + + if (old != now && cmpxchg(p_tstamp, old, now) == old) + delta = prandom_u32_max(now - old); + +- return atomic_add_return(segs + delta, p_id) - segs; ++ /* Do not use atomic_add_return() as it makes UBSAN unhappy */ ++ do { ++ old = (u32)atomic_read(p_id); ++ new = old + delta + segs; ++ } while (atomic_cmpxchg(p_id, old, new) != old); ++ ++ return new - segs; + } + EXPORT_SYMBOL(ip_idents_reserve); + +@@ -1494,9 +1500,9 @@ static void rt_set_nexthop(struct rtable *rt, __be32 daddr, + #endif + } + +-static struct rtable *rt_dst_alloc(struct net_device *dev, +- unsigned int flags, u16 type, +- bool nopolicy, bool noxfrm, bool will_cache) ++struct rtable *rt_dst_alloc(struct net_device *dev, ++ unsigned int flags, u16 type, ++ bool nopolicy, bool noxfrm, bool will_cache) + { + struct rtable *rt; + +@@ -1525,6 +1531,7 @@ static struct rtable *rt_dst_alloc(struct net_device *dev, + + return rt; + } ++EXPORT_SYMBOL(rt_dst_alloc); + + /* called in rcu_read_lock() section */ + static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr, +diff --git a/net/ipv4/tcp_input.c 
b/net/ipv4/tcp_input.c +index 88bfd663d9a2..64c7265793a5 100644 +--- a/net/ipv4/tcp_input.c ++++ b/net/ipv4/tcp_input.c +@@ -2926,7 +2926,10 @@ static void tcp_update_rtt_min(struct sock *sk, u32 rtt_us) + { + const u32 now = tcp_time_stamp, wlen = sysctl_tcp_min_rtt_wlen * HZ; + struct rtt_meas *m = tcp_sk(sk)->rtt_min; +- struct rtt_meas rttm = { .rtt = (rtt_us ? : 1), .ts = now }; ++ struct rtt_meas rttm = { ++ .rtt = likely(rtt_us) ? rtt_us : jiffies_to_usecs(1), ++ .ts = now, ++ }; + u32 elapsed; + + /* Check if the new measurement updates the 1st, 2nd, or 3rd choices */ +diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c +index 0924f93a0aff..bb306996c15e 100644 +--- a/net/ipv4/udp.c ++++ b/net/ipv4/udp.c +@@ -1685,10 +1685,10 @@ static int __udp4_lib_mcast_deliver(struct net *net, struct sk_buff *skb, + + if (use_hash2) { + hash2_any = udp4_portaddr_hash(net, htonl(INADDR_ANY), hnum) & +- udp_table.mask; +- hash2 = udp4_portaddr_hash(net, daddr, hnum) & udp_table.mask; ++ udptable->mask; ++ hash2 = udp4_portaddr_hash(net, daddr, hnum) & udptable->mask; + start_lookup: +- hslot = &udp_table.hash2[hash2]; ++ hslot = &udptable->hash2[hash2]; + offset = offsetof(typeof(*sk), __sk_common.skc_portaddr_node); + } + +@@ -1754,8 +1754,11 @@ static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh, + } + } + +- return skb_checksum_init_zero_check(skb, proto, uh->check, +- inet_compute_pseudo); ++ /* Note, we are only interested in != 0 or == 0, thus the ++ * force to int. 
++ */ ++ return (__force int)skb_checksum_init_zero_check(skb, proto, uh->check, ++ inet_compute_pseudo); + } + + /* +diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c +index 086cdf9f0501..583765a330ff 100644 +--- a/net/ipv6/addrconf.c ++++ b/net/ipv6/addrconf.c +@@ -540,7 +540,7 @@ void inet6_netconf_notify_devconf(struct net *net, int type, int ifindex, + struct sk_buff *skb; + int err = -ENOBUFS; + +- skb = nlmsg_new(inet6_netconf_msgsize_devconf(type), GFP_ATOMIC); ++ skb = nlmsg_new(inet6_netconf_msgsize_devconf(type), GFP_KERNEL); + if (!skb) + goto errout; + +@@ -552,7 +552,7 @@ void inet6_netconf_notify_devconf(struct net *net, int type, int ifindex, + kfree_skb(skb); + goto errout; + } +- rtnl_notify(skb, net, 0, RTNLGRP_IPV6_NETCONF, NULL, GFP_ATOMIC); ++ rtnl_notify(skb, net, 0, RTNLGRP_IPV6_NETCONF, NULL, GFP_KERNEL); + return; + errout: + rtnl_set_sk_err(net, RTNLGRP_IPV6_NETCONF, err); +@@ -771,7 +771,14 @@ static int addrconf_fixup_forwarding(struct ctl_table *table, int *p, int newf) + } + + if (p == &net->ipv6.devconf_all->forwarding) { ++ int old_dflt = net->ipv6.devconf_dflt->forwarding; ++ + net->ipv6.devconf_dflt->forwarding = newf; ++ if ((!newf) ^ (!old_dflt)) ++ inet6_netconf_notify_devconf(net, NETCONFA_FORWARDING, ++ NETCONFA_IFINDEX_DEFAULT, ++ net->ipv6.devconf_dflt); ++ + addrconf_forward_change(net, newf); + if ((!newf) ^ (!old)) + inet6_netconf_notify_devconf(net, NETCONFA_FORWARDING, +@@ -3146,6 +3153,7 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event, + void *ptr) + { + struct net_device *dev = netdev_notifier_info_to_dev(ptr); ++ struct netdev_notifier_changeupper_info *info; + struct inet6_dev *idev = __in6_dev_get(dev); + struct net *net = dev_net(dev); + int run_pending = 0; +@@ -3307,6 +3315,15 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event, + case NETDEV_POST_TYPE_CHANGE: + addrconf_type_change(dev, event); + break; ++ ++ case NETDEV_CHANGEUPPER: ++ info = ptr; 
++ ++ /* flush all routes if dev is linked to or unlinked from ++ * an L3 master device (e.g., VRF) ++ */ ++ if (info->upper_dev && netif_is_l3_master(info->upper_dev)) ++ addrconf_ifdown(dev, 0); + } + + return NOTIFY_OK; +diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c +index 3697cd08c515..d21e81cd6120 100644 +--- a/net/ipv6/icmp.c ++++ b/net/ipv6/icmp.c +@@ -445,6 +445,8 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info) + + if (__ipv6_addr_needs_scope_id(addr_type)) + iif = skb->dev->ifindex; ++ else ++ iif = l3mdev_master_ifindex(skb_dst(skb)->dev); + + /* + * Must not send error if the source does not uniquely +@@ -499,9 +501,6 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info) + else if (!fl6.flowi6_oif) + fl6.flowi6_oif = np->ucast_oif; + +- if (!fl6.flowi6_oif) +- fl6.flowi6_oif = l3mdev_master_ifindex(skb->dev); +- + dst = icmpv6_route_lookup(net, skb, sk, &fl6); + if (IS_ERR(dst)) + goto out; +diff --git a/net/ipv6/ip6_checksum.c b/net/ipv6/ip6_checksum.c +index 391a8fedb27e..1132624edee9 100644 +--- a/net/ipv6/ip6_checksum.c ++++ b/net/ipv6/ip6_checksum.c +@@ -84,9 +84,12 @@ int udp6_csum_init(struct sk_buff *skb, struct udphdr *uh, int proto) + * we accept a checksum of zero here. When we find the socket + * for the UDP packet we'll check if that socket allows zero checksum + * for IPv6 (set by socket option). ++ * ++ * Note, we are only interested in != 0 or == 0, thus the ++ * force to int. 
+ */ +- return skb_checksum_init_zero_check(skb, proto, uh->check, +- ip6_compute_pseudo); ++ return (__force int)skb_checksum_init_zero_check(skb, proto, uh->check, ++ ip6_compute_pseudo); + } + EXPORT_SYMBOL(udp6_csum_init); + +diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c +index 4d273adcf130..2267920c086a 100644 +--- a/net/ipv6/ip6_vti.c ++++ b/net/ipv6/ip6_vti.c +@@ -324,11 +324,9 @@ static int vti6_rcv(struct sk_buff *skb) + goto discard; + } + +- XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = t; +- + rcu_read_unlock(); + +- return xfrm6_rcv(skb); ++ return xfrm6_rcv_tnl(skb, t); + } + rcu_read_unlock(); + return -EINVAL; +diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c +index 91f16e679f63..20812e8b24dd 100644 +--- a/net/ipv6/ip6mr.c ++++ b/net/ipv6/ip6mr.c +@@ -1594,14 +1594,15 @@ static int ip6mr_sk_init(struct mr6_table *mrt, struct sock *sk) + if (likely(mrt->mroute6_sk == NULL)) { + mrt->mroute6_sk = sk; + net->ipv6.devconf_all->mc_forwarding++; +- inet6_netconf_notify_devconf(net, NETCONFA_MC_FORWARDING, +- NETCONFA_IFINDEX_ALL, +- net->ipv6.devconf_all); +- } +- else ++ } else { + err = -EADDRINUSE; ++ } + write_unlock_bh(&mrt_lock); + ++ if (!err) ++ inet6_netconf_notify_devconf(net, NETCONFA_MC_FORWARDING, ++ NETCONFA_IFINDEX_ALL, ++ net->ipv6.devconf_all); + rtnl_unlock(); + + return err; +@@ -1619,11 +1620,11 @@ int ip6mr_sk_done(struct sock *sk) + write_lock_bh(&mrt_lock); + mrt->mroute6_sk = NULL; + net->ipv6.devconf_all->mc_forwarding--; ++ write_unlock_bh(&mrt_lock); + inet6_netconf_notify_devconf(net, + NETCONFA_MC_FORWARDING, + NETCONFA_IFINDEX_ALL, + net->ipv6.devconf_all); +- write_unlock_bh(&mrt_lock); + + mroute_clean_tables(mrt, false); + err = 0; +diff --git a/net/ipv6/netfilter/nft_dup_ipv6.c b/net/ipv6/netfilter/nft_dup_ipv6.c +index 8bfd470cbe72..831f86e1ec08 100644 +--- a/net/ipv6/netfilter/nft_dup_ipv6.c ++++ b/net/ipv6/netfilter/nft_dup_ipv6.c +@@ -26,7 +26,7 @@ static void nft_dup_ipv6_eval(const struct nft_expr *expr, + { + struct 
nft_dup_ipv6 *priv = nft_expr_priv(expr); + struct in6_addr *gw = (struct in6_addr *)®s->data[priv->sreg_addr]; +- int oif = regs->data[priv->sreg_dev]; ++ int oif = priv->sreg_dev ? regs->data[priv->sreg_dev] : -1; + + nf_dup_ipv6(pkt->net, pkt->skb, pkt->hook, gw, oif); + } +@@ -57,7 +57,9 @@ static int nft_dup_ipv6_dump(struct sk_buff *skb, const struct nft_expr *expr) + { + struct nft_dup_ipv6 *priv = nft_expr_priv(expr); + +- if (nft_dump_register(skb, NFTA_DUP_SREG_ADDR, priv->sreg_addr) || ++ if (nft_dump_register(skb, NFTA_DUP_SREG_ADDR, priv->sreg_addr)) ++ goto nla_put_failure; ++ if (priv->sreg_dev && + nft_dump_register(skb, NFTA_DUP_SREG_DEV, priv->sreg_dev)) + goto nla_put_failure; + +diff --git a/net/ipv6/route.c b/net/ipv6/route.c +index 63a7d31fa9f0..50eba77f5a0d 100644 +--- a/net/ipv6/route.c ++++ b/net/ipv6/route.c +@@ -101,11 +101,13 @@ static int rt6_score_route(struct rt6_info *rt, int oif, int strict); + #ifdef CONFIG_IPV6_ROUTE_INFO + static struct rt6_info *rt6_add_route_info(struct net *net, + const struct in6_addr *prefix, int prefixlen, +- const struct in6_addr *gwaddr, int ifindex, ++ const struct in6_addr *gwaddr, ++ struct net_device *dev, + unsigned int pref); + static struct rt6_info *rt6_get_route_info(struct net *net, + const struct in6_addr *prefix, int prefixlen, +- const struct in6_addr *gwaddr, int ifindex); ++ const struct in6_addr *gwaddr, ++ struct net_device *dev); + #endif + + struct uncached_list { +@@ -337,9 +339,9 @@ static struct rt6_info *__ip6_dst_alloc(struct net *net, + return rt; + } + +-static struct rt6_info *ip6_dst_alloc(struct net *net, +- struct net_device *dev, +- int flags) ++struct rt6_info *ip6_dst_alloc(struct net *net, ++ struct net_device *dev, ++ int flags) + { + struct rt6_info *rt = __ip6_dst_alloc(net, dev, flags); + +@@ -363,6 +365,7 @@ static struct rt6_info *ip6_dst_alloc(struct net *net, + + return rt; + } ++EXPORT_SYMBOL(ip6_dst_alloc); + + static void ip6_dst_destroy(struct dst_entry *dst) 
+ { +@@ -801,7 +804,7 @@ int rt6_route_rcv(struct net_device *dev, u8 *opt, int len, + rt = rt6_get_dflt_router(gwaddr, dev); + else + rt = rt6_get_route_info(net, prefix, rinfo->prefix_len, +- gwaddr, dev->ifindex); ++ gwaddr, dev); + + if (rt && !lifetime) { + ip6_del_rt(rt); +@@ -809,8 +812,8 @@ int rt6_route_rcv(struct net_device *dev, u8 *opt, int len, + } + + if (!rt && lifetime) +- rt = rt6_add_route_info(net, prefix, rinfo->prefix_len, gwaddr, dev->ifindex, +- pref); ++ rt = rt6_add_route_info(net, prefix, rinfo->prefix_len, gwaddr, ++ dev, pref); + else if (rt) + rt->rt6i_flags = RTF_ROUTEINFO | + (rt->rt6i_flags & ~RTF_PREF_MASK) | RTF_PREF(pref); +@@ -2273,13 +2276,16 @@ static void ip6_rt_copy_init(struct rt6_info *rt, struct rt6_info *ort) + #ifdef CONFIG_IPV6_ROUTE_INFO + static struct rt6_info *rt6_get_route_info(struct net *net, + const struct in6_addr *prefix, int prefixlen, +- const struct in6_addr *gwaddr, int ifindex) ++ const struct in6_addr *gwaddr, ++ struct net_device *dev) + { ++ u32 tb_id = l3mdev_fib_table(dev) ? : RT6_TABLE_INFO; ++ int ifindex = dev->ifindex; + struct fib6_node *fn; + struct rt6_info *rt = NULL; + struct fib6_table *table; + +- table = fib6_get_table(net, RT6_TABLE_INFO); ++ table = fib6_get_table(net, tb_id); + if (!table) + return NULL; + +@@ -2305,12 +2311,13 @@ out: + + static struct rt6_info *rt6_add_route_info(struct net *net, + const struct in6_addr *prefix, int prefixlen, +- const struct in6_addr *gwaddr, int ifindex, ++ const struct in6_addr *gwaddr, ++ struct net_device *dev, + unsigned int pref) + { + struct fib6_config cfg = { + .fc_metric = IP6_RT_PRIO_USER, +- .fc_ifindex = ifindex, ++ .fc_ifindex = dev->ifindex, + .fc_dst_len = prefixlen, + .fc_flags = RTF_GATEWAY | RTF_ADDRCONF | RTF_ROUTEINFO | + RTF_UP | RTF_PREF(pref), +@@ -2319,7 +2326,7 @@ static struct rt6_info *rt6_add_route_info(struct net *net, + .fc_nlinfo.nl_net = net, + }; + +- cfg.fc_table = l3mdev_fib_table_by_index(net, ifindex) ? 
: RT6_TABLE_INFO; ++ cfg.fc_table = l3mdev_fib_table(dev) ? : RT6_TABLE_INFO, + cfg.fc_dst = *prefix; + cfg.fc_gateway = *gwaddr; + +@@ -2329,16 +2336,17 @@ static struct rt6_info *rt6_add_route_info(struct net *net, + + ip6_route_add(&cfg); + +- return rt6_get_route_info(net, prefix, prefixlen, gwaddr, ifindex); ++ return rt6_get_route_info(net, prefix, prefixlen, gwaddr, dev); + } + #endif + + struct rt6_info *rt6_get_dflt_router(const struct in6_addr *addr, struct net_device *dev) + { ++ u32 tb_id = l3mdev_fib_table(dev) ? : RT6_TABLE_DFLT; + struct rt6_info *rt; + struct fib6_table *table; + +- table = fib6_get_table(dev_net(dev), RT6_TABLE_DFLT); ++ table = fib6_get_table(dev_net(dev), tb_id); + if (!table) + return NULL; + +@@ -2372,20 +2380,20 @@ struct rt6_info *rt6_add_dflt_router(const struct in6_addr *gwaddr, + + cfg.fc_gateway = *gwaddr; + +- ip6_route_add(&cfg); ++ if (!ip6_route_add(&cfg)) { ++ struct fib6_table *table; ++ ++ table = fib6_get_table(dev_net(dev), cfg.fc_table); ++ if (table) ++ table->flags |= RT6_TABLE_HAS_DFLT_ROUTER; ++ } + + return rt6_get_dflt_router(gwaddr, dev); + } + +-void rt6_purge_dflt_routers(struct net *net) ++static void __rt6_purge_dflt_routers(struct fib6_table *table) + { + struct rt6_info *rt; +- struct fib6_table *table; +- +- /* NOTE: Keep consistent with rt6_get_dflt_router */ +- table = fib6_get_table(net, RT6_TABLE_DFLT); +- if (!table) +- return; + + restart: + read_lock_bh(&table->tb6_lock); +@@ -2399,6 +2407,27 @@ restart: + } + } + read_unlock_bh(&table->tb6_lock); ++ ++ table->flags &= ~RT6_TABLE_HAS_DFLT_ROUTER; ++} ++ ++void rt6_purge_dflt_routers(struct net *net) ++{ ++ struct fib6_table *table; ++ struct hlist_head *head; ++ unsigned int h; ++ ++ rcu_read_lock(); ++ ++ for (h = 0; h < FIB6_TABLE_HASHSZ; h++) { ++ head = &net->ipv6.fib_table_hash[h]; ++ hlist_for_each_entry_rcu(table, head, tb6_hlist) { ++ if (table->flags & RT6_TABLE_HAS_DFLT_ROUTER) ++ __rt6_purge_dflt_routers(table); ++ } ++ } ++ ++ 
rcu_read_unlock(); + } + + static void rtmsg_to_fib6_config(struct net *net, +diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c +index 6e7f99569bdf..6a36fcc5c4e1 100644 +--- a/net/ipv6/tcp_ipv6.c ++++ b/net/ipv6/tcp_ipv6.c +@@ -815,8 +815,13 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32 + fl6.flowi6_proto = IPPROTO_TCP; + if (rt6_need_strict(&fl6.daddr) && !oif) + fl6.flowi6_oif = tcp_v6_iif(skb); +- else ++ else { ++ if (!oif && netif_index_is_l3_master(net, skb->skb_iif)) ++ oif = skb->skb_iif; ++ + fl6.flowi6_oif = oif; ++ } ++ + fl6.flowi6_mark = IP6_REPLY_MARK(net, skb->mark); + fl6.fl6_dport = t1->dest; + fl6.fl6_sport = t1->source; +diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c +index f4e06748f86b..73f111206e36 100644 +--- a/net/ipv6/udp.c ++++ b/net/ipv6/udp.c +@@ -801,10 +801,10 @@ static int __udp6_lib_mcast_deliver(struct net *net, struct sk_buff *skb, + + if (use_hash2) { + hash2_any = udp6_portaddr_hash(net, &in6addr_any, hnum) & +- udp_table.mask; +- hash2 = udp6_portaddr_hash(net, daddr, hnum) & udp_table.mask; ++ udptable->mask; ++ hash2 = udp6_portaddr_hash(net, daddr, hnum) & udptable->mask; + start_lookup: +- hslot = &udp_table.hash2[hash2]; ++ hslot = &udptable->hash2[hash2]; + offset = offsetof(typeof(*sk), __sk_common.skc_portaddr_node); + } + +diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c +index 0eaab1fa6be5..b5789562aded 100644 +--- a/net/ipv6/xfrm6_input.c ++++ b/net/ipv6/xfrm6_input.c +@@ -21,8 +21,10 @@ int xfrm6_extract_input(struct xfrm_state *x, struct sk_buff *skb) + return xfrm6_extract_header(skb); + } + +-int xfrm6_rcv_spi(struct sk_buff *skb, int nexthdr, __be32 spi) ++int xfrm6_rcv_spi(struct sk_buff *skb, int nexthdr, __be32 spi, ++ struct ip6_tnl *t) + { ++ XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = t; + XFRM_SPI_SKB_CB(skb)->family = AF_INET6; + XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct ipv6hdr, daddr); + return xfrm_input(skb, nexthdr, spi, 0); +@@ -48,13 +50,18 @@ 
int xfrm6_transport_finish(struct sk_buff *skb, int async) + return -1; + } + +-int xfrm6_rcv(struct sk_buff *skb) ++int xfrm6_rcv_tnl(struct sk_buff *skb, struct ip6_tnl *t) + { + return xfrm6_rcv_spi(skb, skb_network_header(skb)[IP6CB(skb)->nhoff], +- 0); ++ 0, t); + } +-EXPORT_SYMBOL(xfrm6_rcv); ++EXPORT_SYMBOL(xfrm6_rcv_tnl); + ++int xfrm6_rcv(struct sk_buff *skb) ++{ ++ return xfrm6_rcv_tnl(skb, NULL); ++} ++EXPORT_SYMBOL(xfrm6_rcv); + int xfrm6_input_addr(struct sk_buff *skb, xfrm_address_t *daddr, + xfrm_address_t *saddr, u8 proto) + { +diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c +index f9d493c59d6c..07b7b2540579 100644 +--- a/net/ipv6/xfrm6_tunnel.c ++++ b/net/ipv6/xfrm6_tunnel.c +@@ -239,7 +239,7 @@ static int xfrm6_tunnel_rcv(struct sk_buff *skb) + __be32 spi; + + spi = xfrm6_tunnel_spi_lookup(net, (const xfrm_address_t *)&iph->saddr); +- return xfrm6_rcv_spi(skb, IPPROTO_IPV6, spi); ++ return xfrm6_rcv_spi(skb, IPPROTO_IPV6, spi, NULL); + } + + static int xfrm6_tunnel_err(struct sk_buff *skb, struct inet6_skb_parm *opt, +diff --git a/net/irda/af_irda.c b/net/irda/af_irda.c +index 7cc9db38e1b6..0e8f8a3f7b23 100644 +--- a/net/irda/af_irda.c ++++ b/net/irda/af_irda.c +@@ -839,7 +839,7 @@ static int irda_accept(struct socket *sock, struct socket *newsock, int flags) + struct sock *sk = sock->sk; + struct irda_sock *new, *self = irda_sk(sk); + struct sock *newsk; +- struct sk_buff *skb; ++ struct sk_buff *skb = NULL; + int err; + + err = irda_create(sock_net(sk), newsock, sk->sk_protocol, 0); +@@ -907,7 +907,6 @@ static int irda_accept(struct socket *sock, struct socket *newsock, int flags) + err = -EPERM; /* value does not seem to make sense. 
-arnd */ + if (!new->tsap) { + pr_debug("%s(), dup failed!\n", __func__); +- kfree_skb(skb); + goto out; + } + +@@ -926,7 +925,6 @@ static int irda_accept(struct socket *sock, struct socket *newsock, int flags) + /* Clean up the original one to keep it in listen state */ + irttp_listen(self->tsap); + +- kfree_skb(skb); + sk->sk_ack_backlog--; + + newsock->state = SS_CONNECTED; +@@ -934,6 +932,7 @@ static int irda_accept(struct socket *sock, struct socket *newsock, int flags) + irda_connect_response(new); + err = 0; + out: ++ kfree_skb(skb); + release_sock(sk); + return err; + } +diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c +index 2b8b5c57c7f0..8cbccddc0b1e 100644 +--- a/net/l2tp/l2tp_core.c ++++ b/net/l2tp/l2tp_core.c +@@ -1953,6 +1953,9 @@ static __net_exit void l2tp_exit_net(struct net *net) + l2tp_tunnel_delete(tunnel); + } + rcu_read_unlock_bh(); ++ ++ flush_workqueue(l2tp_wq); ++ rcu_barrier(); + } + + static struct pernet_operations l2tp_net_ops = { +diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h +index 72f76da88912..a991d1df6774 100644 +--- a/net/mac80211/ieee80211_i.h ++++ b/net/mac80211/ieee80211_i.h +@@ -1706,6 +1706,10 @@ ieee80211_vht_cap_ie_to_sta_vht_cap(struct ieee80211_sub_if_data *sdata, + enum ieee80211_sta_rx_bandwidth ieee80211_sta_cap_rx_bw(struct sta_info *sta); + enum ieee80211_sta_rx_bandwidth ieee80211_sta_cur_vht_bw(struct sta_info *sta); + void ieee80211_sta_set_rx_nss(struct sta_info *sta); ++enum ieee80211_sta_rx_bandwidth ++ieee80211_chan_width_to_rx_bw(enum nl80211_chan_width width); ++enum nl80211_chan_width ieee80211_sta_cap_chan_bw(struct sta_info *sta); ++void ieee80211_sta_set_rx_nss(struct sta_info *sta); + u32 __ieee80211_vht_handle_opmode(struct ieee80211_sub_if_data *sdata, + struct sta_info *sta, u8 opmode, + enum ieee80211_band band); +diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c +index 031fbfd36d58..4ab78bc6c2ca 100644 +--- a/net/mac80211/mlme.c ++++ b/net/mac80211/mlme.c +@@ 
-2283,7 +2283,7 @@ void ieee80211_sta_tx_notify(struct ieee80211_sub_if_data *sdata, + if (!ieee80211_is_data(hdr->frame_control)) + return; + +- if (ieee80211_is_nullfunc(hdr->frame_control) && ++ if (ieee80211_is_any_nullfunc(hdr->frame_control) && + sdata->u.mgd.probe_send_count > 0) { + if (ack) + ieee80211_sta_reset_conn_monitor(sdata); +diff --git a/net/mac80211/offchannel.c b/net/mac80211/offchannel.c +index b6be51940ead..af489405d5b3 100644 +--- a/net/mac80211/offchannel.c ++++ b/net/mac80211/offchannel.c +@@ -308,11 +308,10 @@ void ieee80211_roc_notify_destroy(struct ieee80211_roc_work *roc, bool free) + + /* was never transmitted */ + if (roc->frame) { +- cfg80211_mgmt_tx_status(&roc->sdata->wdev, +- (unsigned long)roc->frame, ++ cfg80211_mgmt_tx_status(&roc->sdata->wdev, roc->mgmt_tx_cookie, + roc->frame->data, roc->frame->len, + false, GFP_KERNEL); +- kfree_skb(roc->frame); ++ ieee80211_free_txskb(&roc->sdata->local->hw, roc->frame); + } + + if (!roc->mgmt_tx_cookie) +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c +index 2b7975c4dac7..a74a6ff18f91 100644 +--- a/net/mac80211/rx.c ++++ b/net/mac80211/rx.c +@@ -1110,8 +1110,7 @@ ieee80211_rx_h_check_dup(struct ieee80211_rx_data *rx) + return RX_CONTINUE; + + if (ieee80211_is_ctl(hdr->frame_control) || +- ieee80211_is_nullfunc(hdr->frame_control) || +- ieee80211_is_qos_nullfunc(hdr->frame_control) || ++ ieee80211_is_any_nullfunc(hdr->frame_control) || + is_multicast_ether_addr(hdr->addr1)) + return RX_CONTINUE; + +@@ -1487,8 +1486,7 @@ ieee80211_rx_h_sta_process(struct ieee80211_rx_data *rx) + * Drop (qos-)data::nullfunc frames silently, since they + * are used only to control station power saving mode. 
+ */ +- if (ieee80211_is_nullfunc(hdr->frame_control) || +- ieee80211_is_qos_nullfunc(hdr->frame_control)) { ++ if (ieee80211_is_any_nullfunc(hdr->frame_control)) { + I802_DEBUG_INC(rx->local->rx_handlers_drop_nullfunc); + + /* +@@ -1977,7 +1975,7 @@ static int ieee80211_drop_unencrypted(struct ieee80211_rx_data *rx, __le16 fc) + + /* Drop unencrypted frames if key is set. */ + if (unlikely(!ieee80211_has_protected(fc) && +- !ieee80211_is_nullfunc(fc) && ++ !ieee80211_is_any_nullfunc(fc) && + ieee80211_is_data(fc) && rx->key)) + return -EACCES; + +diff --git a/net/mac80211/status.c b/net/mac80211/status.c +index d221300e59e5..618479e0d648 100644 +--- a/net/mac80211/status.c ++++ b/net/mac80211/status.c +@@ -474,8 +474,7 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local, + rcu_read_lock(); + sdata = ieee80211_sdata_from_skb(local, skb); + if (sdata) { +- if (ieee80211_is_nullfunc(hdr->frame_control) || +- ieee80211_is_qos_nullfunc(hdr->frame_control)) ++ if (ieee80211_is_any_nullfunc(hdr->frame_control)) + cfg80211_probe_status(sdata->dev, hdr->addr1, + cookie, acked, + GFP_ATOMIC); +@@ -905,7 +904,7 @@ void ieee80211_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb) + I802_DEBUG_INC(local->dot11FailedCount); + } + +- if (ieee80211_is_nullfunc(fc) && ieee80211_has_pm(fc) && ++ if (ieee80211_is_any_nullfunc(fc) && ieee80211_has_pm(fc) && + ieee80211_hw_check(&local->hw, REPORTS_TX_ACK_STATUS) && + !(info->flags & IEEE80211_TX_CTL_INJECTED) && + local->ps_sdata && !(local->scanning)) { +diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c +index ce2ece424384..ef6bde9b4ef9 100644 +--- a/net/mac80211/tdls.c ++++ b/net/mac80211/tdls.c +@@ -4,7 +4,7 @@ + * Copyright 2006-2010 Johannes Berg + * Copyright 2014, Intel Corporation + * Copyright 2014 Intel Mobile Communications GmbH +- * Copyright 2015 Intel Deutschland GmbH ++ * Copyright 2015 - 2016 Intel Deutschland GmbH + * + * This file is GPLv2 as found in COPYING. 
+ */ +@@ -15,6 +15,7 @@ + #include + #include "ieee80211_i.h" + #include "driver-ops.h" ++#include "rate.h" + + /* give usermode some time for retries in setting up the TDLS session */ + #define TDLS_PEER_SETUP_TIMEOUT (15 * HZ) +@@ -302,7 +303,7 @@ ieee80211_tdls_chandef_vht_upgrade(struct ieee80211_sub_if_data *sdata, + /* IEEE802.11ac-2013 Table E-4 */ + u16 centers_80mhz[] = { 5210, 5290, 5530, 5610, 5690, 5775 }; + struct cfg80211_chan_def uc = sta->tdls_chandef; +- enum nl80211_chan_width max_width = ieee80211_get_sta_bw(&sta->sta); ++ enum nl80211_chan_width max_width = ieee80211_sta_cap_chan_bw(sta); + int i; + + /* only support upgrading non-narrow channels up to 80Mhz */ +@@ -313,7 +314,7 @@ ieee80211_tdls_chandef_vht_upgrade(struct ieee80211_sub_if_data *sdata, + if (max_width > NL80211_CHAN_WIDTH_80) + max_width = NL80211_CHAN_WIDTH_80; + +- if (uc.width == max_width) ++ if (uc.width >= max_width) + return; + /* + * Channel usage constrains in the IEEE802.11ac-2013 specification only +@@ -324,6 +325,7 @@ ieee80211_tdls_chandef_vht_upgrade(struct ieee80211_sub_if_data *sdata, + for (i = 0; i < ARRAY_SIZE(centers_80mhz); i++) + if (abs(uc.chan->center_freq - centers_80mhz[i]) <= 30) { + uc.center_freq1 = centers_80mhz[i]; ++ uc.center_freq2 = 0; + uc.width = NL80211_CHAN_WIDTH_80; + break; + } +@@ -332,7 +334,7 @@ ieee80211_tdls_chandef_vht_upgrade(struct ieee80211_sub_if_data *sdata, + return; + + /* proceed to downgrade the chandef until usable or the same */ +- while (uc.width > max_width && ++ while (uc.width > max_width || + !cfg80211_reg_can_beacon_relax(sdata->local->hw.wiphy, &uc, + sdata->wdev.iftype)) + ieee80211_chandef_downgrade(&uc); +@@ -1242,18 +1244,44 @@ int ieee80211_tdls_mgmt(struct wiphy *wiphy, struct net_device *dev, + return ret; + } + +-static void iee80211_tdls_recalc_chanctx(struct ieee80211_sub_if_data *sdata) ++static void iee80211_tdls_recalc_chanctx(struct ieee80211_sub_if_data *sdata, ++ struct sta_info *sta) + { + struct 
ieee80211_local *local = sdata->local; + struct ieee80211_chanctx_conf *conf; + struct ieee80211_chanctx *ctx; ++ enum nl80211_chan_width width; ++ struct ieee80211_supported_band *sband; + + mutex_lock(&local->chanctx_mtx); + conf = rcu_dereference_protected(sdata->vif.chanctx_conf, + lockdep_is_held(&local->chanctx_mtx)); + if (conf) { ++ width = conf->def.width; ++ sband = local->hw.wiphy->bands[conf->def.chan->band]; + ctx = container_of(conf, struct ieee80211_chanctx, conf); + ieee80211_recalc_chanctx_chantype(local, ctx); ++ ++ /* if width changed and a peer is given, update its BW */ ++ if (width != conf->def.width && sta && ++ test_sta_flag(sta, WLAN_STA_TDLS_WIDER_BW)) { ++ enum ieee80211_sta_rx_bandwidth bw; ++ ++ bw = ieee80211_chan_width_to_rx_bw(conf->def.width); ++ bw = min(bw, ieee80211_sta_cap_rx_bw(sta)); ++ if (bw != sta->sta.bandwidth) { ++ sta->sta.bandwidth = bw; ++ rate_control_rate_update(local, sband, sta, ++ IEEE80211_RC_BW_CHANGED); ++ /* ++ * if a TDLS peer BW was updated, we need to ++ * recalc the chandef width again, to get the ++ * correct chanctx min_def ++ */ ++ ieee80211_recalc_chanctx_chantype(local, ctx); ++ } ++ } ++ + } + mutex_unlock(&local->chanctx_mtx); + } +@@ -1350,8 +1378,6 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev, + break; + } + +- iee80211_tdls_recalc_chanctx(sdata); +- + mutex_lock(&local->sta_mtx); + sta = sta_info_get(sdata, peer); + if (!sta) { +@@ -1360,6 +1386,7 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev, + break; + } + ++ iee80211_tdls_recalc_chanctx(sdata, sta); + iee80211_tdls_recalc_ht_protection(sdata, sta); + + set_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH); +@@ -1390,7 +1417,7 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev, + iee80211_tdls_recalc_ht_protection(sdata, NULL); + mutex_unlock(&local->sta_mtx); + +- iee80211_tdls_recalc_chanctx(sdata); ++ iee80211_tdls_recalc_chanctx(sdata, NULL); + break; + default: + ret = -ENOTSUPP; 
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c +index 41f3eb565ef3..98c34c3adf39 100644 +--- a/net/mac80211/tx.c ++++ b/net/mac80211/tx.c +@@ -291,7 +291,7 @@ ieee80211_tx_h_check_assoc(struct ieee80211_tx_data *tx) + if (unlikely(test_bit(SCAN_SW_SCANNING, &tx->local->scanning)) && + test_bit(SDATA_STATE_OFFCHANNEL, &tx->sdata->state) && + !ieee80211_is_probe_req(hdr->frame_control) && +- !ieee80211_is_nullfunc(hdr->frame_control)) ++ !ieee80211_is_any_nullfunc(hdr->frame_control)) + /* + * When software scanning only nullfunc frames (to notify + * the sleep state to the AP) and probe requests (for the +diff --git a/net/mac80211/vht.c b/net/mac80211/vht.c +index c38b2f07a919..c77ef4e2daa3 100644 +--- a/net/mac80211/vht.c ++++ b/net/mac80211/vht.c +@@ -299,7 +299,30 @@ enum ieee80211_sta_rx_bandwidth ieee80211_sta_cap_rx_bw(struct sta_info *sta) + return IEEE80211_STA_RX_BW_80; + } + +-static enum ieee80211_sta_rx_bandwidth ++enum nl80211_chan_width ieee80211_sta_cap_chan_bw(struct sta_info *sta) ++{ ++ struct ieee80211_sta_vht_cap *vht_cap = &sta->sta.vht_cap; ++ u32 cap_width; ++ ++ if (!vht_cap->vht_supported) { ++ if (!sta->sta.ht_cap.ht_supported) ++ return NL80211_CHAN_WIDTH_20_NOHT; ++ ++ return sta->sta.ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40 ? 
++ NL80211_CHAN_WIDTH_40 : NL80211_CHAN_WIDTH_20; ++ } ++ ++ cap_width = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK; ++ ++ if (cap_width == IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ) ++ return NL80211_CHAN_WIDTH_160; ++ else if (cap_width == IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) ++ return NL80211_CHAN_WIDTH_80P80; ++ ++ return NL80211_CHAN_WIDTH_80; ++} ++ ++enum ieee80211_sta_rx_bandwidth + ieee80211_chan_width_to_rx_bw(enum nl80211_chan_width width) + { + switch (width) { +@@ -327,10 +350,7 @@ enum ieee80211_sta_rx_bandwidth ieee80211_sta_cur_vht_bw(struct sta_info *sta) + + bw = ieee80211_sta_cap_rx_bw(sta); + bw = min(bw, sta->cur_max_bandwidth); +- +- /* do not cap the BW of TDLS WIDER_BW peers by the bss */ +- if (!test_sta_flag(sta, WLAN_STA_TDLS_WIDER_BW)) +- bw = min(bw, ieee80211_chan_width_to_rx_bw(bss_width)); ++ bw = min(bw, ieee80211_chan_width_to_rx_bw(bss_width)); + + return bw; + } +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index a7967af0da82..6203995003a5 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -2849,12 +2849,14 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk, + + err = nft_trans_set_add(&ctx, NFT_MSG_NEWSET, set); + if (err < 0) +- goto err2; ++ goto err3; + + list_add_tail_rcu(&set->list, &table->sets); + table->use++; + return 0; + ++err3: ++ ops->destroy(set); + err2: + kfree(set); + err1: +diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c +index 99bc2f87a974..204be9374657 100644 +--- a/net/netfilter/nf_tables_core.c ++++ b/net/netfilter/nf_tables_core.c +@@ -130,7 +130,7 @@ next_rule: + list_for_each_entry_continue_rcu(rule, &chain->rules, list) { + + /* This rule is not active, skip. 
*/ +- if (unlikely(rule->genmask & (1 << gencursor))) ++ if (unlikely(rule->genmask & gencursor)) + continue; + + rulenum++; +diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c +index 044559c10e98..f01764b94b34 100644 +--- a/net/netfilter/nfnetlink.c ++++ b/net/netfilter/nfnetlink.c +@@ -309,14 +309,14 @@ replay: + #endif + { + nfnl_unlock(subsys_id); +- netlink_ack(skb, nlh, -EOPNOTSUPP); ++ netlink_ack(oskb, nlh, -EOPNOTSUPP); + return kfree_skb(skb); + } + } + + if (!ss->commit || !ss->abort) { + nfnl_unlock(subsys_id); +- netlink_ack(skb, nlh, -EOPNOTSUPP); ++ netlink_ack(oskb, nlh, -EOPNOTSUPP); + return kfree_skb(skb); + } + +@@ -406,7 +406,7 @@ ack: + * pointing to the batch header. + */ + nfnl_err_reset(&err_list); +- netlink_ack(skb, nlmsg_hdr(oskb), -ENOMEM); ++ netlink_ack(oskb, nlmsg_hdr(oskb), -ENOMEM); + status |= NFNL_BATCH_FAILURE; + goto done; + } +diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c +index 0a5df0cbaa28..a6c29c5bbfbd 100644 +--- a/net/netfilter/nft_dynset.c ++++ b/net/netfilter/nft_dynset.c +@@ -121,6 +121,9 @@ static int nft_dynset_init(const struct nft_ctx *ctx, + return PTR_ERR(set); + } + ++ if (set->ops->update == NULL) ++ return -EOPNOTSUPP; ++ + if (set->flags & NFT_SET_CONSTANT) + return -EBUSY; + +diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c +index 67583ad7f610..6ac1a8d19b88 100644 +--- a/net/nfc/nci/core.c ++++ b/net/nfc/nci/core.c +@@ -610,14 +610,14 @@ int nci_core_conn_create(struct nci_dev *ndev, u8 destination_type, + struct nci_core_conn_create_cmd *cmd; + struct core_conn_create_data data; + ++ if (!number_destination_params) ++ return -EINVAL; ++ + data.length = params_len + sizeof(struct nci_core_conn_create_cmd); + cmd = kzalloc(data.length, GFP_KERNEL); + if (!cmd) + return -ENOMEM; + +- if (!number_destination_params) +- return -EINVAL; +- + cmd->destination_type = destination_type; + cmd->number_destination_params = number_destination_params; + memcpy(cmd->params, 
params, params_len); +diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c +index 7cb8184ac165..fd6c587b6a04 100644 +--- a/net/openvswitch/actions.c ++++ b/net/openvswitch/actions.c +@@ -137,11 +137,23 @@ static bool is_flow_key_valid(const struct sw_flow_key *key) + return !!key->eth.type; + } + ++static void update_ethertype(struct sk_buff *skb, struct ethhdr *hdr, ++ __be16 ethertype) ++{ ++ if (skb->ip_summed == CHECKSUM_COMPLETE) { ++ __be16 diff[] = { ~(hdr->h_proto), ethertype }; ++ ++ skb->csum = ~csum_partial((char *)diff, sizeof(diff), ++ ~skb->csum); ++ } ++ ++ hdr->h_proto = ethertype; ++} ++ + static int push_mpls(struct sk_buff *skb, struct sw_flow_key *key, + const struct ovs_action_push_mpls *mpls) + { + __be32 *new_mpls_lse; +- struct ethhdr *hdr; + + /* Networking stack do not allow simultaneous Tunnel and MPLS GSO. */ + if (skb->encapsulation) +@@ -160,9 +172,7 @@ static int push_mpls(struct sk_buff *skb, struct sw_flow_key *key, + + skb_postpush_rcsum(skb, new_mpls_lse, MPLS_HLEN); + +- hdr = eth_hdr(skb); +- hdr->h_proto = mpls->mpls_ethertype; +- ++ update_ethertype(skb, eth_hdr(skb), mpls->mpls_ethertype); + if (!skb->inner_protocol) + skb_set_inner_protocol(skb, skb->protocol); + skb->protocol = mpls->mpls_ethertype; +@@ -193,7 +203,7 @@ static int pop_mpls(struct sk_buff *skb, struct sw_flow_key *key, + * field correctly in the presence of VLAN tags. + */ + hdr = (struct ethhdr *)(skb_mpls_header(skb) - ETH_HLEN); +- hdr->h_proto = ethertype; ++ update_ethertype(skb, hdr, ethertype); + if (eth_p_mpls(skb->protocol)) + skb->protocol = ethertype; + +diff --git a/net/rds/tcp.c b/net/rds/tcp.c +index c10622a9321c..465756fe7958 100644 +--- a/net/rds/tcp.c ++++ b/net/rds/tcp.c +@@ -110,7 +110,7 @@ void rds_tcp_restore_callbacks(struct socket *sock, + + /* + * This is the only path that sets tc->t_sock. Send and receive trust that +- * it is set. The RDS_CONN_CONNECTED bit protects those paths from being ++ * it is set. 
The RDS_CONN_UP bit protects those paths from being + * called while it isn't set. + */ + void rds_tcp_set_callbacks(struct socket *sock, struct rds_connection *conn) +diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c +index e353e3255206..5213cd781c24 100644 +--- a/net/rds/tcp_listen.c ++++ b/net/rds/tcp_listen.c +@@ -115,24 +115,32 @@ int rds_tcp_accept_one(struct socket *sock) + * rds_tcp_state_change() will do that cleanup + */ + rs_tcp = (struct rds_tcp_connection *)conn->c_transport_data; +- if (rs_tcp->t_sock && +- ntohl(inet->inet_saddr) < ntohl(inet->inet_daddr)) { +- struct sock *nsk = new_sock->sk; +- +- nsk->sk_user_data = NULL; +- nsk->sk_prot->disconnect(nsk, 0); +- tcp_done(nsk); +- new_sock = NULL; +- ret = 0; +- goto out; +- } else if (rs_tcp->t_sock) { +- rds_tcp_restore_callbacks(rs_tcp->t_sock, rs_tcp); +- conn->c_outgoing = 0; +- } +- + rds_conn_transition(conn, RDS_CONN_DOWN, RDS_CONN_CONNECTING); ++ if (rs_tcp->t_sock) { ++ /* Need to resolve a duelling SYN between peers. ++ * We have an outstanding SYN to this peer, which may ++ * potentially have transitioned to the RDS_CONN_UP state, ++ * so we must quiesce any send threads before resetting ++ * c_transport_data. 
++ */ ++ wait_event(conn->c_waitq, ++ !test_bit(RDS_IN_XMIT, &conn->c_flags)); ++ if (ntohl(inet->inet_saddr) < ntohl(inet->inet_daddr)) { ++ struct sock *nsk = new_sock->sk; ++ ++ nsk->sk_user_data = NULL; ++ nsk->sk_prot->disconnect(nsk, 0); ++ tcp_done(nsk); ++ new_sock = NULL; ++ ret = 0; ++ goto out; ++ } else if (rs_tcp->t_sock) { ++ rds_tcp_restore_callbacks(rs_tcp->t_sock, rs_tcp); ++ conn->c_outgoing = 0; ++ } ++ } + rds_tcp_set_callbacks(new_sock, conn); +- rds_connect_complete(conn); ++ rds_connect_complete(conn); /* marks RDS_CONN_UP */ + new_sock = NULL; + ret = 0; + +diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c +index 3eef0215e53f..cdfb8d33bcba 100644 +--- a/net/sched/cls_bpf.c ++++ b/net/sched/cls_bpf.c +@@ -107,8 +107,9 @@ static int cls_bpf_classify(struct sk_buff *skb, const struct tcf_proto *tp, + } + + if (prog->exts_integrated) { +- res->class = prog->res.class; +- res->classid = qdisc_skb_cb(skb)->tc_classid; ++ res->class = 0; ++ res->classid = TC_H_MAJ(prog->res.classid) | ++ qdisc_skb_cb(skb)->tc_classid; + + ret = cls_bpf_exec_opcode(filter_res); + if (ret == TC_ACT_UNSPEC) +@@ -118,10 +119,12 @@ static int cls_bpf_classify(struct sk_buff *skb, const struct tcf_proto *tp, + + if (filter_res == 0) + continue; +- +- *res = prog->res; +- if (filter_res != -1) ++ if (filter_res != -1) { ++ res->class = 0; + res->classid = filter_res; ++ } else { ++ *res = prog->res; ++ } + + ret = tcf_exts_exec(skb, &prog->exts, res); + if (ret < 0) +diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c +index 5ab8205f988b..a97096a7f801 100644 +--- a/net/sched/cls_flower.c ++++ b/net/sched/cls_flower.c +@@ -351,12 +351,10 @@ static int fl_init_hashtable(struct cls_fl_head *head, + + #define FL_KEY_MEMBER_OFFSET(member) offsetof(struct fl_flow_key, member) + #define FL_KEY_MEMBER_SIZE(member) (sizeof(((struct fl_flow_key *) 0)->member)) +-#define FL_KEY_MEMBER_END_OFFSET(member) \ +- (FL_KEY_MEMBER_OFFSET(member) + FL_KEY_MEMBER_SIZE(member)) + 
+-#define FL_KEY_IN_RANGE(mask, member) \ +- (FL_KEY_MEMBER_OFFSET(member) <= (mask)->range.end && \ +- FL_KEY_MEMBER_END_OFFSET(member) >= (mask)->range.start) ++#define FL_KEY_IS_MASKED(mask, member) \ ++ memchr_inv(((char *)mask) + FL_KEY_MEMBER_OFFSET(member), \ ++ 0, FL_KEY_MEMBER_SIZE(member)) \ + + #define FL_KEY_SET(keys, cnt, id, member) \ + do { \ +@@ -365,9 +363,9 @@ static int fl_init_hashtable(struct cls_fl_head *head, + cnt++; \ + } while(0); + +-#define FL_KEY_SET_IF_IN_RANGE(mask, keys, cnt, id, member) \ ++#define FL_KEY_SET_IF_MASKED(mask, keys, cnt, id, member) \ + do { \ +- if (FL_KEY_IN_RANGE(mask, member)) \ ++ if (FL_KEY_IS_MASKED(mask, member)) \ + FL_KEY_SET(keys, cnt, id, member); \ + } while(0); + +@@ -379,14 +377,14 @@ static void fl_init_dissector(struct cls_fl_head *head, + + FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_CONTROL, control); + FL_KEY_SET(keys, cnt, FLOW_DISSECTOR_KEY_BASIC, basic); +- FL_KEY_SET_IF_IN_RANGE(mask, keys, cnt, +- FLOW_DISSECTOR_KEY_ETH_ADDRS, eth); +- FL_KEY_SET_IF_IN_RANGE(mask, keys, cnt, +- FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4); +- FL_KEY_SET_IF_IN_RANGE(mask, keys, cnt, +- FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6); +- FL_KEY_SET_IF_IN_RANGE(mask, keys, cnt, +- FLOW_DISSECTOR_KEY_PORTS, tp); ++ FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt, ++ FLOW_DISSECTOR_KEY_ETH_ADDRS, eth); ++ FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt, ++ FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4); ++ FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt, ++ FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6); ++ FL_KEY_SET_IF_MASKED(&mask->key, keys, cnt, ++ FLOW_DISSECTOR_KEY_PORTS, tp); + + skb_flow_dissector_init(&head->dissector, keys, cnt); + } +diff --git a/net/sched/sch_drr.c b/net/sched/sch_drr.c +index d6e3ad43cecb..06e42727590a 100644 +--- a/net/sched/sch_drr.c ++++ b/net/sched/sch_drr.c +@@ -375,6 +375,7 @@ static int drr_enqueue(struct sk_buff *skb, struct Qdisc *sch) + cl->deficit = cl->quantum; + } + ++ qdisc_qstats_backlog_inc(sch, skb); + sch->q.qlen++; + 
return err; + } +@@ -405,6 +406,7 @@ static struct sk_buff *drr_dequeue(struct Qdisc *sch) + + bstats_update(&cl->bstats, skb); + qdisc_bstats_update(sch, skb); ++ qdisc_qstats_backlog_dec(sch, skb); + sch->q.qlen--; + return skb; + } +@@ -426,6 +428,7 @@ static unsigned int drr_drop(struct Qdisc *sch) + if (cl->qdisc->ops->drop) { + len = cl->qdisc->ops->drop(cl->qdisc); + if (len > 0) { ++ sch->qstats.backlog -= len; + sch->q.qlen--; + if (cl->qdisc->q.qlen == 0) + list_del(&cl->alist); +@@ -461,6 +464,7 @@ static void drr_reset_qdisc(struct Qdisc *sch) + qdisc_reset(cl->qdisc); + } + } ++ sch->qstats.backlog = 0; + sch->q.qlen = 0; + } + +diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c +index eb814ffc0902..f4aa2ab4713a 100644 +--- a/net/sched/sch_fq.c ++++ b/net/sched/sch_fq.c +@@ -830,20 +830,24 @@ nla_put_failure: + static int fq_dump_stats(struct Qdisc *sch, struct gnet_dump *d) + { + struct fq_sched_data *q = qdisc_priv(sch); +- u64 now = ktime_get_ns(); +- struct tc_fq_qd_stats st = { +- .gc_flows = q->stat_gc_flows, +- .highprio_packets = q->stat_internal_packets, +- .tcp_retrans = q->stat_tcp_retrans, +- .throttled = q->stat_throttled, +- .flows_plimit = q->stat_flows_plimit, +- .pkts_too_long = q->stat_pkts_too_long, +- .allocation_errors = q->stat_allocation_errors, +- .flows = q->flows, +- .inactive_flows = q->inactive_flows, +- .throttled_flows = q->throttled_flows, +- .time_next_delayed_flow = q->time_next_delayed_flow - now, +- }; ++ struct tc_fq_qd_stats st; ++ ++ sch_tree_lock(sch); ++ ++ st.gc_flows = q->stat_gc_flows; ++ st.highprio_packets = q->stat_internal_packets; ++ st.tcp_retrans = q->stat_tcp_retrans; ++ st.throttled = q->stat_throttled; ++ st.flows_plimit = q->stat_flows_plimit; ++ st.pkts_too_long = q->stat_pkts_too_long; ++ st.allocation_errors = q->stat_allocation_errors; ++ st.time_next_delayed_flow = q->time_next_delayed_flow - ktime_get_ns(); ++ st.flows = q->flows; ++ st.inactive_flows = q->inactive_flows; ++ 
st.throttled_flows = q->throttled_flows; ++ st.pad = 0; ++ ++ sch_tree_unlock(sch); + + return gnet_stats_copy_app(d, &st, sizeof(st)); + } +diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c +index 1800f7977595..70e0dfd21f04 100644 +--- a/net/sched/sch_fq_codel.c ++++ b/net/sched/sch_fq_codel.c +@@ -588,7 +588,7 @@ static int fq_codel_dump_class_stats(struct Qdisc *sch, unsigned long cl, + qs.backlog = q->backlogs[idx]; + qs.drops = flow->dropped; + } +- if (gnet_stats_copy_queue(d, NULL, &qs, 0) < 0) ++ if (gnet_stats_copy_queue(d, NULL, &qs, qs.qlen) < 0) + return -1; + if (idx < q->flows_cnt) + return gnet_stats_copy_app(d, &xstats, sizeof(xstats)); +diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c +index eec6dc2d3152..09cd65434748 100644 +--- a/net/sched/sch_generic.c ++++ b/net/sched/sch_generic.c +@@ -49,6 +49,7 @@ static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q) + { + q->gso_skb = skb; + q->qstats.requeues++; ++ qdisc_qstats_backlog_inc(q, skb); + q->q.qlen++; /* it's still part of the queue */ + __netif_schedule(q); + +@@ -92,6 +93,7 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate, + txq = skb_get_tx_queue(txq->dev, skb); + if (!netif_xmit_frozen_or_stopped(txq)) { + q->gso_skb = NULL; ++ qdisc_qstats_backlog_dec(q, skb); + q->q.qlen--; + } else + skb = NULL; +@@ -624,18 +626,19 @@ struct Qdisc *qdisc_create_dflt(struct netdev_queue *dev_queue, + struct Qdisc *sch; + + if (!try_module_get(ops->owner)) +- goto errout; ++ return NULL; + + sch = qdisc_alloc(dev_queue, ops); +- if (IS_ERR(sch)) +- goto errout; ++ if (IS_ERR(sch)) { ++ module_put(ops->owner); ++ return NULL; ++ } + sch->parent = parentid; + + if (!ops->init || ops->init(sch, NULL) == 0) + return sch; + + qdisc_destroy(sch); +-errout: + return NULL; + } + EXPORT_SYMBOL(qdisc_create_dflt); +diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c +index d783d7cc3348..1ac9f9f03fe3 100644 +--- a/net/sched/sch_hfsc.c ++++ 
b/net/sched/sch_hfsc.c +@@ -1529,6 +1529,7 @@ hfsc_reset_qdisc(struct Qdisc *sch) + q->eligible = RB_ROOT; + INIT_LIST_HEAD(&q->droplist); + qdisc_watchdog_cancel(&q->watchdog); ++ sch->qstats.backlog = 0; + sch->q.qlen = 0; + } + +@@ -1559,14 +1560,6 @@ hfsc_dump_qdisc(struct Qdisc *sch, struct sk_buff *skb) + struct hfsc_sched *q = qdisc_priv(sch); + unsigned char *b = skb_tail_pointer(skb); + struct tc_hfsc_qopt qopt; +- struct hfsc_class *cl; +- unsigned int i; +- +- sch->qstats.backlog = 0; +- for (i = 0; i < q->clhash.hashsize; i++) { +- hlist_for_each_entry(cl, &q->clhash.hash[i], cl_common.hnode) +- sch->qstats.backlog += cl->qdisc->qstats.backlog; +- } + + qopt.defcls = q->defcls; + if (nla_put(skb, TCA_OPTIONS, sizeof(qopt), &qopt)) +@@ -1604,6 +1597,7 @@ hfsc_enqueue(struct sk_buff *skb, struct Qdisc *sch) + if (cl->qdisc->q.qlen == 1) + set_active(cl, qdisc_pkt_len(skb)); + ++ qdisc_qstats_backlog_inc(sch, skb); + sch->q.qlen++; + + return NET_XMIT_SUCCESS; +@@ -1672,6 +1666,7 @@ hfsc_dequeue(struct Qdisc *sch) + + qdisc_unthrottled(sch); + qdisc_bstats_update(sch, skb); ++ qdisc_qstats_backlog_dec(sch, skb); + sch->q.qlen--; + + return skb; +@@ -1695,6 +1690,7 @@ hfsc_drop(struct Qdisc *sch) + } + cl->qstats.drops++; + qdisc_qstats_drop(sch); ++ sch->qstats.backlog -= len; + sch->q.qlen--; + return len; + } +diff --git a/net/sched/sch_prio.c b/net/sched/sch_prio.c +index ca9fb2b0c14a..40ed14433f2c 100644 +--- a/net/sched/sch_prio.c ++++ b/net/sched/sch_prio.c +@@ -85,6 +85,7 @@ prio_enqueue(struct sk_buff *skb, struct Qdisc *sch) + + ret = qdisc_enqueue(skb, qdisc); + if (ret == NET_XMIT_SUCCESS) { ++ qdisc_qstats_backlog_inc(sch, skb); + sch->q.qlen++; + return NET_XMIT_SUCCESS; + } +@@ -117,6 +118,7 @@ static struct sk_buff *prio_dequeue(struct Qdisc *sch) + struct sk_buff *skb = qdisc_dequeue_peeked(qdisc); + if (skb) { + qdisc_bstats_update(sch, skb); ++ qdisc_qstats_backlog_dec(sch, skb); + sch->q.qlen--; + return skb; + } +@@ -135,6 +137,7 @@ 
static unsigned int prio_drop(struct Qdisc *sch) + for (prio = q->bands-1; prio >= 0; prio--) { + qdisc = q->queues[prio]; + if (qdisc->ops->drop && (len = qdisc->ops->drop(qdisc)) != 0) { ++ sch->qstats.backlog -= len; + sch->q.qlen--; + return len; + } +@@ -151,6 +154,7 @@ prio_reset(struct Qdisc *sch) + + for (prio = 0; prio < q->bands; prio++) + qdisc_reset(q->queues[prio]); ++ sch->qstats.backlog = 0; + sch->q.qlen = 0; + } + +diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c +index 8d2d8d953432..8dabd8257b49 100644 +--- a/net/sched/sch_qfq.c ++++ b/net/sched/sch_qfq.c +@@ -1150,6 +1150,7 @@ static struct sk_buff *qfq_dequeue(struct Qdisc *sch) + if (!skb) + return NULL; + ++ qdisc_qstats_backlog_dec(sch, skb); + sch->q.qlen--; + qdisc_bstats_update(sch, skb); + +@@ -1250,6 +1251,7 @@ static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch) + } + + bstats_update(&cl->bstats, skb); ++ qdisc_qstats_backlog_inc(sch, skb); + ++sch->q.qlen; + + agg = cl->agg; +@@ -1516,6 +1518,7 @@ static void qfq_reset_qdisc(struct Qdisc *sch) + qdisc_reset(cl->qdisc); + } + } ++ sch->qstats.backlog = 0; + sch->q.qlen = 0; + } + +diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c +index 10c0b184cdbe..624b5e6fa52f 100644 +--- a/net/sched/sch_sfb.c ++++ b/net/sched/sch_sfb.c +@@ -400,6 +400,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch) + enqueue: + ret = qdisc_enqueue(skb, child); + if (likely(ret == NET_XMIT_SUCCESS)) { ++ qdisc_qstats_backlog_inc(sch, skb); + sch->q.qlen++; + increment_qlen(skb, q); + } else if (net_xmit_drop_count(ret)) { +@@ -428,6 +429,7 @@ static struct sk_buff *sfb_dequeue(struct Qdisc *sch) + + if (skb) { + qdisc_bstats_update(sch, skb); ++ qdisc_qstats_backlog_dec(sch, skb); + sch->q.qlen--; + decrement_qlen(skb, q); + } +@@ -450,6 +452,7 @@ static void sfb_reset(struct Qdisc *sch) + struct sfb_sched_data *q = qdisc_priv(sch); + + qdisc_reset(q->qdisc); ++ sch->qstats.backlog = 0; + sch->q.qlen = 0; + q->slot = 0; + 
q->double_buffering = false; +diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c +index 05c7a66f64da..87dee4deb66e 100644 +--- a/net/sched/sch_tbf.c ++++ b/net/sched/sch_tbf.c +@@ -197,6 +197,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch) + return ret; + } + ++ qdisc_qstats_backlog_inc(sch, skb); + sch->q.qlen++; + return NET_XMIT_SUCCESS; + } +@@ -207,6 +208,7 @@ static unsigned int tbf_drop(struct Qdisc *sch) + unsigned int len = 0; + + if (q->qdisc->ops->drop && (len = q->qdisc->ops->drop(q->qdisc)) != 0) { ++ sch->qstats.backlog -= len; + sch->q.qlen--; + qdisc_qstats_drop(sch); + } +@@ -253,6 +255,7 @@ static struct sk_buff *tbf_dequeue(struct Qdisc *sch) + q->t_c = now; + q->tokens = toks; + q->ptokens = ptoks; ++ qdisc_qstats_backlog_dec(sch, skb); + sch->q.qlen--; + qdisc_unthrottled(sch); + qdisc_bstats_update(sch, skb); +@@ -284,6 +287,7 @@ static void tbf_reset(struct Qdisc *sch) + struct tbf_sched_data *q = qdisc_priv(sch); + + qdisc_reset(q->qdisc); ++ sch->qstats.backlog = 0; + sch->q.qlen = 0; + q->t_c = ktime_get_ns(); + q->tokens = q->buffer; +diff --git a/net/sctp/associola.c b/net/sctp/associola.c +index f085b01b6603..f24d31f12cb4 100644 +--- a/net/sctp/associola.c ++++ b/net/sctp/associola.c +@@ -1290,7 +1290,7 @@ static struct sctp_transport *sctp_trans_elect_best(struct sctp_transport *curr, + if (score_curr > score_best) + return curr; + else if (score_curr == score_best) +- return sctp_trans_elect_tie(curr, best); ++ return sctp_trans_elect_tie(best, curr); + else + return best; + } +diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c +index 509e9426a056..e3e44237de1c 100644 +--- a/net/sctp/sm_make_chunk.c ++++ b/net/sctp/sm_make_chunk.c +@@ -857,7 +857,11 @@ struct sctp_chunk *sctp_make_shutdown(const struct sctp_association *asoc, + sctp_shutdownhdr_t shut; + __u32 ctsn; + +- ctsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map); ++ if (chunk && chunk->asoc) ++ ctsn = 
sctp_tsnmap_get_ctsn(&chunk->asoc->peer.tsn_map); ++ else ++ ctsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map); ++ + shut.cum_tsn_ack = htonl(ctsn); + + retval = sctp_make_control(asoc, SCTP_CID_SHUTDOWN, 0, +diff --git a/net/sctp/transport.c b/net/sctp/transport.c +index aab9e3f29755..fbbe268e34e7 100644 +--- a/net/sctp/transport.c ++++ b/net/sctp/transport.c +@@ -72,7 +72,7 @@ static struct sctp_transport *sctp_transport_init(struct net *net, + */ + peer->rto = msecs_to_jiffies(net->sctp.rto_initial); + +- peer->last_time_heard = ktime_get(); ++ peer->last_time_heard = ktime_set(0, 0); + peer->last_time_ecne_reduced = jiffies; + + peer->param_flags = SPP_HB_DISABLE | +diff --git a/net/sunrpc/xprtrdma/backchannel.c b/net/sunrpc/xprtrdma/backchannel.c +index 2dcb44f69e53..ddd70aec4d88 100644 +--- a/net/sunrpc/xprtrdma/backchannel.c ++++ b/net/sunrpc/xprtrdma/backchannel.c +@@ -42,8 +42,8 @@ static int rpcrdma_bc_setup_rqst(struct rpcrdma_xprt *r_xprt, + size_t size; + + req = rpcrdma_create_req(r_xprt); +- if (!req) +- return -ENOMEM; ++ if (IS_ERR(req)) ++ return PTR_ERR(req); + req->rl_backchannel = true; + + size = RPCRDMA_INLINE_WRITE_THRESHOLD(rqst); +@@ -84,25 +84,13 @@ out_fail: + static int rpcrdma_bc_setup_reps(struct rpcrdma_xprt *r_xprt, + unsigned int count) + { +- struct rpcrdma_buffer *buffers = &r_xprt->rx_buf; +- struct rpcrdma_rep *rep; +- unsigned long flags; + int rc = 0; + + while (count--) { +- rep = rpcrdma_create_rep(r_xprt); +- if (IS_ERR(rep)) { +- pr_err("RPC: %s: reply buffer alloc failed\n", +- __func__); +- rc = PTR_ERR(rep); ++ rc = rpcrdma_create_rep(r_xprt); ++ if (rc) + break; +- } +- +- spin_lock_irqsave(&buffers->rb_lock, flags); +- list_add(&rep->rr_list, &buffers->rb_recv_bufs); +- spin_unlock_irqrestore(&buffers->rb_lock, flags); + } +- + return rc; + } + +@@ -341,6 +329,8 @@ void rpcrdma_bc_receive_call(struct rpcrdma_xprt *r_xprt, + rqst->rq_reply_bytes_recvd = 0; + rqst->rq_bytes_sent = 0; + rqst->rq_xid = headerp->rm_xid; ++ 
++ rqst->rq_private_buf.len = size; + set_bit(RPC_BC_PA_IN_USE, &rqst->rq_bc_pa_state); + + buf = &rqst->rq_rcv_buf; +diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c +index 8c545f7d7525..740bddcf3488 100644 +--- a/net/sunrpc/xprtrdma/transport.c ++++ b/net/sunrpc/xprtrdma/transport.c +@@ -576,6 +576,9 @@ xprt_rdma_free(void *buffer) + + rb = container_of(buffer, struct rpcrdma_regbuf, rg_base[0]); + req = rb->rg_owner; ++ if (req->rl_backchannel) ++ return; ++ + r_xprt = container_of(req->rl_buffer, struct rpcrdma_xprt, rx_buf); + + dprintk("RPC: %s: called on 0x%p\n", __func__, req->rl_reply); +diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c +index eadd1655145a..b6879a1986a7 100644 +--- a/net/sunrpc/xprtrdma/verbs.c ++++ b/net/sunrpc/xprtrdma/verbs.c +@@ -911,10 +911,17 @@ rpcrdma_create_req(struct rpcrdma_xprt *r_xprt) + return req; + } + +-struct rpcrdma_rep * +-rpcrdma_create_rep(struct rpcrdma_xprt *r_xprt) ++/** ++ * rpcrdma_create_rep - Allocate an rpcrdma_rep object ++ * @r_xprt: controlling transport ++ * ++ * Returns 0 on success or a negative errno on failure. 
++ */ ++int ++ rpcrdma_create_rep(struct rpcrdma_xprt *r_xprt) + { + struct rpcrdma_create_data_internal *cdata = &r_xprt->rx_data; ++ struct rpcrdma_buffer *buf = &r_xprt->rx_buf; + struct rpcrdma_ia *ia = &r_xprt->rx_ia; + struct rpcrdma_rep *rep; + int rc; +@@ -934,12 +941,18 @@ rpcrdma_create_rep(struct rpcrdma_xprt *r_xprt) + rep->rr_device = ia->ri_device; + rep->rr_rxprt = r_xprt; + INIT_WORK(&rep->rr_work, rpcrdma_receive_worker); +- return rep; ++ ++ spin_lock(&buf->rb_lock); ++ list_add(&rep->rr_list, &buf->rb_recv_bufs); ++ spin_unlock(&buf->rb_lock); ++ return 0; + + out_free: + kfree(rep); + out: +- return ERR_PTR(rc); ++ dprintk("RPC: %s: reply buffer %d alloc failed\n", ++ __func__, rc); ++ return rc; + } + + int +@@ -975,17 +988,10 @@ rpcrdma_buffer_create(struct rpcrdma_xprt *r_xprt) + } + + INIT_LIST_HEAD(&buf->rb_recv_bufs); +- for (i = 0; i < buf->rb_max_requests + 2; i++) { +- struct rpcrdma_rep *rep; +- +- rep = rpcrdma_create_rep(r_xprt); +- if (IS_ERR(rep)) { +- dprintk("RPC: %s: reply buffer %d alloc failed\n", +- __func__, i); +- rc = PTR_ERR(rep); ++ for (i = 0; i <= buf->rb_max_requests; i++) { ++ rc = rpcrdma_create_rep(r_xprt); ++ if (rc) + goto out; +- } +- list_add(&rep->rr_list, &buf->rb_recv_bufs); + } + + return 0; +@@ -1337,15 +1343,14 @@ rpcrdma_ep_post_extra_recv(struct rpcrdma_xprt *r_xprt, unsigned int count) + struct rpcrdma_ia *ia = &r_xprt->rx_ia; + struct rpcrdma_ep *ep = &r_xprt->rx_ep; + struct rpcrdma_rep *rep; +- unsigned long flags; + int rc; + + while (count--) { +- spin_lock_irqsave(&buffers->rb_lock, flags); ++ spin_lock(&buffers->rb_lock); + if (list_empty(&buffers->rb_recv_bufs)) + goto out_reqbuf; + rep = rpcrdma_buffer_get_rep_locked(buffers); +- spin_unlock_irqrestore(&buffers->rb_lock, flags); ++ spin_unlock(&buffers->rb_lock); + + rc = rpcrdma_ep_post_recv(ia, ep, rep); + if (rc) +@@ -1355,7 +1360,7 @@ rpcrdma_ep_post_extra_recv(struct rpcrdma_xprt *r_xprt, unsigned int count) + return 0; + + out_reqbuf: +- 
spin_unlock_irqrestore(&buffers->rb_lock, flags); ++ spin_unlock(&buffers->rb_lock); + pr_warn("%s: no extra receive buffers\n", __func__); + return -ENOMEM; + +diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h +index ac7f8d4f632a..36ec6a602665 100644 +--- a/net/sunrpc/xprtrdma/xprt_rdma.h ++++ b/net/sunrpc/xprtrdma/xprt_rdma.h +@@ -431,8 +431,8 @@ int rpcrdma_ep_post_recv(struct rpcrdma_ia *, struct rpcrdma_ep *, + * Buffer calls - xprtrdma/verbs.c + */ + struct rpcrdma_req *rpcrdma_create_req(struct rpcrdma_xprt *); +-struct rpcrdma_rep *rpcrdma_create_rep(struct rpcrdma_xprt *); + void rpcrdma_destroy_req(struct rpcrdma_ia *, struct rpcrdma_req *); ++int rpcrdma_create_rep(struct rpcrdma_xprt *r_xprt); + int rpcrdma_buffer_create(struct rpcrdma_xprt *); + void rpcrdma_buffer_destroy(struct rpcrdma_buffer *); + +diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c +index 78d6b78de29d..cb39f1c4251e 100644 +--- a/net/tipc/udp_media.c ++++ b/net/tipc/udp_media.c +@@ -405,10 +405,13 @@ static int tipc_udp_enable(struct net *net, struct tipc_bearer *b, + tuncfg.encap_destroy = NULL; + setup_udp_tunnel_sock(net, ub->ubsock, &tuncfg); + +- if (enable_mcast(ub, remote)) ++ err = enable_mcast(ub, remote); ++ if (err) + goto err; + return 0; + err: ++ if (ub->ubsock) ++ udp_tunnel_sock_release(ub->ubsock); + kfree(ub); + return err; + } +diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c +index 1c4ad477ce93..6e3f0254d8a1 100644 +--- a/net/xfrm/xfrm_input.c ++++ b/net/xfrm/xfrm_input.c +@@ -207,15 +207,15 @@ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type) + family = XFRM_SPI_SKB_CB(skb)->family; + + /* if tunnel is present override skb->mark value with tunnel i_key */ +- if (XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4) { +- switch (family) { +- case AF_INET: ++ switch (family) { ++ case AF_INET: ++ if (XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4) + mark = be32_to_cpu(XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4->parms.i_key); +- 
break; +- case AF_INET6: ++ break; ++ case AF_INET6: ++ if (XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6) + mark = be32_to_cpu(XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6->parms.i_key); +- break; +- } ++ break; + } + + /* Allocate new secpath or COW existing one. */ +diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c +index 787f2cac18c5..d3595f1d00f2 100644 +--- a/net/xfrm/xfrm_state.c ++++ b/net/xfrm/xfrm_state.c +@@ -332,6 +332,7 @@ static void xfrm_state_gc_destroy(struct xfrm_state *x) + { + tasklet_hrtimer_cancel(&x->mtimer); + del_timer_sync(&x->rtimer); ++ kfree(x->aead); + kfree(x->aalg); + kfree(x->ealg); + kfree(x->calg); +diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c +index dd0509ee14da..158f630cc7a6 100644 +--- a/net/xfrm/xfrm_user.c ++++ b/net/xfrm/xfrm_user.c +@@ -609,9 +609,12 @@ static struct xfrm_state *xfrm_state_construct(struct net *net, + if (err) + goto error; + +- if (attrs[XFRMA_SEC_CTX] && +- security_xfrm_state_alloc(x, nla_data(attrs[XFRMA_SEC_CTX]))) +- goto error; ++ if (attrs[XFRMA_SEC_CTX]) { ++ err = security_xfrm_state_alloc(x, ++ nla_data(attrs[XFRMA_SEC_CTX])); ++ if (err) ++ goto error; ++ } + + if ((err = xfrm_alloc_replay_state_esn(&x->replay_esn, &x->preplay_esn, + attrs[XFRMA_REPLAY_ESN_VAL]))) +@@ -923,7 +926,8 @@ static int xfrm_dump_sa_done(struct netlink_callback *cb) + struct sock *sk = cb->skb->sk; + struct net *net = sock_net(sk); + +- xfrm_state_walk_done(walk, net); ++ if (cb->args[0]) ++ xfrm_state_walk_done(walk, net); + return 0; + } + +@@ -948,8 +952,6 @@ static int xfrm_dump_sa(struct sk_buff *skb, struct netlink_callback *cb) + u8 proto = 0; + int err; + +- cb->args[0] = 1; +- + err = nlmsg_parse(cb->nlh, 0, attrs, XFRMA_MAX, + xfrma_policy); + if (err < 0) +@@ -966,6 +968,7 @@ static int xfrm_dump_sa(struct sk_buff *skb, struct netlink_callback *cb) + proto = nla_get_u8(attrs[XFRMA_PROTO]); + + xfrm_state_walk_init(walk, proto, filter); ++ cb->args[0] = 1; + } + + (void) xfrm_state_walk(net, walk, 
dump_one_state, &info); +diff --git a/scripts/config b/scripts/config +index 026aeb4f32ee..73de17d39698 100755 +--- a/scripts/config ++++ b/scripts/config +@@ -6,6 +6,9 @@ myname=${0##*/} + # If no prefix forced, use the default CONFIG_ + CONFIG_="${CONFIG_-CONFIG_}" + ++# We use an uncommon delimiter for sed substitutions ++SED_DELIM=$(echo -en "\001") ++ + usage() { + cat >&2 <"$tmpfile" ++ sed -e "s$SED_DELIM$before$SED_DELIM$after$SED_DELIM" "$infile" >"$tmpfile" + # replace original file with the edited one + mv "$tmpfile" "$infile" + } +diff --git a/sound/pci/fm801.c b/sound/pci/fm801.c +index d6e89a6d0bb9..71a00b55d5ea 100644 +--- a/sound/pci/fm801.c ++++ b/sound/pci/fm801.c +@@ -1088,26 +1088,20 @@ static int wait_for_codec(struct fm801 *chip, unsigned int codec_id, + return -EIO; + } + +-static int snd_fm801_chip_init(struct fm801 *chip, int resume) ++static int reset_codec(struct fm801 *chip) + { +- unsigned short cmdw; +- +- if (chip->tea575x_tuner & TUNER_ONLY) +- goto __ac97_ok; +- + /* codec cold reset + AC'97 warm reset */ + fm801_writew(chip, CODEC_CTRL, (1 << 5) | (1 << 6)); + fm801_readw(chip, CODEC_CTRL); /* flush posting data */ + udelay(100); + fm801_writew(chip, CODEC_CTRL, 0); + +- if (wait_for_codec(chip, 0, AC97_RESET, msecs_to_jiffies(750)) < 0) +- if (!resume) { +- dev_info(chip->card->dev, +- "Primary AC'97 codec not found, assume SF64-PCR (tuner-only)\n"); +- chip->tea575x_tuner = 3 | TUNER_ONLY; +- goto __ac97_ok; +- } ++ return wait_for_codec(chip, 0, AC97_RESET, msecs_to_jiffies(750)); ++} ++ ++static void snd_fm801_chip_multichannel_init(struct fm801 *chip) ++{ ++ unsigned short cmdw; + + if (chip->multichannel) { + if (chip->secondary_addr) { +@@ -1134,8 +1128,11 @@ static int snd_fm801_chip_init(struct fm801 *chip, int resume) + /* cause timeout problems */ + wait_for_codec(chip, 0, AC97_VENDOR_ID1, msecs_to_jiffies(750)); + } ++} + +- __ac97_ok: ++static void snd_fm801_chip_init(struct fm801 *chip) ++{ ++ unsigned short cmdw; + + 
/* init volume */ + fm801_writew(chip, PCM_VOL, 0x0808); +@@ -1156,11 +1153,8 @@ static int snd_fm801_chip_init(struct fm801 *chip, int resume) + /* interrupt clear */ + fm801_writew(chip, IRQ_STATUS, + FM801_IRQ_PLAYBACK | FM801_IRQ_CAPTURE | FM801_IRQ_MPU); +- +- return 0; + } + +- + static int snd_fm801_free(struct fm801 *chip) + { + unsigned short cmdw; +@@ -1173,6 +1167,8 @@ static int snd_fm801_free(struct fm801 *chip) + cmdw |= 0x00c3; + fm801_writew(chip, IRQ_MASK, cmdw); + ++ devm_free_irq(&chip->pci->dev, chip->irq, chip); ++ + __end_hw: + #ifdef CONFIG_SND_FM801_TEA575X_BOOL + if (!(chip->tea575x_tuner & TUNER_DISABLED)) { +@@ -1215,7 +1211,21 @@ static int snd_fm801_create(struct snd_card *card, + if ((err = pci_request_regions(pci, "FM801")) < 0) + return err; + chip->port = pci_resource_start(pci, 0); +- if ((tea575x_tuner & TUNER_ONLY) == 0) { ++ ++ if (pci->revision >= 0xb1) /* FM801-AU */ ++ chip->multichannel = 1; ++ ++ if (!(chip->tea575x_tuner & TUNER_ONLY)) { ++ if (reset_codec(chip) < 0) { ++ dev_info(chip->card->dev, ++ "Primary AC'97 codec not found, assume SF64-PCR (tuner-only)\n"); ++ chip->tea575x_tuner = 3 | TUNER_ONLY; ++ } else { ++ snd_fm801_chip_multichannel_init(chip); ++ } ++ } ++ ++ if ((chip->tea575x_tuner & TUNER_ONLY) == 0) { + if (devm_request_irq(&pci->dev, pci->irq, snd_fm801_interrupt, + IRQF_SHARED, KBUILD_MODNAME, chip)) { + dev_err(card->dev, "unable to grab IRQ %d\n", pci->irq); +@@ -1226,12 +1236,7 @@ static int snd_fm801_create(struct snd_card *card, + pci_set_master(pci); + } + +- if (pci->revision >= 0xb1) /* FM801-AU */ +- chip->multichannel = 1; +- +- snd_fm801_chip_init(chip, 0); +- /* init might set tuner access method */ +- tea575x_tuner = chip->tea575x_tuner; ++ snd_fm801_chip_init(chip); + + if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops)) < 0) { + snd_fm801_free(chip); +@@ -1249,14 +1254,16 @@ static int snd_fm801_create(struct snd_card *card, + chip->tea.private_data = chip; + chip->tea.ops 
= &snd_fm801_tea_ops; + sprintf(chip->tea.bus_info, "PCI:%s", pci_name(pci)); +- if ((tea575x_tuner & TUNER_TYPE_MASK) > 0 && +- (tea575x_tuner & TUNER_TYPE_MASK) < 4) { ++ if ((chip->tea575x_tuner & TUNER_TYPE_MASK) > 0 && ++ (chip->tea575x_tuner & TUNER_TYPE_MASK) < 4) { + if (snd_tea575x_init(&chip->tea, THIS_MODULE)) { + dev_err(card->dev, "TEA575x radio not found\n"); + snd_fm801_free(chip); + return -ENODEV; + } +- } else if ((tea575x_tuner & TUNER_TYPE_MASK) == 0) { ++ } else if ((chip->tea575x_tuner & TUNER_TYPE_MASK) == 0) { ++ unsigned int tuner_only = chip->tea575x_tuner & TUNER_ONLY; ++ + /* autodetect tuner connection */ + for (tea575x_tuner = 1; tea575x_tuner <= 3; tea575x_tuner++) { + chip->tea575x_tuner = tea575x_tuner; +@@ -1271,6 +1278,8 @@ static int snd_fm801_create(struct snd_card *card, + dev_err(card->dev, "TEA575x radio not found\n"); + chip->tea575x_tuner = TUNER_DISABLED; + } ++ ++ chip->tea575x_tuner |= tuner_only; + } + if (!(chip->tea575x_tuner & TUNER_DISABLED)) { + strlcpy(chip->tea.card, get_tea575x_gpio(chip)->name, +@@ -1389,7 +1398,13 @@ static int snd_fm801_resume(struct device *dev) + struct fm801 *chip = card->private_data; + int i; + +- snd_fm801_chip_init(chip, 1); ++ if (chip->tea575x_tuner & TUNER_ONLY) { ++ snd_fm801_chip_init(chip); ++ } else { ++ reset_codec(chip); ++ snd_fm801_chip_multichannel_init(chip); ++ snd_fm801_chip_init(chip); ++ } + snd_ac97_resume(chip->ac97); + snd_ac97_resume(chip->ac97_sec); + for (i = 0; i < ARRAY_SIZE(saved_regs); i++) +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index da9f6749b3be..8dd6cf0b8939 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -1977,9 +1977,10 @@ static const struct hdac_io_ops pci_hda_io_ops = { + * some HD-audio PCI entries are exposed without any codecs, and such devices + * should be ignored from the beginning. 
+ */ +-static const struct snd_pci_quirk driver_blacklist[] = { +- SND_PCI_QUIRK(0x1462, 0xcb59, "MSI TRX40 Creator", 0), +- SND_PCI_QUIRK(0x1462, 0xcb60, "MSI TRX40", 0), ++static const struct pci_device_id driver_blacklist[] = { ++ { PCI_DEVICE_SUB(0x1022, 0x1487, 0x1043, 0x874f) }, /* ASUS ROG Zenith II / Strix */ ++ { PCI_DEVICE_SUB(0x1022, 0x1487, 0x1462, 0xcb59) }, /* MSI TRX40 Creator */ ++ { PCI_DEVICE_SUB(0x1022, 0x1487, 0x1462, 0xcb60) }, /* MSI TRX40 */ + {} + }; + +@@ -2002,7 +2003,7 @@ static int azx_probe(struct pci_dev *pci, + bool schedule_probe; + int err; + +- if (snd_pci_quirk_lookup(pci, driver_blacklist)) { ++ if (pci_match_id(driver_blacklist, pci)) { + dev_info(&pci->dev, "Skipping the blacklisted device\n"); + return -ENODEV; + } +diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c +index d46e9ad600b4..06736bea422e 100644 +--- a/sound/soc/fsl/fsl_ssi.c ++++ b/sound/soc/fsl/fsl_ssi.c +@@ -146,6 +146,7 @@ static bool fsl_ssi_volatile_reg(struct device *dev, unsigned int reg) + case CCSR_SSI_SRX1: + case CCSR_SSI_SISR: + case CCSR_SSI_SFCSR: ++ case CCSR_SSI_SACNT: + case CCSR_SSI_SACADD: + case CCSR_SSI_SACDAT: + case CCSR_SSI_SATAG: +@@ -239,8 +240,9 @@ struct fsl_ssi_private { + unsigned int baudclk_streams; + unsigned int bitclk_freq; + +- /*regcache for SFCSR*/ ++ /* regcache for volatile regs */ + u32 regcache_sfcsr; ++ u32 regcache_sacnt; + + /* DMA params */ + struct snd_dmaengine_dai_dma_data dma_params_tx; +@@ -1597,6 +1599,8 @@ static int fsl_ssi_suspend(struct device *dev) + + regmap_read(regs, CCSR_SSI_SFCSR, + &ssi_private->regcache_sfcsr); ++ regmap_read(regs, CCSR_SSI_SACNT, ++ &ssi_private->regcache_sacnt); + + regcache_cache_only(regs, true); + regcache_mark_dirty(regs); +@@ -1615,6 +1619,8 @@ static int fsl_ssi_resume(struct device *dev) + CCSR_SSI_SFCSR_RFWM1_MASK | CCSR_SSI_SFCSR_TFWM1_MASK | + CCSR_SSI_SFCSR_RFWM0_MASK | CCSR_SSI_SFCSR_TFWM0_MASK, + ssi_private->regcache_sfcsr); ++ regmap_write(regs, 
CCSR_SSI_SACNT, ++ ssi_private->regcache_sacnt); + + return regcache_sync(regs); + } +diff --git a/sound/soc/intel/atom/sst/sst_stream.c b/sound/soc/intel/atom/sst/sst_stream.c +index e83da42a8c03..c798f8d4ae43 100644 +--- a/sound/soc/intel/atom/sst/sst_stream.c ++++ b/sound/soc/intel/atom/sst/sst_stream.c +@@ -108,7 +108,7 @@ int sst_alloc_stream_mrfld(struct intel_sst_drv *sst_drv_ctx, void *params) + str_id, pipe_id); + ret = sst_prepare_and_post_msg(sst_drv_ctx, task_id, IPC_CMD, + IPC_IA_ALLOC_STREAM_MRFLD, pipe_id, sizeof(alloc_param), +- &alloc_param, data, true, true, false, true); ++ &alloc_param, &data, true, true, false, true); + + if (ret < 0) { + dev_err(sst_drv_ctx->dev, "FW alloc failed ret %d\n", ret); +diff --git a/sound/soc/tegra/tegra_alc5632.c b/sound/soc/tegra/tegra_alc5632.c +index ba272e21a6fa..deb597f7c302 100644 +--- a/sound/soc/tegra/tegra_alc5632.c ++++ b/sound/soc/tegra/tegra_alc5632.c +@@ -101,12 +101,16 @@ static const struct snd_kcontrol_new tegra_alc5632_controls[] = { + + static int tegra_alc5632_asoc_init(struct snd_soc_pcm_runtime *rtd) + { ++ int ret; + struct tegra_alc5632 *machine = snd_soc_card_get_drvdata(rtd->card); + +- snd_soc_card_jack_new(rtd->card, "Headset Jack", SND_JACK_HEADSET, +- &tegra_alc5632_hs_jack, +- tegra_alc5632_hs_jack_pins, +- ARRAY_SIZE(tegra_alc5632_hs_jack_pins)); ++ ret = snd_soc_card_jack_new(rtd->card, "Headset Jack", ++ SND_JACK_HEADSET, ++ &tegra_alc5632_hs_jack, ++ tegra_alc5632_hs_jack_pins, ++ ARRAY_SIZE(tegra_alc5632_hs_jack_pins)); ++ if (ret) ++ return ret; + + if (gpio_is_valid(machine->gpio_hp_det)) { + tegra_alc5632_hp_jack_gpio.gpio = machine->gpio_hp_det; +diff --git a/tools/perf/util/perf_regs.c b/tools/perf/util/perf_regs.c +index 6b8eb13e14e4..c4023f22f287 100644 +--- a/tools/perf/util/perf_regs.c ++++ b/tools/perf/util/perf_regs.c +@@ -12,18 +12,18 @@ int perf_reg_value(u64 *valp, struct regs_dump *regs, int id) + int i, idx = 0; + u64 mask = regs->mask; + +- if (regs->cache_mask & 
(1 << id)) ++ if (regs->cache_mask & (1ULL << id)) + goto out; + +- if (!(mask & (1 << id))) ++ if (!(mask & (1ULL << id))) + return -EINVAL; + + for (i = 0; i < id; i++) { +- if (mask & (1 << i)) ++ if (mask & (1ULL << i)) + idx++; + } + +- regs->cache_mask |= (1 << id); ++ regs->cache_mask |= (1ULL << id); + regs->cache_regs[id] = regs->regs[idx]; + + out: +diff --git a/tools/testing/selftests/ipc/msgque.c b/tools/testing/selftests/ipc/msgque.c +index 1b2ce334bb3f..47c074d73e61 100644 +--- a/tools/testing/selftests/ipc/msgque.c ++++ b/tools/testing/selftests/ipc/msgque.c +@@ -135,7 +135,7 @@ int dump_queue(struct msgque_data *msgque) + for (kern_id = 0; kern_id < 256; kern_id++) { + ret = msgctl(kern_id, MSG_STAT, &ds); + if (ret < 0) { +- if (errno == -EINVAL) ++ if (errno == EINVAL) + continue; + printf("Failed to get stats for IPC queue with id %d\n", + kern_id);