From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.6 commit in: /
Date: Wed, 10 Jun 2020 19:41:06 +0000 (UTC)
Message-ID: <1591818053.e6b558ca6926f73ffd8c036e6f4096dae8edfb21.mpagano@gentoo>
commit: e6b558ca6926f73ffd8c036e6f4096dae8edfb21
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 10 19:40:53 2020 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 10 19:40:53 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e6b558ca
Linux patch 5.6.18
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1017_linux-5.6.18.patch | 1809 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1813 insertions(+)
diff --git a/0000_README b/0000_README
index 07595c4..fd785d4 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch: 1016_linux-5.6.17.patch
From: http://www.kernel.org
Desc: Linux 5.6.17
+Patch: 1017_linux-5.6.18.patch
+From: http://www.kernel.org
+Desc: Linux 5.6.18
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1017_linux-5.6.18.patch b/1017_linux-5.6.18.patch
new file mode 100644
index 0000000..9169925
--- /dev/null
+++ b/1017_linux-5.6.18.patch
@@ -0,0 +1,1809 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 2e0e3b45d02a..b39531a3c5bc 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -492,6 +492,7 @@ What: /sys/devices/system/cpu/vulnerabilities
+ /sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+ /sys/devices/system/cpu/vulnerabilities/l1tf
+ /sys/devices/system/cpu/vulnerabilities/mds
++ /sys/devices/system/cpu/vulnerabilities/srbds
+ /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+ /sys/devices/system/cpu/vulnerabilities/itlb_multihit
+ Date: January 2018
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index 0795e3c2643f..ca4dbdd9016d 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -14,3 +14,4 @@ are configurable at compile, boot or run time.
+ mds
+ tsx_async_abort
+ multihit.rst
++ special-register-buffer-data-sampling.rst
+diff --git a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
+new file mode 100644
+index 000000000000..47b1b3afac99
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
+@@ -0,0 +1,149 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++SRBDS - Special Register Buffer Data Sampling
++=============================================
++
++SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to
++infer values returned from special register accesses. Special register
++accesses are accesses to off-core registers. According to Intel's evaluation,
++the special register reads that have a security expectation of privacy are
++RDRAND, RDSEED and SGX EGETKEY.
++
++When RDRAND, RDSEED and EGETKEY instructions are used, the data is moved
++to the core through the special register mechanism that is susceptible
++to MDS attacks.
++
++Affected processors
++--------------------
++Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may
++be affected.
++
++A processor is affected by SRBDS if its Family_Model and stepping is
++in the following list, with the exception of the listed processors
++exporting MDS_NO while Intel TSX is available yet not enabled. The
++latter class of processors is only affected when Intel TSX is enabled
++by software using TSX_CTRL_MSR; otherwise they are not affected.
++
++ ============= ============ ========
++ common name Family_Model Stepping
++ ============= ============ ========
++ IvyBridge 06_3AH All
++
++ Haswell 06_3CH All
++ Haswell_L 06_45H All
++ Haswell_G 06_46H All
++
++ Broadwell_G 06_47H All
++ Broadwell 06_3DH All
++
++ Skylake_L 06_4EH All
++ Skylake 06_5EH All
++
++ Kabylake_L 06_8EH <= 0xC
++ Kabylake 06_9EH <= 0xD
++ ============= ============ ========
++
++Related CVEs
++------------
++
++The following CVE entry is related to this SRBDS issue:
++
++ ============== ===== =====================================
++ CVE-2020-0543 SRBDS Special Register Buffer Data Sampling
++ ============== ===== =====================================
++
++Attack scenarios
++----------------
++An unprivileged user can extract values returned from RDRAND and RDSEED
++executed on another core or sibling thread using MDS techniques.
++
++
++Mitigation mechanism
++--------------------
++Intel will release microcode updates that modify the RDRAND, RDSEED, and
++EGETKEY instructions to overwrite secret special register data in the shared
++staging buffer before the secret data can be accessed by another logical
++processor.
++
++During execution of the RDRAND, RDSEED, or EGETKEY instructions, off-core
++accesses from other logical processors will be delayed until the special
++register read is complete and the secret data in the shared staging buffer is
++overwritten.
++
++This has three effects on performance:
++
++#. RDRAND, RDSEED, or EGETKEY instructions have higher latency.
++
++#. Executing RDRAND at the same time on multiple logical processors will be
++ serialized, resulting in an overall reduction in the maximum RDRAND
++ bandwidth.
++
++#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other
++ logical processors that miss their core caches, with an impact similar to
++ legacy locked cache-line-split accesses.
++
++The microcode updates provide an opt-out mechanism (RNGDS_MITG_DIS) to disable
++the mitigation for RDRAND and RDSEED instructions executed outside of Intel
++Software Guard Extensions (Intel SGX) enclaves. On logical processors that
++disable the mitigation using this opt-out mechanism, RDRAND and RDSEED do not
++take longer to execute and do not impact performance of sibling logical
++processors memory accesses. The opt-out mechanism does not affect Intel SGX
++enclaves (including execution of RDRAND or RDSEED inside an enclave, as well
++as EGETKEY execution).
++
++IA32_MCU_OPT_CTRL MSR Definition
++--------------------------------
++Along with the mitigation for this issue, Intel added a new thread-scope
++IA32_MCU_OPT_CTRL MSR (address 0x123). The presence of this MSR and
++RNGDS_MITG_DIS (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0).EDX[SRBDS_CTRL =
++9]==1. This MSR is introduced through the microcode update.
++
++Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor
++disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX
++enclave on that logical processor. Opting out of the mitigation for a
++particular logical processor does not affect the RDRAND and RDSEED mitigations
++for other logical processors.
++
++Note that inside of an Intel SGX enclave, the mitigation is applied regardless
++of the value of RNGDS_MITG_DIS.
++
++Mitigation control on the kernel command line
++---------------------------------------------
++The kernel command line allows control over the SRBDS mitigation at boot time
++with the option "srbds=". The option for this is:
++
++ ============= =============================================================
++ off This option disables SRBDS mitigation for RDRAND and RDSEED on
++ affected platforms.
++ ============= =============================================================
++
++SRBDS System Information
++------------------------
++The Linux kernel provides vulnerability status information through sysfs. For
++SRBDS this can be accessed by the following sysfs file:
++/sys/devices/system/cpu/vulnerabilities/srbds
++
++The possible values contained in this file are:
++
++ ============================== =============================================
++ Not affected Processor not vulnerable
++ Vulnerable Processor vulnerable and mitigation disabled
++ Vulnerable: No microcode Processor vulnerable and microcode is missing
++ mitigation
++ Mitigation: Microcode Processor is vulnerable and mitigation is in
++ effect.
++ Mitigation: TSX disabled Processor is only vulnerable when TSX is
++ enabled while this system was booted with TSX
++ disabled.
++ Unknown: Dependent on
++ hypervisor status Running on virtual guest processor that is
++ affected but with no way to know if host
++ processor is mitigated or vulnerable.
++ ============================== =============================================
++
++SRBDS Default mitigation
++------------------------
++This new microcode serializes processor access during execution of RDRAND or
++RDSEED and ensures that the shared buffer is overwritten before it is released
++for reuse. Use the "srbds=off" kernel command line option to disable the
++mitigation for RDRAND and RDSEED.
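For a quick cross-check of the enumeration and sysfs reporting described
above, a minimal userspace sketch (assuming a GCC/Clang toolchain that
provides __get_cpuid_count; this program is illustrative and not part of
the patch) could look like:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;
        char line[128];
        FILE *f;

        /* CPUID.(EAX=07H,ECX=0):EDX[9] == SRBDS_CTRL enumerates the
         * IA32_MCU_OPT_CTRL MSR (0x123) and its RNGDS_MITG_DIS bit. */
        if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
                printf("SRBDS_CTRL enumerated: %s\n",
                       (edx & (1u << 9)) ? "yes" : "no");

        /* The kernel's summary of the mitigation state, as documented above. */
        f = fopen("/sys/devices/system/cpu/vulnerabilities/srbds", "r");
        if (f) {
                if (fgets(line, sizeof(line), f))
                        printf("srbds: %s", line);
                fclose(f);
        }
        return 0;
}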
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 20aac805e197..bb498e7ae2da 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4659,6 +4659,26 @@
+ spia_pedr=
+ spia_peddr=
+
++ srbds= [X86,INTEL]
++ Control the Special Register Buffer Data Sampling
++ (SRBDS) mitigation.
++
++ Certain CPUs are vulnerable to an MDS-like
++ exploit which can leak bits from the random
++ number generator.
++
++ By default, this issue is mitigated by
++ microcode. However, the microcode fix can cause
++ the RDRAND and RDSEED instructions to become
++ much slower. Among other effects, this will
++ result in reduced throughput from /dev/urandom.
++
++ The microcode mitigation can be disabled with
++ the following option:
++
++ off: Disable mitigation and remove
++ performance impact to RDRAND and RDSEED
++
+ srcutree.counter_wrap_check [KNL]
+ Specifies how frequently to check for
+ grace-period sequence counter wrap for the
+diff --git a/Makefile b/Makefile
+index 8254beb87a7b..2948731a235c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
+index 31c379c1da41..0c814cd9ea42 100644
+--- a/arch/x86/include/asm/cpu_device_id.h
++++ b/arch/x86/include/asm/cpu_device_id.h
+@@ -9,6 +9,36 @@
+
+ #include <linux/mod_devicetable.h>
+
++#define X86_CENTAUR_FAM6_C7_D 0xd
++#define X86_CENTAUR_FAM6_NANO 0xf
++
++#define X86_STEPPINGS(mins, maxs) GENMASK(maxs, mins)
++
++/**
++ * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
++ * @_vendor: The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
++ * The name is expanded to X86_VENDOR_@_vendor
++ * @_family: The family number or X86_FAMILY_ANY
++ * @_model: The model number, model constant or X86_MODEL_ANY
++ * @_steppings: Bitmask for steppings, stepping constant or X86_STEPPING_ANY
++ * @_feature: A X86_FEATURE bit or X86_FEATURE_ANY
++ * @_data: Driver specific data or NULL. The internal storage
++ * format is unsigned long. The supplied value, pointer
++ * etc. is cast to unsigned long internally.
++ *
++ * Backport version to keep the SRBDS pile consistent. No shorter variants
++ * required for this.
++ */
++#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
++ _steppings, _feature, _data) { \
++ .vendor = X86_VENDOR_##_vendor, \
++ .family = _family, \
++ .model = _model, \
++ .steppings = _steppings, \
++ .feature = _feature, \
++ .driver_data = (unsigned long) _data \
++}
++
+ /*
+ * Match specific microcode revisions.
+ *
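As an illustration of how the new matching macro is meant to be used (a
sketch mirroring the blacklist added to arch/x86/kernel/cpu/common.c later
in this patch; the table name is made up), an entry restricted to specific
Kaby Lake steppings would look like:

/* Needs <asm/cpu_device_id.h> and <asm/intel-family.h>. */
static const struct x86_cpu_id example_ids[] __initconst = {
        X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,
                INTEL_FAM6_KABYLAKE_L, X86_STEPPINGS(0x0, 0xC),
                X86_FEATURE_ANY, NULL),
        {}
};

The empty {} entry terminates the table, and X86_FEATURE_ANY means the
match does not additionally require a CPUID feature bit.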
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index f3327cb56edf..69f7dcb1fa5c 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -360,6 +360,7 @@
+ #define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
+ #define X86_FEATURE_FSRM (18*32+ 4) /* Fast Short Rep Mov */
+ #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
++#define X86_FEATURE_SRBDS_CTRL (18*32+ 9) /* "" SRBDS mitigation MSR available */
+ #define X86_FEATURE_MD_CLEAR (18*32+10) /* VERW clears CPU buffers */
+ #define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */
+ #define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */
+@@ -404,5 +405,6 @@
+ #define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
+ #define X86_BUG_TAA X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
+ #define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
++#define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
+
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index d5e517d1c3dd..af64c8e80ff4 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -119,6 +119,10 @@
+ #define TSX_CTRL_RTM_DISABLE BIT(0) /* Disable RTM feature */
+ #define TSX_CTRL_CPUID_CLEAR BIT(1) /* Disable TSX enumeration */
+
++/* SRBDS support */
++#define MSR_IA32_MCU_OPT_CTRL 0x00000123
++#define RNGDS_MITG_DIS BIT(0)
++
+ #define MSR_IA32_SYSENTER_CS 0x00000174
+ #define MSR_IA32_SYSENTER_ESP 0x00000175
+ #define MSR_IA32_SYSENTER_EIP 0x00000176
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index ed54b3b21c39..56978cb06149 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
+ static void __init mds_select_mitigation(void);
+ static void __init mds_print_mitigation(void);
+ static void __init taa_select_mitigation(void);
++static void __init srbds_select_mitigation(void);
+
+ /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+ u64 x86_spec_ctrl_base;
+@@ -108,6 +109,7 @@ void __init check_bugs(void)
+ l1tf_select_mitigation();
+ mds_select_mitigation();
+ taa_select_mitigation();
++ srbds_select_mitigation();
+
+ /*
+ * As MDS and TAA mitigations are inter-related, print MDS
+@@ -397,6 +399,97 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
+ }
+ early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
+
++#undef pr_fmt
++#define pr_fmt(fmt) "SRBDS: " fmt
++
++enum srbds_mitigations {
++ SRBDS_MITIGATION_OFF,
++ SRBDS_MITIGATION_UCODE_NEEDED,
++ SRBDS_MITIGATION_FULL,
++ SRBDS_MITIGATION_TSX_OFF,
++ SRBDS_MITIGATION_HYPERVISOR,
++};
++
++static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
++
++static const char * const srbds_strings[] = {
++ [SRBDS_MITIGATION_OFF] = "Vulnerable",
++ [SRBDS_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode",
++ [SRBDS_MITIGATION_FULL] = "Mitigation: Microcode",
++ [SRBDS_MITIGATION_TSX_OFF] = "Mitigation: TSX disabled",
++ [SRBDS_MITIGATION_HYPERVISOR] = "Unknown: Dependent on hypervisor status",
++};
++
++static bool srbds_off;
++
++void update_srbds_msr(void)
++{
++ u64 mcu_ctrl;
++
++ if (!boot_cpu_has_bug(X86_BUG_SRBDS))
++ return;
++
++ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++ return;
++
++ if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
++ return;
++
++ rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++
++ switch (srbds_mitigation) {
++ case SRBDS_MITIGATION_OFF:
++ case SRBDS_MITIGATION_TSX_OFF:
++ mcu_ctrl |= RNGDS_MITG_DIS;
++ break;
++ case SRBDS_MITIGATION_FULL:
++ mcu_ctrl &= ~RNGDS_MITG_DIS;
++ break;
++ default:
++ break;
++ }
++
++ wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++}
++
++static void __init srbds_select_mitigation(void)
++{
++ u64 ia32_cap;
++
++ if (!boot_cpu_has_bug(X86_BUG_SRBDS))
++ return;
++
++ /*
++ * Check to see if this is one of the MDS_NO systems supporting
++ * TSX that are only exposed to SRBDS when TSX is enabled.
++ */
++ ia32_cap = x86_read_arch_cap_msr();
++ if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
++ srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
++ else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++ srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
++ else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
++ srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
++ else if (cpu_mitigations_off() || srbds_off)
++ srbds_mitigation = SRBDS_MITIGATION_OFF;
++
++ update_srbds_msr();
++ pr_info("%s\n", srbds_strings[srbds_mitigation]);
++}
++
++static int __init srbds_parse_cmdline(char *str)
++{
++ if (!str)
++ return -EINVAL;
++
++ if (!boot_cpu_has_bug(X86_BUG_SRBDS))
++ return 0;
++
++ srbds_off = !strcmp(str, "off");
++ return 0;
++}
++early_param("srbds", srbds_parse_cmdline);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt) "Spectre V1 : " fmt
+
+@@ -1528,6 +1621,11 @@ static char *ibpb_state(void)
+ return "";
+ }
+
++static ssize_t srbds_show_state(char *buf)
++{
++ return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ char *buf, unsigned int bug)
+ {
+@@ -1572,6 +1670,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ case X86_BUG_ITLB_MULTIHIT:
+ return itlb_multihit_show_state(buf);
+
++ case X86_BUG_SRBDS:
++ return srbds_show_state(buf);
++
+ default:
+ break;
+ }
+@@ -1618,4 +1719,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
+ {
+ return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
+ }
++
++ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 4cdb123ff66a..0567448124e1 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1075,9 +1075,30 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ {}
+ };
+
+-static bool __init cpu_matches(unsigned long which)
++#define VULNBL_INTEL_STEPPINGS(model, steppings, issues) \
++ X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6, \
++ INTEL_FAM6_##model, steppings, \
++ X86_FEATURE_ANY, issues)
++
++#define SRBDS BIT(0)
++
++static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
++ VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(HASWELL, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(HASWELL_L, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(HASWELL_G, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(BROADWELL_G, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(BROADWELL, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS),
++ VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x0, 0xC), SRBDS),
++ VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x0, 0xD), SRBDS),
++ {}
++};
++
++static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
+ {
+- const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
++ const struct x86_cpu_id *m = x86_match_cpu(table);
+
+ return m && !!(m->driver_data & which);
+ }
+@@ -1097,31 +1118,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ u64 ia32_cap = x86_read_arch_cap_msr();
+
+ /* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
+- if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
++ if (!cpu_matches(cpu_vuln_whitelist, NO_ITLB_MULTIHIT) &&
++ !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+ setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
+
+- if (cpu_matches(NO_SPECULATION))
++ if (cpu_matches(cpu_vuln_whitelist, NO_SPECULATION))
+ return;
+
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
+
+- if (!cpu_matches(NO_SPECTRE_V2))
++ if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2))
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+
+- if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
++ if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
++ !(ia32_cap & ARCH_CAP_SSB_NO) &&
+ !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+ setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+
+ if (ia32_cap & ARCH_CAP_IBRS_ALL)
+ setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+
+- if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
++ if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
++ !(ia32_cap & ARCH_CAP_MDS_NO)) {
+ setup_force_cpu_bug(X86_BUG_MDS);
+- if (cpu_matches(MSBDS_ONLY))
++ if (cpu_matches(cpu_vuln_whitelist, MSBDS_ONLY))
+ setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
+ }
+
+- if (!cpu_matches(NO_SWAPGS))
++ if (!cpu_matches(cpu_vuln_whitelist, NO_SWAPGS))
+ setup_force_cpu_bug(X86_BUG_SWAPGS);
+
+ /*
+@@ -1139,7 +1163,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
+ setup_force_cpu_bug(X86_BUG_TAA);
+
+- if (cpu_matches(NO_MELTDOWN))
++ /*
++ * SRBDS affects CPUs which support RDRAND or RDSEED and are listed
++ * in the vulnerability blacklist.
++ */
++ if ((cpu_has(c, X86_FEATURE_RDRAND) ||
++ cpu_has(c, X86_FEATURE_RDSEED)) &&
++ cpu_matches(cpu_vuln_blacklist, SRBDS))
++ setup_force_cpu_bug(X86_BUG_SRBDS);
++
++ if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ return;
+
+ /* Rogue Data Cache Load? No! */
+@@ -1148,7 +1181,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+
+ setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
+
+- if (cpu_matches(NO_L1TF))
++ if (cpu_matches(cpu_vuln_whitelist, NO_L1TF))
+ return;
+
+ setup_force_cpu_bug(X86_BUG_L1TF);
+@@ -1589,6 +1622,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
+ mtrr_ap_init();
+ validate_apic_and_package_id(c);
+ x86_spec_ctrl_setup_ap();
++ update_srbds_msr();
+ }
+
+ static __init int setup_noclflush(char *arg)
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 37fdefd14f28..fb538fccd24c 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -77,6 +77,7 @@ extern void detect_ht(struct cpuinfo_x86 *c);
+ unsigned int aperfmperf_get_khz(int cpu);
+
+ extern void x86_spec_ctrl_setup_ap(void);
++extern void update_srbds_msr(void);
+
+ extern u64 x86_read_arch_cap_msr(void);
+
+diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
+index 6dd78d8235e4..2f163e6646b6 100644
+--- a/arch/x86/kernel/cpu/match.c
++++ b/arch/x86/kernel/cpu/match.c
+@@ -34,13 +34,18 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
+ const struct x86_cpu_id *m;
+ struct cpuinfo_x86 *c = &boot_cpu_data;
+
+- for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
++ for (m = match;
++ m->vendor | m->family | m->model | m->steppings | m->feature;
++ m++) {
+ if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
+ continue;
+ if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
+ continue;
+ if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
+ continue;
++ if (m->steppings != X86_STEPPING_ANY &&
++ !(BIT(c->x86_stepping) & m->steppings))
++ continue;
+ if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
+ continue;
+ return m;
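To make the new stepping test concrete (values are hypothetical), the
stepping masks built with X86_STEPPINGS() are plain GENMASK() ranges, and a
CPU matches an entry when the bit for its stepping falls inside the mask:

/* Sketch of the comparison x86_match_cpu() now performs. */
u16 steppings = X86_STEPPINGS(0x0, 0xC);   /* GENMASK(0xC, 0x0) == 0x1fff */
u8 stepping = 0x9;                         /* hypothetical c->x86_stepping */

if (BIT(stepping) & steppings)
        ;       /* stepping 0x9 lies within 0x0..0xC, so the entry matches */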
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 6265871a4af2..f00da44ae6fe 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -567,6 +567,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
+ return sprintf(buf, "Not affected\n");
+ }
+
++ssize_t __weak cpu_show_srbds(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ return sprintf(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+@@ -575,6 +581,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
+ static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
+ static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
+ static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
++static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
+
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_meltdown.attr,
+@@ -585,6 +592,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_mds.attr,
+ &dev_attr_tsx_async_abort.attr,
+ &dev_attr_itlb_multihit.attr,
++ &dev_attr_srbds.attr,
+ NULL
+ };
+
+diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c
+index 2df88d2b880a..0e2068ec068b 100644
+--- a/drivers/iio/adc/stm32-adc-core.c
++++ b/drivers/iio/adc/stm32-adc-core.c
+@@ -65,12 +65,14 @@ struct stm32_adc_priv;
+ * @clk_sel: clock selection routine
+ * @max_clk_rate_hz: maximum analog clock rate (Hz, from datasheet)
+ * @has_syscfg: SYSCFG capability flags
++ * @num_irqs: number of interrupt lines
+ */
+ struct stm32_adc_priv_cfg {
+ const struct stm32_adc_common_regs *regs;
+ int (*clk_sel)(struct platform_device *, struct stm32_adc_priv *);
+ u32 max_clk_rate_hz;
+ unsigned int has_syscfg;
++ unsigned int num_irqs;
+ };
+
+ /**
+@@ -375,21 +377,15 @@ static int stm32_adc_irq_probe(struct platform_device *pdev,
+ struct device_node *np = pdev->dev.of_node;
+ unsigned int i;
+
+- for (i = 0; i < STM32_ADC_MAX_ADCS; i++) {
++ /*
++ * Interrupt(s) must be provided, depending on the compatible:
++ * - stm32f4/h7 shares a common interrupt line.
++ * - stm32mp1 has one line per ADC
++ */
++ for (i = 0; i < priv->cfg->num_irqs; i++) {
+ priv->irq[i] = platform_get_irq(pdev, i);
+- if (priv->irq[i] < 0) {
+- /*
+- * At least one interrupt must be provided, make others
+- * optional:
+- * - stm32f4/h7 shares a common interrupt.
+- * - stm32mp1, has one line per ADC (either for ADC1,
+- * ADC2 or both).
+- */
+- if (i && priv->irq[i] == -ENXIO)
+- continue;
+-
++ if (priv->irq[i] < 0)
+ return priv->irq[i];
+- }
+ }
+
+ priv->domain = irq_domain_add_simple(np, STM32_ADC_MAX_ADCS, 0,
+@@ -400,9 +396,7 @@ static int stm32_adc_irq_probe(struct platform_device *pdev,
+ return -ENOMEM;
+ }
+
+- for (i = 0; i < STM32_ADC_MAX_ADCS; i++) {
+- if (priv->irq[i] < 0)
+- continue;
++ for (i = 0; i < priv->cfg->num_irqs; i++) {
+ irq_set_chained_handler(priv->irq[i], stm32_adc_irq_handler);
+ irq_set_handler_data(priv->irq[i], priv);
+ }
+@@ -420,11 +414,8 @@ static void stm32_adc_irq_remove(struct platform_device *pdev,
+ irq_dispose_mapping(irq_find_mapping(priv->domain, hwirq));
+ irq_domain_remove(priv->domain);
+
+- for (i = 0; i < STM32_ADC_MAX_ADCS; i++) {
+- if (priv->irq[i] < 0)
+- continue;
++ for (i = 0; i < priv->cfg->num_irqs; i++)
+ irq_set_chained_handler(priv->irq[i], NULL);
+- }
+ }
+
+ static int stm32_adc_core_switches_supply_en(struct stm32_adc_priv *priv,
+@@ -817,6 +808,7 @@ static const struct stm32_adc_priv_cfg stm32f4_adc_priv_cfg = {
+ .regs = &stm32f4_adc_common_regs,
+ .clk_sel = stm32f4_adc_clk_sel,
+ .max_clk_rate_hz = 36000000,
++ .num_irqs = 1,
+ };
+
+ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
+@@ -824,6 +816,7 @@ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
+ .clk_sel = stm32h7_adc_clk_sel,
+ .max_clk_rate_hz = 36000000,
+ .has_syscfg = HAS_VBOOSTER,
++ .num_irqs = 1,
+ };
+
+ static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = {
+@@ -831,6 +824,7 @@ static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = {
+ .clk_sel = stm32h7_adc_clk_sel,
+ .max_clk_rate_hz = 40000000,
+ .has_syscfg = HAS_VBOOSTER | HAS_ANASWVDD,
++ .num_irqs = 2,
+ };
+
+ static const struct of_device_id stm32_adc_of_match[] = {
+diff --git a/drivers/iio/chemical/pms7003.c b/drivers/iio/chemical/pms7003.c
+index 23c9ab252470..07bb90d72434 100644
+--- a/drivers/iio/chemical/pms7003.c
++++ b/drivers/iio/chemical/pms7003.c
+@@ -73,6 +73,11 @@ struct pms7003_state {
+ struct pms7003_frame frame;
+ struct completion frame_ready;
+ struct mutex lock; /* must be held whenever state gets touched */
++ /* Used to construct scan to push to the IIO buffer */
++ struct {
++ u16 data[3]; /* PM1, PM2P5, PM10 */
++ s64 ts;
++ } scan;
+ };
+
+ static int pms7003_do_cmd(struct pms7003_state *state, enum pms7003_cmd cmd)
+@@ -104,7 +109,6 @@ static irqreturn_t pms7003_trigger_handler(int irq, void *p)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct pms7003_state *state = iio_priv(indio_dev);
+ struct pms7003_frame *frame = &state->frame;
+- u16 data[3 + 1 + 4]; /* PM1, PM2P5, PM10, padding, timestamp */
+ int ret;
+
+ mutex_lock(&state->lock);
+@@ -114,12 +118,15 @@ static irqreturn_t pms7003_trigger_handler(int irq, void *p)
+ goto err;
+ }
+
+- data[PM1] = pms7003_get_pm(frame->data + PMS7003_PM1_OFFSET);
+- data[PM2P5] = pms7003_get_pm(frame->data + PMS7003_PM2P5_OFFSET);
+- data[PM10] = pms7003_get_pm(frame->data + PMS7003_PM10_OFFSET);
++ state->scan.data[PM1] =
++ pms7003_get_pm(frame->data + PMS7003_PM1_OFFSET);
++ state->scan.data[PM2P5] =
++ pms7003_get_pm(frame->data + PMS7003_PM2P5_OFFSET);
++ state->scan.data[PM10] =
++ pms7003_get_pm(frame->data + PMS7003_PM10_OFFSET);
+ mutex_unlock(&state->lock);
+
+- iio_push_to_buffers_with_timestamp(indio_dev, data,
++ iio_push_to_buffers_with_timestamp(indio_dev, &state->scan,
+ iio_get_time_ns(indio_dev));
+ err:
+ iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/chemical/sps30.c b/drivers/iio/chemical/sps30.c
+index acb9f8ecbb3d..a88c1fb875a0 100644
+--- a/drivers/iio/chemical/sps30.c
++++ b/drivers/iio/chemical/sps30.c
+@@ -230,15 +230,18 @@ static irqreturn_t sps30_trigger_handler(int irq, void *p)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct sps30_state *state = iio_priv(indio_dev);
+ int ret;
+- s32 data[4 + 2]; /* PM1, PM2P5, PM4, PM10, timestamp */
++ struct {
++ s32 data[4]; /* PM1, PM2P5, PM4, PM10 */
++ s64 ts;
++ } scan;
+
+ mutex_lock(&state->lock);
+- ret = sps30_do_meas(state, data, 4);
++ ret = sps30_do_meas(state, scan.data, ARRAY_SIZE(scan.data));
+ mutex_unlock(&state->lock);
+ if (ret)
+ goto err;
+
+- iio_push_to_buffers_with_timestamp(indio_dev, data,
++ iio_push_to_buffers_with_timestamp(indio_dev, &scan,
+ iio_get_time_ns(indio_dev));
+ err:
+ iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/light/vcnl4000.c b/drivers/iio/light/vcnl4000.c
+index e5b00a6611ac..7384a3ffcac4 100644
+--- a/drivers/iio/light/vcnl4000.c
++++ b/drivers/iio/light/vcnl4000.c
+@@ -193,7 +193,6 @@ static int vcnl4000_measure(struct vcnl4000_data *data, u8 req_mask,
+ u8 rdy_mask, u8 data_reg, int *val)
+ {
+ int tries = 20;
+- __be16 buf;
+ int ret;
+
+ mutex_lock(&data->vcnl4000_lock);
+@@ -220,13 +219,12 @@ static int vcnl4000_measure(struct vcnl4000_data *data, u8 req_mask,
+ goto fail;
+ }
+
+- ret = i2c_smbus_read_i2c_block_data(data->client,
+- data_reg, sizeof(buf), (u8 *) &buf);
++ ret = i2c_smbus_read_word_swapped(data->client, data_reg);
+ if (ret < 0)
+ goto fail;
+
+ mutex_unlock(&data->vcnl4000_lock);
+- *val = be16_to_cpu(buf);
++ *val = ret;
+
+ return 0;
+
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index b74580e87be8..5d9db8d042c1 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -100,13 +100,17 @@ static void felix_vlan_add(struct dsa_switch *ds, int port,
+ const struct switchdev_obj_port_vlan *vlan)
+ {
+ struct ocelot *ocelot = ds->priv;
++ u16 flags = vlan->flags;
+ u16 vid;
+ int err;
+
++ if (dsa_is_cpu_port(ds, port))
++ flags &= ~BRIDGE_VLAN_INFO_UNTAGGED;
++
+ for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+ err = ocelot_vlan_add(ocelot, port, vid,
+- vlan->flags & BRIDGE_VLAN_INFO_PVID,
+- vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED);
++ flags & BRIDGE_VLAN_INFO_PVID,
++ flags & BRIDGE_VLAN_INFO_UNTAGGED);
+ if (err) {
+ dev_err(ds->dev, "Failed to add VLAN %d to port %d: %d\n",
+ vid, port, err);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 4659c205cc01..46ff83408d05 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1824,7 +1824,7 @@ static int mlx5e_flower_parse_meta(struct net_device *filter_dev,
+ flow_rule_match_meta(rule, &match);
+ if (match.mask->ingress_ifindex != 0xFFFFFFFF) {
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported ingress ifindex mask");
+- return -EINVAL;
++ return -EOPNOTSUPP;
+ }
+
+ ingress_dev = __dev_get_by_index(dev_net(filter_dev),
+@@ -1832,13 +1832,13 @@ static int mlx5e_flower_parse_meta(struct net_device *filter_dev,
+ if (!ingress_dev) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Can't find the ingress port to match on");
+- return -EINVAL;
++ return -ENOENT;
+ }
+
+ if (ingress_dev != filter_dev) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Can't match on the ingress filter port");
+- return -EINVAL;
++ return -EOPNOTSUPP;
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index cf09cfc33234..cdc566768a07 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -416,12 +416,6 @@ static void del_sw_ns(struct fs_node *node)
+
+ static void del_sw_prio(struct fs_node *node)
+ {
+- struct mlx5_flow_root_namespace *root_ns;
+- struct mlx5_flow_namespace *ns;
+-
+- fs_get_obj(ns, node);
+- root_ns = container_of(ns, struct mlx5_flow_root_namespace, ns);
+- mutex_destroy(&root_ns->chain_lock);
+ kfree(node);
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 4a08e4eef283..20e12e14cfa8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1552,6 +1552,22 @@ static void shutdown(struct pci_dev *pdev)
+ mlx5_pci_disable_device(dev);
+ }
+
++static int mlx5_suspend(struct pci_dev *pdev, pm_message_t state)
++{
++ struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
++
++ mlx5_unload_one(dev, false);
++
++ return 0;
++}
++
++static int mlx5_resume(struct pci_dev *pdev)
++{
++ struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
++
++ return mlx5_load_one(dev, false);
++}
++
+ static const struct pci_device_id mlx5_core_pci_table[] = {
+ { PCI_VDEVICE(MELLANOX, PCI_DEVICE_ID_MELLANOX_CONNECTIB) },
+ { PCI_VDEVICE(MELLANOX, 0x1012), MLX5_PCI_DEV_IS_VF}, /* Connect-IB VF */
+@@ -1595,6 +1611,8 @@ static struct pci_driver mlx5_core_driver = {
+ .id_table = mlx5_core_pci_table,
+ .probe = init_one,
+ .remove = remove_one,
++ .suspend = mlx5_suspend,
++ .resume = mlx5_resume,
+ .shutdown = shutdown,
+ .err_handler = &mlx5_err_handler,
+ .sriov_configure = mlx5_core_sriov_configure,
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+index 7ca5c1becfcf..c5dcfdd69773 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+@@ -1440,7 +1440,8 @@ __nfp_flower_update_merge_stats(struct nfp_app *app,
+ ctx_id = be32_to_cpu(sub_flow->meta.host_ctx_id);
+ priv->stats[ctx_id].pkts += pkts;
+ priv->stats[ctx_id].bytes += bytes;
+- max_t(u64, priv->stats[ctx_id].used, used);
++ priv->stats[ctx_id].used = max_t(u64, used,
++ priv->stats[ctx_id].used);
+ }
+ }
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index d564459290ce..bcb39012d34d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -630,7 +630,8 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
+ config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+ ptp_v2 = PTP_TCR_TSVER2ENA;
+ snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
+- ts_event_en = PTP_TCR_TSEVNTENA;
++ if (priv->synopsys_id != DWMAC_CORE_5_10)
++ ts_event_en = PTP_TCR_TSEVNTENA;
+ ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA;
+ ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA;
+ ptp_over_ethernet = PTP_TCR_TSIPENA;
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 4bb8552a00d3..4a2c7355be63 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1324,6 +1324,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */
+ {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */
+ {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */
+diff --git a/drivers/nfc/st21nfca/dep.c b/drivers/nfc/st21nfca/dep.c
+index 60acdfd1cb8c..856a10c293f8 100644
+--- a/drivers/nfc/st21nfca/dep.c
++++ b/drivers/nfc/st21nfca/dep.c
+@@ -173,8 +173,10 @@ static int st21nfca_tm_send_atr_res(struct nfc_hci_dev *hdev,
+ memcpy(atr_res->gbi, atr_req->gbi, gb_len);
+ r = nfc_set_remote_general_bytes(hdev->ndev, atr_res->gbi,
+ gb_len);
+- if (r < 0)
++ if (r < 0) {
++ kfree_skb(skb);
+ return r;
++ }
+ }
+
+ info->dep_info.curr_nfc_dep_pni = 0;
+diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
+index d057f1bfb2e9..8a91717600be 100644
+--- a/drivers/nvmem/qfprom.c
++++ b/drivers/nvmem/qfprom.c
+@@ -27,25 +27,11 @@ static int qfprom_reg_read(void *context,
+ return 0;
+ }
+
+-static int qfprom_reg_write(void *context,
+- unsigned int reg, void *_val, size_t bytes)
+-{
+- struct qfprom_priv *priv = context;
+- u8 *val = _val;
+- int i = 0, words = bytes;
+-
+- while (words--)
+- writeb(*val++, priv->base + reg + i++);
+-
+- return 0;
+-}
+-
+ static struct nvmem_config econfig = {
+ .name = "qfprom",
+ .stride = 1,
+ .word_size = 1,
+ .reg_read = qfprom_reg_read,
+- .reg_write = qfprom_reg_write,
+ };
+
+ static int qfprom_probe(struct platform_device *pdev)
+diff --git a/drivers/staging/rtl8712/wifi.h b/drivers/staging/rtl8712/wifi.h
+index be731f1a2209..91b65731fcaa 100644
+--- a/drivers/staging/rtl8712/wifi.h
++++ b/drivers/staging/rtl8712/wifi.h
+@@ -440,7 +440,7 @@ static inline unsigned char *get_hdr_bssid(unsigned char *pframe)
+ /* block-ack parameters */
+ #define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
+ #define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
+-#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
++#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFC0
+ #define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
+ #define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800
+
+@@ -532,13 +532,6 @@ struct ieee80211_ht_addt_info {
+ #define IEEE80211_HT_IE_NON_GF_STA_PRSNT 0x0004
+ #define IEEE80211_HT_IE_NON_HT_STA_PRSNT 0x0010
+
+-/* block-ack parameters */
+-#define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
+-#define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
+-#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
+-#define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
+-#define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800
+-
+ /*
+ * A-PMDU buffer sizes
+ * According to IEEE802.11n spec size varies from 8K to 64K (in powers of 2)
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index 436cc51c92c3..cdcc64ea2554 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -371,15 +371,14 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ * tty fields and return the kref reference.
+ */
+ if (rc) {
+- tty_port_tty_set(&hp->port, NULL);
+- tty->driver_data = NULL;
+- tty_port_put(&hp->port);
+ printk(KERN_ERR "hvc_open: request_irq failed with rc %d.\n", rc);
+- } else
++ } else {
+ /* We are ready... raise DTR/RTS */
+ if (C_BAUD(tty))
+ if (hp->ops->dtr_rts)
+ hp->ops->dtr_rts(hp, 1);
++ tty_port_set_initialized(&hp->port, true);
++ }
+
+ /* Force wakeup of the polling thread */
+ hvc_kick();
+@@ -389,22 +388,12 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+
+ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ {
+- struct hvc_struct *hp;
++ struct hvc_struct *hp = tty->driver_data;
+ unsigned long flags;
+
+ if (tty_hung_up_p(filp))
+ return;
+
+- /*
+- * No driver_data means that this close was issued after a failed
+- * hvc_open by the tty layer's release_dev() function and we can just
+- * exit cleanly because the kref reference wasn't made.
+- */
+- if (!tty->driver_data)
+- return;
+-
+- hp = tty->driver_data;
+-
+ spin_lock_irqsave(&hp->port.lock, flags);
+
+ if (--hp->port.count == 0) {
+@@ -412,6 +401,9 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ /* We are done with the tty pointer now. */
+ tty_port_tty_set(&hp->port, NULL);
+
++ if (!tty_port_initialized(&hp->port))
++ return;
++
+ if (C_HUPCL(tty))
+ if (hp->ops->dtr_rts)
+ hp->ops->dtr_rts(hp, 0);
+@@ -428,6 +420,7 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ * waking periodically to check chars_in_buffer().
+ */
+ tty_wait_until_sent(tty, HVC_CLOSE_WAIT);
++ tty_port_set_initialized(&hp->port, false);
+ } else {
+ if (hp->port.count < 0)
+ printk(KERN_ERR "hvc_close %X: oops, count is %d\n",
+diff --git a/drivers/tty/serial/8250/Kconfig b/drivers/tty/serial/8250/Kconfig
+index f16824bbb573..c9da6c142c6f 100644
+--- a/drivers/tty/serial/8250/Kconfig
++++ b/drivers/tty/serial/8250/Kconfig
+@@ -63,6 +63,7 @@ config SERIAL_8250_PNP
+ config SERIAL_8250_16550A_VARIANTS
+ bool "Support for variants of the 16550A serial port"
+ depends on SERIAL_8250
++ default !X86
+ help
+ The 8250 driver can probe for many variants of the venerable 16550A
+ serial port. Doing so takes additional time at boot.
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index 15d33fa0c925..568b2171f335 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -127,7 +127,11 @@ static DEFINE_SPINLOCK(func_buf_lock); /* guard 'func_buf' and friends */
+ static unsigned long key_down[BITS_TO_LONGS(KEY_CNT)]; /* keyboard key bitmap */
+ static unsigned char shift_down[NR_SHIFT]; /* shift state counters.. */
+ static bool dead_key_next;
+-static int npadch = -1; /* -1 or number assembled on pad */
++
++/* Handles a number being assembled on the number pad */
++static bool npadch_active;
++static unsigned int npadch_value;
++
+ static unsigned int diacr;
+ static char rep; /* flag telling character repeat */
+
+@@ -845,12 +849,12 @@ static void k_shift(struct vc_data *vc, unsigned char value, char up_flag)
+ shift_state &= ~(1 << value);
+
+ /* kludge */
+- if (up_flag && shift_state != old_state && npadch != -1) {
++ if (up_flag && shift_state != old_state && npadch_active) {
+ if (kbd->kbdmode == VC_UNICODE)
+- to_utf8(vc, npadch);
++ to_utf8(vc, npadch_value);
+ else
+- put_queue(vc, npadch & 0xff);
+- npadch = -1;
++ put_queue(vc, npadch_value & 0xff);
++ npadch_active = false;
+ }
+ }
+
+@@ -868,7 +872,7 @@ static void k_meta(struct vc_data *vc, unsigned char value, char up_flag)
+
+ static void k_ascii(struct vc_data *vc, unsigned char value, char up_flag)
+ {
+- int base;
++ unsigned int base;
+
+ if (up_flag)
+ return;
+@@ -882,10 +886,12 @@ static void k_ascii(struct vc_data *vc, unsigned char value, char up_flag)
+ base = 16;
+ }
+
+- if (npadch == -1)
+- npadch = value;
+- else
+- npadch = npadch * base + value;
++ if (!npadch_active) {
++ npadch_value = 0;
++ npadch_active = true;
++ }
++
++ npadch_value = npadch_value * base + value;
+ }
+
+ static void k_lock(struct vc_data *vc, unsigned char value, char up_flag)
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 8ca72d80501d..f67088bb8218 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -584,7 +584,7 @@ static void acm_softint(struct work_struct *work)
+ }
+
+ if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) {
+- for (i = 0; i < ACM_NR; i++)
++ for (i = 0; i < acm->rx_buflimit; i++)
+ if (test_and_clear_bit(i, &acm->urbs_in_error_delay))
+ acm_submit_read_urb(acm, i, GFP_NOIO);
+ }
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index f616fb489542..f38d24fff166 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -2877,6 +2877,13 @@ static int musb_resume(struct device *dev)
+ musb_enable_interrupts(musb);
+ musb_platform_enable(musb);
+
++ /* session might be disabled in suspend */
++ if (musb->port_mode == MUSB_HOST &&
++ !(musb->ops->quirks & MUSB_PRESERVE_SESSION)) {
++ devctl |= MUSB_DEVCTL_SESSION;
++ musb_writeb(musb->mregs, MUSB_DEVCTL, devctl);
++ }
++
+ spin_lock_irqsave(&musb->lock, flags);
+ error = musb_run_resume_work(musb);
+ if (error)
+diff --git a/drivers/usb/musb/musb_debugfs.c b/drivers/usb/musb/musb_debugfs.c
+index 7b6281ab62ed..30a89aa8a3e7 100644
+--- a/drivers/usb/musb/musb_debugfs.c
++++ b/drivers/usb/musb/musb_debugfs.c
+@@ -168,6 +168,11 @@ static ssize_t musb_test_mode_write(struct file *file,
+ u8 test;
+ char buf[24];
+
++ memset(buf, 0x00, sizeof(buf));
++
++ if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
++ return -EFAULT;
++
+ pm_runtime_get_sync(musb->controller);
+ test = musb_readb(musb->mregs, MUSB_TESTMODE);
+ if (test) {
+@@ -176,11 +181,6 @@ static ssize_t musb_test_mode_write(struct file *file,
+ goto ret;
+ }
+
+- memset(buf, 0x00, sizeof(buf));
+-
+- if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+- return -EFAULT;
+-
+ if (strstarts(buf, "force host full-speed"))
+ test = MUSB_TEST_FORCE_HOST | MUSB_TEST_FORCE_FS;
+
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index c5ecdcd51ffc..89675ee29645 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -73,6 +73,8 @@
+ #define CH341_LCR_CS6 0x01
+ #define CH341_LCR_CS5 0x00
+
++#define CH341_QUIRK_LIMITED_PRESCALER BIT(0)
++
+ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x4348, 0x5523) },
+ { USB_DEVICE(0x1a86, 0x7523) },
+@@ -87,6 +89,7 @@ struct ch341_private {
+ u8 mcr;
+ u8 msr;
+ u8 lcr;
++ unsigned long quirks;
+ };
+
+ static void ch341_set_termios(struct tty_struct *tty,
+@@ -159,9 +162,11 @@ static const speed_t ch341_min_rates[] = {
+ * 2 <= div <= 256 if fact = 0, or
+ * 9 <= div <= 256 if fact = 1
+ */
+-static int ch341_get_divisor(speed_t speed)
++static int ch341_get_divisor(struct ch341_private *priv)
+ {
+ unsigned int fact, div, clk_div;
++ speed_t speed = priv->baud_rate;
++ bool force_fact0 = false;
+ int ps;
+
+ /*
+@@ -187,8 +192,12 @@ static int ch341_get_divisor(speed_t speed)
+ clk_div = CH341_CLK_DIV(ps, fact);
+ div = CH341_CLKRATE / (clk_div * speed);
+
++ /* Some devices require a lower base clock if ps < 3. */
++ if (ps < 3 && (priv->quirks & CH341_QUIRK_LIMITED_PRESCALER))
++ force_fact0 = true;
++
+ /* Halve base clock (fact = 0) if required. */
+- if (div < 9 || div > 255) {
++ if (div < 9 || div > 255 || force_fact0) {
+ div /= 2;
+ clk_div *= 2;
+ fact = 0;
+@@ -227,7 +236,7 @@ static int ch341_set_baudrate_lcr(struct usb_device *dev,
+ if (!priv->baud_rate)
+ return -EINVAL;
+
+- val = ch341_get_divisor(priv->baud_rate);
++ val = ch341_get_divisor(priv);
+ if (val < 0)
+ return -EINVAL;
+
+@@ -308,6 +317,54 @@ out: kfree(buffer);
+ return r;
+ }
+
++static int ch341_detect_quirks(struct usb_serial_port *port)
++{
++ struct ch341_private *priv = usb_get_serial_port_data(port);
++ struct usb_device *udev = port->serial->dev;
++ const unsigned int size = 2;
++ unsigned long quirks = 0;
++ char *buffer;
++ int r;
++
++ buffer = kmalloc(size, GFP_KERNEL);
++ if (!buffer)
++ return -ENOMEM;
++
++ /*
++ * A subset of CH34x devices does not support all features. The
++ * prescaler is limited and there is no support for sending a RS232
++ * break condition. A read failure when trying to set up the latter is
++ * used to detect these devices.
++ */
++ r = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), CH341_REQ_READ_REG,
++ USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
++ CH341_REG_BREAK, 0, buffer, size, DEFAULT_TIMEOUT);
++ if (r == -EPIPE) {
++ dev_dbg(&port->dev, "break control not supported\n");
++ quirks = CH341_QUIRK_LIMITED_PRESCALER;
++ r = 0;
++ goto out;
++ }
++
++ if (r != size) {
++ if (r >= 0)
++ r = -EIO;
++ dev_err(&port->dev, "failed to read break control: %d\n", r);
++ goto out;
++ }
++
++ r = 0;
++out:
++ kfree(buffer);
++
++ if (quirks) {
++ dev_dbg(&port->dev, "enabling quirk flags: 0x%02lx\n", quirks);
++ priv->quirks |= quirks;
++ }
++
++ return r;
++}
++
+ static int ch341_port_probe(struct usb_serial_port *port)
+ {
+ struct ch341_private *priv;
+@@ -330,6 +387,11 @@ static int ch341_port_probe(struct usb_serial_port *port)
+ goto error;
+
+ usb_set_serial_port_data(port, priv);
++
++ r = ch341_detect_quirks(port);
++ if (r < 0)
++ goto error;
++
+ return 0;
+
+ error: kfree(priv);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 8bfffca3e4ae..254a8bbeea67 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1157,6 +1157,10 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1031, 0xff), /* Telit LE910C1-EUX */
++ .driver_info = NCTRL(0) | RSVD(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1033, 0xff), /* Telit LE910C1-EUX (ECM) */
++ .driver_info = NCTRL(0) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
+ .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index ce0401d3137f..d147feae83e6 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -173,6 +173,7 @@ static const struct usb_device_id id_table[] = {
+ {DEVICE_SWI(0x413c, 0x81b3)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
+ {DEVICE_SWI(0x413c, 0x81b5)}, /* Dell Wireless 5811e QDL */
+ {DEVICE_SWI(0x413c, 0x81b6)}, /* Dell Wireless 5811e QDL */
++ {DEVICE_SWI(0x413c, 0x81cb)}, /* Dell Wireless 5816e QDL */
+ {DEVICE_SWI(0x413c, 0x81cc)}, /* Dell Wireless 5816e */
+ {DEVICE_SWI(0x413c, 0x81cf)}, /* Dell Wireless 5819 */
+ {DEVICE_SWI(0x413c, 0x81d0)}, /* Dell Wireless 5819 */
+diff --git a/drivers/usb/serial/usb_wwan.c b/drivers/usb/serial/usb_wwan.c
+index 13be21aad2f4..4b9845807bee 100644
+--- a/drivers/usb/serial/usb_wwan.c
++++ b/drivers/usb/serial/usb_wwan.c
+@@ -270,6 +270,10 @@ static void usb_wwan_indat_callback(struct urb *urb)
+ if (status) {
+ dev_dbg(dev, "%s: nonzero status: %d on endpoint %02x.\n",
+ __func__, status, endpoint);
++
++ /* don't resubmit on fatal errors */
++ if (status == -ESHUTDOWN || status == -ENOENT)
++ return;
+ } else {
+ if (urb->actual_length) {
+ tty_insert_flip_string(&port->port, data,
+diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
+index e3596db077dc..953d7ca01eb6 100644
+--- a/include/linux/mod_devicetable.h
++++ b/include/linux/mod_devicetable.h
+@@ -657,6 +657,10 @@ struct mips_cdmm_device_id {
+ /*
+ * MODULE_DEVICE_TABLE expects this struct to be called x86cpu_device_id.
+ * Although gcc seems to ignore this error, clang fails without this define.
++ *
++ * Note: The ordering of the struct is different from upstream because the
++ * static initializers in kernels < 5.7 still use C89 style while upstream
++ * has been converted to proper C99 initializers.
+ */
+ #define x86cpu_device_id x86_cpu_id
+ struct x86_cpu_id {
+@@ -665,6 +669,7 @@ struct x86_cpu_id {
+ __u16 model;
+ __u16 feature; /* bit index */
+ kernel_ulong_t driver_data;
++ __u16 steppings;
+ };
+
+ #define X86_FEATURE_MATCH(x) \
+@@ -673,6 +678,7 @@ struct x86_cpu_id {
+ #define X86_VENDOR_ANY 0xffff
+ #define X86_FAMILY_ANY 0
+ #define X86_MODEL_ANY 0
++#define X86_STEPPING_ANY 0
+ #define X86_FEATURE_ANY 0 /* Same as FPU, you can't test for that */
+
+ /*
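The field ordering matters because of the C89-style positional initializers
mentioned in the comment above. A hypothetical pre-existing entry (not code
from the tree) keeps working unchanged, since the appended member defaults
to zero, which is defined as X86_STEPPING_ANY:

/* vendor, family, model, feature, driver_data; steppings is implicitly 0. */
static const struct x86_cpu_id legacy_entry = {
        X86_VENDOR_INTEL, 6, 0x3a, X86_FEATURE_ANY, 0
};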
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 6f6ade63b04c..e8a924eeea3d 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -31,6 +31,7 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ {
+ unsigned int gso_type = 0;
+ unsigned int thlen = 0;
++ unsigned int p_off = 0;
+ unsigned int ip_proto;
+
+ if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+@@ -68,7 +69,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ if (!skb_partial_csum_set(skb, start, off))
+ return -EINVAL;
+
+- if (skb_transport_offset(skb) + thlen > skb_headlen(skb))
++ p_off = skb_transport_offset(skb) + thlen;
++ if (p_off > skb_headlen(skb))
+ return -EINVAL;
+ } else {
+ /* gso packets without NEEDS_CSUM do not set transport_offset.
+@@ -92,23 +94,32 @@ retry:
+ return -EINVAL;
+ }
+
+- if (keys.control.thoff + thlen > skb_headlen(skb) ||
++ p_off = keys.control.thoff + thlen;
++ if (p_off > skb_headlen(skb) ||
+ keys.basic.ip_proto != ip_proto)
+ return -EINVAL;
+
+ skb_set_transport_header(skb, keys.control.thoff);
++ } else if (gso_type) {
++ p_off = thlen;
++ if (p_off > skb_headlen(skb))
++ return -EINVAL;
+ }
+ }
+
+ if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+ u16 gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size);
++ struct skb_shared_info *shinfo = skb_shinfo(skb);
+
+- skb_shinfo(skb)->gso_size = gso_size;
+- skb_shinfo(skb)->gso_type = gso_type;
++ /* Too small packets are not really GSO ones. */
++ if (skb->len - p_off > gso_size) {
++ shinfo->gso_size = gso_size;
++ shinfo->gso_type = gso_type;
+
+- /* Header must be checked, and gso_segs computed. */
+- skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
+- skb_shinfo(skb)->gso_segs = 0;
++ /* Header must be checked, and gso_segs computed. */
++ shinfo->gso_type |= SKB_GSO_DODGY;
++ shinfo->gso_segs = 0;
++ }
+ }
+
+ return 0;
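A numeric illustration of the new "too small to be GSO" guard (the values
are made up): with 1400 bytes of skb data and the transport payload starting
at offset 66, only 1334 bytes of payload remain, which fit inside a single
1448-byte segment, so gso_size and gso_type are intentionally left unset:

/* Sketch of the comparison added above, with hypothetical values. */
unsigned int len = 1400, p_off = 66, gso_size = 1448;

if (len - p_off > gso_size)
        ;       /* genuine GSO packet: payload spans several segments */
else
        ;       /* single-segment packet: do not mark it as GSO */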
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index ece7e13f6e4a..cc2095607c74 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -867,10 +867,6 @@ static int prepare_uprobe(struct uprobe *uprobe, struct file *file,
+ if (ret)
+ goto out;
+
+- /* uprobe_write_opcode() assumes we don't cross page boundary */
+- BUG_ON((uprobe->offset & ~PAGE_MASK) +
+- UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);
+-
+ smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */
+ set_bit(UPROBE_COPY_INSN, &uprobe->flags);
+
+@@ -1166,6 +1162,15 @@ static int __uprobe_register(struct inode *inode, loff_t offset,
+ if (offset > i_size_read(inode))
+ return -EINVAL;
+
++ /*
++ * This ensures that copy_from_page(), copy_to_page() and
++ * __update_ref_ctr() can't cross page boundary.
++ */
++ if (!IS_ALIGNED(offset, UPROBE_SWBP_INSN_SIZE))
++ return -EINVAL;
++ if (!IS_ALIGNED(ref_ctr_offset, sizeof(short)))
++ return -EINVAL;
++
+ retry:
+ uprobe = alloc_uprobe(inode, offset, ref_ctr_offset);
+ if (!uprobe)
+@@ -2014,6 +2019,9 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
+ uprobe_opcode_t opcode;
+ int result;
+
++ if (WARN_ON_ONCE(!IS_ALIGNED(vaddr, UPROBE_SWBP_INSN_SIZE)))
++ return -EINVAL;
++
+ pagefault_disable();
+ result = __get_user(opcode, (uprobe_opcode_t __user *)vaddr);
+ pagefault_enable();
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index 458dc6eb5a68..a27d034c85cc 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -276,6 +276,7 @@ static struct in_device *inetdev_init(struct net_device *dev)
+ err = devinet_sysctl_register(in_dev);
+ if (err) {
+ in_dev->dead = 1;
++ neigh_parms_release(&arp_tbl, in_dev->arp_parms);
+ in_dev_put(in_dev);
+ in_dev = NULL;
+ goto out;
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index fcb53ed1c4fb..6d7ef78c88af 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1458,6 +1458,9 @@ static int l2tp_validate_socket(const struct sock *sk, const struct net *net,
+ if (sk->sk_type != SOCK_DGRAM)
+ return -EPROTONOSUPPORT;
+
++ if (sk->sk_family != PF_INET && sk->sk_family != PF_INET6)
++ return -EPROTONOSUPPORT;
++
+ if ((encap == L2TP_ENCAPTYPE_UDP && sk->sk_protocol != IPPROTO_UDP) ||
+ (encap == L2TP_ENCAPTYPE_IP && sk->sk_protocol != IPPROTO_L2TP))
+ return -EPROTONOSUPPORT;
+diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c
+index 0d7c887a2b75..955662a6dee7 100644
+--- a/net/l2tp/l2tp_ip.c
++++ b/net/l2tp/l2tp_ip.c
+@@ -20,7 +20,6 @@
+ #include <net/icmp.h>
+ #include <net/udp.h>
+ #include <net/inet_common.h>
+-#include <net/inet_hashtables.h>
+ #include <net/tcp_states.h>
+ #include <net/protocol.h>
+ #include <net/xfrm.h>
+@@ -209,15 +208,31 @@ discard:
+ return 0;
+ }
+
+-static int l2tp_ip_open(struct sock *sk)
++static int l2tp_ip_hash(struct sock *sk)
+ {
+- /* Prevent autobind. We don't have ports. */
+- inet_sk(sk)->inet_num = IPPROTO_L2TP;
++ if (sk_unhashed(sk)) {
++ write_lock_bh(&l2tp_ip_lock);
++ sk_add_node(sk, &l2tp_ip_table);
++ write_unlock_bh(&l2tp_ip_lock);
++ }
++ return 0;
++}
+
++static void l2tp_ip_unhash(struct sock *sk)
++{
++ if (sk_unhashed(sk))
++ return;
+ write_lock_bh(&l2tp_ip_lock);
+- sk_add_node(sk, &l2tp_ip_table);
++ sk_del_node_init(sk);
+ write_unlock_bh(&l2tp_ip_lock);
++}
++
++static int l2tp_ip_open(struct sock *sk)
++{
++ /* Prevent autobind. We don't have ports. */
++ inet_sk(sk)->inet_num = IPPROTO_L2TP;
+
++ l2tp_ip_hash(sk);
+ return 0;
+ }
+
+@@ -594,8 +609,8 @@ static struct proto l2tp_ip_prot = {
+ .sendmsg = l2tp_ip_sendmsg,
+ .recvmsg = l2tp_ip_recvmsg,
+ .backlog_rcv = l2tp_ip_backlog_recv,
+- .hash = inet_hash,
+- .unhash = inet_unhash,
++ .hash = l2tp_ip_hash,
++ .unhash = l2tp_ip_unhash,
+ .obj_size = sizeof(struct l2tp_ip_sock),
+ #ifdef CONFIG_COMPAT
+ .compat_setsockopt = compat_ip_setsockopt,
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index d148766f40d1..0fa694bd3f6a 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -20,8 +20,6 @@
+ #include <net/icmp.h>
+ #include <net/udp.h>
+ #include <net/inet_common.h>
+-#include <net/inet_hashtables.h>
+-#include <net/inet6_hashtables.h>
+ #include <net/tcp_states.h>
+ #include <net/protocol.h>
+ #include <net/xfrm.h>
+@@ -222,15 +220,31 @@ discard:
+ return 0;
+ }
+
+-static int l2tp_ip6_open(struct sock *sk)
++static int l2tp_ip6_hash(struct sock *sk)
+ {
+- /* Prevent autobind. We don't have ports. */
+- inet_sk(sk)->inet_num = IPPROTO_L2TP;
++ if (sk_unhashed(sk)) {
++ write_lock_bh(&l2tp_ip6_lock);
++ sk_add_node(sk, &l2tp_ip6_table);
++ write_unlock_bh(&l2tp_ip6_lock);
++ }
++ return 0;
++}
+
++static void l2tp_ip6_unhash(struct sock *sk)
++{
++ if (sk_unhashed(sk))
++ return;
+ write_lock_bh(&l2tp_ip6_lock);
+- sk_add_node(sk, &l2tp_ip6_table);
++ sk_del_node_init(sk);
+ write_unlock_bh(&l2tp_ip6_lock);
++}
++
++static int l2tp_ip6_open(struct sock *sk)
++{
++ /* Prevent autobind. We don't have ports. */
++ inet_sk(sk)->inet_num = IPPROTO_L2TP;
+
++ l2tp_ip6_hash(sk);
+ return 0;
+ }
+
+@@ -728,8 +742,8 @@ static struct proto l2tp_ip6_prot = {
+ .sendmsg = l2tp_ip6_sendmsg,
+ .recvmsg = l2tp_ip6_recvmsg,
+ .backlog_rcv = l2tp_ip6_backlog_recv,
+- .hash = inet6_hash,
+- .unhash = inet_unhash,
++ .hash = l2tp_ip6_hash,
++ .unhash = l2tp_ip6_unhash,
+ .obj_size = sizeof(struct l2tp_ip6_sock),
+ #ifdef CONFIG_COMPAT
+ .compat_setsockopt = compat_ipv6_setsockopt,
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 3c19a8efdcea..ddeb840acd29 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -920,6 +920,14 @@ static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ int err;
+
+ lock_sock(sock->sk);
++ if (sock->state != SS_UNCONNECTED && msk->subflow) {
++ /* pending connection or invalid state, let existing subflow
++ * cope with that
++ */
++ ssock = msk->subflow;
++ goto do_connect;
++ }
++
+ ssock = __mptcp_socket_create(msk, TCP_SYN_SENT);
+ if (IS_ERR(ssock)) {
+ err = PTR_ERR(ssock);
+@@ -934,9 +942,17 @@ static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ mptcp_subflow_ctx(ssock->sk)->request_mptcp = 0;
+ #endif
+
++do_connect:
+ err = ssock->ops->connect(ssock, uaddr, addr_len, flags);
+- inet_sk_state_store(sock->sk, inet_sk_state_load(ssock->sk));
+- mptcp_copy_inaddrs(sock->sk, ssock->sk);
++ sock->state = ssock->state;
++
++ /* on successful connect, the msk state will be moved to established by
++ * subflow_finish_connect()
++ */
++ if (!err || err == EINPROGRESS)
++ mptcp_copy_inaddrs(sock->sk, ssock->sk);
++ else
++ inet_sk_state_store(sock->sk, inet_sk_state_load(ssock->sk));
+
+ unlock:
+ release_sock(sock->sk);
+diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
+index 214657eb3dfd..6675ec591356 100644
+--- a/net/sched/sch_fq_pie.c
++++ b/net/sched/sch_fq_pie.c
+@@ -298,9 +298,9 @@ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt,
+ goto flow_error;
+ }
+ q->flows_cnt = nla_get_u32(tb[TCA_FQ_PIE_FLOWS]);
+- if (!q->flows_cnt || q->flows_cnt > 65536) {
++ if (!q->flows_cnt || q->flows_cnt >= 65536) {
+ NL_SET_ERR_MSG_MOD(extack,
+- "Number of flows must be < 65536");
++ "Number of flows must range in [1..65535]");
+ goto flow_error;
+ }
+ }
+diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c
+index c82dbdcf13f2..77d5c36a8991 100644
+--- a/net/sctp/ulpevent.c
++++ b/net/sctp/ulpevent.c
+@@ -343,6 +343,9 @@ void sctp_ulpevent_nofity_peer_addr_change(struct sctp_transport *transport,
+ struct sockaddr_storage addr;
+ struct sctp_ulpevent *event;
+
++ if (asoc->state < SCTP_STATE_ESTABLISHED)
++ return;
++
+ memset(&addr, 0, sizeof(struct sockaddr_storage));
+ memcpy(&addr, &transport->ipaddr, transport->af_specific->sockaddr_len);
+
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index a5f28708e0e7..626bf9044418 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1408,7 +1408,7 @@ static int vsock_accept(struct socket *sock, struct socket *newsock, int flags,
+ /* Wait for children sockets to appear; these are the new sockets
+ * created upon connection establishment.
+ */
+- timeout = sock_sndtimeo(listener, flags & O_NONBLOCK);
++ timeout = sock_rcvtimeo(listener, flags & O_NONBLOCK);
+ prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE);
+
+ while ((connected = vsock_dequeue_accept(listener)) == NULL &&
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index f3c4bab2f737..cfab9403a9c4 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -1128,6 +1128,14 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
+
+ lock_sock(sk);
+
++ /* Check if sk has been released before lock_sock */
++ if (sk->sk_shutdown == SHUTDOWN_MASK) {
++ (void)virtio_transport_reset_no_sock(t, pkt);
++ release_sock(sk);
++ sock_put(sk);
++ goto free_pkt;
++ }
++
+ /* Update CID in case it has changed after a transport reset event */
+ vsk->local_addr.svm_cid = dst.svm_cid;
+
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
+new file mode 100644
+index 000000000000..1cda2e11b3ad
+--- /dev/null
++++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
+@@ -0,0 +1,21 @@
++[
++ {
++ "id": "83be",
++ "name": "Create FQ-PIE with invalid number of flows",
++ "category": [
++ "qdisc",
++ "fq_pie"
++ ],
++ "setup": [
++ "$IP link add dev $DUMMY type dummy || /bin/true"
++ ],
++ "cmdUnderTest": "$TC qdisc add dev $DUMMY root fq_pie flows 65536",
++ "expExitCode": "2",
++ "verifyCmd": "$TC qdisc show dev $DUMMY",
++ "matchPattern": "qdisc",
++ "matchCount": "0",
++ "teardown": [
++ "$IP link del dev $DUMMY"
++ ]
++ }
++]