From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id BB19A138334 for ; Sun, 14 Jul 2019 15:48:50 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 800F5E081E; Sun, 14 Jul 2019 15:48:49 +0000 (UTC) Received: from smtp.gentoo.org (woodpecker.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 3AF27E081E for ; Sun, 14 Jul 2019 15:48:49 +0000 (UTC) Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 87E7C347B19 for ; Sun, 14 Jul 2019 15:48:47 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 9124F4D3 for ; Sun, 14 Jul 2019 15:48:45 +0000 (UTC) From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1563119304.7ce0954a9d6752e81c5252af7a7aa7c2e5e09723.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:5.1 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1017_linux-5.1.18.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: 7ce0954a9d6752e81c5252af7a7aa7c2e5e09723 X-VCS-Branch: 5.1 Date: Sun, 14 Jul 2019 15:48:45 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: a6a806c7-fdae-496b-bc96-64c953d8c1c1 X-Archives-Hash: 2e7fc61f860e9ed3e5cf73f523983a06 commit: 7ce0954a9d6752e81c5252af7a7aa7c2e5e09723 Author: Mike Pagano gentoo org> AuthorDate: Sun Jul 14 15:48:24 2019 +0000 Commit: Mike Pagano gentoo org> CommitDate: Sun Jul 14 15:48:24 2019 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7ce0954a Linux patch 5.1.18 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1017_linux-5.1.18.patch | 6282 +++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 6286 insertions(+) diff --git a/0000_README b/0000_README index f6eec27..83bea0b 100644 --- a/0000_README +++ b/0000_README @@ -111,6 +111,10 @@ Patch: 1016_linux-5.1.17.patch From: https://www.kernel.org Desc: Linux 5.1.17 +Patch: 1017_linux-5.1.18.patch +From: https://www.kernel.org +Desc: Linux 5.1.18 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1017_linux-5.1.18.patch b/1017_linux-5.1.18.patch new file mode 100644 index 0000000..99403c2 --- /dev/null +++ b/1017_linux-5.1.18.patch @@ -0,0 +1,6282 @@ +diff --git a/Documentation/ABI/testing/sysfs-class-net-qmi b/Documentation/ABI/testing/sysfs-class-net-qmi +index 7122d6264c49..c310db4ccbc2 100644 +--- a/Documentation/ABI/testing/sysfs-class-net-qmi ++++ b/Documentation/ABI/testing/sysfs-class-net-qmi +@@ -29,7 +29,7 @@ Contact: Bjørn Mork + Description: + Unsigned integer. 
+ +- Write a number ranging from 1 to 127 to add a qmap mux ++ Write a number ranging from 1 to 254 to add a qmap mux + based network device, supported by recent Qualcomm based + modems. + +@@ -46,5 +46,5 @@ Contact: Bjørn Mork + Description: + Unsigned integer. + +- Write a number ranging from 1 to 127 to delete a previously ++ Write a number ranging from 1 to 254 to delete a previously + created qmap mux based network device. +diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst +index ffc064c1ec68..49311f3da6f2 100644 +--- a/Documentation/admin-guide/hw-vuln/index.rst ++++ b/Documentation/admin-guide/hw-vuln/index.rst +@@ -9,5 +9,6 @@ are configurable at compile, boot or run time. + .. toctree:: + :maxdepth: 1 + ++ spectre + l1tf + mds +diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst +new file mode 100644 +index 000000000000..25f3b2532198 +--- /dev/null ++++ b/Documentation/admin-guide/hw-vuln/spectre.rst +@@ -0,0 +1,697 @@ ++.. SPDX-License-Identifier: GPL-2.0 ++ ++Spectre Side Channels ++===================== ++ ++Spectre is a class of side channel attacks that exploit branch prediction ++and speculative execution on modern CPUs to read memory, possibly ++bypassing access controls. Speculative execution side channel exploits ++do not modify memory but attempt to infer privileged data in the memory. ++ ++This document covers Spectre variant 1 and Spectre variant 2. ++ ++Affected processors ++------------------- ++ ++Speculative execution side channel methods affect a wide range of modern ++high performance processors, since most modern high speed processors ++use branch prediction and speculative execution. ++ ++The following CPUs are vulnerable: ++ ++ - Intel Core, Atom, Pentium, and Xeon processors ++ ++ - AMD Phenom, EPYC, and Zen processors ++ ++ - IBM POWER and zSeries processors ++ ++ - Higher end ARM processors ++ ++ - Apple CPUs ++ ++ - Higher end MIPS CPUs ++ ++ - Likely most other high performance CPUs. Contact your CPU vendor for details. ++ ++Whether a processor is affected or not can be read out from the Spectre ++vulnerability files in sysfs. See :ref:`spectre_sys_info`. ++ ++Related CVEs ++------------ ++ ++The following CVE entries describe Spectre variants: ++ ++ ============= ======================= ================= ++ CVE-2017-5753 Bounds check bypass Spectre variant 1 ++ CVE-2017-5715 Branch target injection Spectre variant 2 ++ ============= ======================= ================= ++ ++Problem ++------- ++ ++CPUs use speculative operations to improve performance. That may leave ++traces of memory accesses or computations in the processor's caches, ++buffers, and branch predictors. Malicious software may be able to ++influence the speculative execution paths, and then use the side effects ++of the speculative execution in the CPUs' caches and buffers to infer ++privileged data touched during the speculative execution. ++ ++Spectre variant 1 attacks take advantage of speculative execution of ++conditional branches, while Spectre variant 2 attacks use speculative ++execution of indirect branches to leak privileged memory. ++See :ref:`[1] ` :ref:`[5] ` :ref:`[7] ` ++:ref:`[10] ` :ref:`[11] `. ++ ++Spectre variant 1 (Bounds Check Bypass) ++--------------------------------------- ++ ++The bounds check bypass attack :ref:`[2] ` takes advantage ++of speculative execution that bypasses conditional branch instructions ++used for memory access bounds check (e.g. 
checking if the index of an ++array results in memory access within a valid range). This results in ++memory accesses to invalid memory (with out-of-bound index) that are ++done speculatively before validation checks resolve. Such speculative ++memory accesses can leave side effects, creating side channels which ++leak information to the attacker. ++ ++There are some extensions of Spectre variant 1 attacks for reading data ++over the network, see :ref:`[12] `. However such attacks ++are difficult, low bandwidth, fragile, and are considered low risk. ++ ++Spectre variant 2 (Branch Target Injection) ++------------------------------------------- ++ ++The branch target injection attack takes advantage of speculative ++execution of indirect branches :ref:`[3] `. The indirect ++branch predictors inside the processor used to guess the target of ++indirect branches can be influenced by an attacker, causing gadget code ++to be speculatively executed, thus exposing sensitive data touched by ++the victim. The side effects left in the CPU's caches during speculative ++execution can be measured to infer data values. ++ ++.. _poison_btb: ++ ++In Spectre variant 2 attacks, the attacker can steer speculative indirect ++branches in the victim to gadget code by poisoning the branch target ++buffer of a CPU used for predicting indirect branch addresses. Such ++poisoning could be done by indirect branching into existing code, ++with the address offset of the indirect branch under the attacker's ++control. Since the branch prediction on impacted hardware does not ++fully disambiguate branch address and uses the offset for prediction, ++this could cause privileged code's indirect branch to jump to a gadget ++code with the same offset. ++ ++The most useful gadgets take an attacker-controlled input parameter (such ++as a register value) so that the memory read can be controlled. Gadgets ++without input parameters might be possible, but the attacker would have ++very little control over what memory can be read, reducing the risk of ++the attack revealing useful data. ++ ++One other variant 2 attack vector is for the attacker to poison the ++return stack buffer (RSB) :ref:`[13] ` to cause speculative ++subroutine return instruction execution to go to a gadget. An attacker's ++imbalanced subroutine call instructions might "poison" entries in the ++return stack buffer which are later consumed by a victim's subroutine ++return instructions. This attack can be mitigated by flushing the return ++stack buffer on context switch, or virtual machine (VM) exit. ++ ++On systems with simultaneous multi-threading (SMT), attacks are possible ++from the sibling thread, as level 1 cache and branch target buffer ++(BTB) may be shared between hardware threads in a CPU core. A malicious ++program running on the sibling thread may influence its peer's BTB to ++steer its indirect branch speculations to gadget code, and measure the ++speculative execution's side effects left in level 1 cache to infer the ++victim's data. ++ ++Attack scenarios ++---------------- ++ ++The following list of attack scenarios have been anticipated, but may ++not cover all possible attack vectors. ++ ++1. A user process attacking the kernel ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ The attacker passes a parameter to the kernel via a register or ++ via a known address in memory during a syscall. Such parameter may ++ be used later by the kernel as an index to an array or to derive ++ a pointer for a Spectre variant 1 attack. 
The index or pointer ++ is invalid, but bound checks are bypassed in the code branch taken ++ for speculative execution. This could cause privileged memory to be ++ accessed and leaked. ++ ++ For kernel code that has been identified where data pointers could ++ potentially be influenced for Spectre attacks, new "nospec" accessor ++ macros are used to prevent speculative loading of data. ++ ++ Spectre variant 2 attacker can :ref:`poison ` the branch ++ target buffer (BTB) before issuing syscall to launch an attack. ++ After entering the kernel, the kernel could use the poisoned branch ++ target buffer on indirect jump and jump to gadget code in speculative ++ execution. ++ ++ If an attacker tries to control the memory addresses leaked during ++ speculative execution, he would also need to pass a parameter to the ++ gadget, either through a register or a known address in memory. After ++ the gadget has executed, he can measure the side effect. ++ ++ The kernel can protect itself against consuming poisoned branch ++ target buffer entries by using return trampolines (also known as ++ "retpoline") :ref:`[3] ` :ref:`[9] ` for all ++ indirect branches. Return trampolines trap speculative execution paths ++ to prevent jumping to gadget code during speculative execution. ++ x86 CPUs with Enhanced Indirect Branch Restricted Speculation ++ (Enhanced IBRS) available in hardware should use the feature to ++ mitigate Spectre variant 2 instead of retpoline. Enhanced IBRS is ++ more efficient than retpoline. ++ ++ There may be gadget code in firmware which could be exploited with ++ Spectre variant 2 attack by a rogue user process. To mitigate such ++ attacks on x86, Indirect Branch Restricted Speculation (IBRS) feature ++ is turned on before the kernel invokes any firmware code. ++ ++2. A user process attacking another user process ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ A malicious user process can try to attack another user process, ++ either via a context switch on the same hardware thread, or from the ++ sibling hyperthread sharing a physical processor core on simultaneous ++ multi-threading (SMT) system. ++ ++ Spectre variant 1 attacks generally require passing parameters ++ between the processes, which needs a data passing relationship, such ++ as remote procedure calls (RPC). Those parameters are used in gadget ++ code to derive invalid data pointers accessing privileged memory in ++ the attacked process. ++ ++ Spectre variant 2 attacks can be launched from a rogue process by ++ :ref:`poisoning ` the branch target buffer. This can ++ influence the indirect branch targets for a victim process that either ++ runs later on the same hardware thread, or running concurrently on ++ a sibling hardware thread sharing the same physical core. ++ ++ A user process can protect itself against Spectre variant 2 attacks ++ by using the prctl() syscall to disable indirect branch speculation ++ for itself. An administrator can also cordon off an unsafe process ++ from polluting the branch target buffer by disabling the process's ++ indirect branch speculation. This comes with a performance cost ++ from not using indirect branch speculation and clearing the branch ++ target buffer. When SMT is enabled on x86, for a process that has ++ indirect branch speculation disabled, Single Threaded Indirect Branch ++ Predictors (STIBP) :ref:`[4] ` are turned on to prevent the ++ sibling thread from controlling branch target buffer. 
In addition, ++ the Indirect Branch Prediction Barrier (IBPB) is issued to clear the ++ branch target buffer when context switching to and from such process. ++ ++ On x86, the return stack buffer is stuffed on context switch. ++ This prevents the branch target buffer from being used for branch ++ prediction when the return stack buffer underflows while switching to ++ a deeper call stack. Any poisoned entries in the return stack buffer ++ left by the previous process will also be cleared. ++ ++ User programs should use address space randomization to make attacks ++ more difficult (Set /proc/sys/kernel/randomize_va_space = 1 or 2). ++ ++3. A virtualized guest attacking the host ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ The attack mechanism is similar to how user processes attack the ++ kernel. The kernel is entered via hyper-calls or other virtualization ++ exit paths. ++ ++ For Spectre variant 1 attacks, rogue guests can pass parameters ++ (e.g. in registers) via hyper-calls to derive invalid pointers to ++ speculate into privileged memory after entering the kernel. For places ++ where such kernel code has been identified, nospec accessor macros ++ are used to stop speculative memory access. ++ ++ For Spectre variant 2 attacks, rogue guests can :ref:`poison ++ ` the branch target buffer or return stack buffer, causing ++ the kernel to jump to gadget code in the speculative execution paths. ++ ++ To mitigate variant 2, the host kernel can use return trampolines ++ for indirect branches to bypass the poisoned branch target buffer, ++ and flushing the return stack buffer on VM exit. This prevents rogue ++ guests from affecting indirect branching in the host kernel. ++ ++ To protect host processes from rogue guests, host processes can have ++ indirect branch speculation disabled via prctl(). The branch target ++ buffer is cleared before context switching to such processes. ++ ++4. A virtualized guest attacking other guest ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ A rogue guest may attack another guest to get data accessible by the ++ other guest. ++ ++ Spectre variant 1 attacks are possible if parameters can be passed ++ between guests. This may be done via mechanisms such as shared memory ++ or message passing. Such parameters could be used to derive data ++ pointers to privileged data in guest. The privileged data could be ++ accessed by gadget code in the victim's speculation paths. ++ ++ Spectre variant 2 attacks can be launched from a rogue guest by ++ :ref:`poisoning ` the branch target buffer or the return ++ stack buffer. Such poisoned entries could be used to influence ++ speculation execution paths in the victim guest. ++ ++ Linux kernel mitigates attacks to other guests running in the same ++ CPU hardware thread by flushing the return stack buffer on VM exit, ++ and clearing the branch target buffer before switching to a new guest. ++ ++ If SMT is used, Spectre variant 2 attacks from an untrusted guest ++ in the sibling hyperthread can be mitigated by the administrator, ++ by turning off the unsafe guest's indirect branch speculation via ++ prctl(). A guest can also protect itself by turning on microcode ++ based mitigations (such as IBPB or STIBP on x86) within the guest. ++ ++.. _spectre_sys_info: ++ ++Spectre system information ++-------------------------- ++ ++The Linux kernel provides a sysfs interface to enumerate the current ++mitigation status of the system for Spectre: whether the system is ++vulnerable, and which mitigations are active. 
++ ++The sysfs file showing Spectre variant 1 mitigation status is: ++ ++ /sys/devices/system/cpu/vulnerabilities/spectre_v1 ++ ++The possible values in this file are: ++ ++ ======================================= ================================= ++ 'Mitigation: __user pointer sanitation' Protection in kernel on a case by ++ case base with explicit pointer ++ sanitation. ++ ======================================= ================================= ++ ++However, the protections are put in place on a case by case basis, ++and there is no guarantee that all possible attack vectors for Spectre ++variant 1 are covered. ++ ++The spectre_v2 kernel file reports if the kernel has been compiled with ++retpoline mitigation or if the CPU has hardware mitigation, and if the ++CPU has support for additional process-specific mitigation. ++ ++This file also reports CPU features enabled by microcode to mitigate ++attack between user processes: ++ ++1. Indirect Branch Prediction Barrier (IBPB) to add additional ++ isolation between processes of different users. ++2. Single Thread Indirect Branch Predictors (STIBP) to add additional ++ isolation between CPU threads running on the same core. ++ ++These CPU features may impact performance when used and can be enabled ++per process on a case-by-case base. ++ ++The sysfs file showing Spectre variant 2 mitigation status is: ++ ++ /sys/devices/system/cpu/vulnerabilities/spectre_v2 ++ ++The possible values in this file are: ++ ++ - Kernel status: ++ ++ ==================================== ================================= ++ 'Not affected' The processor is not vulnerable ++ 'Vulnerable' Vulnerable, no mitigation ++ 'Mitigation: Full generic retpoline' Software-focused mitigation ++ 'Mitigation: Full AMD retpoline' AMD-specific software mitigation ++ 'Mitigation: Enhanced IBRS' Hardware-focused mitigation ++ ==================================== ================================= ++ ++ - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is ++ used to protect against Spectre variant 2 attacks when calling firmware (x86 only). ++ ++ ========== ============================================================= ++ 'IBRS_FW' Protection against user program attacks when calling firmware ++ ========== ============================================================= ++ ++ - Indirect branch prediction barrier (IBPB) status for protection between ++ processes of different users. This feature can be controlled through ++ prctl() per process, or through kernel command line options. This is ++ an x86 only feature. For more details see below. ++ ++ =================== ======================================================== ++ 'IBPB: disabled' IBPB unused ++ 'IBPB: always-on' Use IBPB on all tasks ++ 'IBPB: conditional' Use IBPB on SECCOMP or indirect branch restricted tasks ++ =================== ======================================================== ++ ++ - Single threaded indirect branch prediction (STIBP) status for protection ++ between different hyper threads. This feature can be controlled through ++ prctl per process, or through kernel command line options. This is x86 ++ only feature. For more details see below. 
++ ++ ==================== ======================================================== ++ 'STIBP: disabled' STIBP unused ++ 'STIBP: forced' Use STIBP on all tasks ++ 'STIBP: conditional' Use STIBP on SECCOMP or indirect branch restricted tasks ++ ==================== ======================================================== ++ ++ - Return stack buffer (RSB) protection status: ++ ++ ============= =========================================== ++ 'RSB filling' Protection of RSB on context switch enabled ++ ============= =========================================== ++ ++Full mitigation might require a microcode update from the CPU ++vendor. When the necessary microcode is not available, the kernel will ++report vulnerability. ++ ++Turning on mitigation for Spectre variant 1 and Spectre variant 2 ++----------------------------------------------------------------- ++ ++1. Kernel mitigation ++^^^^^^^^^^^^^^^^^^^^ ++ ++ For the Spectre variant 1, vulnerable kernel code (as determined ++ by code audit or scanning tools) is annotated on a case by case ++ basis to use nospec accessor macros for bounds clipping :ref:`[2] ++ ` to avoid any usable disclosure gadgets. However, it may ++ not cover all attack vectors for Spectre variant 1. ++ ++ For Spectre variant 2 mitigation, the compiler turns indirect calls or ++ jumps in the kernel into equivalent return trampolines (retpolines) ++ :ref:`[3] ` :ref:`[9] ` to go to the target ++ addresses. Speculative execution paths under retpolines are trapped ++ in an infinite loop to prevent any speculative execution jumping to ++ a gadget. ++ ++ To turn on retpoline mitigation on a vulnerable CPU, the kernel ++ needs to be compiled with a gcc compiler that supports the ++ -mindirect-branch=thunk-extern -mindirect-branch-register options. ++ If the kernel is compiled with a Clang compiler, the compiler needs ++ to support -mretpoline-external-thunk option. The kernel config ++ CONFIG_RETPOLINE needs to be turned on, and the CPU needs to run with ++ the latest updated microcode. ++ ++ On Intel Skylake-era systems the mitigation covers most, but not all, ++ cases. See :ref:`[3] ` for more details. ++ ++ On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced ++ IBRS on x86), retpoline is automatically disabled at run time. ++ ++ The retpoline mitigation is turned on by default on vulnerable ++ CPUs. It can be forced on or off by the administrator ++ via the kernel command line and sysfs control files. See ++ :ref:`spectre_mitigation_control_command_line`. ++ ++ On x86, indirect branch restricted speculation is turned on by default ++ before invoking any firmware code to prevent Spectre variant 2 exploits ++ using the firmware. ++ ++ Using kernel address space randomization (CONFIG_RANDOMIZE_SLAB=y ++ and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes ++ attacks on the kernel generally more difficult. ++ ++2. User program mitigation ++^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ User programs can mitigate Spectre variant 1 using LFENCE or "bounds ++ clipping". For more details see :ref:`[2] `. ++ ++ For Spectre variant 2 mitigation, individual user programs ++ can be compiled with return trampolines for indirect branches. ++ This protects them from consuming poisoned entries in the branch ++ target buffer left by malicious software. Alternatively, the ++ programs can disable their indirect branch speculation via prctl() ++ (See :ref:`Documentation/userspace-api/spec_ctrl.rst `). 
++ On x86, this will turn on STIBP to guard against attacks from the ++ sibling thread when the user program is running, and use IBPB to ++ flush the branch target buffer when switching to/from the program. ++ ++ Restricting indirect branch speculation on a user program will ++ also prevent the program from launching a variant 2 attack ++ on x86. All sand-boxed SECCOMP programs have indirect branch ++ speculation restricted by default. Administrators can change ++ that behavior via the kernel command line and sysfs control files. ++ See :ref:`spectre_mitigation_control_command_line`. ++ ++ Programs that disable their indirect branch speculation will have ++ more overhead and run slower. ++ ++ User programs should use address space randomization ++ (/proc/sys/kernel/randomize_va_space = 1 or 2) to make attacks more ++ difficult. ++ ++3. VM mitigation ++^^^^^^^^^^^^^^^^ ++ ++ Within the kernel, Spectre variant 1 attacks from rogue guests are ++ mitigated on a case by case basis in VM exit paths. Vulnerable code ++ uses nospec accessor macros for "bounds clipping", to avoid any ++ usable disclosure gadgets. However, this may not cover all variant ++ 1 attack vectors. ++ ++ For Spectre variant 2 attacks from rogue guests to the kernel, the ++ Linux kernel uses retpoline or Enhanced IBRS to prevent consumption of ++ poisoned entries in branch target buffer left by rogue guests. It also ++ flushes the return stack buffer on every VM exit to prevent a return ++ stack buffer underflow so poisoned branch target buffer could be used, ++ or attacker guests leaving poisoned entries in the return stack buffer. ++ ++ To mitigate guest-to-guest attacks in the same CPU hardware thread, ++ the branch target buffer is sanitized by flushing before switching ++ to a new guest on a CPU. ++ ++ The above mitigations are turned on by default on vulnerable CPUs. ++ ++ To mitigate guest-to-guest attacks from sibling thread when SMT is ++ in use, an untrusted guest running in the sibling thread can have ++ its indirect branch speculation disabled by administrator via prctl(). ++ ++ The kernel also allows guests to use any microcode based mitigation ++ they choose to use (such as IBPB or STIBP on x86) to protect themselves. ++ ++.. _spectre_mitigation_control_command_line: ++ ++Mitigation control on the kernel command line ++--------------------------------------------- ++ ++Spectre variant 2 mitigation can be disabled or force enabled at the ++kernel command line. ++ ++ nospectre_v2 ++ ++ [X86] Disable all mitigations for the Spectre variant 2 ++ (indirect branch prediction) vulnerability. System may ++ allow data leaks with this option, which is equivalent ++ to spectre_v2=off. ++ ++ ++ spectre_v2= ++ ++ [X86] Control mitigation of Spectre variant 2 ++ (indirect branch speculation) vulnerability. ++ The default operation protects the kernel from ++ user space attacks. ++ ++ on ++ unconditionally enable, implies ++ spectre_v2_user=on ++ off ++ unconditionally disable, implies ++ spectre_v2_user=off ++ auto ++ kernel detects whether your CPU model is ++ vulnerable ++ ++ Selecting 'on' will, and 'auto' may, choose a ++ mitigation method at run time according to the ++ CPU, the available microcode, the setting of the ++ CONFIG_RETPOLINE configuration option, and the ++ compiler with which the kernel was built. ++ ++ Selecting 'on' will also enable the mitigation ++ against user space to user space task attacks. ++ ++ Selecting 'off' will disable both the kernel and ++ the user space protections. 
++ ++ Specific mitigations can also be selected manually: ++ ++ retpoline ++ replace indirect branches ++ retpoline,generic ++ google's original retpoline ++ retpoline,amd ++ AMD-specific minimal thunk ++ ++ Not specifying this option is equivalent to ++ spectre_v2=auto. ++ ++For user space mitigation: ++ ++ spectre_v2_user= ++ ++ [X86] Control mitigation of Spectre variant 2 ++ (indirect branch speculation) vulnerability between ++ user space tasks ++ ++ on ++ Unconditionally enable mitigations. Is ++ enforced by spectre_v2=on ++ ++ off ++ Unconditionally disable mitigations. Is ++ enforced by spectre_v2=off ++ ++ prctl ++ Indirect branch speculation is enabled, ++ but mitigation can be enabled via prctl ++ per thread. The mitigation control state ++ is inherited on fork. ++ ++ prctl,ibpb ++ Like "prctl" above, but only STIBP is ++ controlled per thread. IBPB is issued ++ always when switching between different user ++ space processes. ++ ++ seccomp ++ Same as "prctl" above, but all seccomp ++ threads will enable the mitigation unless ++ they explicitly opt out. ++ ++ seccomp,ibpb ++ Like "seccomp" above, but only STIBP is ++ controlled per thread. IBPB is issued ++ always when switching between different ++ user space processes. ++ ++ auto ++ Kernel selects the mitigation depending on ++ the available CPU features and vulnerability. ++ ++ Default mitigation: ++ If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl" ++ ++ Not specifying this option is equivalent to ++ spectre_v2_user=auto. ++ ++ In general the kernel by default selects ++ reasonable mitigations for the current CPU. To ++ disable Spectre variant 2 mitigations, boot with ++ spectre_v2=off. Spectre variant 1 mitigations ++ cannot be disabled. ++ ++Mitigation selection guide ++-------------------------- ++ ++1. Trusted userspace ++^^^^^^^^^^^^^^^^^^^^ ++ ++ If all userspace applications are from trusted sources and do not ++ execute externally supplied untrusted code, then the mitigations can ++ be disabled. ++ ++2. Protect sensitive programs ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ For security-sensitive programs that have secrets (e.g. crypto ++ keys), protection against Spectre variant 2 can be put in place by ++ disabling indirect branch speculation when the program is running ++ (See :ref:`Documentation/userspace-api/spec_ctrl.rst `). ++ ++3. Sandbox untrusted programs ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ Untrusted programs that could be a source of attacks can be cordoned ++ off by disabling their indirect branch speculation when they are run ++ (See :ref:`Documentation/userspace-api/spec_ctrl.rst `). ++ This prevents untrusted programs from polluting the branch target ++ buffer. All programs running in SECCOMP sandboxes have indirect ++ branch speculation restricted by default. This behavior can be ++ changed via the kernel command line and sysfs control files. See ++ :ref:`spectre_mitigation_control_command_line`. ++ ++3. High security mode ++^^^^^^^^^^^^^^^^^^^^^ ++ ++ All Spectre variant 2 mitigations can be forced on ++ at boot time for all programs (See the "on" option in ++ :ref:`spectre_mitigation_control_command_line`). This will add ++ overhead as indirect branch speculations for all programs will be ++ restricted. ++ ++ On x86, branch target buffer will be flushed with IBPB when switching ++ to a new program. STIBP is left on all the time to protect programs ++ against variant 2 attacks originating from programs running on ++ sibling threads. 
++ ++ Alternatively, STIBP can be used only when running programs ++ whose indirect branch speculation is explicitly disabled, ++ while IBPB is still used all the time when switching to a new ++ program to clear the branch target buffer (See "ibpb" option in ++ :ref:`spectre_mitigation_control_command_line`). This "ibpb" option ++ has less performance cost than the "on" option, which leaves STIBP ++ on all the time. ++ ++References on Spectre ++--------------------- ++ ++Intel white papers: ++ ++.. _spec_ref1: ++ ++[1] `Intel analysis of speculative execution side channels `_. ++ ++.. _spec_ref2: ++ ++[2] `Bounds check bypass `_. ++ ++.. _spec_ref3: ++ ++[3] `Deep dive: Retpoline: A branch target injection mitigation `_. ++ ++.. _spec_ref4: ++ ++[4] `Deep Dive: Single Thread Indirect Branch Predictors `_. ++ ++AMD white papers: ++ ++.. _spec_ref5: ++ ++[5] `AMD64 technology indirect branch control extension `_. ++ ++.. _spec_ref6: ++ ++[6] `Software techniques for managing speculation on AMD processors `_. ++ ++ARM white papers: ++ ++.. _spec_ref7: ++ ++[7] `Cache speculation side-channels `_. ++ ++.. _spec_ref8: ++ ++[8] `Cache speculation issues update `_. ++ ++Google white paper: ++ ++.. _spec_ref9: ++ ++[9] `Retpoline: a software construct for preventing branch-target-injection `_. ++ ++MIPS white paper: ++ ++.. _spec_ref10: ++ ++[10] `MIPS: response on speculative execution and side channel vulnerabilities `_. ++ ++Academic papers: ++ ++.. _spec_ref11: ++ ++[11] `Spectre Attacks: Exploiting Speculative Execution `_. ++ ++.. _spec_ref12: ++ ++[12] `NetSpectre: Read Arbitrary Memory over Network `_. ++ ++.. _spec_ref13: ++ ++[13] `Spectre Returns! Speculation Attacks using the Return Stack Buffer `_. +diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt +index c7937f379d22..135438b0fd42 100644 +--- a/Documentation/admin-guide/kernel-parameters.txt ++++ b/Documentation/admin-guide/kernel-parameters.txt +@@ -5074,12 +5074,6 @@ + emulate [default] Vsyscalls turn into traps and are + emulated reasonably safely. + +- native Vsyscalls are native syscall instructions. +- This is a little bit faster than trapping +- and makes a few dynamic recompilers work +- better than they would in emulation mode. +- It also makes exploits much easier to write. +- + none Vsyscalls don't work at all. This makes + them quite hard to use for exploits but + might break your system. +diff --git a/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt b/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt +index 188c8bd4eb67..5a0111d4de58 100644 +--- a/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt ++++ b/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt +@@ -4,6 +4,7 @@ Required properties: + - compatible: Should be one of the following: + - "microchip,mcp2510" for MCP2510. + - "microchip,mcp2515" for MCP2515. ++ - "microchip,mcp25625" for MCP25625. + - reg: SPI chip select. + - clocks: The clock feeding the CAN controller. + - interrupts: Should contain IRQ line for the CAN controller. +diff --git a/Documentation/userspace-api/spec_ctrl.rst b/Documentation/userspace-api/spec_ctrl.rst +index 1129c7550a48..7ddd8f667459 100644 +--- a/Documentation/userspace-api/spec_ctrl.rst ++++ b/Documentation/userspace-api/spec_ctrl.rst +@@ -49,6 +49,8 @@ If PR_SPEC_PRCTL is set, then the per-task control of the mitigation is + available. 
If not set, prctl(PR_SET_SPECULATION_CTRL) for the speculation + misfeature will fail. + ++.. _set_spec_ctrl: ++ + PR_SET_SPECULATION_CTRL + ----------------------- + +diff --git a/Makefile b/Makefile +index 14c91d46583f..01a0a61f86e7 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 1 +-SUBLEVEL = 17 ++SUBLEVEL = 18 + EXTRAVERSION = + NAME = Shy Crocodile + +diff --git a/arch/arm/boot/dts/am335x-pcm-953.dtsi b/arch/arm/boot/dts/am335x-pcm-953.dtsi +index 1ec8e0d80191..572fbd254690 100644 +--- a/arch/arm/boot/dts/am335x-pcm-953.dtsi ++++ b/arch/arm/boot/dts/am335x-pcm-953.dtsi +@@ -197,7 +197,7 @@ + bus-width = <4>; + pinctrl-names = "default"; + pinctrl-0 = <&mmc1_pins>; +- cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>; ++ cd-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>; + status = "okay"; + }; + +diff --git a/arch/arm/boot/dts/am335x-wega.dtsi b/arch/arm/boot/dts/am335x-wega.dtsi +index 8ce541739b24..83e4fe595e37 100644 +--- a/arch/arm/boot/dts/am335x-wega.dtsi ++++ b/arch/arm/boot/dts/am335x-wega.dtsi +@@ -157,7 +157,7 @@ + bus-width = <4>; + pinctrl-names = "default"; + pinctrl-0 = <&mmc1_pins>; +- cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>; ++ cd-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>; + status = "okay"; + }; + +diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi +index 414f1cd68733..03282d9edf7d 100644 +--- a/arch/arm/boot/dts/dra7-l4.dtsi ++++ b/arch/arm/boot/dts/dra7-l4.dtsi +@@ -4450,8 +4450,6 @@ + timer12: timer@0 { + compatible = "ti,omap5430-timer"; + reg = <0x0 0x80>; +- clocks = <&wkupaon_clkctrl DRA7_WKUPAON_TIMER12_CLKCTRL 24>; +- clock-names = "fck"; + interrupts = ; + ti,timer-alwon; + ti,timer-secure; +diff --git a/arch/arm/mach-davinci/board-da850-evm.c b/arch/arm/mach-davinci/board-da850-evm.c +index 1fdc9283a8c5..2450936b91d0 100644 +--- a/arch/arm/mach-davinci/board-da850-evm.c ++++ b/arch/arm/mach-davinci/board-da850-evm.c +@@ -1479,6 +1479,8 @@ static __init void da850_evm_init(void) + if (ret) + pr_warn("%s: dsp/rproc registration failed: %d\n", + __func__, ret); ++ ++ regulator_has_full_constraints(); + } + + #ifdef CONFIG_SERIAL_8250_CONSOLE +diff --git a/arch/arm/mach-davinci/devices-da8xx.c b/arch/arm/mach-davinci/devices-da8xx.c +index b8dc674e06bc..261240913b45 100644 +--- a/arch/arm/mach-davinci/devices-da8xx.c ++++ b/arch/arm/mach-davinci/devices-da8xx.c +@@ -686,6 +686,9 @@ static struct platform_device da8xx_lcdc_device = { + .id = 0, + .num_resources = ARRAY_SIZE(da8xx_lcdc_resources), + .resource = da8xx_lcdc_resources, ++ .dev = { ++ .coherent_dma_mask = DMA_BIT_MASK(32), ++ } + }; + + int __init da8xx_register_lcdc(struct da8xx_lcdc_platform_data *pdata) +diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h +index ed870468ef6f..d408711d09fb 100644 +--- a/arch/powerpc/include/asm/page.h ++++ b/arch/powerpc/include/asm/page.h +@@ -330,6 +330,13 @@ struct vm_area_struct; + #endif /* __ASSEMBLY__ */ + #include + ++/* ++ * Allow 30-bit DMA for very limited Broadcom wifi chips on many powerbooks. 
++ */ ++#ifdef CONFIG_PPC32 ++#define ARCH_ZONE_DMA_BITS 30 ++#else + #define ARCH_ZONE_DMA_BITS 31 ++#endif + + #endif /* _ASM_POWERPC_PAGE_H */ +diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c +index f6787f90e158..b98ce400a889 100644 +--- a/arch/powerpc/mm/mem.c ++++ b/arch/powerpc/mm/mem.c +@@ -255,7 +255,8 @@ void __init paging_init(void) + (long int)((top_of_ram - total_ram) >> 20)); + + #ifdef CONFIG_ZONE_DMA +- max_zone_pfns[ZONE_DMA] = min(max_low_pfn, 0x7fffffffUL >> PAGE_SHIFT); ++ max_zone_pfns[ZONE_DMA] = min(max_low_pfn, ++ ((1UL << ARCH_ZONE_DMA_BITS) - 1) >> PAGE_SHIFT); + #endif + max_zone_pfns[ZONE_NORMAL] = max_low_pfn; + #ifdef CONFIG_HIGHMEM +diff --git a/arch/powerpc/platforms/powermac/Kconfig b/arch/powerpc/platforms/powermac/Kconfig +index f834a19ed772..c02d8c503b29 100644 +--- a/arch/powerpc/platforms/powermac/Kconfig ++++ b/arch/powerpc/platforms/powermac/Kconfig +@@ -7,6 +7,7 @@ config PPC_PMAC + select PPC_INDIRECT_PCI if PPC32 + select PPC_MPC106 if PPC32 + select PPC_NATIVE ++ select ZONE_DMA if PPC32 + default y + + config PPC_PMAC64 +diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig +index 2fd3461e50ab..4f02967e55de 100644 +--- a/arch/riscv/configs/defconfig ++++ b/arch/riscv/configs/defconfig +@@ -49,6 +49,8 @@ CONFIG_SERIAL_8250=y + CONFIG_SERIAL_8250_CONSOLE=y + CONFIG_SERIAL_OF_PLATFORM=y + CONFIG_SERIAL_EARLYCON_RISCV_SBI=y ++CONFIG_SERIAL_SIFIVE=y ++CONFIG_SERIAL_SIFIVE_CONSOLE=y + CONFIG_HVC_RISCV_SBI=y + # CONFIG_PTP_1588_CLOCK is not set + CONFIG_DRM=y +@@ -64,6 +66,8 @@ CONFIG_USB_OHCI_HCD_PLATFORM=y + CONFIG_USB_STORAGE=y + CONFIG_USB_UAS=y + CONFIG_VIRTIO_MMIO=y ++CONFIG_CLK_SIFIVE=y ++CONFIG_CLK_SIFIVE_FU540_PRCI=y + CONFIG_SIFIVE_PLIC=y + CONFIG_EXT4_FS=y + CONFIG_EXT4_FS_POSIX_ACL=y +diff --git a/arch/riscv/lib/delay.c b/arch/riscv/lib/delay.c +index dce8ae24c6d3..ee6853c1e341 100644 +--- a/arch/riscv/lib/delay.c ++++ b/arch/riscv/lib/delay.c +@@ -88,7 +88,7 @@ EXPORT_SYMBOL(__delay); + + void udelay(unsigned long usecs) + { +- unsigned long ucycles = usecs * lpj_fine * UDELAY_MULT; ++ u64 ucycles = (u64)usecs * lpj_fine * UDELAY_MULT; + + if (unlikely(usecs > MAX_UDELAY_US)) { + __delay((u64)usecs * riscv_timebase / 1000000ULL); +diff --git a/arch/riscv/net/bpf_jit_comp.c b/arch/riscv/net/bpf_jit_comp.c +index 80b12aa5e10d..426d5c33ea90 100644 +--- a/arch/riscv/net/bpf_jit_comp.c ++++ b/arch/riscv/net/bpf_jit_comp.c +@@ -751,22 +751,32 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, + case BPF_ALU | BPF_ADD | BPF_X: + case BPF_ALU64 | BPF_ADD | BPF_X: + emit(is64 ? rv_add(rd, rd, rs) : rv_addw(rd, rd, rs), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + case BPF_ALU | BPF_SUB | BPF_X: + case BPF_ALU64 | BPF_SUB | BPF_X: + emit(is64 ? 
rv_sub(rd, rd, rs) : rv_subw(rd, rd, rs), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + case BPF_ALU | BPF_AND | BPF_X: + case BPF_ALU64 | BPF_AND | BPF_X: + emit(rv_and(rd, rd, rs), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + case BPF_ALU | BPF_OR | BPF_X: + case BPF_ALU64 | BPF_OR | BPF_X: + emit(rv_or(rd, rd, rs), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + case BPF_ALU | BPF_XOR | BPF_X: + case BPF_ALU64 | BPF_XOR | BPF_X: + emit(rv_xor(rd, rd, rs), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + case BPF_ALU | BPF_MUL | BPF_X: + case BPF_ALU64 | BPF_MUL | BPF_X: +@@ -789,14 +799,20 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, + case BPF_ALU | BPF_LSH | BPF_X: + case BPF_ALU64 | BPF_LSH | BPF_X: + emit(is64 ? rv_sll(rd, rd, rs) : rv_sllw(rd, rd, rs), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + case BPF_ALU | BPF_RSH | BPF_X: + case BPF_ALU64 | BPF_RSH | BPF_X: + emit(is64 ? rv_srl(rd, rd, rs) : rv_srlw(rd, rd, rs), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + case BPF_ALU | BPF_ARSH | BPF_X: + case BPF_ALU64 | BPF_ARSH | BPF_X: + emit(is64 ? rv_sra(rd, rd, rs) : rv_sraw(rd, rd, rs), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + + /* dst = -dst */ +@@ -804,6 +820,8 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, + case BPF_ALU64 | BPF_NEG: + emit(is64 ? rv_sub(rd, RV_REG_ZERO, rd) : + rv_subw(rd, RV_REG_ZERO, rd), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + + /* dst = BSWAP##imm(dst) */ +@@ -958,14 +976,20 @@ out_be: + case BPF_ALU | BPF_LSH | BPF_K: + case BPF_ALU64 | BPF_LSH | BPF_K: + emit(is64 ? rv_slli(rd, rd, imm) : rv_slliw(rd, rd, imm), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + case BPF_ALU | BPF_RSH | BPF_K: + case BPF_ALU64 | BPF_RSH | BPF_K: + emit(is64 ? rv_srli(rd, rd, imm) : rv_srliw(rd, rd, imm), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + case BPF_ALU | BPF_ARSH | BPF_K: + case BPF_ALU64 | BPF_ARSH | BPF_K: + emit(is64 ? 
rv_srai(rd, rd, imm) : rv_sraiw(rd, rd, imm), ctx); ++ if (!is64) ++ emit_zext_32(rd, ctx); + break; + + /* JUMP off */ +diff --git a/arch/s390/Makefile b/arch/s390/Makefile +index e21053e5e0da..bbd2dab6730e 100644 +--- a/arch/s390/Makefile ++++ b/arch/s390/Makefile +@@ -24,6 +24,7 @@ KBUILD_CFLAGS_DECOMPRESSOR += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY + KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float + KBUILD_CFLAGS_DECOMPRESSOR += -fno-asynchronous-unwind-tables + KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-option,-ffreestanding) ++KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, address-of-packed-member) + KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g) + KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,)) + UTS_MACHINE := s390x +diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c +index 4b8ee05dd6ad..6d8f108dd563 100644 +--- a/arch/x86/kernel/ptrace.c ++++ b/arch/x86/kernel/ptrace.c +@@ -24,6 +24,7 @@ + #include + #include + #include ++#include + + #include + #include +@@ -642,9 +643,11 @@ static unsigned long ptrace_get_debugreg(struct task_struct *tsk, int n) + { + struct thread_struct *thread = &tsk->thread; + unsigned long val = 0; ++ int index = n; + + if (n < HBP_NUM) { +- struct perf_event *bp = thread->ptrace_bps[n]; ++ struct perf_event *bp = thread->ptrace_bps[index]; ++ index = array_index_nospec(index, HBP_NUM); + + if (bp) + val = bp->hw.info.address; +diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c +index a5b802a12212..71d3fef1edc9 100644 +--- a/arch/x86/kernel/tls.c ++++ b/arch/x86/kernel/tls.c +@@ -5,6 +5,7 @@ + #include + #include + #include ++#include + + #include + #include +@@ -220,6 +221,7 @@ int do_get_thread_area(struct task_struct *p, int idx, + struct user_desc __user *u_info) + { + struct user_desc info; ++ int index; + + if (idx == -1 && get_user(idx, &u_info->entry_number)) + return -EFAULT; +@@ -227,8 +229,11 @@ int do_get_thread_area(struct task_struct *p, int idx, + if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX) + return -EINVAL; + +- fill_user_desc(&info, idx, +- &p->thread.tls_array[idx - GDT_ENTRY_TLS_MIN]); ++ index = idx - GDT_ENTRY_TLS_MIN; ++ index = array_index_nospec(index, ++ GDT_ENTRY_TLS_MAX - GDT_ENTRY_TLS_MIN + 1); ++ ++ fill_user_desc(&info, idx, &p->thread.tls_array[index]); + + if (copy_to_user(u_info, &info, sizeof(info))) + return -EFAULT; +diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c +index 5fa0c17d0b41..4ca834d22169 100644 +--- a/arch/x86/kvm/vmx/nested.c ++++ b/arch/x86/kvm/vmx/nested.c +@@ -1404,7 +1404,7 @@ static int copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx) + } + + if (unlikely(!(evmcs->hv_clean_fields & +- HV_VMX_ENLIGHTENED_CLEAN_FIELD_CONTROL_PROC))) { ++ HV_VMX_ENLIGHTENED_CLEAN_FIELD_CONTROL_EXCPN))) { + vmcs12->exception_bitmap = evmcs->exception_bitmap; + } + +@@ -1444,7 +1444,7 @@ static int copy_enlightened_to_vmcs12(struct vcpu_vmx *vmx) + } + + if (unlikely(!(evmcs->hv_clean_fields & +- HV_VMX_ENLIGHTENED_CLEAN_FIELD_HOST_GRP1))) { ++ HV_VMX_ENLIGHTENED_CLEAN_FIELD_CONTROL_GRP1))) { + vmcs12->pin_based_vm_exec_control = + evmcs->pin_based_vm_exec_control; + vmcs12->vm_exit_controls = evmcs->vm_exit_controls; +diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c +index afabf597c855..d88bc0935886 100644 +--- a/arch/x86/net/bpf_jit_comp.c ++++ b/arch/x86/net/bpf_jit_comp.c +@@ -190,9 +190,7 @@ struct jit_context { + #define BPF_MAX_INSN_SIZE 128 + #define 
BPF_INSN_SAFETY 64 + +-#define AUX_STACK_SPACE 40 /* Space for RBX, R13, R14, R15, tailcnt */ +- +-#define PROLOGUE_SIZE 37 ++#define PROLOGUE_SIZE 20 + + /* + * Emit x86-64 prologue code for BPF program and check its size. +@@ -203,44 +201,19 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf) + u8 *prog = *pprog; + int cnt = 0; + +- /* push rbp */ +- EMIT1(0x55); +- +- /* mov rbp,rsp */ +- EMIT3(0x48, 0x89, 0xE5); +- +- /* sub rsp, rounded_stack_depth + AUX_STACK_SPACE */ +- EMIT3_off32(0x48, 0x81, 0xEC, +- round_up(stack_depth, 8) + AUX_STACK_SPACE); +- +- /* sub rbp, AUX_STACK_SPACE */ +- EMIT4(0x48, 0x83, 0xED, AUX_STACK_SPACE); +- +- /* mov qword ptr [rbp+0],rbx */ +- EMIT4(0x48, 0x89, 0x5D, 0); +- /* mov qword ptr [rbp+8],r13 */ +- EMIT4(0x4C, 0x89, 0x6D, 8); +- /* mov qword ptr [rbp+16],r14 */ +- EMIT4(0x4C, 0x89, 0x75, 16); +- /* mov qword ptr [rbp+24],r15 */ +- EMIT4(0x4C, 0x89, 0x7D, 24); +- ++ EMIT1(0x55); /* push rbp */ ++ EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */ ++ /* sub rsp, rounded_stack_depth */ ++ EMIT3_off32(0x48, 0x81, 0xEC, round_up(stack_depth, 8)); ++ EMIT1(0x53); /* push rbx */ ++ EMIT2(0x41, 0x55); /* push r13 */ ++ EMIT2(0x41, 0x56); /* push r14 */ ++ EMIT2(0x41, 0x57); /* push r15 */ + if (!ebpf_from_cbpf) { +- /* +- * Clear the tail call counter (tail_call_cnt): for eBPF tail +- * calls we need to reset the counter to 0. It's done in two +- * instructions, resetting RAX register to 0, and moving it +- * to the counter location. +- */ +- +- /* xor eax, eax */ +- EMIT2(0x31, 0xc0); +- /* mov qword ptr [rbp+32], rax */ +- EMIT4(0x48, 0x89, 0x45, 32); +- ++ /* zero init tail_call_cnt */ ++ EMIT2(0x6a, 0x00); + BUILD_BUG_ON(cnt != PROLOGUE_SIZE); + } +- + *pprog = prog; + } + +@@ -285,13 +258,13 @@ static void emit_bpf_tail_call(u8 **pprog) + * if (tail_call_cnt > MAX_TAIL_CALL_CNT) + * goto out; + */ +- EMIT2_off32(0x8B, 0x85, 36); /* mov eax, dword ptr [rbp + 36] */ ++ EMIT2_off32(0x8B, 0x85, -36 - MAX_BPF_STACK); /* mov eax, dword ptr [rbp - 548] */ + EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT); /* cmp eax, MAX_TAIL_CALL_CNT */ + #define OFFSET2 (30 + RETPOLINE_RAX_BPF_JIT_SIZE) + EMIT2(X86_JA, OFFSET2); /* ja out */ + label2 = cnt; + EMIT3(0x83, 0xC0, 0x01); /* add eax, 1 */ +- EMIT2_off32(0x89, 0x85, 36); /* mov dword ptr [rbp + 36], eax */ ++ EMIT2_off32(0x89, 0x85, -36 - MAX_BPF_STACK); /* mov dword ptr [rbp -548], eax */ + + /* prog = array->ptrs[index]; */ + EMIT4_off32(0x48, 0x8B, 0x84, 0xD6, /* mov rax, [rsi + rdx * 8 + offsetof(...)] */ +@@ -1040,19 +1013,14 @@ emit_jmp: + seen_exit = true; + /* Update cleanup_addr */ + ctx->cleanup_addr = proglen; +- /* mov rbx, qword ptr [rbp+0] */ +- EMIT4(0x48, 0x8B, 0x5D, 0); +- /* mov r13, qword ptr [rbp+8] */ +- EMIT4(0x4C, 0x8B, 0x6D, 8); +- /* mov r14, qword ptr [rbp+16] */ +- EMIT4(0x4C, 0x8B, 0x75, 16); +- /* mov r15, qword ptr [rbp+24] */ +- EMIT4(0x4C, 0x8B, 0x7D, 24); +- +- /* add rbp, AUX_STACK_SPACE */ +- EMIT4(0x48, 0x83, 0xC5, AUX_STACK_SPACE); +- EMIT1(0xC9); /* leave */ +- EMIT1(0xC3); /* ret */ ++ if (!bpf_prog_was_classic(bpf_prog)) ++ EMIT1(0x5B); /* get rid of tail_call_cnt */ ++ EMIT2(0x41, 0x5F); /* pop r15 */ ++ EMIT2(0x41, 0x5E); /* pop r14 */ ++ EMIT2(0x41, 0x5D); /* pop r13 */ ++ EMIT1(0x5B); /* pop rbx */ ++ EMIT1(0xC9); /* leave */ ++ EMIT1(0xC3); /* ret */ + break; + + default: +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c +index 679d608347ea..7d4d47177940 100644 +--- a/block/bfq-iosched.c ++++ b/block/bfq-iosched.c +@@ -4265,6 +4265,7 @@ static void 
bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync) + unsigned long flags; + + spin_lock_irqsave(&bfqd->lock, flags); ++ bfqq->bic = NULL; + bfq_exit_bfqq(bfqd, bfqq); + bic_set_bfqq(bic, NULL, is_sync); + spin_unlock_irqrestore(&bfqd->lock, flags); +diff --git a/crypto/lrw.c b/crypto/lrw.c +index cc5c89246193..ac2a09ede90f 100644 +--- a/crypto/lrw.c ++++ b/crypto/lrw.c +@@ -388,7 +388,7 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb) + inst->alg.base.cra_priority = alg->base.cra_priority; + inst->alg.base.cra_blocksize = LRW_BLOCK_SIZE; + inst->alg.base.cra_alignmask = alg->base.cra_alignmask | +- (__alignof__(__be32) - 1); ++ (__alignof__(be128) - 1); + + inst->alg.ivsize = LRW_BLOCK_SIZE; + inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) + +diff --git a/drivers/android/binder.c b/drivers/android/binder.c +index cc19d91c1688..e64db514f1c4 100644 +--- a/drivers/android/binder.c ++++ b/drivers/android/binder.c +@@ -2068,10 +2068,9 @@ static size_t binder_get_object(struct binder_proc *proc, + + read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset); + if (offset > buffer->data_size || read_size < sizeof(*hdr) || +- !IS_ALIGNED(offset, sizeof(u32))) ++ binder_alloc_copy_from_buffer(&proc->alloc, object, buffer, ++ offset, read_size)) + return 0; +- binder_alloc_copy_from_buffer(&proc->alloc, object, buffer, +- offset, read_size); + + /* Ok, now see if we read a complete object. */ + hdr = &object->hdr; +@@ -2140,8 +2139,10 @@ static struct binder_buffer_object *binder_validate_ptr( + return NULL; + + buffer_offset = start_offset + sizeof(binder_size_t) * index; +- binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, +- b, buffer_offset, sizeof(object_offset)); ++ if (binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, ++ b, buffer_offset, ++ sizeof(object_offset))) ++ return NULL; + object_size = binder_get_object(proc, b, object_offset, object); + if (!object_size || object->hdr.type != BINDER_TYPE_PTR) + return NULL; +@@ -2221,10 +2222,12 @@ static bool binder_validate_fixup(struct binder_proc *proc, + return false; + last_min_offset = last_bbo->parent_offset + sizeof(uintptr_t); + buffer_offset = objects_start_offset + +- sizeof(binder_size_t) * last_bbo->parent, +- binder_alloc_copy_from_buffer(&proc->alloc, &last_obj_offset, +- b, buffer_offset, +- sizeof(last_obj_offset)); ++ sizeof(binder_size_t) * last_bbo->parent; ++ if (binder_alloc_copy_from_buffer(&proc->alloc, ++ &last_obj_offset, ++ b, buffer_offset, ++ sizeof(last_obj_offset))) ++ return false; + } + return (fixup_offset >= last_min_offset); + } +@@ -2310,15 +2313,15 @@ static void binder_transaction_buffer_release(struct binder_proc *proc, + for (buffer_offset = off_start_offset; buffer_offset < off_end_offset; + buffer_offset += sizeof(binder_size_t)) { + struct binder_object_header *hdr; +- size_t object_size; ++ size_t object_size = 0; + struct binder_object object; + binder_size_t object_offset; + +- binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, +- buffer, buffer_offset, +- sizeof(object_offset)); +- object_size = binder_get_object(proc, buffer, +- object_offset, &object); ++ if (!binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, ++ buffer, buffer_offset, ++ sizeof(object_offset))) ++ object_size = binder_get_object(proc, buffer, ++ object_offset, &object); + if (object_size == 0) { + pr_err("transaction release %d bad object at offset %lld, size %zd\n", + debug_id, (u64)object_offset, buffer->data_size); +@@ -2441,15 +2444,16 
@@ static void binder_transaction_buffer_release(struct binder_proc *proc, + for (fd_index = 0; fd_index < fda->num_fds; + fd_index++) { + u32 fd; ++ int err; + binder_size_t offset = fda_offset + + fd_index * sizeof(fd); + +- binder_alloc_copy_from_buffer(&proc->alloc, +- &fd, +- buffer, +- offset, +- sizeof(fd)); +- binder_deferred_fd_close(fd); ++ err = binder_alloc_copy_from_buffer( ++ &proc->alloc, &fd, buffer, ++ offset, sizeof(fd)); ++ WARN_ON(err); ++ if (!err) ++ binder_deferred_fd_close(fd); + } + } break; + default: +@@ -2692,11 +2696,12 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda, + int ret; + binder_size_t offset = fda_offset + fdi * sizeof(fd); + +- binder_alloc_copy_from_buffer(&target_proc->alloc, +- &fd, t->buffer, +- offset, sizeof(fd)); +- ret = binder_translate_fd(fd, offset, t, thread, +- in_reply_to); ++ ret = binder_alloc_copy_from_buffer(&target_proc->alloc, ++ &fd, t->buffer, ++ offset, sizeof(fd)); ++ if (!ret) ++ ret = binder_translate_fd(fd, offset, t, thread, ++ in_reply_to); + if (ret < 0) + return ret; + } +@@ -2749,8 +2754,12 @@ static int binder_fixup_parent(struct binder_transaction *t, + } + buffer_offset = bp->parent_offset + + (uintptr_t)parent->buffer - (uintptr_t)b->user_data; +- binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset, +- &bp->buffer, sizeof(bp->buffer)); ++ if (binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset, ++ &bp->buffer, sizeof(bp->buffer))) { ++ binder_user_error("%d:%d got transaction with invalid parent offset\n", ++ proc->pid, thread->pid); ++ return -EINVAL; ++ } + + return 0; + } +@@ -3160,15 +3169,20 @@ static void binder_transaction(struct binder_proc *proc, + goto err_binder_alloc_buf_failed; + } + if (secctx) { ++ int err; + size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) + + ALIGN(tr->offsets_size, sizeof(void *)) + + ALIGN(extra_buffers_size, sizeof(void *)) - + ALIGN(secctx_sz, sizeof(u64)); + + t->security_ctx = (uintptr_t)t->buffer->user_data + buf_offset; +- binder_alloc_copy_to_buffer(&target_proc->alloc, +- t->buffer, buf_offset, +- secctx, secctx_sz); ++ err = binder_alloc_copy_to_buffer(&target_proc->alloc, ++ t->buffer, buf_offset, ++ secctx, secctx_sz); ++ if (err) { ++ t->security_ctx = 0; ++ WARN_ON(1); ++ } + security_release_secctx(secctx, secctx_sz); + secctx = NULL; + } +@@ -3234,11 +3248,16 @@ static void binder_transaction(struct binder_proc *proc, + struct binder_object object; + binder_size_t object_offset; + +- binder_alloc_copy_from_buffer(&target_proc->alloc, +- &object_offset, +- t->buffer, +- buffer_offset, +- sizeof(object_offset)); ++ if (binder_alloc_copy_from_buffer(&target_proc->alloc, ++ &object_offset, ++ t->buffer, ++ buffer_offset, ++ sizeof(object_offset))) { ++ return_error = BR_FAILED_REPLY; ++ return_error_param = -EINVAL; ++ return_error_line = __LINE__; ++ goto err_bad_offset; ++ } + object_size = binder_get_object(target_proc, t->buffer, + object_offset, &object); + if (object_size == 0 || object_offset < off_min) { +@@ -3262,15 +3281,17 @@ static void binder_transaction(struct binder_proc *proc, + + fp = to_flat_binder_object(hdr); + ret = binder_translate_binder(fp, t, thread); +- if (ret < 0) { ++ ++ if (ret < 0 || ++ binder_alloc_copy_to_buffer(&target_proc->alloc, ++ t->buffer, ++ object_offset, ++ fp, sizeof(*fp))) { + return_error = BR_FAILED_REPLY; + return_error_param = ret; + return_error_line = __LINE__; + goto err_translate_failed; + } +- binder_alloc_copy_to_buffer(&target_proc->alloc, +- 
t->buffer, object_offset, +- fp, sizeof(*fp)); + } break; + case BINDER_TYPE_HANDLE: + case BINDER_TYPE_WEAK_HANDLE: { +@@ -3278,15 +3299,16 @@ static void binder_transaction(struct binder_proc *proc, + + fp = to_flat_binder_object(hdr); + ret = binder_translate_handle(fp, t, thread); +- if (ret < 0) { ++ if (ret < 0 || ++ binder_alloc_copy_to_buffer(&target_proc->alloc, ++ t->buffer, ++ object_offset, ++ fp, sizeof(*fp))) { + return_error = BR_FAILED_REPLY; + return_error_param = ret; + return_error_line = __LINE__; + goto err_translate_failed; + } +- binder_alloc_copy_to_buffer(&target_proc->alloc, +- t->buffer, object_offset, +- fp, sizeof(*fp)); + } break; + + case BINDER_TYPE_FD: { +@@ -3296,16 +3318,17 @@ static void binder_transaction(struct binder_proc *proc, + int ret = binder_translate_fd(fp->fd, fd_offset, t, + thread, in_reply_to); + +- if (ret < 0) { ++ fp->pad_binder = 0; ++ if (ret < 0 || ++ binder_alloc_copy_to_buffer(&target_proc->alloc, ++ t->buffer, ++ object_offset, ++ fp, sizeof(*fp))) { + return_error = BR_FAILED_REPLY; + return_error_param = ret; + return_error_line = __LINE__; + goto err_translate_failed; + } +- fp->pad_binder = 0; +- binder_alloc_copy_to_buffer(&target_proc->alloc, +- t->buffer, object_offset, +- fp, sizeof(*fp)); + } break; + case BINDER_TYPE_FDA: { + struct binder_object ptr_object; +@@ -3393,15 +3416,16 @@ static void binder_transaction(struct binder_proc *proc, + num_valid, + last_fixup_obj_off, + last_fixup_min_off); +- if (ret < 0) { ++ if (ret < 0 || ++ binder_alloc_copy_to_buffer(&target_proc->alloc, ++ t->buffer, ++ object_offset, ++ bp, sizeof(*bp))) { + return_error = BR_FAILED_REPLY; + return_error_param = ret; + return_error_line = __LINE__; + goto err_translate_failed; + } +- binder_alloc_copy_to_buffer(&target_proc->alloc, +- t->buffer, object_offset, +- bp, sizeof(*bp)); + last_fixup_obj_off = object_offset; + last_fixup_min_off = 0; + } break; +@@ -4139,20 +4163,27 @@ static int binder_apply_fd_fixups(struct binder_proc *proc, + trace_binder_transaction_fd_recv(t, fd, fixup->offset); + fd_install(fd, fixup->file); + fixup->file = NULL; +- binder_alloc_copy_to_buffer(&proc->alloc, t->buffer, +- fixup->offset, &fd, +- sizeof(u32)); ++ if (binder_alloc_copy_to_buffer(&proc->alloc, t->buffer, ++ fixup->offset, &fd, ++ sizeof(u32))) { ++ ret = -EINVAL; ++ break; ++ } + } + list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) { + if (fixup->file) { + fput(fixup->file); + } else if (ret) { + u32 fd; +- +- binder_alloc_copy_from_buffer(&proc->alloc, &fd, +- t->buffer, fixup->offset, +- sizeof(fd)); +- binder_deferred_fd_close(fd); ++ int err; ++ ++ err = binder_alloc_copy_from_buffer(&proc->alloc, &fd, ++ t->buffer, ++ fixup->offset, ++ sizeof(fd)); ++ WARN_ON(err); ++ if (!err) ++ binder_deferred_fd_close(fd); + } + list_del(&fixup->fixup_entry); + kfree(fixup); +@@ -4267,6 +4298,8 @@ retry: + case BINDER_WORK_TRANSACTION_COMPLETE: { + binder_inner_proc_unlock(proc); + cmd = BR_TRANSACTION_COMPLETE; ++ kfree(w); ++ binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE); + if (put_user(cmd, (uint32_t __user *)ptr)) + return -EFAULT; + ptr += sizeof(uint32_t); +@@ -4275,8 +4308,6 @@ retry: + binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE, + "%d:%d BR_TRANSACTION_COMPLETE\n", + proc->pid, thread->pid); +- kfree(w); +- binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE); + } break; + case BINDER_WORK_NODE: { + struct binder_node *node = container_of(w, struct binder_node, work); +diff --git a/drivers/android/binder_alloc.c 
b/drivers/android/binder_alloc.c +index 195f120c4e8c..80eaf8ab7dc0 100644 +--- a/drivers/android/binder_alloc.c ++++ b/drivers/android/binder_alloc.c +@@ -1128,15 +1128,16 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc, + return 0; + } + +-static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc, +- bool to_buffer, +- struct binder_buffer *buffer, +- binder_size_t buffer_offset, +- void *ptr, +- size_t bytes) ++static int binder_alloc_do_buffer_copy(struct binder_alloc *alloc, ++ bool to_buffer, ++ struct binder_buffer *buffer, ++ binder_size_t buffer_offset, ++ void *ptr, ++ size_t bytes) + { + /* All copies must be 32-bit aligned and 32-bit size */ +- BUG_ON(!check_buffer(alloc, buffer, buffer_offset, bytes)); ++ if (!check_buffer(alloc, buffer, buffer_offset, bytes)) ++ return -EINVAL; + + while (bytes) { + unsigned long size; +@@ -1164,25 +1165,26 @@ static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc, + ptr = ptr + size; + buffer_offset += size; + } ++ return 0; + } + +-void binder_alloc_copy_to_buffer(struct binder_alloc *alloc, +- struct binder_buffer *buffer, +- binder_size_t buffer_offset, +- void *src, +- size_t bytes) ++int binder_alloc_copy_to_buffer(struct binder_alloc *alloc, ++ struct binder_buffer *buffer, ++ binder_size_t buffer_offset, ++ void *src, ++ size_t bytes) + { +- binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset, +- src, bytes); ++ return binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset, ++ src, bytes); + } + +-void binder_alloc_copy_from_buffer(struct binder_alloc *alloc, +- void *dest, +- struct binder_buffer *buffer, +- binder_size_t buffer_offset, +- size_t bytes) ++int binder_alloc_copy_from_buffer(struct binder_alloc *alloc, ++ void *dest, ++ struct binder_buffer *buffer, ++ binder_size_t buffer_offset, ++ size_t bytes) + { +- binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset, +- dest, bytes); ++ return binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset, ++ dest, bytes); + } + +diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h +index b60d161b7a7a..9a51b72624ae 100644 +--- a/drivers/android/binder_alloc.h ++++ b/drivers/android/binder_alloc.h +@@ -168,17 +168,17 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc, + const void __user *from, + size_t bytes); + +-void binder_alloc_copy_to_buffer(struct binder_alloc *alloc, +- struct binder_buffer *buffer, +- binder_size_t buffer_offset, +- void *src, +- size_t bytes); +- +-void binder_alloc_copy_from_buffer(struct binder_alloc *alloc, +- void *dest, +- struct binder_buffer *buffer, +- binder_size_t buffer_offset, +- size_t bytes); ++int binder_alloc_copy_to_buffer(struct binder_alloc *alloc, ++ struct binder_buffer *buffer, ++ binder_size_t buffer_offset, ++ void *src, ++ size_t bytes); ++ ++int binder_alloc_copy_from_buffer(struct binder_alloc *alloc, ++ void *dest, ++ struct binder_buffer *buffer, ++ binder_size_t buffer_offset, ++ size_t bytes); + + #endif /* _LINUX_BINDER_ALLOC_H */ + +diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c +index 8804c9e916fd..f566fa8bf704 100644 +--- a/drivers/char/tpm/tpm-chip.c ++++ b/drivers/char/tpm/tpm-chip.c +@@ -294,15 +294,15 @@ static int tpm_class_shutdown(struct device *dev) + { + struct tpm_chip *chip = container_of(dev, struct tpm_chip, dev); + ++ down_write(&chip->ops_sem); + if (chip->flags & TPM_CHIP_FLAG_TPM2) { +- down_write(&chip->ops_sem); + if (!tpm_chip_start(chip)) { + tpm2_shutdown(chip, 
TPM2_SU_CLEAR); + tpm_chip_stop(chip); + } +- chip->ops = NULL; +- up_write(&chip->ops_sem); + } ++ chip->ops = NULL; ++ up_write(&chip->ops_sem); + + return 0; + } +diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c +index 85dcf2654d11..faacbe1ffa1a 100644 +--- a/drivers/char/tpm/tpm1-cmd.c ++++ b/drivers/char/tpm/tpm1-cmd.c +@@ -510,7 +510,7 @@ struct tpm1_get_random_out { + * + * Return: + * * number of bytes read +- * * -errno or a TPM return code otherwise ++ * * -errno (positive TPM return codes are masked to -EIO) + */ + int tpm1_get_random(struct tpm_chip *chip, u8 *dest, size_t max) + { +@@ -531,8 +531,11 @@ int tpm1_get_random(struct tpm_chip *chip, u8 *dest, size_t max) + + rc = tpm_transmit_cmd(chip, &buf, sizeof(out->rng_data_len), + "attempting get random"); +- if (rc) ++ if (rc) { ++ if (rc > 0) ++ rc = -EIO; + goto out; ++ } + + out = (struct tpm1_get_random_out *)&buf.data[TPM_HEADER_SIZE]; + +diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c +index e74c5b7b64bf..f57e25ab8f39 100644 +--- a/drivers/char/tpm/tpm2-cmd.c ++++ b/drivers/char/tpm/tpm2-cmd.c +@@ -301,7 +301,7 @@ struct tpm2_get_random_out { + * + * Return: + * size of the buffer on success, +- * -errno otherwise ++ * -errno otherwise (positive TPM return codes are masked to -EIO) + */ + int tpm2_get_random(struct tpm_chip *chip, u8 *dest, size_t max) + { +@@ -328,8 +328,11 @@ int tpm2_get_random(struct tpm_chip *chip, u8 *dest, size_t max) + offsetof(struct tpm2_get_random_out, + buffer), + "attempting get random"); +- if (err) ++ if (err) { ++ if (err > 0) ++ err = -EIO; + goto out; ++ } + + out = (struct tpm2_get_random_out *) + &buf.data[TPM_HEADER_SIZE]; +diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c +index de78b54bcfb1..0fee83b2eb91 100644 +--- a/drivers/crypto/talitos.c ++++ b/drivers/crypto/talitos.c +@@ -2286,7 +2286,7 @@ static struct talitos_alg_template driver_algs[] = { + .base = { + .cra_name = "authenc(hmac(sha1),cbc(aes))", + .cra_driver_name = "authenc-hmac-sha1-" +- "cbc-aes-talitos", ++ "cbc-aes-talitos-hsna", + .cra_blocksize = AES_BLOCK_SIZE, + .cra_flags = CRYPTO_ALG_ASYNC, + }, +@@ -2330,7 +2330,7 @@ static struct talitos_alg_template driver_algs[] = { + .cra_name = "authenc(hmac(sha1)," + "cbc(des3_ede))", + .cra_driver_name = "authenc-hmac-sha1-" +- "cbc-3des-talitos", ++ "cbc-3des-talitos-hsna", + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_flags = CRYPTO_ALG_ASYNC, + }, +@@ -2372,7 +2372,7 @@ static struct talitos_alg_template driver_algs[] = { + .base = { + .cra_name = "authenc(hmac(sha224),cbc(aes))", + .cra_driver_name = "authenc-hmac-sha224-" +- "cbc-aes-talitos", ++ "cbc-aes-talitos-hsna", + .cra_blocksize = AES_BLOCK_SIZE, + .cra_flags = CRYPTO_ALG_ASYNC, + }, +@@ -2416,7 +2416,7 @@ static struct talitos_alg_template driver_algs[] = { + .cra_name = "authenc(hmac(sha224)," + "cbc(des3_ede))", + .cra_driver_name = "authenc-hmac-sha224-" +- "cbc-3des-talitos", ++ "cbc-3des-talitos-hsna", + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_flags = CRYPTO_ALG_ASYNC, + }, +@@ -2458,7 +2458,7 @@ static struct talitos_alg_template driver_algs[] = { + .base = { + .cra_name = "authenc(hmac(sha256),cbc(aes))", + .cra_driver_name = "authenc-hmac-sha256-" +- "cbc-aes-talitos", ++ "cbc-aes-talitos-hsna", + .cra_blocksize = AES_BLOCK_SIZE, + .cra_flags = CRYPTO_ALG_ASYNC, + }, +@@ -2502,7 +2502,7 @@ static struct talitos_alg_template driver_algs[] = { + .cra_name = "authenc(hmac(sha256)," + "cbc(des3_ede))", + .cra_driver_name = 
"authenc-hmac-sha256-" +- "cbc-3des-talitos", ++ "cbc-3des-talitos-hsna", + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_flags = CRYPTO_ALG_ASYNC, + }, +@@ -2628,7 +2628,7 @@ static struct talitos_alg_template driver_algs[] = { + .base = { + .cra_name = "authenc(hmac(md5),cbc(aes))", + .cra_driver_name = "authenc-hmac-md5-" +- "cbc-aes-talitos", ++ "cbc-aes-talitos-hsna", + .cra_blocksize = AES_BLOCK_SIZE, + .cra_flags = CRYPTO_ALG_ASYNC, + }, +@@ -2670,7 +2670,7 @@ static struct talitos_alg_template driver_algs[] = { + .base = { + .cra_name = "authenc(hmac(md5),cbc(des3_ede))", + .cra_driver_name = "authenc-hmac-md5-" +- "cbc-3des-talitos", ++ "cbc-3des-talitos-hsna", + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + .cra_flags = CRYPTO_ALG_ASYNC, + }, +diff --git a/drivers/gpu/drm/drm_bufs.c b/drivers/gpu/drm/drm_bufs.c +index e407adb033e7..4fbc34425570 100644 +--- a/drivers/gpu/drm/drm_bufs.c ++++ b/drivers/gpu/drm/drm_bufs.c +@@ -1332,7 +1332,10 @@ static int copy_one_buf(void *data, int count, struct drm_buf_entry *from) + .size = from->buf_size, + .low_mark = from->low_mark, + .high_mark = from->high_mark}; +- return copy_to_user(to, &v, offsetof(struct drm_buf_desc, flags)); ++ ++ if (copy_to_user(to, &v, offsetof(struct drm_buf_desc, flags))) ++ return -EFAULT; ++ return 0; + } + + int drm_legacy_infobufs(struct drm_device *dev, void *data, +diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c +index 0e3043e08c69..f8672238d444 100644 +--- a/drivers/gpu/drm/drm_ioc32.c ++++ b/drivers/gpu/drm/drm_ioc32.c +@@ -372,7 +372,10 @@ static int copy_one_buf32(void *data, int count, struct drm_buf_entry *from) + .size = from->buf_size, + .low_mark = from->low_mark, + .high_mark = from->high_mark}; +- return copy_to_user(to + count, &v, offsetof(drm_buf_desc32_t, flags)); ++ ++ if (copy_to_user(to + count, &v, offsetof(drm_buf_desc32_t, flags))) ++ return -EFAULT; ++ return 0; + } + + static int drm_legacy_infobufs32(struct drm_device *dev, void *data, +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +index 9d2e1ce5c0a6..b77374ea3825 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +@@ -747,6 +747,9 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) + if (unlikely(ret != 0)) + goto out_err0; + ++ dma_set_max_seg_size(dev->dev, min_t(unsigned int, U32_MAX & PAGE_MASK, ++ SCATTERLIST_MAX_SEGMENT)); ++ + if (dev_priv->capabilities & SVGA_CAP_GMR2) { + DRM_INFO("Max GMR ids is %u\n", + (unsigned)dev_priv->max_gmr_ids); +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c +index a3357ff7540d..97adee1f0575 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c +@@ -454,11 +454,11 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt) + if (unlikely(ret != 0)) + return ret; + +- ret = sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages, +- vsgt->num_pages, 0, +- (unsigned long) +- vsgt->num_pages << PAGE_SHIFT, +- GFP_KERNEL); ++ ret = __sg_alloc_table_from_pages ++ (&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0, ++ (unsigned long) vsgt->num_pages << PAGE_SHIFT, ++ dma_get_max_seg_size(dev_priv->dev->dev), ++ GFP_KERNEL); + if (unlikely(ret != 0)) + goto out_sg_alloc_fail; + +diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c b/drivers/gpu/ipu-v3/ipu-image-convert.c +index 13103ab86050..e9803e2151f9 100644 +--- a/drivers/gpu/ipu-v3/ipu-image-convert.c ++++ 
b/drivers/gpu/ipu-v3/ipu-image-convert.c +@@ -409,12 +409,14 @@ static int calc_image_resize_coefficients(struct ipu_image_convert_ctx *ctx, + if (WARN_ON(resized_width == 0 || resized_height == 0)) + return -EINVAL; + +- while (downsized_width >= resized_width * 2) { ++ while (downsized_width > 1024 || ++ downsized_width >= resized_width * 2) { + downsized_width >>= 1; + downsize_coeff_h++; + } + +- while (downsized_height >= resized_height * 2) { ++ while (downsized_height > 1024 || ++ downsized_height >= resized_height * 2) { + downsized_height >>= 1; + downsize_coeff_v++; + } +@@ -1885,7 +1887,8 @@ void ipu_image_convert_adjust(struct ipu_image *in, struct ipu_image *out, + enum ipu_rotate_mode rot_mode) + { + const struct ipu_image_pixfmt *infmt, *outfmt; +- u32 w_align, h_align; ++ u32 w_align_out, h_align_out; ++ u32 w_align_in, h_align_in; + + infmt = get_format(in->pix.pixelformat); + outfmt = get_format(out->pix.pixelformat); +@@ -1917,22 +1920,33 @@ void ipu_image_convert_adjust(struct ipu_image *in, struct ipu_image *out, + } + + /* align input width/height */ +- w_align = ilog2(tile_width_align(IMAGE_CONVERT_IN, infmt, rot_mode)); +- h_align = ilog2(tile_height_align(IMAGE_CONVERT_IN, infmt, rot_mode)); +- in->pix.width = clamp_align(in->pix.width, MIN_W, MAX_W, w_align); +- in->pix.height = clamp_align(in->pix.height, MIN_H, MAX_H, h_align); ++ w_align_in = ilog2(tile_width_align(IMAGE_CONVERT_IN, infmt, ++ rot_mode)); ++ h_align_in = ilog2(tile_height_align(IMAGE_CONVERT_IN, infmt, ++ rot_mode)); ++ in->pix.width = clamp_align(in->pix.width, MIN_W, MAX_W, ++ w_align_in); ++ in->pix.height = clamp_align(in->pix.height, MIN_H, MAX_H, ++ h_align_in); + + /* align output width/height */ +- w_align = ilog2(tile_width_align(IMAGE_CONVERT_OUT, outfmt, rot_mode)); +- h_align = ilog2(tile_height_align(IMAGE_CONVERT_OUT, outfmt, rot_mode)); +- out->pix.width = clamp_align(out->pix.width, MIN_W, MAX_W, w_align); +- out->pix.height = clamp_align(out->pix.height, MIN_H, MAX_H, h_align); ++ w_align_out = ilog2(tile_width_align(IMAGE_CONVERT_OUT, outfmt, ++ rot_mode)); ++ h_align_out = ilog2(tile_height_align(IMAGE_CONVERT_OUT, outfmt, ++ rot_mode)); ++ out->pix.width = clamp_align(out->pix.width, MIN_W, MAX_W, ++ w_align_out); ++ out->pix.height = clamp_align(out->pix.height, MIN_H, MAX_H, ++ h_align_out); + + /* set input/output strides and image sizes */ + in->pix.bytesperline = infmt->planar ? +- clamp_align(in->pix.width, 2 << w_align, MAX_W, w_align) : ++ clamp_align(in->pix.width, 2 << w_align_in, MAX_W, ++ w_align_in) : + clamp_align((in->pix.width * infmt->bpp) >> 3, +- 2 << w_align, MAX_W, w_align); ++ ((2 << w_align_in) * infmt->bpp) >> 3, ++ (MAX_W * infmt->bpp) >> 3, ++ w_align_in); + in->pix.sizeimage = infmt->planar ? 
+ (in->pix.height * in->pix.bytesperline * infmt->bpp) >> 3 : + in->pix.height * in->pix.bytesperline; +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h +index adce58f24f76..6537086fb145 100644 +--- a/drivers/hid/hid-ids.h ++++ b/drivers/hid/hid-ids.h +@@ -1235,6 +1235,7 @@ + #define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05 + #define USB_DEVICE_ID_PRIMAX_REZEL 0x4e72 + #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F 0x4d0f ++#define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65 0x4d65 + #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22 0x4e22 + + +diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c +index 77ffba48cc73..189bf68eb35c 100644 +--- a/drivers/hid/hid-quirks.c ++++ b/drivers/hid/hid-quirks.c +@@ -132,6 +132,7 @@ static const struct hid_device_id hid_quirks[] = { + { HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL }, + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL }, + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65), HID_QUIRK_ALWAYS_POLL }, + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL }, + { HID_USB_DEVICE(USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS), HID_QUIRK_NOGET }, + { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001), HID_QUIRK_NOGET }, +diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c +index 2327ec18b40c..1f7ce5186dfc 100644 +--- a/drivers/iio/adc/stm32-adc-core.c ++++ b/drivers/iio/adc/stm32-adc-core.c +@@ -87,6 +87,7 @@ struct stm32_adc_priv_cfg { + * @domain: irq domain reference + * @aclk: clock reference for the analog circuitry + * @bclk: bus clock common for all ADCs, depends on part used ++ * @vdda: vdda analog supply reference + * @vref: regulator reference + * @cfg: compatible configuration data + * @common: common data for all ADC instances +@@ -97,6 +98,7 @@ struct stm32_adc_priv { + struct irq_domain *domain; + struct clk *aclk; + struct clk *bclk; ++ struct regulator *vdda; + struct regulator *vref; + const struct stm32_adc_priv_cfg *cfg; + struct stm32_adc_common common; +@@ -394,10 +396,16 @@ static int stm32_adc_core_hw_start(struct device *dev) + struct stm32_adc_priv *priv = to_stm32_adc_priv(common); + int ret; + ++ ret = regulator_enable(priv->vdda); ++ if (ret < 0) { ++ dev_err(dev, "vdda enable failed %d\n", ret); ++ return ret; ++ } ++ + ret = regulator_enable(priv->vref); + if (ret < 0) { + dev_err(dev, "vref enable failed\n"); +- return ret; ++ goto err_vdda_disable; + } + + if (priv->bclk) { +@@ -425,6 +433,8 @@ err_bclk_disable: + clk_disable_unprepare(priv->bclk); + err_regulator_disable: + regulator_disable(priv->vref); ++err_vdda_disable: ++ regulator_disable(priv->vdda); + + return ret; + } +@@ -441,6 +451,7 @@ static void stm32_adc_core_hw_stop(struct device *dev) + if (priv->bclk) + clk_disable_unprepare(priv->bclk); + regulator_disable(priv->vref); ++ regulator_disable(priv->vdda); + } + + static int stm32_adc_probe(struct platform_device *pdev) +@@ -468,6 +479,14 @@ static int stm32_adc_probe(struct platform_device *pdev) + return PTR_ERR(priv->common.base); + priv->common.phys_base = res->start; + ++ priv->vdda = devm_regulator_get(&pdev->dev, "vdda"); ++ if (IS_ERR(priv->vdda)) { ++ ret = PTR_ERR(priv->vdda); ++ if (ret != -EPROBE_DEFER) ++ dev_err(&pdev->dev, "vdda 
get failed, %d\n", ret); ++ return ret; ++ } ++ + priv->vref = devm_regulator_get(&pdev->dev, "vref"); + if (IS_ERR(priv->vref)) { + ret = PTR_ERR(priv->vref); +diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h +index 048b5d73ba39..d85b16a3aaaf 100644 +--- a/drivers/infiniband/hw/hfi1/hfi.h ++++ b/drivers/infiniband/hw/hfi1/hfi.h +@@ -539,6 +539,37 @@ static inline void hfi1_16B_set_qpn(struct opa_16b_mgmt *mgmt, + mgmt->src_qpn = cpu_to_be32(src_qp & OPA_16B_MGMT_QPN_MASK); + } + ++/** ++ * hfi1_get_rc_ohdr - get extended header ++ * @opah - the opaheader ++ */ ++static inline struct ib_other_headers * ++hfi1_get_rc_ohdr(struct hfi1_opa_header *opah) ++{ ++ struct ib_other_headers *ohdr; ++ struct ib_header *hdr = NULL; ++ struct hfi1_16b_header *hdr_16b = NULL; ++ ++ /* Find out where the BTH is */ ++ if (opah->hdr_type == HFI1_PKT_TYPE_9B) { ++ hdr = &opah->ibh; ++ if (ib_get_lnh(hdr) == HFI1_LRH_BTH) ++ ohdr = &hdr->u.oth; ++ else ++ ohdr = &hdr->u.l.oth; ++ } else { ++ u8 l4; ++ ++ hdr_16b = &opah->opah; ++ l4 = hfi1_16B_get_l4(hdr_16b); ++ if (l4 == OPA_16B_L4_IB_LOCAL) ++ ohdr = &hdr_16b->u.oth; ++ else ++ ohdr = &hdr_16b->u.l.oth; ++ } ++ return ohdr; ++} ++ + struct rvt_sge_state; + + /* +diff --git a/drivers/infiniband/hw/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c +index a1de566fe95e..17ea224fbecb 100644 +--- a/drivers/infiniband/hw/hfi1/pio.c ++++ b/drivers/infiniband/hw/hfi1/pio.c +@@ -952,6 +952,22 @@ void sc_disable(struct send_context *sc) + } + } + spin_unlock(&sc->release_lock); ++ ++ write_seqlock(&sc->waitlock); ++ while (!list_empty(&sc->piowait)) { ++ struct iowait *wait; ++ struct rvt_qp *qp; ++ struct hfi1_qp_priv *priv; ++ ++ wait = list_first_entry(&sc->piowait, struct iowait, list); ++ qp = iowait_to_qp(wait); ++ priv = qp->priv; ++ list_del_init(&priv->s_iowait.list); ++ priv->s_iowait.lock = NULL; ++ hfi1_qp_wakeup(qp, RVT_S_WAIT_PIO | HFI1_S_WAIT_PIO_DRAIN); ++ } ++ write_sequnlock(&sc->waitlock); ++ + spin_unlock_irq(&sc->alloc_lock); + } + +@@ -1427,7 +1443,8 @@ void sc_stop(struct send_context *sc, int flag) + * @cb: optional callback to call when the buffer is finished sending + * @arg: argument for cb + * +- * Return a pointer to a PIO buffer if successful, NULL if not enough room. ++ * Return a pointer to a PIO buffer, NULL if not enough room, -ECOMM ++ * when link is down. + */ + struct pio_buf *sc_buffer_alloc(struct send_context *sc, u32 dw_len, + pio_release_cb cb, void *arg) +@@ -1443,7 +1460,7 @@ struct pio_buf *sc_buffer_alloc(struct send_context *sc, u32 dw_len, + spin_lock_irqsave(&sc->alloc_lock, flags); + if (!(sc->flags & SCF_ENABLED)) { + spin_unlock_irqrestore(&sc->alloc_lock, flags); +- goto done; ++ return ERR_PTR(-ECOMM); + } + + retry: +diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c +index 5991211d72bd..b7b74222eaf0 100644 +--- a/drivers/infiniband/hw/hfi1/rc.c ++++ b/drivers/infiniband/hw/hfi1/rc.c +@@ -1434,7 +1434,7 @@ void hfi1_send_rc_ack(struct hfi1_packet *packet, bool is_fecn) + pbc = create_pbc(ppd, pbc_flags, qp->srate_mbps, + sc_to_vlt(ppd->dd, sc5), plen); + pbuf = sc_buffer_alloc(rcd->sc, plen, NULL, NULL); +- if (!pbuf) { ++ if (IS_ERR_OR_NULL(pbuf)) { + /* + * We have no room to send at the moment. 
Pass + * responsibility for sending the ACK to the send engine +@@ -1703,6 +1703,36 @@ static void reset_sending_psn(struct rvt_qp *qp, u32 psn) + } + } + ++/** ++ * hfi1_rc_verbs_aborted - handle abort status ++ * @qp: the QP ++ * @opah: the opa header ++ * ++ * This code modifies both ACK bit in BTH[2] ++ * and the s_flags to go into send one mode. ++ * ++ * This serves to throttle the send engine to only ++ * send a single packet in the likely case the ++ * a link has gone down. ++ */ ++void hfi1_rc_verbs_aborted(struct rvt_qp *qp, struct hfi1_opa_header *opah) ++{ ++ struct ib_other_headers *ohdr = hfi1_get_rc_ohdr(opah); ++ u8 opcode = ib_bth_get_opcode(ohdr); ++ u32 psn; ++ ++ /* ignore responses */ ++ if ((opcode >= OP(RDMA_READ_RESPONSE_FIRST) && ++ opcode <= OP(ATOMIC_ACKNOWLEDGE)) || ++ opcode == TID_OP(READ_RESP) || ++ opcode == TID_OP(WRITE_RESP)) ++ return; ++ ++ psn = ib_bth_get_psn(ohdr) | IB_BTH_REQ_ACK; ++ ohdr->bth[2] = cpu_to_be32(psn); ++ qp->s_flags |= RVT_S_SEND_ONE; ++} ++ + /* + * This should be called with the QP s_lock held and interrupts disabled. + */ +@@ -1711,8 +1741,6 @@ void hfi1_rc_send_complete(struct rvt_qp *qp, struct hfi1_opa_header *opah) + struct ib_other_headers *ohdr; + struct hfi1_qp_priv *priv = qp->priv; + struct rvt_swqe *wqe; +- struct ib_header *hdr = NULL; +- struct hfi1_16b_header *hdr_16b = NULL; + u32 opcode, head, tail; + u32 psn; + struct tid_rdma_request *req; +@@ -1721,24 +1749,7 @@ void hfi1_rc_send_complete(struct rvt_qp *qp, struct hfi1_opa_header *opah) + if (!(ib_rvt_state_ops[qp->state] & RVT_SEND_OR_FLUSH_OR_RECV_OK)) + return; + +- /* Find out where the BTH is */ +- if (priv->hdr_type == HFI1_PKT_TYPE_9B) { +- hdr = &opah->ibh; +- if (ib_get_lnh(hdr) == HFI1_LRH_BTH) +- ohdr = &hdr->u.oth; +- else +- ohdr = &hdr->u.l.oth; +- } else { +- u8 l4; +- +- hdr_16b = &opah->opah; +- l4 = hfi1_16B_get_l4(hdr_16b); +- if (l4 == OPA_16B_L4_IB_LOCAL) +- ohdr = &hdr_16b->u.oth; +- else +- ohdr = &hdr_16b->u.l.oth; +- } +- ++ ohdr = hfi1_get_rc_ohdr(opah); + opcode = ib_bth_get_opcode(ohdr); + if ((opcode >= OP(RDMA_READ_RESPONSE_FIRST) && + opcode <= OP(ATOMIC_ACKNOWLEDGE)) || +diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c +index 70828de7436b..28b66bd70b74 100644 +--- a/drivers/infiniband/hw/hfi1/sdma.c ++++ b/drivers/infiniband/hw/hfi1/sdma.c +@@ -405,6 +405,7 @@ static void sdma_flush(struct sdma_engine *sde) + struct sdma_txreq *txp, *txp_next; + LIST_HEAD(flushlist); + unsigned long flags; ++ uint seq; + + /* flush from head to tail */ + sdma_flush_descq(sde); +@@ -415,6 +416,22 @@ static void sdma_flush(struct sdma_engine *sde) + /* flush from flush list */ + list_for_each_entry_safe(txp, txp_next, &flushlist, list) + complete_tx(sde, txp, SDMA_TXREQ_S_ABORTED); ++ /* wakeup QPs orphaned on the dmawait list */ ++ do { ++ struct iowait *w, *nw; ++ ++ seq = read_seqbegin(&sde->waitlock); ++ if (!list_empty(&sde->dmawait)) { ++ write_seqlock(&sde->waitlock); ++ list_for_each_entry_safe(w, nw, &sde->dmawait, list) { ++ if (w->wakeup) { ++ w->wakeup(w, SDMA_AVAIL_REASON); ++ list_del_init(&w->list); ++ } ++ } ++ write_sequnlock(&sde->waitlock); ++ } ++ } while (read_seqretry(&sde->waitlock, seq)); + } + + /* +diff --git a/drivers/infiniband/hw/hfi1/ud.c b/drivers/infiniband/hw/hfi1/ud.c +index f88ad425664a..4cb0fce5c096 100644 +--- a/drivers/infiniband/hw/hfi1/ud.c ++++ b/drivers/infiniband/hw/hfi1/ud.c +@@ -683,7 +683,7 @@ void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp, + pbc = 
create_pbc(ppd, pbc_flags, qp->srate_mbps, vl, plen); + if (ctxt) { + pbuf = sc_buffer_alloc(ctxt, plen, NULL, NULL); +- if (pbuf) { ++ if (!IS_ERR_OR_NULL(pbuf)) { + trace_pio_output_ibhdr(ppd->dd, &hdr, sc5); + ppd->dd->pio_inline_send(ppd->dd, pbuf, pbc, + &hdr, hwords); +@@ -738,7 +738,7 @@ void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn, + pbc = create_pbc(ppd, pbc_flags, qp->srate_mbps, vl, plen); + if (ctxt) { + pbuf = sc_buffer_alloc(ctxt, plen, NULL, NULL); +- if (pbuf) { ++ if (!IS_ERR_OR_NULL(pbuf)) { + trace_pio_output_ibhdr(ppd->dd, &hdr, sc5); + ppd->dd->pio_inline_send(ppd->dd, pbuf, pbc, + &hdr, hwords); +diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c +index ea68eeba3f22..117e73cd69d7 100644 +--- a/drivers/infiniband/hw/hfi1/verbs.c ++++ b/drivers/infiniband/hw/hfi1/verbs.c +@@ -638,6 +638,8 @@ static void verbs_sdma_complete( + struct hfi1_opa_header *hdr; + + hdr = &tx->phdr.hdr; ++ if (unlikely(status == SDMA_TXREQ_S_ABORTED)) ++ hfi1_rc_verbs_aborted(qp, hdr); + hfi1_rc_send_complete(qp, hdr); + } + spin_unlock(&qp->s_lock); +@@ -1037,10 +1039,10 @@ int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps, + if (cb) + iowait_pio_inc(&priv->s_iowait); + pbuf = sc_buffer_alloc(sc, plen, cb, qp); +- if (unlikely(!pbuf)) { ++ if (unlikely(IS_ERR_OR_NULL(pbuf))) { + if (cb) + verbs_pio_complete(qp, 0); +- if (ppd->host_link_state != HLS_UP_ACTIVE) { ++ if (IS_ERR(pbuf)) { + /* + * If we have filled the PIO buffers to capacity and are + * not in an active state this request is not going to +@@ -1095,15 +1097,15 @@ int hfi1_verbs_send_pio(struct rvt_qp *qp, struct hfi1_pkt_state *ps, + &ps->s_txreq->phdr.hdr, ib_is_sc5(sc5)); + + pio_bail: ++ spin_lock_irqsave(&qp->s_lock, flags); + if (qp->s_wqe) { +- spin_lock_irqsave(&qp->s_lock, flags); + rvt_send_complete(qp, qp->s_wqe, wc_status); +- spin_unlock_irqrestore(&qp->s_lock, flags); + } else if (qp->ibqp.qp_type == IB_QPT_RC) { +- spin_lock_irqsave(&qp->s_lock, flags); ++ if (unlikely(wc_status == IB_WC_GENERAL_ERR)) ++ hfi1_rc_verbs_aborted(qp, &ps->s_txreq->phdr.hdr); + hfi1_rc_send_complete(qp, &ps->s_txreq->phdr.hdr); +- spin_unlock_irqrestore(&qp->s_lock, flags); + } ++ spin_unlock_irqrestore(&qp->s_lock, flags); + + ret = 0; + +diff --git a/drivers/infiniband/hw/hfi1/verbs.h b/drivers/infiniband/hw/hfi1/verbs.h +index 62ace0b2d17a..1714c0f6475d 100644 +--- a/drivers/infiniband/hw/hfi1/verbs.h ++++ b/drivers/infiniband/hw/hfi1/verbs.h +@@ -415,6 +415,7 @@ void hfi1_rc_hdrerr( + + u8 ah_to_sc(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr); + ++void hfi1_rc_verbs_aborted(struct rvt_qp *qp, struct hfi1_opa_header *opah); + void hfi1_rc_send_complete(struct rvt_qp *qp, struct hfi1_opa_header *opah); + + void hfi1_ud_rcv(struct hfi1_packet *packet); +diff --git a/drivers/input/keyboard/imx_keypad.c b/drivers/input/keyboard/imx_keypad.c +index 539cb670de41..ae9c51cc85f9 100644 +--- a/drivers/input/keyboard/imx_keypad.c ++++ b/drivers/input/keyboard/imx_keypad.c +@@ -526,11 +526,12 @@ static int imx_keypad_probe(struct platform_device *pdev) + return 0; + } + +-static int __maybe_unused imx_kbd_suspend(struct device *dev) ++static int __maybe_unused imx_kbd_noirq_suspend(struct device *dev) + { + struct platform_device *pdev = to_platform_device(dev); + struct imx_keypad *kbd = platform_get_drvdata(pdev); + struct input_dev *input_dev = kbd->input_dev; ++ unsigned short reg_val = readw(kbd->mmio_base + KPSR); + + /* imx kbd can wake up system 
even clock is disabled */ + mutex_lock(&input_dev->mutex); +@@ -540,13 +541,20 @@ static int __maybe_unused imx_kbd_suspend(struct device *dev) + + mutex_unlock(&input_dev->mutex); + +- if (device_may_wakeup(&pdev->dev)) ++ if (device_may_wakeup(&pdev->dev)) { ++ if (reg_val & KBD_STAT_KPKD) ++ reg_val |= KBD_STAT_KRIE; ++ if (reg_val & KBD_STAT_KPKR) ++ reg_val |= KBD_STAT_KDIE; ++ writew(reg_val, kbd->mmio_base + KPSR); ++ + enable_irq_wake(kbd->irq); ++ } + + return 0; + } + +-static int __maybe_unused imx_kbd_resume(struct device *dev) ++static int __maybe_unused imx_kbd_noirq_resume(struct device *dev) + { + struct platform_device *pdev = to_platform_device(dev); + struct imx_keypad *kbd = platform_get_drvdata(pdev); +@@ -570,7 +578,9 @@ err_clk: + return ret; + } + +-static SIMPLE_DEV_PM_OPS(imx_kbd_pm_ops, imx_kbd_suspend, imx_kbd_resume); ++static const struct dev_pm_ops imx_kbd_pm_ops = { ++ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_kbd_noirq_suspend, imx_kbd_noirq_resume) ++}; + + static struct platform_driver imx_keypad_driver = { + .driver = { +diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c +index a7f8b1614559..530142b5a115 100644 +--- a/drivers/input/mouse/elantech.c ++++ b/drivers/input/mouse/elantech.c +@@ -1189,6 +1189,8 @@ static const char * const middle_button_pnp_ids[] = { + "LEN2132", /* ThinkPad P52 */ + "LEN2133", /* ThinkPad P72 w/ NFC */ + "LEN2134", /* ThinkPad P72 */ ++ "LEN0407", ++ "LEN0408", + NULL + }; + +diff --git a/drivers/md/md.c b/drivers/md/md.c +index 295ff09cff4c..84aec3647994 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -7617,9 +7617,9 @@ static void status_unused(struct seq_file *seq) + static int status_resync(struct seq_file *seq, struct mddev *mddev) + { + sector_t max_sectors, resync, res; +- unsigned long dt, db; +- sector_t rt; +- int scale; ++ unsigned long dt, db = 0; ++ sector_t rt, curr_mark_cnt, resync_mark_cnt; ++ int scale, recovery_active; + unsigned int per_milli; + + if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) || +@@ -7708,22 +7708,30 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev) + * db: blocks written from mark until now + * rt: remaining time + * +- * rt is a sector_t, so could be 32bit or 64bit. +- * So we divide before multiply in case it is 32bit and close +- * to the limit. +- * We scale the divisor (db) by 32 to avoid losing precision +- * near the end of resync when the number of remaining sectors +- * is close to 'db'. +- * We then divide rt by 32 after multiplying by db to compensate. +- * The '+1' avoids division by zero if db is very small. ++ * rt is a sector_t, which is always 64bit now. We are keeping ++ * the original algorithm, but it is not really necessary. ++ * ++ * Original algorithm: ++ * So we divide before multiply in case it is 32bit and close ++ * to the limit. ++ * We scale the divisor (db) by 32 to avoid losing precision ++ * near the end of resync when the number of remaining sectors ++ * is close to 'db'. ++ * We then divide rt by 32 after multiplying by db to compensate. ++ * The '+1' avoids division by zero if db is very small. 
+ */ + dt = ((jiffies - mddev->resync_mark) / HZ); + if (!dt) dt++; +- db = (mddev->curr_mark_cnt - atomic_read(&mddev->recovery_active)) +- - mddev->resync_mark_cnt; ++ ++ curr_mark_cnt = mddev->curr_mark_cnt; ++ recovery_active = atomic_read(&mddev->recovery_active); ++ resync_mark_cnt = mddev->resync_mark_cnt; ++ ++ if (curr_mark_cnt >= (recovery_active + resync_mark_cnt)) ++ db = curr_mark_cnt - (recovery_active + resync_mark_cnt); + + rt = max_sectors - resync; /* number of remaining sectors */ +- sector_div(rt, db/32+1); ++ rt = div64_u64(rt, db/32+1); + rt *= dt; + rt >>= 5; + +diff --git a/drivers/media/dvb-frontends/stv0297.c b/drivers/media/dvb-frontends/stv0297.c +index 9a9915f71483..3ef31a3a27ff 100644 +--- a/drivers/media/dvb-frontends/stv0297.c ++++ b/drivers/media/dvb-frontends/stv0297.c +@@ -694,7 +694,7 @@ static const struct dvb_frontend_ops stv0297_ops = { + .delsys = { SYS_DVBC_ANNEX_A }, + .info = { + .name = "ST STV0297 DVB-C", +- .frequency_min_hz = 470 * MHz, ++ .frequency_min_hz = 47 * MHz, + .frequency_max_hz = 862 * MHz, + .frequency_stepsize_hz = 62500, + .symbol_rate_min = 870000, +diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile +index 951c984de61a..fb10eafe9bde 100644 +--- a/drivers/misc/lkdtm/Makefile ++++ b/drivers/misc/lkdtm/Makefile +@@ -15,8 +15,7 @@ KCOV_INSTRUMENT_rodata.o := n + + OBJCOPYFLAGS := + OBJCOPYFLAGS_rodata_objcopy.o := \ +- --set-section-flags .text=alloc,readonly \ +- --rename-section .text=.rodata ++ --rename-section .text=.rodata,alloc,readonly,load + targets += rodata.o rodata_objcopy.o + $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE + $(call if_changed,objcopy) +diff --git a/drivers/misc/vmw_vmci/vmci_context.c b/drivers/misc/vmw_vmci/vmci_context.c +index 21d0fa592145..bc089e634a75 100644 +--- a/drivers/misc/vmw_vmci/vmci_context.c ++++ b/drivers/misc/vmw_vmci/vmci_context.c +@@ -29,6 +29,9 @@ + #include "vmci_driver.h" + #include "vmci_event.h" + ++/* Use a wide upper bound for the maximum contexts. */ ++#define VMCI_MAX_CONTEXTS 2000 ++ + /* + * List of current VMCI contexts. Contexts can be added by + * vmci_ctx_create() and removed via vmci_ctx_destroy(). +@@ -125,19 +128,22 @@ struct vmci_ctx *vmci_ctx_create(u32 cid, u32 priv_flags, + /* Initialize host-specific VMCI context. */ + init_waitqueue_head(&context->host_context.wait_queue); + +- context->queue_pair_array = vmci_handle_arr_create(0); ++ context->queue_pair_array = ++ vmci_handle_arr_create(0, VMCI_MAX_GUEST_QP_COUNT); + if (!context->queue_pair_array) { + error = -ENOMEM; + goto err_free_ctx; + } + +- context->doorbell_array = vmci_handle_arr_create(0); ++ context->doorbell_array = ++ vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT); + if (!context->doorbell_array) { + error = -ENOMEM; + goto err_free_qp_array; + } + +- context->pending_doorbell_array = vmci_handle_arr_create(0); ++ context->pending_doorbell_array = ++ vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT); + if (!context->pending_doorbell_array) { + error = -ENOMEM; + goto err_free_db_array; +@@ -212,7 +218,7 @@ static int ctx_fire_notification(u32 context_id, u32 priv_flags) + * We create an array to hold the subscribers we find when + * scanning through all contexts. 
+ */ +- subscriber_array = vmci_handle_arr_create(0); ++ subscriber_array = vmci_handle_arr_create(0, VMCI_MAX_CONTEXTS); + if (subscriber_array == NULL) + return VMCI_ERROR_NO_MEM; + +@@ -631,20 +637,26 @@ int vmci_ctx_add_notification(u32 context_id, u32 remote_cid) + + spin_lock(&context->lock); + +- list_for_each_entry(n, &context->notifier_list, node) { +- if (vmci_handle_is_equal(n->handle, notifier->handle)) { +- exists = true; +- break; ++ if (context->n_notifiers < VMCI_MAX_CONTEXTS) { ++ list_for_each_entry(n, &context->notifier_list, node) { ++ if (vmci_handle_is_equal(n->handle, notifier->handle)) { ++ exists = true; ++ break; ++ } + } +- } + +- if (exists) { +- kfree(notifier); +- result = VMCI_ERROR_ALREADY_EXISTS; ++ if (exists) { ++ kfree(notifier); ++ result = VMCI_ERROR_ALREADY_EXISTS; ++ } else { ++ list_add_tail_rcu(¬ifier->node, ++ &context->notifier_list); ++ context->n_notifiers++; ++ result = VMCI_SUCCESS; ++ } + } else { +- list_add_tail_rcu(¬ifier->node, &context->notifier_list); +- context->n_notifiers++; +- result = VMCI_SUCCESS; ++ kfree(notifier); ++ result = VMCI_ERROR_NO_MEM; + } + + spin_unlock(&context->lock); +@@ -729,8 +741,7 @@ static int vmci_ctx_get_chkpt_doorbells(struct vmci_ctx *context, + u32 *buf_size, void **pbuf) + { + struct dbell_cpt_state *dbells; +- size_t n_doorbells; +- int i; ++ u32 i, n_doorbells; + + n_doorbells = vmci_handle_arr_get_size(context->doorbell_array); + if (n_doorbells > 0) { +@@ -868,7 +879,8 @@ int vmci_ctx_rcv_notifications_get(u32 context_id, + spin_lock(&context->lock); + + *db_handle_array = context->pending_doorbell_array; +- context->pending_doorbell_array = vmci_handle_arr_create(0); ++ context->pending_doorbell_array = ++ vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT); + if (!context->pending_doorbell_array) { + context->pending_doorbell_array = *db_handle_array; + *db_handle_array = NULL; +@@ -950,12 +962,11 @@ int vmci_ctx_dbell_create(u32 context_id, struct vmci_handle handle) + return VMCI_ERROR_NOT_FOUND; + + spin_lock(&context->lock); +- if (!vmci_handle_arr_has_entry(context->doorbell_array, handle)) { +- vmci_handle_arr_append_entry(&context->doorbell_array, handle); +- result = VMCI_SUCCESS; +- } else { ++ if (!vmci_handle_arr_has_entry(context->doorbell_array, handle)) ++ result = vmci_handle_arr_append_entry(&context->doorbell_array, ++ handle); ++ else + result = VMCI_ERROR_DUPLICATE_ENTRY; +- } + + spin_unlock(&context->lock); + vmci_ctx_put(context); +@@ -1091,15 +1102,16 @@ int vmci_ctx_notify_dbell(u32 src_cid, + if (!vmci_handle_arr_has_entry( + dst_context->pending_doorbell_array, + handle)) { +- vmci_handle_arr_append_entry( ++ result = vmci_handle_arr_append_entry( + &dst_context->pending_doorbell_array, + handle); +- +- ctx_signal_notify(dst_context); +- wake_up(&dst_context->host_context.wait_queue); +- ++ if (result == VMCI_SUCCESS) { ++ ctx_signal_notify(dst_context); ++ wake_up(&dst_context->host_context.wait_queue); ++ } ++ } else { ++ result = VMCI_SUCCESS; + } +- result = VMCI_SUCCESS; + } + spin_unlock(&dst_context->lock); + } +@@ -1126,13 +1138,11 @@ int vmci_ctx_qp_create(struct vmci_ctx *context, struct vmci_handle handle) + if (context == NULL || vmci_handle_is_invalid(handle)) + return VMCI_ERROR_INVALID_ARGS; + +- if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle)) { +- vmci_handle_arr_append_entry(&context->queue_pair_array, +- handle); +- result = VMCI_SUCCESS; +- } else { ++ if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle)) ++ result 
= vmci_handle_arr_append_entry( ++ &context->queue_pair_array, handle); ++ else + result = VMCI_ERROR_DUPLICATE_ENTRY; +- } + + return result; + } +diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.c b/drivers/misc/vmw_vmci/vmci_handle_array.c +index 344973a0fb0a..917e18a8af95 100644 +--- a/drivers/misc/vmw_vmci/vmci_handle_array.c ++++ b/drivers/misc/vmw_vmci/vmci_handle_array.c +@@ -16,24 +16,29 @@ + #include + #include "vmci_handle_array.h" + +-static size_t handle_arr_calc_size(size_t capacity) ++static size_t handle_arr_calc_size(u32 capacity) + { +- return sizeof(struct vmci_handle_arr) + ++ return VMCI_HANDLE_ARRAY_HEADER_SIZE + + capacity * sizeof(struct vmci_handle); + } + +-struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity) ++struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity) + { + struct vmci_handle_arr *array; + ++ if (max_capacity == 0 || capacity > max_capacity) ++ return NULL; ++ + if (capacity == 0) +- capacity = VMCI_HANDLE_ARRAY_DEFAULT_SIZE; ++ capacity = min((u32)VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY, ++ max_capacity); + + array = kmalloc(handle_arr_calc_size(capacity), GFP_ATOMIC); + if (!array) + return NULL; + + array->capacity = capacity; ++ array->max_capacity = max_capacity; + array->size = 0; + + return array; +@@ -44,27 +49,34 @@ void vmci_handle_arr_destroy(struct vmci_handle_arr *array) + kfree(array); + } + +-void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, +- struct vmci_handle handle) ++int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, ++ struct vmci_handle handle) + { + struct vmci_handle_arr *array = *array_ptr; + + if (unlikely(array->size >= array->capacity)) { + /* reallocate. */ + struct vmci_handle_arr *new_array; +- size_t new_capacity = array->capacity * VMCI_ARR_CAP_MULT; +- size_t new_size = handle_arr_calc_size(new_capacity); ++ u32 capacity_bump = min(array->max_capacity - array->capacity, ++ array->capacity); ++ size_t new_size = handle_arr_calc_size(array->capacity + ++ capacity_bump); ++ ++ if (array->size >= array->max_capacity) ++ return VMCI_ERROR_NO_MEM; + + new_array = krealloc(array, new_size, GFP_ATOMIC); + if (!new_array) +- return; ++ return VMCI_ERROR_NO_MEM; + +- new_array->capacity = new_capacity; ++ new_array->capacity += capacity_bump; + *array_ptr = array = new_array; + } + + array->entries[array->size] = handle; + array->size++; ++ ++ return VMCI_SUCCESS; + } + + /* +@@ -74,7 +86,7 @@ struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array, + struct vmci_handle entry_handle) + { + struct vmci_handle handle = VMCI_INVALID_HANDLE; +- size_t i; ++ u32 i; + + for (i = 0; i < array->size; i++) { + if (vmci_handle_is_equal(array->entries[i], entry_handle)) { +@@ -109,7 +121,7 @@ struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array) + * Handle at given index, VMCI_INVALID_HANDLE if invalid index. 
+ */ + struct vmci_handle +-vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index) ++vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index) + { + if (unlikely(index >= array->size)) + return VMCI_INVALID_HANDLE; +@@ -120,7 +132,7 @@ vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index) + bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array, + struct vmci_handle entry_handle) + { +- size_t i; ++ u32 i; + + for (i = 0; i < array->size; i++) + if (vmci_handle_is_equal(array->entries[i], entry_handle)) +diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.h b/drivers/misc/vmw_vmci/vmci_handle_array.h +index b5f3a7f98cf1..0fc58597820e 100644 +--- a/drivers/misc/vmw_vmci/vmci_handle_array.h ++++ b/drivers/misc/vmw_vmci/vmci_handle_array.h +@@ -17,32 +17,41 @@ + #define _VMCI_HANDLE_ARRAY_H_ + + #include ++#include + #include + +-#define VMCI_HANDLE_ARRAY_DEFAULT_SIZE 4 +-#define VMCI_ARR_CAP_MULT 2 /* Array capacity multiplier */ +- + struct vmci_handle_arr { +- size_t capacity; +- size_t size; ++ u32 capacity; ++ u32 max_capacity; ++ u32 size; ++ u32 pad; + struct vmci_handle entries[]; + }; + +-struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity); ++#define VMCI_HANDLE_ARRAY_HEADER_SIZE \ ++ offsetof(struct vmci_handle_arr, entries) ++/* Select a default capacity that results in a 64 byte sized array */ ++#define VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY 6 ++/* Make sure that the max array size can be expressed by a u32 */ ++#define VMCI_HANDLE_ARRAY_MAX_CAPACITY \ ++ ((U32_MAX - VMCI_HANDLE_ARRAY_HEADER_SIZE - 1) / \ ++ sizeof(struct vmci_handle)) ++ ++struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity); + void vmci_handle_arr_destroy(struct vmci_handle_arr *array); +-void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, +- struct vmci_handle handle); ++int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, ++ struct vmci_handle handle); + struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array, + struct vmci_handle + entry_handle); + struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array); + struct vmci_handle +-vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index); ++vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index); + bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array, + struct vmci_handle entry_handle); + struct vmci_handle *vmci_handle_arr_get_handles(struct vmci_handle_arr *array); + +-static inline size_t vmci_handle_arr_get_size( ++static inline u32 vmci_handle_arr_get_size( + const struct vmci_handle_arr *array) + { + return array->size; +diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c +index 3e786ba204c3..671bfcceea6a 100644 +--- a/drivers/mmc/core/mmc.c ++++ b/drivers/mmc/core/mmc.c +@@ -1212,13 +1212,13 @@ static int mmc_select_hs400(struct mmc_card *card) + mmc_set_timing(host, MMC_TIMING_MMC_HS400); + mmc_set_bus_speed(card); + ++ if (host->ops->hs400_complete) ++ host->ops->hs400_complete(host); ++ + err = mmc_switch_status(card); + if (err) + goto out_err; + +- if (host->ops->hs400_complete) +- host->ops->hs400_complete(host); +- + return 0; + + out_err: +diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c +index f97c628eb2ad..f2fe344593d5 100644 +--- a/drivers/net/can/flexcan.c ++++ b/drivers/net/can/flexcan.c +@@ -1583,9 +1583,6 @@ static int flexcan_probe(struct platform_device *pdev) + 
dev_dbg(&pdev->dev, "failed to setup stop-mode\n"); + } + +- dev_info(&pdev->dev, "device registered (reg_base=%p, irq=%d)\n", +- priv->regs, dev->irq); +- + return 0; + + failed_register: +diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c +index 9b449400376b..deb274a19ba0 100644 +--- a/drivers/net/can/m_can/m_can.c ++++ b/drivers/net/can/m_can/m_can.c +@@ -822,6 +822,27 @@ static int m_can_poll(struct napi_struct *napi, int quota) + if (!irqstatus) + goto end; + ++ /* Errata workaround for issue "Needless activation of MRAF irq" ++ * During frame reception while the MCAN is in Error Passive state ++ * and the Receive Error Counter has the value MCAN_ECR.REC = 127, ++ * it may happen that MCAN_IR.MRAF is set although there was no ++ * Message RAM access failure. ++ * If MCAN_IR.MRAF is enabled, an interrupt to the Host CPU is generated ++ * The Message RAM Access Failure interrupt routine needs to check ++ * whether MCAN_ECR.RP = ’1’ and MCAN_ECR.REC = 127. ++ * In this case, reset MCAN_IR.MRAF. No further action is required. ++ */ ++ if ((priv->version <= 31) && (irqstatus & IR_MRAF) && ++ (m_can_read(priv, M_CAN_ECR) & ECR_RP)) { ++ struct can_berr_counter bec; ++ ++ __m_can_get_berr_counter(dev, &bec); ++ if (bec.rxerr == 127) { ++ m_can_write(priv, M_CAN_IR, IR_MRAF); ++ irqstatus &= ~IR_MRAF; ++ } ++ } ++ + psr = m_can_read(priv, M_CAN_PSR); + if (irqstatus & IR_ERR_STATE) + work_done += m_can_handle_state_errors(dev, psr); +diff --git a/drivers/net/can/spi/Kconfig b/drivers/net/can/spi/Kconfig +index 8f2e0dd7b756..792e9c6c4a2f 100644 +--- a/drivers/net/can/spi/Kconfig ++++ b/drivers/net/can/spi/Kconfig +@@ -8,9 +8,10 @@ config CAN_HI311X + Driver for the Holt HI311x SPI CAN controllers. + + config CAN_MCP251X +- tristate "Microchip MCP251x SPI CAN controllers" ++ tristate "Microchip MCP251x and MCP25625 SPI CAN controllers" + depends on HAS_DMA + ---help--- +- Driver for the Microchip MCP251x SPI CAN controllers. ++ Driver for the Microchip MCP251x and MCP25625 SPI CAN ++ controllers. 
+ + endmenu +diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c +index e90817608645..da64e71a62ee 100644 +--- a/drivers/net/can/spi/mcp251x.c ++++ b/drivers/net/can/spi/mcp251x.c +@@ -1,5 +1,5 @@ + /* +- * CAN bus driver for Microchip 251x CAN Controller with SPI Interface ++ * CAN bus driver for Microchip 251x/25625 CAN Controller with SPI Interface + * + * MCP2510 support and bug fixes by Christian Pellegrin + * +@@ -41,7 +41,7 @@ + * static struct spi_board_info spi_board_info[] = { + * { + * .modalias = "mcp2510", +- * // or "mcp2515" depending on your controller ++ * // "mcp2515" or "mcp25625" depending on your controller + * .platform_data = &mcp251x_info, + * .irq = IRQ_EINT13, + * .max_speed_hz = 2*1000*1000, +@@ -238,6 +238,7 @@ static const struct can_bittiming_const mcp251x_bittiming_const = { + enum mcp251x_model { + CAN_MCP251X_MCP2510 = 0x2510, + CAN_MCP251X_MCP2515 = 0x2515, ++ CAN_MCP251X_MCP25625 = 0x25625, + }; + + struct mcp251x_priv { +@@ -280,7 +281,6 @@ static inline int mcp251x_is_##_model(struct spi_device *spi) \ + } + + MCP251X_IS(2510); +-MCP251X_IS(2515); + + static void mcp251x_clean(struct net_device *net) + { +@@ -639,7 +639,7 @@ static int mcp251x_hw_reset(struct spi_device *spi) + + /* Wait for oscillator startup timer after reset */ + mdelay(MCP251X_OST_DELAY_MS); +- ++ + reg = mcp251x_read_reg(spi, CANSTAT); + if ((reg & CANCTRL_REQOP_MASK) != CANCTRL_REQOP_CONF) + return -ENODEV; +@@ -820,9 +820,8 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id) + /* receive buffer 0 */ + if (intf & CANINTF_RX0IF) { + mcp251x_hw_rx(spi, 0); +- /* +- * Free one buffer ASAP +- * (The MCP2515 does this automatically.) ++ /* Free one buffer ASAP ++ * (The MCP2515/25625 does this automatically.) + */ + if (mcp251x_is_2510(spi)) + mcp251x_write_bits(spi, CANINTF, CANINTF_RX0IF, 0x00); +@@ -831,7 +830,7 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id) + /* receive buffer 1 */ + if (intf & CANINTF_RX1IF) { + mcp251x_hw_rx(spi, 1); +- /* the MCP2515 does this automatically */ ++ /* The MCP2515/25625 does this automatically. 
*/ + if (mcp251x_is_2510(spi)) + clear_intf |= CANINTF_RX1IF; + } +@@ -1006,6 +1005,10 @@ static const struct of_device_id mcp251x_of_match[] = { + .compatible = "microchip,mcp2515", + .data = (void *)CAN_MCP251X_MCP2515, + }, ++ { ++ .compatible = "microchip,mcp25625", ++ .data = (void *)CAN_MCP251X_MCP25625, ++ }, + { } + }; + MODULE_DEVICE_TABLE(of, mcp251x_of_match); +@@ -1019,6 +1022,10 @@ static const struct spi_device_id mcp251x_id_table[] = { + .name = "mcp2515", + .driver_data = (kernel_ulong_t)CAN_MCP251X_MCP2515, + }, ++ { ++ .name = "mcp25625", ++ .driver_data = (kernel_ulong_t)CAN_MCP251X_MCP25625, ++ }, + { } + }; + MODULE_DEVICE_TABLE(spi, mcp251x_id_table); +@@ -1259,5 +1266,5 @@ module_spi_driver(mcp251x_can_driver); + + MODULE_AUTHOR("Chris Elston , " + "Christian Pellegrin "); +-MODULE_DESCRIPTION("Microchip 251x CAN driver"); ++MODULE_DESCRIPTION("Microchip 251x/25625 CAN driver"); + MODULE_LICENSE("GPL v2"); +diff --git a/drivers/net/dsa/mv88e6xxx/global1_vtu.c b/drivers/net/dsa/mv88e6xxx/global1_vtu.c +index 058326924f3e..7a6667e0b9f9 100644 +--- a/drivers/net/dsa/mv88e6xxx/global1_vtu.c ++++ b/drivers/net/dsa/mv88e6xxx/global1_vtu.c +@@ -419,7 +419,7 @@ int mv88e6185_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip, + * VTU DBNum[7:4] are located in VTU Operation 11:8 + */ + op |= entry->fid & 0x000f; +- op |= (entry->fid & 0x00f0) << 8; ++ op |= (entry->fid & 0x00f0) << 4; + } + + return mv88e6xxx_g1_vtu_op(chip, op); +diff --git a/drivers/net/ethernet/8390/Kconfig b/drivers/net/ethernet/8390/Kconfig +index f2f0264c58ba..443b34e2725f 100644 +--- a/drivers/net/ethernet/8390/Kconfig ++++ b/drivers/net/ethernet/8390/Kconfig +@@ -49,7 +49,7 @@ config XSURF100 + tristate "Amiga XSurf 100 AX88796/NE2000 clone support" + depends on ZORRO + select AX88796 +- select ASIX_PHY ++ select AX88796B_PHY + help + This driver is for the Individual Computers X-Surf 100 Ethernet + card (based on the Asix AX88796 chip). 
If you have such a card, +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c +index 749d0ef44371..59f227fcc68b 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c +@@ -1609,7 +1609,8 @@ static int bnx2x_get_module_info(struct net_device *dev, + } + + if (!sff8472_comp || +- (diag_type & SFP_EEPROM_DIAG_ADDR_CHANGE_REQ)) { ++ (diag_type & SFP_EEPROM_DIAG_ADDR_CHANGE_REQ) || ++ !(diag_type & SFP_EEPROM_DDM_IMPLEMENTED)) { + modinfo->type = ETH_MODULE_SFF_8079; + modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN; + } else { +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h +index b7d251108c19..7115f5025664 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h +@@ -62,6 +62,7 @@ + #define SFP_EEPROM_DIAG_TYPE_ADDR 0x5c + #define SFP_EEPROM_DIAG_TYPE_SIZE 1 + #define SFP_EEPROM_DIAG_ADDR_CHANGE_REQ (1<<2) ++#define SFP_EEPROM_DDM_IMPLEMENTED (1<<6) + #define SFP_EEPROM_SFF_8472_COMP_ADDR 0x5e + #define SFP_EEPROM_SFF_8472_COMP_SIZE 1 + +diff --git a/drivers/net/ethernet/cavium/liquidio/lio_core.c b/drivers/net/ethernet/cavium/liquidio/lio_core.c +index 1c50c10b5a16..d7e805749a5b 100644 +--- a/drivers/net/ethernet/cavium/liquidio/lio_core.c ++++ b/drivers/net/ethernet/cavium/liquidio/lio_core.c +@@ -964,7 +964,7 @@ static void liquidio_schedule_droq_pkt_handlers(struct octeon_device *oct) + + if (droq->ops.poll_mode) { + droq->ops.napi_fn(droq); +- oct_priv->napi_mask |= (1 << oq_no); ++ oct_priv->napi_mask |= BIT_ULL(oq_no); + } else { + tasklet_schedule(&oct_priv->droq_tasklet); + } +diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c +index 3dfb2d131eb7..0e4029c54241 100644 +--- a/drivers/net/ethernet/ibm/ibmvnic.c ++++ b/drivers/net/ethernet/ibm/ibmvnic.c +@@ -438,9 +438,10 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter) + if (rx_pool->buff_size != be64_to_cpu(size_array[i])) { + free_long_term_buff(adapter, &rx_pool->long_term_buff); + rx_pool->buff_size = be64_to_cpu(size_array[i]); +- alloc_long_term_buff(adapter, &rx_pool->long_term_buff, +- rx_pool->size * +- rx_pool->buff_size); ++ rc = alloc_long_term_buff(adapter, ++ &rx_pool->long_term_buff, ++ rx_pool->size * ++ rx_pool->buff_size); + } else { + rc = reset_long_term_buff(adapter, + &rx_pool->long_term_buff); +@@ -706,9 +707,9 @@ static int init_tx_pools(struct net_device *netdev) + return rc; + } + +- init_one_tx_pool(netdev, &adapter->tso_pool[i], +- IBMVNIC_TSO_BUFS, +- IBMVNIC_TSO_BUF_SZ); ++ rc = init_one_tx_pool(netdev, &adapter->tso_pool[i], ++ IBMVNIC_TSO_BUFS, ++ IBMVNIC_TSO_BUF_SZ); + if (rc) { + release_tx_pools(adapter); + return rc; +@@ -1751,7 +1752,8 @@ static int do_reset(struct ibmvnic_adapter *adapter, + + ibmvnic_cleanup(netdev); + +- if (adapter->reset_reason != VNIC_RESET_MOBILITY && ++ if (reset_state == VNIC_OPEN && ++ adapter->reset_reason != VNIC_RESET_MOBILITY && + adapter->reset_reason != VNIC_RESET_FAILOVER) { + rc = __ibmvnic_close(netdev); + if (rc) +@@ -1850,6 +1852,9 @@ static int do_reset(struct ibmvnic_adapter *adapter, + return 0; + } + ++ /* refresh device's multicast list */ ++ ibmvnic_set_multi(netdev); ++ + /* kick napi */ + for (i = 0; i < adapter->req_rx_queues; i++) + napi_schedule(&adapter->napi[i]); +diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h 
b/drivers/net/ethernet/mellanox/mlxsw/reg.h
+index eb4c5e8964cd..5865597577d6 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
+@@ -997,7 +997,7 @@ static inline void mlxsw_reg_spaft_pack(char *payload, u8 local_port,
+ 	MLXSW_REG_ZERO(spaft, payload);
+ 	mlxsw_reg_spaft_local_port_set(payload, local_port);
+ 	mlxsw_reg_spaft_allow_untagged_set(payload, allow_untagged);
+-	mlxsw_reg_spaft_allow_prio_tagged_set(payload, true);
++	mlxsw_reg_spaft_allow_prio_tagged_set(payload, allow_untagged);
+ 	mlxsw_reg_spaft_allow_tagged_set(payload, true);
+ }
+
+diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
+index 520657945b82..b0c13f8c2b62 100644
+--- a/drivers/net/phy/Kconfig
++++ b/drivers/net/phy/Kconfig
+@@ -242,7 +242,7 @@ config AQUANTIA_PHY
+ 	---help---
+ 	  Currently supports the Aquantia AQ1202, AQ2104, AQR105, AQR405
+
+-config ASIX_PHY
++config AX88796B_PHY
+ 	tristate "Asix PHYs"
+ 	help
+ 	  Currently supports the Asix Electronics PHY found in the X-Surf 100
+diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile
+index ece5dae67174..6d44ab91fbf6 100644
+--- a/drivers/net/phy/Makefile
++++ b/drivers/net/phy/Makefile
+@@ -51,7 +51,7 @@ ifdef CONFIG_HWMON
+ aquantia-objs += aquantia_hwmon.o
+ endif
+ obj-$(CONFIG_AQUANTIA_PHY) += aquantia.o
+-obj-$(CONFIG_ASIX_PHY) += asix.o
++obj-$(CONFIG_AX88796B_PHY) += ax88796b.o
+ obj-$(CONFIG_AT803X_PHY) += at803x.o
+ obj-$(CONFIG_BCM63XX_PHY) += bcm63xx.o
+ obj-$(CONFIG_BCM7XXX_PHY) += bcm7xxx.o
+diff --git a/drivers/net/phy/asix.c b/drivers/net/phy/asix.c
+deleted file mode 100644
+index f14ba5366b91..000000000000
+--- a/drivers/net/phy/asix.c
++++ /dev/null
+@@ -1,57 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/* Driver for Asix PHYs
+- *
+- * Author: Michael Schmitz
+- */
+-#include
+-#include
+-#include
+-#include
+-#include
+-#include
+-
+-#define PHY_ID_ASIX_AX88796B 0x003b1841
+-
+-MODULE_DESCRIPTION("Asix PHY driver");
+-MODULE_AUTHOR("Michael Schmitz ");
+-MODULE_LICENSE("GPL");
+-
+-/**
+- * asix_soft_reset - software reset the PHY via BMCR_RESET bit
+- * @phydev: target phy_device struct
+- *
+- * Description: Perform a software PHY reset using the standard
+- * BMCR_RESET bit and poll for the reset bit to be cleared.
+- * Toggle BMCR_RESET bit off to accommodate broken AX8796B PHY implementation
+- * such as used on the Individual Computers' X-Surf 100 Zorro card.
+- * +- * Returns: 0 on success, < 0 on failure +- */ +-static int asix_soft_reset(struct phy_device *phydev) +-{ +- int ret; +- +- /* Asix PHY won't reset unless reset bit toggles */ +- ret = phy_write(phydev, MII_BMCR, 0); +- if (ret < 0) +- return ret; +- +- return genphy_soft_reset(phydev); +-} +- +-static struct phy_driver asix_driver[] = { { +- .phy_id = PHY_ID_ASIX_AX88796B, +- .name = "Asix Electronics AX88796B", +- .phy_id_mask = 0xfffffff0, +- .features = PHY_BASIC_FEATURES, +- .soft_reset = asix_soft_reset, +-} }; +- +-module_phy_driver(asix_driver); +- +-static struct mdio_device_id __maybe_unused asix_tbl[] = { +- { PHY_ID_ASIX_AX88796B, 0xfffffff0 }, +- { } +-}; +- +-MODULE_DEVICE_TABLE(mdio, asix_tbl); +diff --git a/drivers/net/phy/ax88796b.c b/drivers/net/phy/ax88796b.c +new file mode 100644 +index 000000000000..f14ba5366b91 +--- /dev/null ++++ b/drivers/net/phy/ax88796b.c +@@ -0,0 +1,57 @@ ++// SPDX-License-Identifier: GPL-2.0+ ++/* Driver for Asix PHYs ++ * ++ * Author: Michael Schmitz ++ */ ++#include ++#include ++#include ++#include ++#include ++#include ++ ++#define PHY_ID_ASIX_AX88796B 0x003b1841 ++ ++MODULE_DESCRIPTION("Asix PHY driver"); ++MODULE_AUTHOR("Michael Schmitz "); ++MODULE_LICENSE("GPL"); ++ ++/** ++ * asix_soft_reset - software reset the PHY via BMCR_RESET bit ++ * @phydev: target phy_device struct ++ * ++ * Description: Perform a software PHY reset using the standard ++ * BMCR_RESET bit and poll for the reset bit to be cleared. ++ * Toggle BMCR_RESET bit off to accommodate broken AX8796B PHY implementation ++ * such as used on the Individual Computers' X-Surf 100 Zorro card. ++ * ++ * Returns: 0 on success, < 0 on failure ++ */ ++static int asix_soft_reset(struct phy_device *phydev) ++{ ++ int ret; ++ ++ /* Asix PHY won't reset unless reset bit toggles */ ++ ret = phy_write(phydev, MII_BMCR, 0); ++ if (ret < 0) ++ return ret; ++ ++ return genphy_soft_reset(phydev); ++} ++ ++static struct phy_driver asix_driver[] = { { ++ .phy_id = PHY_ID_ASIX_AX88796B, ++ .name = "Asix Electronics AX88796B", ++ .phy_id_mask = 0xfffffff0, ++ .features = PHY_BASIC_FEATURES, ++ .soft_reset = asix_soft_reset, ++} }; ++ ++module_phy_driver(asix_driver); ++ ++static struct mdio_device_id __maybe_unused asix_tbl[] = { ++ { PHY_ID_ASIX_AX88796B, 0xfffffff0 }, ++ { } ++}; ++ ++MODULE_DEVICE_TABLE(mdio, asix_tbl); +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index e657d8947125..128c8a327d8e 100644 +--- a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -153,7 +153,7 @@ static bool qmimux_has_slaves(struct usbnet *dev) + + static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb) + { +- unsigned int len, offset = 0; ++ unsigned int len, offset = 0, pad_len, pkt_len; + struct qmimux_hdr *hdr; + struct net_device *net; + struct sk_buff *skbn; +@@ -171,10 +171,16 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb) + if (hdr->pad & 0x80) + goto skip; + ++ /* extract padding length and check for valid length info */ ++ pad_len = hdr->pad & 0x3f; ++ if (len == 0 || pad_len >= len) ++ goto skip; ++ pkt_len = len - pad_len; ++ + net = qmimux_find_dev(dev, hdr->mux_id); + if (!net) + goto skip; +- skbn = netdev_alloc_skb(net, len); ++ skbn = netdev_alloc_skb(net, pkt_len); + if (!skbn) + return 0; + skbn->dev = net; +@@ -191,7 +197,7 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb) + goto skip; + } + +- skb_put_data(skbn, skb->data + offset + qmimux_hdr_sz, len); ++ skb_put_data(skbn, skb->data + 
offset + qmimux_hdr_sz, pkt_len);
+ 	if (netif_rx(skbn) != NET_RX_SUCCESS)
+ 		return 0;
+
+@@ -241,13 +247,14 @@ out_free_newdev:
+ 	return err;
+ }
+
+-static void qmimux_unregister_device(struct net_device *dev)
++static void qmimux_unregister_device(struct net_device *dev,
++				     struct list_head *head)
+ {
+ 	struct qmimux_priv *priv = netdev_priv(dev);
+ 	struct net_device *real_dev = priv->real_dev;
+
+ 	netdev_upper_dev_unlink(real_dev, dev);
+-	unregister_netdevice(dev);
++	unregister_netdevice_queue(dev, head);
+
+ 	/* Get rid of the reference to real_dev */
+ 	dev_put(real_dev);
+@@ -356,8 +363,8 @@ static ssize_t add_mux_store(struct device *d, struct device_attribute *attr, c
+ 	if (kstrtou8(buf, 0, &mux_id))
+ 		return -EINVAL;
+
+-	/* mux_id [1 - 0x7f] range empirically found */
+-	if (mux_id < 1 || mux_id > 0x7f)
++	/* mux_id [1 - 254] for compatibility with ip(8) and the rmnet driver */
++	if (mux_id < 1 || mux_id > 254)
+ 		return -EINVAL;
+
+ 	if (!rtnl_trylock())
+@@ -418,7 +425,7 @@ static ssize_t del_mux_store(struct device *d, struct device_attribute *attr, c
+ 		ret = -EINVAL;
+ 		goto err;
+ 	}
+-	qmimux_unregister_device(del_dev);
++	qmimux_unregister_device(del_dev, NULL);
+
+ 	if (!qmimux_has_slaves(dev))
+ 		info->flags &= ~QMI_WWAN_FLAG_MUX;
+@@ -1428,6 +1435,7 @@ static void qmi_wwan_disconnect(struct usb_interface *intf)
+ 	struct qmi_wwan_state *info;
+ 	struct list_head *iter;
+ 	struct net_device *ldev;
++	LIST_HEAD(list);
+
+ 	/* called twice if separate control and data intf */
+ 	if (!dev)
+@@ -1440,8 +1448,9 @@ static void qmi_wwan_disconnect(struct usb_interface *intf)
+ 	}
+ 	rcu_read_lock();
+ 	netdev_for_each_upper_dev_rcu(dev->net, ldev, iter)
+-		qmimux_unregister_device(ldev);
++		qmimux_unregister_device(ldev, &list);
+ 	rcu_read_unlock();
++	unregister_netdevice_many(&list);
+ 	rtnl_unlock();
+ 	info->flags &= ~QMI_WWAN_FLAG_MUX;
+ }
+diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c
+index e7c3f3b8457d..99f1897a775d 100644
+--- a/drivers/net/wireless/ath/carl9170/usb.c
++++ b/drivers/net/wireless/ath/carl9170/usb.c
+@@ -128,6 +128,8 @@ static const struct usb_device_id carl9170_usb_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(usb, carl9170_usb_ids);
+
++static struct usb_driver carl9170_driver;
++
+ static void carl9170_usb_submit_data_urb(struct ar9170 *ar)
+ {
+ 	struct urb *urb;
+@@ -966,32 +968,28 @@ err_out:
+
+ static void carl9170_usb_firmware_failed(struct ar9170 *ar)
+ {
+-	struct device *parent = ar->udev->dev.parent;
+-	struct usb_device *udev;
+-
+-	/*
+-	 * Store a copy of the usb_device pointer locally.
+-	 * This is because device_release_driver initiates
+-	 * carl9170_usb_disconnect, which in turn frees our
+-	 * driver context (ar).
++	/* Store copies of the usb_interface and usb_device pointers locally.
++	 * This is because release_driver initiates carl9170_usb_disconnect,
++	 * which in turn frees our driver context (ar).
+ 	 */
+-	udev = ar->udev;
++	struct usb_interface *intf = ar->intf;
++	struct usb_device *udev = ar->udev;
+
+ 	complete(&ar->fw_load_wait);
++	/* at this point 'ar' could already be freed.
Don't use it anymore */ ++ ar = NULL; + + /* unbind anything failed */ +- if (parent) +- device_lock(parent); +- +- device_release_driver(&udev->dev); +- if (parent) +- device_unlock(parent); ++ usb_lock_device(udev); ++ usb_driver_release_interface(&carl9170_driver, intf); ++ usb_unlock_device(udev); + +- usb_put_dev(udev); ++ usb_put_intf(intf); + } + + static void carl9170_usb_firmware_finish(struct ar9170 *ar) + { ++ struct usb_interface *intf = ar->intf; + int err; + + err = carl9170_parse_firmware(ar); +@@ -1009,7 +1007,7 @@ static void carl9170_usb_firmware_finish(struct ar9170 *ar) + goto err_unrx; + + complete(&ar->fw_load_wait); +- usb_put_dev(ar->udev); ++ usb_put_intf(intf); + return; + + err_unrx: +@@ -1052,7 +1050,6 @@ static int carl9170_usb_probe(struct usb_interface *intf, + return PTR_ERR(ar); + + udev = interface_to_usbdev(intf); +- usb_get_dev(udev); + ar->udev = udev; + ar->intf = intf; + ar->features = id->driver_info; +@@ -1094,15 +1091,14 @@ static int carl9170_usb_probe(struct usb_interface *intf, + atomic_set(&ar->rx_anch_urbs, 0); + atomic_set(&ar->rx_pool_urbs, 0); + +- usb_get_dev(ar->udev); ++ usb_get_intf(intf); + + carl9170_set_state(ar, CARL9170_STOPPED); + + err = request_firmware_nowait(THIS_MODULE, 1, CARL9170FW_NAME, + &ar->udev->dev, GFP_KERNEL, ar, carl9170_usb_firmware_step2); + if (err) { +- usb_put_dev(udev); +- usb_put_dev(udev); ++ usb_put_intf(intf); + carl9170_free(ar); + } + return err; +@@ -1131,7 +1127,6 @@ static void carl9170_usb_disconnect(struct usb_interface *intf) + + carl9170_release_firmware(ar); + carl9170_free(ar); +- usb_put_dev(udev); + } + + #ifdef CONFIG_PM +diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c +index 689a65b11cc3..4fd1737d768b 100644 +--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c ++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c +@@ -1579,7 +1579,6 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context) + goto free; + + out_free_fw: +- iwl_dealloc_ucode(drv); + release_firmware(ucode_raw); + out_unbind: + complete(&drv->request_firmware_complete); +diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h +index 1af9f9e1ecd4..0c0799d13e15 100644 +--- a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h ++++ b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h +@@ -402,7 +402,12 @@ enum aux_misc_master1_en { + #define AUX_MISC_MASTER1_SMPHR_STATUS 0xA20800 + #define RSA_ENABLE 0xA24B08 + #define PREG_AUX_BUS_WPROT_0 0xA04CC0 +-#define PREG_PRPH_WPROT_0 0xA04CE0 ++ ++/* device family 9000 WPROT register */ ++#define PREG_PRPH_WPROT_9000 0xA04CE0 ++/* device family 22000 WPROT register */ ++#define PREG_PRPH_WPROT_22000 0xA04D00 ++ + #define SB_CPU_1_STATUS 0xA01E30 + #define SB_CPU_2_STATUS 0xA01E34 + #define UMAG_SB_CPU_1_STATUS 0xA038C0 +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +index ab68b5d53ec9..153717587aeb 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +@@ -311,6 +311,8 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm, + int ret; + enum iwl_ucode_type old_type = mvm->fwrt.cur_fw_img; + static const u16 alive_cmd[] = { MVM_ALIVE }; ++ bool run_in_rfkill = ++ ucode_type == IWL_UCODE_INIT || iwl_mvm_has_unified_ucode(mvm); + + if (ucode_type == IWL_UCODE_REGULAR && + iwl_fw_dbg_conf_usniffer(mvm->fw, FW_DBG_START_FROM_ALIVE) && +@@ -328,7 +330,12 @@ 
static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm, + alive_cmd, ARRAY_SIZE(alive_cmd), + iwl_alive_fn, &alive_data); + +- ret = iwl_trans_start_fw(mvm->trans, fw, ucode_type == IWL_UCODE_INIT); ++ /* ++ * We want to load the INIT firmware even in RFKILL ++ * For the unified firmware case, the ucode_type is not ++ * INIT, but we still need to run it. ++ */ ++ ret = iwl_trans_start_fw(mvm->trans, fw, run_in_rfkill); + if (ret) { + iwl_fw_set_current_image(&mvm->fwrt, old_type); + iwl_remove_notification(&mvm->notif_wait, &alive_wait); +@@ -433,7 +440,8 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm) + * commands + */ + ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(SYSTEM_GROUP, +- INIT_EXTENDED_CFG_CMD), 0, ++ INIT_EXTENDED_CFG_CMD), ++ CMD_SEND_IN_RFKILL, + sizeof(init_cfg), &init_cfg); + if (ret) { + IWL_ERR(mvm, "Failed to run init config command: %d\n", +@@ -457,7 +465,8 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm) + } + + ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(REGULATORY_AND_NVM_GROUP, +- NVM_ACCESS_COMPLETE), 0, ++ NVM_ACCESS_COMPLETE), ++ CMD_SEND_IN_RFKILL, + sizeof(nvm_complete), &nvm_complete); + if (ret) { + IWL_ERR(mvm, "Failed to run complete NVM access: %d\n", +@@ -482,6 +491,8 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm) + } + } + ++ mvm->rfkill_safe_init_done = true; ++ + return 0; + + error: +@@ -526,7 +537,7 @@ int iwl_run_init_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm) + + lockdep_assert_held(&mvm->mutex); + +- if (WARN_ON_ONCE(mvm->calibrating)) ++ if (WARN_ON_ONCE(mvm->rfkill_safe_init_done)) + return 0; + + iwl_init_notification_wait(&mvm->notif_wait, +@@ -576,7 +587,7 @@ int iwl_run_init_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm) + goto remove_notif; + } + +- mvm->calibrating = true; ++ mvm->rfkill_safe_init_done = true; + + /* Send TX valid antennas before triggering calibrations */ + ret = iwl_send_tx_ant_cfg(mvm, iwl_mvm_get_valid_tx_ant(mvm)); +@@ -612,7 +623,7 @@ int iwl_run_init_mvm_ucode(struct iwl_mvm *mvm, bool read_nvm) + remove_notif: + iwl_remove_notification(&mvm->notif_wait, &calib_wait); + out: +- mvm->calibrating = false; ++ mvm->rfkill_safe_init_done = false; + if (iwlmvm_mod_params.init_dbg && !mvm->nvm_data) { + /* we want to debug INIT and we have no NVM - fake */ + mvm->nvm_data = kzalloc(sizeof(struct iwl_nvm_data) + +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c +index 6a3b11dd2edf..4ddf620c267d 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c +@@ -1211,7 +1211,7 @@ static void iwl_mvm_restart_cleanup(struct iwl_mvm *mvm) + + mvm->scan_status = 0; + mvm->ps_disabled = false; +- mvm->calibrating = false; ++ mvm->rfkill_safe_init_done = false; + + /* just in case one was running */ + iwl_mvm_cleanup_roc_te(mvm); +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +index a50dc53df086..b698d55ace1b 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h +@@ -877,7 +877,7 @@ struct iwl_mvm { + struct iwl_mvm_vif *bf_allowed_vif; + + bool hw_registered; +- bool calibrating; ++ bool rfkill_safe_init_done; + bool support_umac_log; + + u32 ampdu_ref; +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c +index 13681b03c10e..20115770e75a 100644 +--- 
a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c +@@ -1222,7 +1222,8 @@ void iwl_mvm_set_hw_ctkill_state(struct iwl_mvm *mvm, bool state) + static bool iwl_mvm_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state) + { + struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); +- bool calibrating = READ_ONCE(mvm->calibrating); ++ bool rfkill_safe_init_done = READ_ONCE(mvm->rfkill_safe_init_done); ++ bool unified = iwl_mvm_has_unified_ucode(mvm); + + if (state) + set_bit(IWL_MVM_STATUS_HW_RFKILL, &mvm->status); +@@ -1231,15 +1232,23 @@ static bool iwl_mvm_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state) + + iwl_mvm_set_rfkill_state(mvm); + +- /* iwl_run_init_mvm_ucode is waiting for results, abort it */ +- if (calibrating) ++ /* iwl_run_init_mvm_ucode is waiting for results, abort it. */ ++ if (rfkill_safe_init_done) + iwl_abort_notification_waits(&mvm->notif_wait); + ++ /* ++ * Don't ask the transport to stop the firmware. We'll do it ++ * after cfg80211 takes us down. ++ */ ++ if (unified) ++ return false; ++ + /* + * Stop the device if we run OPERATIONAL firmware or if we are in the + * middle of the calibrations. + */ +- return state && (mvm->fwrt.cur_fw_img != IWL_UCODE_INIT || calibrating); ++ return state && (mvm->fwrt.cur_fw_img != IWL_UCODE_INIT || ++ rfkill_safe_init_done); + } + + static void iwl_mvm_free_skb(struct iwl_op_mode *op_mode, struct sk_buff *skb) +diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h +index 59213164f35e..2afce5c41322 100644 +--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h ++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h +@@ -948,7 +948,7 @@ static inline void iwl_enable_rfkill_int(struct iwl_trans *trans) + MSIX_HW_INT_CAUSES_REG_RF_KILL); + } + +- if (trans->cfg->device_family == IWL_DEVICE_FAMILY_9000) { ++ if (trans->cfg->device_family >= IWL_DEVICE_FAMILY_9000) { + /* + * On 9000-series devices this bit isn't enabled by default, so + * when we power down the device we need set the bit to allow it +diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c +index c4375b868901..80695584e406 100644 +--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c ++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c +@@ -1696,26 +1696,26 @@ static int iwl_pcie_init_msix_handler(struct pci_dev *pdev, + return 0; + } + +-static int _iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power) ++static int iwl_trans_pcie_clear_persistence_bit(struct iwl_trans *trans) + { +- struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); +- u32 hpm; +- int err; +- +- lockdep_assert_held(&trans_pcie->mutex); ++ u32 hpm, wprot; + +- err = iwl_pcie_prepare_card_hw(trans); +- if (err) { +- IWL_ERR(trans, "Error while preparing HW: %d\n", err); +- return err; ++ switch (trans->cfg->device_family) { ++ case IWL_DEVICE_FAMILY_9000: ++ wprot = PREG_PRPH_WPROT_9000; ++ break; ++ case IWL_DEVICE_FAMILY_22000: ++ wprot = PREG_PRPH_WPROT_22000; ++ break; ++ default: ++ return 0; + } + + hpm = iwl_read_umac_prph_no_grab(trans, HPM_DEBUG); + if (hpm != 0xa5a5a5a0 && (hpm & PERSISTENCE_BIT)) { +- int wfpm_val = iwl_read_umac_prph_no_grab(trans, +- PREG_PRPH_WPROT_0); ++ u32 wprot_val = iwl_read_umac_prph_no_grab(trans, wprot); + +- if (wfpm_val & PREG_WFPM_ACCESS) { ++ if (wprot_val & PREG_WFPM_ACCESS) { + IWL_ERR(trans, + "Error, can not clear persistence bit\n"); + return 
-EPERM; +@@ -1724,6 +1724,26 @@ static int _iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power) + hpm & ~PERSISTENCE_BIT); + } + ++ return 0; ++} ++ ++static int _iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power) ++{ ++ struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); ++ int err; ++ ++ lockdep_assert_held(&trans_pcie->mutex); ++ ++ err = iwl_pcie_prepare_card_hw(trans); ++ if (err) { ++ IWL_ERR(trans, "Error while preparing HW: %d\n", err); ++ return err; ++ } ++ ++ err = iwl_trans_pcie_clear_persistence_bit(trans); ++ if (err) ++ return err; ++ + iwl_trans_pcie_sw_reset(trans); + + err = iwl_pcie_apm_init(trans); +@@ -3565,7 +3585,9 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev, + } + } else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) == + CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) && +- (trans->cfg != &iwl_ax200_cfg_cc || ++ ((trans->cfg != &iwl_ax200_cfg_cc && ++ trans->cfg != &killer1650x_2ax_cfg && ++ trans->cfg != &killer1650w_2ax_cfg) || + trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0)) { + u32 hw_status; + +diff --git a/drivers/net/wireless/intersil/p54/p54usb.c b/drivers/net/wireless/intersil/p54/p54usb.c +index b0b86f701061..15661da6eedc 100644 +--- a/drivers/net/wireless/intersil/p54/p54usb.c ++++ b/drivers/net/wireless/intersil/p54/p54usb.c +@@ -33,6 +33,8 @@ MODULE_ALIAS("prism54usb"); + MODULE_FIRMWARE("isl3886usb"); + MODULE_FIRMWARE("isl3887usb"); + ++static struct usb_driver p54u_driver; ++ + /* + * Note: + * +@@ -921,9 +923,9 @@ static void p54u_load_firmware_cb(const struct firmware *firmware, + { + struct p54u_priv *priv = context; + struct usb_device *udev = priv->udev; ++ struct usb_interface *intf = priv->intf; + int err; + +- complete(&priv->fw_wait_load); + if (firmware) { + priv->fw = firmware; + err = p54u_start_ops(priv); +@@ -932,26 +934,22 @@ static void p54u_load_firmware_cb(const struct firmware *firmware, + dev_err(&udev->dev, "Firmware not found.\n"); + } + +- if (err) { +- struct device *parent = priv->udev->dev.parent; +- +- dev_err(&udev->dev, "failed to initialize device (%d)\n", err); +- +- if (parent) +- device_lock(parent); ++ complete(&priv->fw_wait_load); ++ /* ++ * At this point p54u_disconnect may have already freed ++ * the "priv" context. Do not use it anymore! ++ */ ++ priv = NULL; + +- device_release_driver(&udev->dev); +- /* +- * At this point p54u_disconnect has already freed +- * the "priv" context. Do not use it anymore! 
+- */ +- priv = NULL; ++ if (err) { ++ dev_err(&intf->dev, "failed to initialize device (%d)\n", err); + +- if (parent) +- device_unlock(parent); ++ usb_lock_device(udev); ++ usb_driver_release_interface(&p54u_driver, intf); ++ usb_unlock_device(udev); + } + +- usb_put_dev(udev); ++ usb_put_intf(intf); + } + + static int p54u_load_firmware(struct ieee80211_hw *dev, +@@ -972,14 +970,14 @@ static int p54u_load_firmware(struct ieee80211_hw *dev, + dev_info(&priv->udev->dev, "Loading firmware file %s\n", + p54u_fwlist[i].fw); + +- usb_get_dev(udev); ++ usb_get_intf(intf); + err = request_firmware_nowait(THIS_MODULE, 1, p54u_fwlist[i].fw, + device, GFP_KERNEL, priv, + p54u_load_firmware_cb); + if (err) { + dev_err(&priv->udev->dev, "(p54usb) cannot load firmware %s " + "(%d)!\n", p54u_fwlist[i].fw, err); +- usb_put_dev(udev); ++ usb_put_intf(intf); + } + + return err; +@@ -1011,8 +1009,6 @@ static int p54u_probe(struct usb_interface *intf, + skb_queue_head_init(&priv->rx_queue); + init_usb_anchor(&priv->submitted); + +- usb_get_dev(udev); +- + /* really lazy and simple way of figuring out if we're a 3887 */ + /* TODO: should just stick the identification in the device table */ + i = intf->altsetting->desc.bNumEndpoints; +@@ -1053,10 +1049,8 @@ static int p54u_probe(struct usb_interface *intf, + priv->upload_fw = p54u_upload_firmware_net2280; + } + err = p54u_load_firmware(dev, intf); +- if (err) { +- usb_put_dev(udev); ++ if (err) + p54_free_common(dev); +- } + return err; + } + +@@ -1072,7 +1066,6 @@ static void p54u_disconnect(struct usb_interface *intf) + wait_for_completion(&priv->fw_wait_load); + p54_unregister_common(dev); + +- usb_put_dev(interface_to_usbdev(intf)); + release_firmware(priv->fw); + p54_free_common(dev); + } +diff --git a/drivers/net/wireless/intersil/p54/txrx.c b/drivers/net/wireless/intersil/p54/txrx.c +index 790784568ad2..5bf1c19ecced 100644 +--- a/drivers/net/wireless/intersil/p54/txrx.c ++++ b/drivers/net/wireless/intersil/p54/txrx.c +@@ -142,7 +142,10 @@ static int p54_assign_address(struct p54_common *priv, struct sk_buff *skb) + unlikely(GET_HW_QUEUE(skb) == P54_QUEUE_BEACON)) + priv->beacon_req_id = data->req_id; + +- __skb_queue_after(&priv->tx_queue, target_skb, skb); ++ if (target_skb) ++ __skb_queue_after(&priv->tx_queue, target_skb, skb); ++ else ++ __skb_queue_head(&priv->tx_queue, skb); + spin_unlock_irqrestore(&priv->tx_queue.lock, flags); + return 0; + } +diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h +index b73f99dc5a72..1fb76d2f5d3f 100644 +--- a/drivers/net/wireless/marvell/mwifiex/fw.h ++++ b/drivers/net/wireless/marvell/mwifiex/fw.h +@@ -1759,9 +1759,10 @@ struct mwifiex_ie_types_wmm_queue_status { + struct ieee_types_vendor_header { + u8 element_id; + u8 len; +- u8 oui[4]; /* 0~2: oui, 3: oui_type */ +- u8 oui_subtype; +- u8 version; ++ struct { ++ u8 oui[3]; ++ u8 oui_type; ++ } __packed oui; + } __packed; + + struct ieee_types_wmm_parameter { +@@ -1775,6 +1776,9 @@ struct ieee_types_wmm_parameter { + * Version [1] + */ + struct ieee_types_vendor_header vend_hdr; ++ u8 oui_subtype; ++ u8 version; ++ + u8 qos_info_bitmap; + u8 reserved; + struct ieee_types_wmm_ac_parameters ac_params[IEEE80211_NUM_ACS]; +@@ -1792,6 +1796,8 @@ struct ieee_types_wmm_info { + * Version [1] + */ + struct ieee_types_vendor_header vend_hdr; ++ u8 oui_subtype; ++ u8 version; + + u8 qos_info_bitmap; + } __packed; +diff --git a/drivers/net/wireless/marvell/mwifiex/ie.c b/drivers/net/wireless/marvell/mwifiex/ie.c +index 
6845eb57b39a..653d347a9a19 100644 +--- a/drivers/net/wireless/marvell/mwifiex/ie.c ++++ b/drivers/net/wireless/marvell/mwifiex/ie.c +@@ -329,6 +329,8 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv, + struct ieee80211_vendor_ie *vendorhdr; + u16 gen_idx = MWIFIEX_AUTO_IDX_MASK, ie_len = 0; + int left_len, parsed_len = 0; ++ unsigned int token_len; ++ int err = 0; + + if (!info->tail || !info->tail_len) + return 0; +@@ -344,6 +346,12 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv, + */ + while (left_len > sizeof(struct ieee_types_header)) { + hdr = (void *)(info->tail + parsed_len); ++ token_len = hdr->len + sizeof(struct ieee_types_header); ++ if (token_len > left_len) { ++ err = -EINVAL; ++ goto out; ++ } ++ + switch (hdr->element_id) { + case WLAN_EID_SSID: + case WLAN_EID_SUPP_RATES: +@@ -361,17 +369,20 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv, + if (cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT, + WLAN_OUI_TYPE_MICROSOFT_WMM, + (const u8 *)hdr, +- hdr->len + sizeof(struct ieee_types_header))) ++ token_len)) + break; + /* fall through */ + default: +- memcpy(gen_ie->ie_buffer + ie_len, hdr, +- hdr->len + sizeof(struct ieee_types_header)); +- ie_len += hdr->len + sizeof(struct ieee_types_header); ++ if (ie_len + token_len > IEEE_MAX_IE_SIZE) { ++ err = -EINVAL; ++ goto out; ++ } ++ memcpy(gen_ie->ie_buffer + ie_len, hdr, token_len); ++ ie_len += token_len; + break; + } +- left_len -= hdr->len + sizeof(struct ieee_types_header); +- parsed_len += hdr->len + sizeof(struct ieee_types_header); ++ left_len -= token_len; ++ parsed_len += token_len; + } + + /* parse only WPA vendor IE from tail, WMM IE is configured by +@@ -381,15 +392,17 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv, + WLAN_OUI_TYPE_MICROSOFT_WPA, + info->tail, info->tail_len); + if (vendorhdr) { +- memcpy(gen_ie->ie_buffer + ie_len, vendorhdr, +- vendorhdr->len + sizeof(struct ieee_types_header)); +- ie_len += vendorhdr->len + sizeof(struct ieee_types_header); ++ token_len = vendorhdr->len + sizeof(struct ieee_types_header); ++ if (ie_len + token_len > IEEE_MAX_IE_SIZE) { ++ err = -EINVAL; ++ goto out; ++ } ++ memcpy(gen_ie->ie_buffer + ie_len, vendorhdr, token_len); ++ ie_len += token_len; + } + +- if (!ie_len) { +- kfree(gen_ie); +- return 0; +- } ++ if (!ie_len) ++ goto out; + + gen_ie->ie_index = cpu_to_le16(gen_idx); + gen_ie->mgmt_subtype_mask = cpu_to_le16(MGMT_MASK_BEACON | +@@ -399,13 +412,15 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv, + + if (mwifiex_update_uap_custom_ie(priv, gen_ie, &gen_idx, NULL, NULL, + NULL, NULL)) { +- kfree(gen_ie); +- return -1; ++ err = -EINVAL; ++ goto out; + } + + priv->gen_idx = gen_idx; ++ ++ out: + kfree(gen_ie); +- return 0; ++ return err; + } + + /* This function parses different IEs-head & tail IEs, beacon IEs, +diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c +index 935778ec9a1b..e2786ab612ca 100644 +--- a/drivers/net/wireless/marvell/mwifiex/scan.c ++++ b/drivers/net/wireless/marvell/mwifiex/scan.c +@@ -1247,6 +1247,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, + } + switch (element_id) { + case WLAN_EID_SSID: ++ if (element_len > IEEE80211_MAX_SSID_LEN) ++ return -EINVAL; + bss_entry->ssid.ssid_len = element_len; + memcpy(bss_entry->ssid.ssid, (current_ptr + 2), + element_len); +@@ -1256,6 +1258,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, + break; + 
+ case WLAN_EID_SUPP_RATES: ++ if (element_len > MWIFIEX_SUPPORTED_RATES) ++ return -EINVAL; + memcpy(bss_entry->data_rates, current_ptr + 2, + element_len); + memcpy(bss_entry->supported_rates, current_ptr + 2, +@@ -1265,6 +1269,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, + break; + + case WLAN_EID_FH_PARAMS: ++ if (element_len + 2 < sizeof(*fh_param_set)) ++ return -EINVAL; + fh_param_set = + (struct ieee_types_fh_param_set *) current_ptr; + memcpy(&bss_entry->phy_param_set.fh_param_set, +@@ -1273,6 +1279,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, + break; + + case WLAN_EID_DS_PARAMS: ++ if (element_len + 2 < sizeof(*ds_param_set)) ++ return -EINVAL; + ds_param_set = + (struct ieee_types_ds_param_set *) current_ptr; + +@@ -1284,6 +1292,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, + break; + + case WLAN_EID_CF_PARAMS: ++ if (element_len + 2 < sizeof(*cf_param_set)) ++ return -EINVAL; + cf_param_set = + (struct ieee_types_cf_param_set *) current_ptr; + memcpy(&bss_entry->ss_param_set.cf_param_set, +@@ -1292,6 +1302,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, + break; + + case WLAN_EID_IBSS_PARAMS: ++ if (element_len + 2 < sizeof(*ibss_param_set)) ++ return -EINVAL; + ibss_param_set = + (struct ieee_types_ibss_param_set *) + current_ptr; +@@ -1301,10 +1313,14 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, + break; + + case WLAN_EID_ERP_INFO: ++ if (!element_len) ++ return -EINVAL; + bss_entry->erp_flags = *(current_ptr + 2); + break; + + case WLAN_EID_PWR_CONSTRAINT: ++ if (!element_len) ++ return -EINVAL; + bss_entry->local_constraint = *(current_ptr + 2); + bss_entry->sensed_11h = true; + break; +@@ -1348,15 +1364,22 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, + vendor_ie = (struct ieee_types_vendor_specific *) + current_ptr; + +- if (!memcmp +- (vendor_ie->vend_hdr.oui, wpa_oui, +- sizeof(wpa_oui))) { ++ /* 802.11 requires at least 3-byte OUI. */ ++ if (element_len < sizeof(vendor_ie->vend_hdr.oui.oui)) ++ return -EINVAL; ++ ++ /* Not long enough for a match? Skip it. 
*/ ++ if (element_len < sizeof(wpa_oui)) ++ break; ++ ++ if (!memcmp(&vendor_ie->vend_hdr.oui, wpa_oui, ++ sizeof(wpa_oui))) { + bss_entry->bcn_wpa_ie = + (struct ieee_types_vendor_specific *) + current_ptr; + bss_entry->wpa_offset = (u16) + (current_ptr - bss_entry->beacon_buf); +- } else if (!memcmp(vendor_ie->vend_hdr.oui, wmm_oui, ++ } else if (!memcmp(&vendor_ie->vend_hdr.oui, wmm_oui, + sizeof(wmm_oui))) { + if (total_ie_len == + sizeof(struct ieee_types_wmm_parameter) || +diff --git a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c +index ebc0e41e5d3b..74e50566db1f 100644 +--- a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c ++++ b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c +@@ -1351,7 +1351,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr, + /* Test to see if it is a WPA IE, if not, then + * it is a gen IE + */ +- if (!memcmp(pvendor_ie->oui, wpa_oui, ++ if (!memcmp(&pvendor_ie->oui, wpa_oui, + sizeof(wpa_oui))) { + /* IE is a WPA/WPA2 IE so call set_wpa function + */ +@@ -1361,7 +1361,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr, + goto next_ie; + } + +- if (!memcmp(pvendor_ie->oui, wps_oui, ++ if (!memcmp(&pvendor_ie->oui, wps_oui, + sizeof(wps_oui))) { + /* Test to see if it is a WPS IE, + * if so, enable wps session flag +diff --git a/drivers/net/wireless/marvell/mwifiex/wmm.c b/drivers/net/wireless/marvell/mwifiex/wmm.c +index 407b9932ca4d..64916ba15df5 100644 +--- a/drivers/net/wireless/marvell/mwifiex/wmm.c ++++ b/drivers/net/wireless/marvell/mwifiex/wmm.c +@@ -240,7 +240,7 @@ mwifiex_wmm_setup_queue_priorities(struct mwifiex_private *priv, + mwifiex_dbg(priv->adapter, INFO, + "info: WMM Parameter IE: version=%d,\t" + "qos_info Parameter Set Count=%d, Reserved=%#x\n", +- wmm_ie->vend_hdr.version, wmm_ie->qos_info_bitmap & ++ wmm_ie->version, wmm_ie->qos_info_bitmap & + IEEE80211_WMM_IE_AP_QOSINFO_PARAM_SET_CNT_MASK, + wmm_ie->reserved); + +diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c +index e5db9a9954dc..a6ff7be0210a 100644 +--- a/drivers/scsi/qedi/qedi_main.c ++++ b/drivers/scsi/qedi/qedi_main.c +@@ -990,6 +990,9 @@ static int qedi_find_boot_info(struct qedi_ctx *qedi, + if (!iscsi_is_session_online(cls_sess)) + continue; + ++ if (!sess->targetname) ++ continue; ++ + if (pri_ctrl_flags) { + if (!strcmp(pri_tgt->iscsi_name, sess->targetname) && + !strcmp(pri_tgt->ip_addr, ep_ip_addr)) { +diff --git a/drivers/soc/bcm/brcmstb/biuctrl.c b/drivers/soc/bcm/brcmstb/biuctrl.c +index 6d89ebf13b8a..20b63bee5b09 100644 +--- a/drivers/soc/bcm/brcmstb/biuctrl.c ++++ b/drivers/soc/bcm/brcmstb/biuctrl.c +@@ -56,7 +56,7 @@ static inline void cbc_writel(u32 val, int reg) + if (offset == -1) + return; + +- writel_relaxed(val, cpubiuctrl_base + offset); ++ writel(val, cpubiuctrl_base + offset); + } + + enum cpubiuctrl_regs { +@@ -246,7 +246,9 @@ static int __init brcmstb_biuctrl_init(void) + if (!np) + return 0; + +- setup_hifcpubiuctrl_regs(np); ++ ret = setup_hifcpubiuctrl_regs(np); ++ if (ret) ++ return ret; + + ret = mcp_write_pairing_set(); + if (ret) { +diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c +index fd8d034cfec1..5ba641858e21 100644 +--- a/drivers/soundwire/intel.c ++++ b/drivers/soundwire/intel.c +@@ -714,8 +714,8 @@ static int intel_create_dai(struct sdw_cdns *cdns, + return -ENOMEM; + } + +- dais[i].playback.channels_min = 1; +- dais[i].playback.channels_max = max_ch; ++ dais[i].capture.channels_min = 1; ++ 
dais[i].capture.channels_max = max_ch; + dais[i].capture.rates = SNDRV_PCM_RATE_48000; + dais[i].capture.formats = SNDRV_PCM_FMTBIT_S16_LE; + } +diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c +index bd879b1a76c8..794ced434cf2 100644 +--- a/drivers/soundwire/stream.c ++++ b/drivers/soundwire/stream.c +@@ -805,7 +805,8 @@ static int do_bank_switch(struct sdw_stream_runtime *stream) + goto error; + } + +- mutex_unlock(&bus->msg_lock); ++ if (bus->multi_link) ++ mutex_unlock(&bus->msg_lock); + } + + return ret; +@@ -1401,9 +1402,7 @@ struct sdw_dpn_prop *sdw_get_slave_dpn_prop(struct sdw_slave *slave, + } + + for (i = 0; i < num_ports; i++) { +- dpn_prop = &dpn_prop[i]; +- +- if (dpn_prop->num == port_num) ++ if (dpn_prop[i].num == port_num) + return &dpn_prop[i]; + } + +diff --git a/drivers/staging/comedi/drivers/amplc_pci230.c b/drivers/staging/comedi/drivers/amplc_pci230.c +index 08ffe26c5d43..0f16e85911f2 100644 +--- a/drivers/staging/comedi/drivers/amplc_pci230.c ++++ b/drivers/staging/comedi/drivers/amplc_pci230.c +@@ -2330,7 +2330,8 @@ static irqreturn_t pci230_interrupt(int irq, void *d) + devpriv->intr_running = false; + spin_unlock_irqrestore(&devpriv->isr_spinlock, irqflags); + +- comedi_handle_events(dev, s_ao); ++ if (s_ao) ++ comedi_handle_events(dev, s_ao); + comedi_handle_events(dev, s_ai); + + return IRQ_HANDLED; +diff --git a/drivers/staging/comedi/drivers/dt282x.c b/drivers/staging/comedi/drivers/dt282x.c +index 3be927f1d3a9..e15e33ed94ae 100644 +--- a/drivers/staging/comedi/drivers/dt282x.c ++++ b/drivers/staging/comedi/drivers/dt282x.c +@@ -557,7 +557,8 @@ static irqreturn_t dt282x_interrupt(int irq, void *d) + } + #endif + comedi_handle_events(dev, s); +- comedi_handle_events(dev, s_ao); ++ if (s_ao) ++ comedi_handle_events(dev, s_ao); + + return IRQ_RETVAL(handled); + } +diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c +index ad577beeb052..3a562179468c 100644 +--- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c ++++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c +@@ -1086,6 +1086,7 @@ static int port_switchdev_event(struct notifier_block *unused, + dev_hold(dev); + break; + default: ++ kfree(switchdev_work); + return NOTIFY_DONE; + } + +diff --git a/drivers/staging/iio/cdc/ad7150.c b/drivers/staging/iio/cdc/ad7150.c +index 24f74ce60f80..14596aa7eaf1 100644 +--- a/drivers/staging/iio/cdc/ad7150.c ++++ b/drivers/staging/iio/cdc/ad7150.c +@@ -6,6 +6,7 @@ + * Licensed under the GPL-2 or later. 
+ */ + ++#include + #include + #include + #include +@@ -131,7 +132,7 @@ static int ad7150_read_event_config(struct iio_dev *indio_dev, + { + int ret; + u8 threshtype; +- bool adaptive; ++ bool thrfixed; + struct ad7150_chip_info *chip = iio_priv(indio_dev); + + ret = i2c_smbus_read_byte_data(chip->client, AD7150_CFG); +@@ -139,21 +140,23 @@ static int ad7150_read_event_config(struct iio_dev *indio_dev, + return ret; + + threshtype = (ret >> 5) & 0x03; +- adaptive = !!(ret & 0x80); ++ ++ /*check if threshold mode is fixed or adaptive*/ ++ thrfixed = FIELD_GET(AD7150_CFG_FIX, ret); + + switch (type) { + case IIO_EV_TYPE_MAG_ADAPTIVE: + if (dir == IIO_EV_DIR_RISING) +- return adaptive && (threshtype == 0x1); +- return adaptive && (threshtype == 0x0); ++ return !thrfixed && (threshtype == 0x1); ++ return !thrfixed && (threshtype == 0x0); + case IIO_EV_TYPE_THRESH_ADAPTIVE: + if (dir == IIO_EV_DIR_RISING) +- return adaptive && (threshtype == 0x3); +- return adaptive && (threshtype == 0x2); ++ return !thrfixed && (threshtype == 0x3); ++ return !thrfixed && (threshtype == 0x2); + case IIO_EV_TYPE_THRESH: + if (dir == IIO_EV_DIR_RISING) +- return !adaptive && (threshtype == 0x1); +- return !adaptive && (threshtype == 0x0); ++ return thrfixed && (threshtype == 0x1); ++ return thrfixed && (threshtype == 0x0); + default: + break; + } +diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c +index 379ae780c691..c1909ff5dee9 100644 +--- a/drivers/staging/mt7621-pci/pci-mt7621.c ++++ b/drivers/staging/mt7621-pci/pci-mt7621.c +@@ -40,7 +40,7 @@ + /* MediaTek specific configuration registers */ + #define PCIE_FTS_NUM 0x70c + #define PCIE_FTS_NUM_MASK GENMASK(15, 8) +-#define PCIE_FTS_NUM_L0(x) ((x) & 0xff << 8) ++#define PCIE_FTS_NUM_L0(x) (((x) & 0xff) << 8) + + /* rt_sysc_membase relative registers */ + #define RALINK_PCIE_CLK_GEN 0x7c +diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c +index e723357ac8c0..3be305615a40 100644 +--- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c ++++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c +@@ -124,10 +124,91 @@ static inline void handle_group_key(struct ieee_param *param, + } + } + +-static noinline_for_stack char *translate_scan(struct _adapter *padapter, +- struct iw_request_info *info, +- struct wlan_network *pnetwork, +- char *start, char *stop) ++static noinline_for_stack char *translate_scan_wpa(struct iw_request_info *info, ++ struct wlan_network *pnetwork, ++ struct iw_event *iwe, ++ char *start, char *stop) ++{ ++ /* parsing WPA/WPA2 IE */ ++ u8 buf[MAX_WPA_IE_LEN]; ++ u8 wpa_ie[255], rsn_ie[255]; ++ u16 wpa_len = 0, rsn_len = 0; ++ int n, i; ++ ++ r8712_get_sec_ie(pnetwork->network.IEs, ++ pnetwork->network.IELength, rsn_ie, &rsn_len, ++ wpa_ie, &wpa_len); ++ if (wpa_len > 0) { ++ memset(buf, 0, MAX_WPA_IE_LEN); ++ n = sprintf(buf, "wpa_ie="); ++ for (i = 0; i < wpa_len; i++) { ++ n += snprintf(buf + n, MAX_WPA_IE_LEN - n, ++ "%02x", wpa_ie[i]); ++ if (n >= MAX_WPA_IE_LEN) ++ break; ++ } ++ memset(iwe, 0, sizeof(*iwe)); ++ iwe->cmd = IWEVCUSTOM; ++ iwe->u.data.length = (u16)strlen(buf); ++ start = iwe_stream_add_point(info, start, stop, ++ iwe, buf); ++ memset(iwe, 0, sizeof(*iwe)); ++ iwe->cmd = IWEVGENIE; ++ iwe->u.data.length = (u16)wpa_len; ++ start = iwe_stream_add_point(info, start, stop, ++ iwe, wpa_ie); ++ } ++ if (rsn_len > 0) { ++ memset(buf, 0, MAX_WPA_IE_LEN); ++ n = sprintf(buf, "rsn_ie="); ++ for (i = 0; i < rsn_len; i++) { ++ n += snprintf(buf 
+ n, MAX_WPA_IE_LEN - n, ++ "%02x", rsn_ie[i]); ++ if (n >= MAX_WPA_IE_LEN) ++ break; ++ } ++ memset(iwe, 0, sizeof(*iwe)); ++ iwe->cmd = IWEVCUSTOM; ++ iwe->u.data.length = strlen(buf); ++ start = iwe_stream_add_point(info, start, stop, ++ iwe, buf); ++ memset(iwe, 0, sizeof(*iwe)); ++ iwe->cmd = IWEVGENIE; ++ iwe->u.data.length = rsn_len; ++ start = iwe_stream_add_point(info, start, stop, iwe, ++ rsn_ie); ++ } ++ ++ return start; ++} ++ ++static noinline_for_stack char *translate_scan_wps(struct iw_request_info *info, ++ struct wlan_network *pnetwork, ++ struct iw_event *iwe, ++ char *start, char *stop) ++{ ++ /* parsing WPS IE */ ++ u8 wps_ie[512]; ++ uint wps_ielen; ++ ++ if (r8712_get_wps_ie(pnetwork->network.IEs, ++ pnetwork->network.IELength, ++ wps_ie, &wps_ielen)) { ++ if (wps_ielen > 2) { ++ iwe->cmd = IWEVGENIE; ++ iwe->u.data.length = (u16)wps_ielen; ++ start = iwe_stream_add_point(info, start, stop, ++ iwe, wps_ie); ++ } ++ } ++ ++ return start; ++} ++ ++static char *translate_scan(struct _adapter *padapter, ++ struct iw_request_info *info, ++ struct wlan_network *pnetwork, ++ char *start, char *stop) + { + struct iw_event iwe; + struct ieee80211_ht_cap *pht_capie; +@@ -240,73 +321,11 @@ static noinline_for_stack char *translate_scan(struct _adapter *padapter, + /* Check if we added any event */ + if ((current_val - start) > iwe_stream_lcp_len(info)) + start = current_val; +- /* parsing WPA/WPA2 IE */ +- { +- u8 buf[MAX_WPA_IE_LEN]; +- u8 wpa_ie[255], rsn_ie[255]; +- u16 wpa_len = 0, rsn_len = 0; +- int n; +- +- r8712_get_sec_ie(pnetwork->network.IEs, +- pnetwork->network.IELength, rsn_ie, &rsn_len, +- wpa_ie, &wpa_len); +- if (wpa_len > 0) { +- memset(buf, 0, MAX_WPA_IE_LEN); +- n = sprintf(buf, "wpa_ie="); +- for (i = 0; i < wpa_len; i++) { +- n += snprintf(buf + n, MAX_WPA_IE_LEN - n, +- "%02x", wpa_ie[i]); +- if (n >= MAX_WPA_IE_LEN) +- break; +- } +- memset(&iwe, 0, sizeof(iwe)); +- iwe.cmd = IWEVCUSTOM; +- iwe.u.data.length = (u16)strlen(buf); +- start = iwe_stream_add_point(info, start, stop, +- &iwe, buf); +- memset(&iwe, 0, sizeof(iwe)); +- iwe.cmd = IWEVGENIE; +- iwe.u.data.length = (u16)wpa_len; +- start = iwe_stream_add_point(info, start, stop, +- &iwe, wpa_ie); +- } +- if (rsn_len > 0) { +- memset(buf, 0, MAX_WPA_IE_LEN); +- n = sprintf(buf, "rsn_ie="); +- for (i = 0; i < rsn_len; i++) { +- n += snprintf(buf + n, MAX_WPA_IE_LEN - n, +- "%02x", rsn_ie[i]); +- if (n >= MAX_WPA_IE_LEN) +- break; +- } +- memset(&iwe, 0, sizeof(iwe)); +- iwe.cmd = IWEVCUSTOM; +- iwe.u.data.length = strlen(buf); +- start = iwe_stream_add_point(info, start, stop, +- &iwe, buf); +- memset(&iwe, 0, sizeof(iwe)); +- iwe.cmd = IWEVGENIE; +- iwe.u.data.length = rsn_len; +- start = iwe_stream_add_point(info, start, stop, &iwe, +- rsn_ie); +- } +- } + +- { /* parsing WPS IE */ +- u8 wps_ie[512]; +- uint wps_ielen; ++ start = translate_scan_wpa(info, pnetwork, &iwe, start, stop); ++ ++ start = translate_scan_wps(info, pnetwork, &iwe, start, stop); + +- if (r8712_get_wps_ie(pnetwork->network.IEs, +- pnetwork->network.IELength, +- wps_ie, &wps_ielen)) { +- if (wps_ielen > 2) { +- iwe.cmd = IWEVGENIE; +- iwe.u.data.length = (u16)wps_ielen; +- start = iwe_stream_add_point(info, start, stop, +- &iwe, wps_ie); +- } +- } +- } + /* Add quality statistics */ + iwe.cmd = IWEVQUAL; + rssi = r8712_signal_scale_mapping(pnetwork->network.Rssi); +diff --git a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c +index 
7c6cf41645eb..3bf61aefcb1e 100644 +--- a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c ++++ b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c +@@ -337,16 +337,13 @@ static void buffer_cb(struct vchiq_mmal_instance *instance, + return; + } else if (length == 0) { + /* stream ended */ +- if (buf) { +- /* this should only ever happen if the port is +- * disabled and there are buffers still queued ++ if (dev->capture.frame_count) { ++ /* empty buffer whilst capturing - expected to be an ++ * EOS, so grab another frame + */ +- vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR); +- pr_debug("Empty buffer"); +- } else if (dev->capture.frame_count) { +- /* grab another frame */ + if (is_capturing(dev)) { +- pr_debug("Grab another frame"); ++ v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, ++ "Grab another frame"); + vchiq_mmal_port_parameter_set( + instance, + dev->capture.camera_port, +@@ -354,8 +351,14 @@ static void buffer_cb(struct vchiq_mmal_instance *instance, + &dev->capture.frame_count, + sizeof(dev->capture.frame_count)); + } ++ if (vchiq_mmal_submit_buffer(instance, port, buf)) ++ v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, ++ "Failed to return EOS buffer"); + } else { +- /* signal frame completion */ ++ /* stopping streaming. ++ * return buffer, and signal frame completion ++ */ ++ vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR); + complete(&dev->capture.frame_cmplt); + } + } else { +@@ -577,6 +580,7 @@ static void stop_streaming(struct vb2_queue *vq) + int ret; + unsigned long timeout; + struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq); ++ struct vchiq_mmal_port *port = dev->capture.port; + + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n", + __func__, dev); +@@ -600,12 +604,6 @@ static void stop_streaming(struct vb2_queue *vq) + &dev->capture.frame_count, + sizeof(dev->capture.frame_count)); + +- /* wait for last frame to complete */ +- timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ); +- if (timeout == 0) +- v4l2_err(&dev->v4l2_dev, +- "timed out waiting for frame completion\n"); +- + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, + "disabling connection\n"); + +@@ -620,6 +618,21 @@ static void stop_streaming(struct vb2_queue *vq) + ret); + } + ++ /* wait for all buffers to be returned */ ++ while (atomic_read(&port->buffers_with_vpu)) { ++ v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, ++ "%s: Waiting for buffers to be returned - %d outstanding\n", ++ __func__, atomic_read(&port->buffers_with_vpu)); ++ timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, ++ HZ); ++ if (timeout == 0) { ++ v4l2_err(&dev->v4l2_dev, "%s: Timeout waiting for buffers to be returned - %d outstanding\n", ++ __func__, ++ atomic_read(&port->buffers_with_vpu)); ++ break; ++ } ++ } ++ + if (disable_camera(dev) < 0) + v4l2_err(&dev->v4l2_dev, "Failed to disable camera\n"); + } +diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c +index 16af735af5c3..29761f6c3b55 100644 +--- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c ++++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c +@@ -161,7 +161,8 @@ struct vchiq_mmal_instance { + void *bulk_scratch; + + struct idr context_map; +- spinlock_t context_map_lock; ++ /* protect accesses to context_map */ ++ struct mutex context_map_lock; + + /* component to use next */ + int component_idx; +@@ -184,10 +185,10 @@ get_msg_context(struct vchiq_mmal_instance *instance) + * that when 
we service the VCHI reply, we can look up what + * message is being replied to. + */ +- spin_lock(&instance->context_map_lock); ++ mutex_lock(&instance->context_map_lock); + handle = idr_alloc(&instance->context_map, msg_context, + 0, 0, GFP_KERNEL); +- spin_unlock(&instance->context_map_lock); ++ mutex_unlock(&instance->context_map_lock); + + if (handle < 0) { + kfree(msg_context); +@@ -211,9 +212,9 @@ release_msg_context(struct mmal_msg_context *msg_context) + { + struct vchiq_mmal_instance *instance = msg_context->instance; + +- spin_lock(&instance->context_map_lock); ++ mutex_lock(&instance->context_map_lock); + idr_remove(&instance->context_map, msg_context->handle); +- spin_unlock(&instance->context_map_lock); ++ mutex_unlock(&instance->context_map_lock); + kfree(msg_context); + } + +@@ -239,6 +240,8 @@ static void buffer_work_cb(struct work_struct *work) + struct mmal_msg_context *msg_context = + container_of(work, struct mmal_msg_context, u.bulk.work); + ++ atomic_dec(&msg_context->u.bulk.port->buffers_with_vpu); ++ + msg_context->u.bulk.port->buffer_cb(msg_context->u.bulk.instance, + msg_context->u.bulk.port, + msg_context->u.bulk.status, +@@ -287,8 +290,6 @@ static int bulk_receive(struct vchiq_mmal_instance *instance, + + /* store length */ + msg_context->u.bulk.buffer_used = rd_len; +- msg_context->u.bulk.mmal_flags = +- msg->u.buffer_from_host.buffer_header.flags; + msg_context->u.bulk.dts = msg->u.buffer_from_host.buffer_header.dts; + msg_context->u.bulk.pts = msg->u.buffer_from_host.buffer_header.pts; + +@@ -379,6 +380,8 @@ buffer_from_host(struct vchiq_mmal_instance *instance, + /* initialise work structure ready to schedule callback */ + INIT_WORK(&msg_context->u.bulk.work, buffer_work_cb); + ++ atomic_inc(&port->buffers_with_vpu); ++ + /* prep the buffer from host message */ + memset(&m, 0xbc, sizeof(m)); /* just to make debug clearer */ + +@@ -447,6 +450,9 @@ static void buffer_to_host_cb(struct vchiq_mmal_instance *instance, + return; + } + ++ msg_context->u.bulk.mmal_flags = ++ msg->u.buffer_from_host.buffer_header.flags; ++ + if (msg->h.status != MMAL_MSG_STATUS_SUCCESS) { + /* message reception had an error */ + pr_warn("error %d in reply\n", msg->h.status); +@@ -1323,16 +1329,6 @@ static int port_enable(struct vchiq_mmal_instance *instance, + if (port->enabled) + return 0; + +- /* ensure there are enough buffers queued to cover the buffer headers */ +- if (port->buffer_cb) { +- hdr_count = 0; +- list_for_each(buf_head, &port->buffers) { +- hdr_count++; +- } +- if (hdr_count < port->current_buffer.num) +- return -ENOSPC; +- } +- + ret = port_action_port(instance, port, + MMAL_MSG_PORT_ACTION_TYPE_ENABLE); + if (ret) +@@ -1849,7 +1845,7 @@ int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance) + + instance->bulk_scratch = vmalloc(PAGE_SIZE); + +- spin_lock_init(&instance->context_map_lock); ++ mutex_init(&instance->context_map_lock); + idr_init_base(&instance->context_map, 1); + + params.callback_param = instance; +diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h +index 22b839ecd5f0..b0ee1716525b 100644 +--- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h ++++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h +@@ -71,6 +71,9 @@ struct vchiq_mmal_port { + struct list_head buffers; + /* lock to serialise adding and removing buffers from list */ + spinlock_t slock; ++ ++ /* Count of buffers the VPU has yet to return */ ++ atomic_t buffers_with_vpu; + /* 
callback on buffer completion */ + vchiq_mmal_buffer_cb buffer_cb; + /* callback context */ +diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c +index a9cc01e8e6c5..833b28e9ba4b 100644 +--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c ++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c +@@ -553,7 +553,7 @@ create_pagelist(char __user *buf, size_t count, unsigned short type) + (g_cache_line_size - 1)))) { + char *fragments; + +- if (down_killable(&g_free_fragments_sema)) { ++ if (down_interruptible(&g_free_fragments_sema) != 0) { + cleanup_pagelistinfo(pagelistinfo); + return NULL; + } +diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c +index 064d0db4c51e..ccfb8218b83c 100644 +--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c ++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c +@@ -560,7 +560,8 @@ add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason, + vchiq_log_trace(vchiq_arm_log_level, + "%s - completion queue full", __func__); + DEBUG_COUNT(COMPLETION_QUEUE_FULL_COUNT); +- if (wait_for_completion_killable( &instance->remove_event)) { ++ if (wait_for_completion_interruptible( ++ &instance->remove_event)) { + vchiq_log_info(vchiq_arm_log_level, + "service_callback interrupted"); + return VCHIQ_RETRY; +@@ -671,7 +672,7 @@ service_callback(VCHIQ_REASON_T reason, struct vchiq_header *header, + } + + DEBUG_TRACE(SERVICE_CALLBACK_LINE); +- if (wait_for_completion_killable( ++ if (wait_for_completion_interruptible( + &user_service->remove_event) + != 0) { + vchiq_log_info(vchiq_arm_log_level, +@@ -1006,7 +1007,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + has been closed until the client library calls the + CLOSE_DELIVERED ioctl, signalling close_event. 
*/
+ 		if (user_service->close_pending &&
+-		    wait_for_completion_killable(
++		    wait_for_completion_interruptible(
+ 					&user_service->close_event))
+ 			status = VCHIQ_RETRY;
+ 		break;
+@@ -1182,7 +1183,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+
+ 			DEBUG_TRACE(AWAIT_COMPLETION_LINE);
+ 			mutex_unlock(&instance->completion_mutex);
+-			rc = wait_for_completion_killable(
++			rc = wait_for_completion_interruptible(
+ 						&instance->insert_event);
+ 			mutex_lock(&instance->completion_mutex);
+ 			if (rc != 0) {
+@@ -1352,7 +1353,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 		do {
+ 			spin_unlock(&msg_queue_spinlock);
+ 			DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
+-			if (wait_for_completion_killable(
++			if (wait_for_completion_interruptible(
+ 				&user_service->insert_event)) {
+ 				vchiq_log_info(vchiq_arm_log_level,
+ 					"DEQUEUE_MESSAGE interrupted");
+@@ -2360,7 +2361,7 @@ vchiq_keepalive_thread_func(void *v)
+ 	while (1) {
+ 		long rc = 0, uc = 0;
+
+-		if (wait_for_completion_killable(&arm_state->ka_evt)
++		if (wait_for_completion_interruptible(&arm_state->ka_evt)
+ 				!= 0) {
+ 			vchiq_log_error(vchiq_susp_log_level,
+ 				"%s interrupted", __func__);
+@@ -2611,7 +2612,7 @@ block_resume(struct vchiq_arm_state *arm_state)
+ 		write_unlock_bh(&arm_state->susp_res_lock);
+ 		vchiq_log_info(vchiq_susp_log_level, "%s wait for previously "
+ 			"blocked clients", __func__);
+-		if (wait_for_completion_killable_timeout(
++		if (wait_for_completion_interruptible_timeout(
+ 				&arm_state->blocked_blocker, timeout_val)
+ 					<= 0) {
+ 			vchiq_log_error(vchiq_susp_log_level, "%s wait for "
+@@ -2637,7 +2638,7 @@ block_resume(struct vchiq_arm_state *arm_state)
+ 		write_unlock_bh(&arm_state->susp_res_lock);
+ 		vchiq_log_info(vchiq_susp_log_level, "%s wait for resume",
+ 			__func__);
+-		if (wait_for_completion_killable_timeout(
++		if (wait_for_completion_interruptible_timeout(
+ 				&arm_state->vc_resume_complete, timeout_val)
+ 					<= 0) {
+ 			vchiq_log_error(vchiq_susp_log_level, "%s wait for "
+@@ -2844,7 +2845,7 @@ vchiq_arm_force_suspend(struct vchiq_state *state)
+ 	do {
+ 		write_unlock_bh(&arm_state->susp_res_lock);
+
+-		rc = wait_for_completion_killable_timeout(
++		rc = wait_for_completion_interruptible_timeout(
+ 				&arm_state->vc_suspend_complete,
+ 				msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS));
+
+@@ -2940,7 +2941,7 @@ vchiq_arm_allow_resume(struct vchiq_state *state)
+ 	write_unlock_bh(&arm_state->susp_res_lock);
+
+ 	if (resume) {
+-		if (wait_for_completion_killable(
++		if (wait_for_completion_interruptible(
+ 				&arm_state->vc_resume_complete) < 0) {
+ 			vchiq_log_error(vchiq_susp_log_level,
+ 				"%s interrupted", __func__);
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 819813e742d8..0958d86aebe6 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -425,13 +425,21 @@ remote_event_create(wait_queue_head_t *wq, struct remote_event *event)
+ 	init_waitqueue_head(wq);
+ }
+
++/*
++ * All the event waiting routines in VCHIQ used a custom semaphore
++ * implementation that filtered most signals. This achieved a behaviour similar
++ * to the "killable" family of functions. While cleaning up this code all the
++ * routines were switched to the "interruptible" family of functions, as the
++ * former was deemed unjustified and the use of "killable" set all VCHIQ's
++ * threads in D state.
++ */
+ static inline int
+ remote_event_wait(wait_queue_head_t *wq, struct remote_event *event)
+ {
+ 	if (!event->fired) {
+ 		event->armed = 1;
+ 		dsb(sy);
+-		if (wait_event_killable(*wq, event->fired)) {
++		if (wait_event_interruptible(*wq, event->fired)) {
+ 			event->armed = 0;
+ 			return 0;
+ 		}
+@@ -590,7 +598,7 @@ reserve_space(struct vchiq_state *state, size_t space, int is_blocking)
+ 		remote_event_signal(&state->remote->trigger);
+
+ 		if (!is_blocking ||
+-			(wait_for_completion_killable(
++			(wait_for_completion_interruptible(
+ 			&state->slot_available_event)))
+ 			return NULL; /* No space available */
+ 	}
+@@ -860,7 +868,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+ 			spin_unlock(&quota_spinlock);
+ 			mutex_unlock(&state->slot_mutex);
+
+-			if (wait_for_completion_killable(
++			if (wait_for_completion_interruptible(
+ 						&state->data_quota_event))
+ 				return VCHIQ_RETRY;
+
+@@ -891,7 +899,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
+ 				service_quota->slot_use_count);
+ 			VCHIQ_SERVICE_STATS_INC(service, quota_stalls);
+ 			mutex_unlock(&state->slot_mutex);
+-			if (wait_for_completion_killable(
++			if (wait_for_completion_interruptible(
+ 						&service_quota->quota_event))
+ 				return VCHIQ_RETRY;
+ 			if (service->closing)
+@@ -1740,7 +1748,8 @@ parse_rx_slots(struct vchiq_state *state)
+ 					&service->bulk_rx : &service->bulk_tx;
+
+ 				DEBUG_TRACE(PARSE_LINE);
+-				if (mutex_lock_killable(&service->bulk_mutex)) {
++				if (mutex_lock_killable(
++					&service->bulk_mutex) != 0) {
+ 					DEBUG_TRACE(PARSE_LINE);
+ 					goto bail_not_ready;
+ 				}
+@@ -2458,7 +2467,7 @@ vchiq_open_service_internal(struct vchiq_service *service, int client_id)
+ 		QMFLAGS_IS_BLOCKING);
+ 	if (status == VCHIQ_SUCCESS) {
+ 		/* Wait for the ACK/NAK */
+-		if (wait_for_completion_killable(&service->remove_event)) {
++		if (wait_for_completion_interruptible(&service->remove_event)) {
+ 			status = VCHIQ_RETRY;
+ 			vchiq_release_service_internal(service);
+ 		} else if ((service->srvstate != VCHIQ_SRVSTATE_OPEN) &&
+@@ -2825,7 +2834,7 @@ vchiq_connect_internal(struct vchiq_state *state, VCHIQ_INSTANCE_T instance)
+ 	}
+
+ 	if (state->conn_state == VCHIQ_CONNSTATE_CONNECTING) {
+-		if (wait_for_completion_killable(&state->connect))
++		if (wait_for_completion_interruptible(&state->connect))
+ 			return VCHIQ_RETRY;
+
+ 		vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED);
+@@ -2924,7 +2933,7 @@ vchiq_close_service(VCHIQ_SERVICE_HANDLE_T handle)
+ 	}
+
+ 	while (1) {
+-		if (wait_for_completion_killable(&service->remove_event)) {
++		if (wait_for_completion_interruptible(&service->remove_event)) {
+ 			status = VCHIQ_RETRY;
+ 			break;
+ 		}
+@@ -2985,7 +2994,7 @@ vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T handle)
+ 		request_poll(service->state, service, VCHIQ_POLL_REMOVE);
+ 	}
+ 	while (1) {
+-		if (wait_for_completion_killable(&service->remove_event)) {
++		if (wait_for_completion_interruptible(&service->remove_event)) {
+ 			status = VCHIQ_RETRY;
+ 			break;
+ 		}
+@@ -3068,7 +3077,7 @@ VCHIQ_STATUS_T vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle,
+ 		VCHIQ_SERVICE_STATS_INC(service, bulk_stalls);
+ 		do {
+ 			mutex_unlock(&service->bulk_mutex);
+-			if (wait_for_completion_killable(
++			if (wait_for_completion_interruptible(
+ 						&service->bulk_remove_event)) {
+ 				status = VCHIQ_RETRY;
+ 				goto error_exit;
+@@ -3145,7 +3154,7 @@ waiting:
+
+ 	if (bulk_waiter) {
+ 		bulk_waiter->bulk = bulk;
+-		if (wait_for_completion_killable(&bulk_waiter->event))
++		if (wait_for_completion_interruptible(&bulk_waiter->event))
+ 			status = VCHIQ_RETRY;
+ 		else if (bulk_waiter->actual ==
VCHIQ_BULK_ACTUAL_ABORTED) + status = VCHIQ_ERROR; +diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c +index 55c5fd82b911..30deea1b57f7 100644 +--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c ++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c +@@ -80,7 +80,7 @@ void vchiu_queue_push(struct vchiu_queue *queue, struct vchiq_header *header) + return; + + while (queue->write == queue->read + queue->size) { +- if (wait_for_completion_killable(&queue->pop)) ++ if (wait_for_completion_interruptible(&queue->pop)) + flush_signals(current); + } + +@@ -93,7 +93,7 @@ void vchiu_queue_push(struct vchiu_queue *queue, struct vchiq_header *header) + struct vchiq_header *vchiu_queue_peek(struct vchiu_queue *queue) + { + while (queue->write == queue->read) { +- if (wait_for_completion_killable(&queue->push)) ++ if (wait_for_completion_interruptible(&queue->push)) + flush_signals(current); + } + +@@ -107,7 +107,7 @@ struct vchiq_header *vchiu_queue_pop(struct vchiu_queue *queue) + struct vchiq_header *header; + + while (queue->write == queue->read) { +- if (wait_for_completion_killable(&queue->push)) ++ if (wait_for_completion_interruptible(&queue->push)) + flush_signals(current); + } + +diff --git a/drivers/staging/wilc1000/wilc_netdev.c b/drivers/staging/wilc1000/wilc_netdev.c +index ba78c08a17f1..5338d7d2b248 100644 +--- a/drivers/staging/wilc1000/wilc_netdev.c ++++ b/drivers/staging/wilc1000/wilc_netdev.c +@@ -530,17 +530,17 @@ static int wilc_wlan_initialize(struct net_device *dev, struct wilc_vif *vif) + goto fail_locks; + } + +- if (wl->gpio_irq && init_irq(dev)) { +- ret = -EIO; +- goto fail_locks; +- } +- + ret = wlan_initialize_threads(dev); + if (ret < 0) { + ret = -EIO; + goto fail_wilc_wlan; + } + ++ if (wl->gpio_irq && init_irq(dev)) { ++ ret = -EIO; ++ goto fail_threads; ++ } ++ + if (!wl->dev_irq_num && + wl->hif_func->enable_interrupt && + wl->hif_func->enable_interrupt(wl)) { +@@ -596,7 +596,7 @@ fail_irq_enable: + fail_irq_init: + if (wl->dev_irq_num) + deinit_irq(dev); +- ++fail_threads: + wlan_deinitialize_threads(dev); + fail_wilc_wlan: + wilc_wlan_cleanup(dev); +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c +index d2f3310abe54..682300713be4 100644 +--- a/drivers/tty/serial/8250/8250_port.c ++++ b/drivers/tty/serial/8250/8250_port.c +@@ -1869,8 +1869,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir) + + status = serial_port_in(port, UART_LSR); + +- if (status & (UART_LSR_DR | UART_LSR_BI) && +- iir & UART_IIR_RDI) { ++ if (status & (UART_LSR_DR | UART_LSR_BI)) { + if (!up->dma || handle_rx_dma(up, iir)) + status = serial8250_rx_chars(up, status); + } +diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c +index 55d5ae2a7ec7..51d83f77dc04 100644 +--- a/drivers/usb/dwc2/core.c ++++ b/drivers/usb/dwc2/core.c +@@ -531,7 +531,7 @@ int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait) + } + + /* Wait for AHB master IDLE state */ +- if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 50)) { ++ if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 10000)) { + dev_warn(hsotg->dev, "%s: HANG! 
AHB Idle timeout GRSTCTL GRSTCTL_AHBIDLE\n", + __func__); + return -EBUSY; +diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c +index 47be961f1bf3..c7ed90084d1a 100644 +--- a/drivers/usb/gadget/function/f_fs.c ++++ b/drivers/usb/gadget/function/f_fs.c +@@ -997,7 +997,6 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data) + * earlier + */ + gadget = epfile->ffs->gadget; +- io_data->use_sg = gadget->sg_supported && data_len > PAGE_SIZE; + + spin_lock_irq(&epfile->ffs->eps_lock); + /* In the meantime, endpoint got disabled or changed. */ +@@ -1012,6 +1011,8 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data) + */ + if (io_data->read) + data_len = usb_ep_align_maybe(gadget, ep->ep, data_len); ++ ++ io_data->use_sg = gadget->sg_supported && data_len > PAGE_SIZE; + spin_unlock_irq(&epfile->ffs->eps_lock); + + data = ffs_alloc_buffer(io_data, data_len); +diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c +index 737bd77a575d..2929bb47a618 100644 +--- a/drivers/usb/gadget/function/u_ether.c ++++ b/drivers/usb/gadget/function/u_ether.c +@@ -186,11 +186,12 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags) + out = dev->port_usb->out_ep; + else + out = NULL; +- spin_unlock_irqrestore(&dev->lock, flags); + + if (!out) ++ { ++ spin_unlock_irqrestore(&dev->lock, flags); + return -ENOTCONN; +- ++ } + + /* Padding up to RX_EXTRA handles minor disagreements with host. + * Normally we use the USB "terminate on short read" convention; +@@ -214,6 +215,7 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags) + + if (dev->port_usb->is_fixed) + size = max_t(size_t, size, dev->port_usb->fixed_out_len); ++ spin_unlock_irqrestore(&dev->lock, flags); + + skb = __netdev_alloc_skb(dev->net, size + NET_IP_ALIGN, gfp_flags); + if (skb == NULL) { +diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c +index 39fa2fc1b8b7..6036cbae8c78 100644 +--- a/drivers/usb/renesas_usbhs/fifo.c ++++ b/drivers/usb/renesas_usbhs/fifo.c +@@ -802,9 +802,8 @@ static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map) + } + + static void usbhsf_dma_complete(void *arg); +-static void xfer_work(struct work_struct *work) ++static void usbhsf_dma_xfer_preparing(struct usbhs_pkt *pkt) + { +- struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work); + struct usbhs_pipe *pipe = pkt->pipe; + struct usbhs_fifo *fifo; + struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe); +@@ -812,12 +811,10 @@ static void xfer_work(struct work_struct *work) + struct dma_chan *chan; + struct device *dev = usbhs_priv_to_dev(priv); + enum dma_transfer_direction dir; +- unsigned long flags; + +- usbhs_lock(priv, flags); + fifo = usbhs_pipe_to_fifo(pipe); + if (!fifo) +- goto xfer_work_end; ++ return; + + chan = usbhsf_dma_chan_get(fifo, pkt); + dir = usbhs_pipe_is_dir_in(pipe) ? 
DMA_DEV_TO_MEM : DMA_MEM_TO_DEV; +@@ -826,7 +823,7 @@ static void xfer_work(struct work_struct *work) + pkt->trans, dir, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + if (!desc) +- goto xfer_work_end; ++ return; + + desc->callback = usbhsf_dma_complete; + desc->callback_param = pipe; +@@ -834,7 +831,7 @@ static void xfer_work(struct work_struct *work) + pkt->cookie = dmaengine_submit(desc); + if (pkt->cookie < 0) { + dev_err(dev, "Failed to submit dma descriptor\n"); +- goto xfer_work_end; ++ return; + } + + dev_dbg(dev, " %s %d (%d/ %d)\n", +@@ -845,8 +842,17 @@ static void xfer_work(struct work_struct *work) + dma_async_issue_pending(chan); + usbhsf_dma_start(pipe, fifo); + usbhs_pipe_enable(pipe); ++} ++ ++static void xfer_work(struct work_struct *work) ++{ ++ struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work); ++ struct usbhs_pipe *pipe = pkt->pipe; ++ struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe); ++ unsigned long flags; + +-xfer_work_end: ++ usbhs_lock(priv, flags); ++ usbhsf_dma_xfer_preparing(pkt); + usbhs_unlock(priv, flags); + } + +@@ -899,8 +905,13 @@ static int usbhsf_dma_prepare_push(struct usbhs_pkt *pkt, int *is_done) + pkt->trans = len; + + usbhsf_tx_irq_ctrl(pipe, 0); +- INIT_WORK(&pkt->work, xfer_work); +- schedule_work(&pkt->work); ++ /* FIXME: Workaound for usb dmac that driver can be used in atomic */ ++ if (usbhs_get_dparam(priv, has_usb_dmac)) { ++ usbhsf_dma_xfer_preparing(pkt); ++ } else { ++ INIT_WORK(&pkt->work, xfer_work); ++ schedule_work(&pkt->work); ++ } + + return 0; + +@@ -1006,8 +1017,7 @@ static int usbhsf_dma_prepare_pop_with_usb_dmac(struct usbhs_pkt *pkt, + + pkt->trans = pkt->length; + +- INIT_WORK(&pkt->work, xfer_work); +- schedule_work(&pkt->work); ++ usbhsf_dma_xfer_preparing(pkt); + + return 0; + +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c +index 1d8461ae2c34..23669a584bae 100644 +--- a/drivers/usb/serial/ftdi_sio.c ++++ b/drivers/usb/serial/ftdi_sio.c +@@ -1029,6 +1029,7 @@ static const struct usb_device_id id_table_combined[] = { + { USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) }, + /* EZPrototypes devices */ + { USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) }, ++ { USB_DEVICE_INTERFACE_NUMBER(UNJO_VID, UNJO_ISODEBUG_V1_PID, 1) }, + { } /* Terminating entry */ + }; + +diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h +index 5755f0df0025..f12d806220b4 100644 +--- a/drivers/usb/serial/ftdi_sio_ids.h ++++ b/drivers/usb/serial/ftdi_sio_ids.h +@@ -1543,3 +1543,9 @@ + #define CHETCO_SEASMART_DISPLAY_PID 0xA5AD /* SeaSmart NMEA2000 Display */ + #define CHETCO_SEASMART_LITE_PID 0xA5AE /* SeaSmart Lite USB Adapter */ + #define CHETCO_SEASMART_ANALOG_PID 0xA5AF /* SeaSmart Analog Adapter */ ++ ++/* ++ * Unjo AB ++ */ ++#define UNJO_VID 0x22B7 ++#define UNJO_ISODEBUG_V1_PID 0x150D +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index a0aaf0635359..c1582fbd1150 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -1343,6 +1343,7 @@ static const struct usb_device_id option_ids[] = { + .driver_info = RSVD(4) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0414, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0417, 0xff, 0xff, 0xff) }, ++ { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0601, 0xff) }, /* GosunCn ZTE WeLink ME3630 (RNDIS mode) */ + { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0602, 0xff) }, /* GosunCn ZTE WeLink ME3630 (MBIM mode) */ + { 
USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1008, 0xff, 0xff, 0xff), + .driver_info = RSVD(4) }, +diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c +index c674abe3cf99..a38d1409f15b 100644 +--- a/drivers/usb/typec/tps6598x.c ++++ b/drivers/usb/typec/tps6598x.c +@@ -41,7 +41,7 @@ + #define TPS_STATUS_VCONN(s) (!!((s) & BIT(7))) + + /* TPS_REG_SYSTEM_CONF bits */ +-#define TPS_SYSCONF_PORTINFO(c) ((c) & 3) ++#define TPS_SYSCONF_PORTINFO(c) ((c) & 7) + + enum { + TPS_PORTINFO_SINK, +@@ -127,7 +127,7 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len) + } + + static int tps6598x_block_write(struct tps6598x *tps, u8 reg, +- void *val, size_t len) ++ const void *val, size_t len) + { + u8 data[TPS_MAX_LEN + 1]; + +@@ -173,7 +173,7 @@ static inline int tps6598x_write64(struct tps6598x *tps, u8 reg, u64 val) + static inline int + tps6598x_write_4cc(struct tps6598x *tps, u8 reg, const char *val) + { +- return tps6598x_block_write(tps, reg, &val, sizeof(u32)); ++ return tps6598x_block_write(tps, reg, val, 4); + } + + static int tps6598x_read_partner_identity(struct tps6598x *tps) +diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c +index d536889ac31b..4941fe8471ce 100644 +--- a/fs/crypto/policy.c ++++ b/fs/crypto/policy.c +@@ -81,6 +81,8 @@ int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg) + if (ret == -ENODATA) { + if (!S_ISDIR(inode->i_mode)) + ret = -ENOTDIR; ++ else if (IS_DEADDIR(inode)) ++ ret = -ENOENT; + else if (!inode->i_sb->s_cop->empty_dir(inode)) + ret = -ENOTEMPTY; + else +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index eeee100785a5..fd2c19eea647 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -1234,10 +1234,20 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry, + atomic_inc(&sp->so_count); + p->o_arg.open_flags = flags; + p->o_arg.fmode = fmode & (FMODE_READ|FMODE_WRITE); +- p->o_arg.umask = current_umask(); + p->o_arg.claim = nfs4_map_atomic_open_claim(server, claim); + p->o_arg.share_access = nfs4_map_atomic_open_share(server, + fmode, flags); ++ if (flags & O_CREAT) { ++ p->o_arg.umask = current_umask(); ++ p->o_arg.label = nfs4_label_copy(p->a_label, label); ++ if (c->sattr != NULL && c->sattr->ia_valid != 0) { ++ p->o_arg.u.attrs = &p->attrs; ++ memcpy(&p->attrs, c->sattr, sizeof(p->attrs)); ++ ++ memcpy(p->o_arg.u.verifier.data, c->verf, ++ sizeof(p->o_arg.u.verifier.data)); ++ } ++ } + /* don't put an ACCESS op in OPEN compound if O_EXCL, because ACCESS + * will return permission denied for all bits until close */ + if (!(flags & O_EXCL)) { +@@ -1261,7 +1271,6 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry, + p->o_arg.server = server; + p->o_arg.bitmask = nfs4_bitmask(server, label); + p->o_arg.open_bitmap = &nfs4_fattr_bitmap[0]; +- p->o_arg.label = nfs4_label_copy(p->a_label, label); + switch (p->o_arg.claim) { + case NFS4_OPEN_CLAIM_NULL: + case NFS4_OPEN_CLAIM_DELEGATE_CUR: +@@ -1274,13 +1283,6 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry, + case NFS4_OPEN_CLAIM_DELEG_PREV_FH: + p->o_arg.fh = NFS_FH(d_inode(dentry)); + } +- if (c != NULL && c->sattr != NULL && c->sattr->ia_valid != 0) { +- p->o_arg.u.attrs = &p->attrs; +- memcpy(&p->attrs, c->sattr, sizeof(p->attrs)); +- +- memcpy(p->o_arg.u.verifier.data, c->verf, +- sizeof(p->o_arg.u.verifier.data)); +- } + p->c_arg.fh = &p->o_res.fh; + p->c_arg.stateid = &p->o_res.stateid; + p->c_arg.seqid = p->o_arg.seqid; +diff --git a/fs/quota/dquot.c 
b/fs/quota/dquot.c +index fc20e06c56ba..dd1783ea7003 100644 +--- a/fs/quota/dquot.c ++++ b/fs/quota/dquot.c +@@ -1993,8 +1993,8 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to) + &warn_to[cnt]); + if (ret) + goto over_quota; +- ret = dquot_add_space(transfer_to[cnt], cur_space, rsv_space, 0, +- &warn_to[cnt]); ++ ret = dquot_add_space(transfer_to[cnt], cur_space, rsv_space, ++ DQUOT_SPACE_WARN, &warn_to[cnt]); + if (ret) { + spin_lock(&transfer_to[cnt]->dq_dqb_lock); + dquot_decr_inodes(transfer_to[cnt], inode_usage); +diff --git a/fs/udf/inode.c b/fs/udf/inode.c +index e7276932e433..9bb18311a22f 100644 +--- a/fs/udf/inode.c ++++ b/fs/udf/inode.c +@@ -470,13 +470,15 @@ static struct buffer_head *udf_getblk(struct inode *inode, udf_pblk_t block, + return NULL; + } + +-/* Extend the file by 'blocks' blocks, return the number of extents added */ ++/* Extend the file with new blocks totaling 'new_block_bytes', ++ * return the number of extents added ++ */ + static int udf_do_extend_file(struct inode *inode, + struct extent_position *last_pos, + struct kernel_long_ad *last_ext, +- sector_t blocks) ++ loff_t new_block_bytes) + { +- sector_t add; ++ uint32_t add; + int count = 0, fake = !(last_ext->extLength & UDF_EXTENT_LENGTH_MASK); + struct super_block *sb = inode->i_sb; + struct kernel_lb_addr prealloc_loc = {}; +@@ -486,7 +488,7 @@ static int udf_do_extend_file(struct inode *inode, + + /* The previous extent is fake and we should not extend by anything + * - there's nothing to do... */ +- if (!blocks && fake) ++ if (!new_block_bytes && fake) + return 0; + + iinfo = UDF_I(inode); +@@ -517,13 +519,12 @@ static int udf_do_extend_file(struct inode *inode, + /* Can we merge with the previous extent? */ + if ((last_ext->extLength & UDF_EXTENT_FLAG_MASK) == + EXT_NOT_RECORDED_NOT_ALLOCATED) { +- add = ((1 << 30) - sb->s_blocksize - +- (last_ext->extLength & UDF_EXTENT_LENGTH_MASK)) >> +- sb->s_blocksize_bits; +- if (add > blocks) +- add = blocks; +- blocks -= add; +- last_ext->extLength += add << sb->s_blocksize_bits; ++ add = (1 << 30) - sb->s_blocksize - ++ (last_ext->extLength & UDF_EXTENT_LENGTH_MASK); ++ if (add > new_block_bytes) ++ add = new_block_bytes; ++ new_block_bytes -= add; ++ last_ext->extLength += add; + } + + if (fake) { +@@ -544,28 +545,27 @@ static int udf_do_extend_file(struct inode *inode, + } + + /* Managed to do everything necessary? 
*/ +- if (!blocks) ++ if (!new_block_bytes) + goto out; + + /* All further extents will be NOT_RECORDED_NOT_ALLOCATED */ + last_ext->extLocation.logicalBlockNum = 0; + last_ext->extLocation.partitionReferenceNum = 0; +- add = (1 << (30-sb->s_blocksize_bits)) - 1; +- last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED | +- (add << sb->s_blocksize_bits); ++ add = (1 << 30) - sb->s_blocksize; ++ last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED | add; + + /* Create enough extents to cover the whole hole */ +- while (blocks > add) { +- blocks -= add; ++ while (new_block_bytes > add) { ++ new_block_bytes -= add; + err = udf_add_aext(inode, last_pos, &last_ext->extLocation, + last_ext->extLength, 1); + if (err) + return err; + count++; + } +- if (blocks) { ++ if (new_block_bytes) { + last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED | +- (blocks << sb->s_blocksize_bits); ++ new_block_bytes; + err = udf_add_aext(inode, last_pos, &last_ext->extLocation, + last_ext->extLength, 1); + if (err) +@@ -596,6 +596,24 @@ out: + return count; + } + ++/* Extend the final block of the file to final_block_len bytes */ ++static void udf_do_extend_final_block(struct inode *inode, ++ struct extent_position *last_pos, ++ struct kernel_long_ad *last_ext, ++ uint32_t final_block_len) ++{ ++ struct super_block *sb = inode->i_sb; ++ uint32_t added_bytes; ++ ++ added_bytes = final_block_len - ++ (last_ext->extLength & (sb->s_blocksize - 1)); ++ last_ext->extLength += added_bytes; ++ UDF_I(inode)->i_lenExtents += added_bytes; ++ ++ udf_write_aext(inode, last_pos, &last_ext->extLocation, ++ last_ext->extLength, 1); ++} ++ + static int udf_extend_file(struct inode *inode, loff_t newsize) + { + +@@ -605,10 +623,12 @@ static int udf_extend_file(struct inode *inode, loff_t newsize) + int8_t etype; + struct super_block *sb = inode->i_sb; + sector_t first_block = newsize >> sb->s_blocksize_bits, offset; ++ unsigned long partial_final_block; + int adsize; + struct udf_inode_info *iinfo = UDF_I(inode); + struct kernel_long_ad extent; +- int err; ++ int err = 0; ++ int within_final_block; + + if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT) + adsize = sizeof(struct short_ad); +@@ -618,18 +638,8 @@ static int udf_extend_file(struct inode *inode, loff_t newsize) + BUG(); + + etype = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset); ++ within_final_block = (etype != -1); + +- /* File has extent covering the new size (could happen when extending +- * inside a block)? */ +- if (etype != -1) +- return 0; +- if (newsize & (sb->s_blocksize - 1)) +- offset++; +- /* Extended file just to the boundary of the last file block? */ +- if (offset == 0) +- return 0; +- +- /* Truncate is extending the file by 'offset' blocks */ + if ((!epos.bh && epos.offset == udf_file_entry_alloc_offset(inode)) || + (epos.bh && epos.offset == sizeof(struct allocExtDesc))) { + /* File has no extents at all or has empty last +@@ -643,7 +653,22 @@ static int udf_extend_file(struct inode *inode, loff_t newsize) + &extent.extLength, 0); + extent.extLength |= etype << 30; + } +- err = udf_do_extend_file(inode, &epos, &extent, offset); ++ ++ partial_final_block = newsize & (sb->s_blocksize - 1); ++ ++ /* File has extent covering the new size (could happen when extending ++ * inside a block)? 
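To make the byte-granular arithmetic in this hunk concrete, a standalone sketch with assumed example values; the variable names mirror the patch, but the program itself is hypothetical.

#include <stdio.h>

int main(void)
{
	/* assume 2048-byte blocks, i.e. s_blocksize_bits == 11 */
	unsigned int blocksize_bits = 11;
	unsigned long long offset = 3;			/* whole blocks in the hole */
	unsigned long long partial_final_block = 500;	/* tail bytes, < one block */

	/*
	 * Same shape as the patch's hole-length computation: the OR is
	 * safe because the tail is strictly smaller than one block.
	 */
	unsigned long long hole_bytes =
		(offset << blocksize_bits) | partial_final_block;

	printf("hole length = %llu bytes\n", hole_bytes);	/* prints 6644 */
	return 0;
}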
++ */ ++ if (within_final_block) { ++ /* Extending file within the last file block */ ++ udf_do_extend_final_block(inode, &epos, &extent, ++ partial_final_block); ++ } else { ++ loff_t add = ((loff_t)offset << sb->s_blocksize_bits) | ++ partial_final_block; ++ err = udf_do_extend_file(inode, &epos, &extent, add); ++ } ++ + if (err < 0) + goto out; + err = 0; +@@ -745,6 +770,7 @@ static sector_t inode_getblk(struct inode *inode, sector_t block, + /* Are we beyond EOF? */ + if (etype == -1) { + int ret; ++ loff_t hole_len; + isBeyondEOF = true; + if (count) { + if (c) +@@ -760,7 +786,8 @@ static sector_t inode_getblk(struct inode *inode, sector_t block, + startnum = (offset > 0); + } + /* Create extents for the hole between EOF and offset */ +- ret = udf_do_extend_file(inode, &prev_epos, laarr, offset); ++ hole_len = (loff_t)offset << inode->i_blkbits; ++ ret = udf_do_extend_file(inode, &prev_epos, laarr, hole_len); + if (ret < 0) { + *err = ret; + newblock = 0; +diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h +index 178a3933a71b..50ced8aba9db 100644 +--- a/include/linux/skmsg.h ++++ b/include/linux/skmsg.h +@@ -351,6 +351,8 @@ static inline void sk_psock_update_proto(struct sock *sk, + static inline void sk_psock_restore_proto(struct sock *sk, + struct sk_psock *psock) + { ++ sk->sk_write_space = psock->saved_write_space; ++ + if (psock->sk_proto) { + sk->sk_prot = psock->sk_proto; + psock->sk_proto = NULL; +diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h +index eaa1e762bf06..6124b4cebb42 100644 +--- a/include/linux/vmw_vmci_defs.h ++++ b/include/linux/vmw_vmci_defs.h +@@ -69,9 +69,18 @@ enum { + + /* + * A single VMCI device has an upper limit of 128MB on the amount of +- * memory that can be used for queue pairs. ++ * memory that can be used for queue pairs. Since each queue pair ++ * consists of at least two pages, the memory limit also dictates the ++ * number of queue pairs a guest can create. + */ + #define VMCI_MAX_GUEST_QP_MEMORY (128 * 1024 * 1024) ++#define VMCI_MAX_GUEST_QP_COUNT (VMCI_MAX_GUEST_QP_MEMORY / PAGE_SIZE / 2) ++ ++/* ++ * There can be at most PAGE_SIZE doorbells since there is one doorbell ++ * per byte in the doorbell bitmap page. ++ */ ++#define VMCI_MAX_GUEST_DOORBELL_COUNT PAGE_SIZE + + /* + * Queues with pre-mapped data pages must be small, so that we don't pin +diff --git a/include/net/ip6_tunnel.h b/include/net/ip6_tunnel.h +index 69b4bcf880c9..028eaea1c854 100644 +--- a/include/net/ip6_tunnel.h ++++ b/include/net/ip6_tunnel.h +@@ -158,9 +158,12 @@ static inline void ip6tunnel_xmit(struct sock *sk, struct sk_buff *skb, + memset(skb->cb, 0, sizeof(struct inet6_skb_parm)); + pkt_len = skb->len - skb_inner_network_offset(skb); + err = ip6_local_out(dev_net(skb_dst(skb)->dev), sk, skb); +- if (unlikely(net_xmit_eval(err))) +- pkt_len = -1; +- iptunnel_xmit_stats(dev, pkt_len); ++ ++ if (dev) { ++ if (unlikely(net_xmit_eval(err))) ++ pkt_len = -1; ++ iptunnel_xmit_stats(dev, pkt_len); ++ } + } + #endif + #endif +diff --git a/include/uapi/linux/usb/audio.h b/include/uapi/linux/usb/audio.h +index ddc5396800aa..76b7c3f6cd0d 100644 +--- a/include/uapi/linux/usb/audio.h ++++ b/include/uapi/linux/usb/audio.h +@@ -450,6 +450,43 @@ static inline __u8 *uac_processing_unit_specific(struct uac_processing_unit_desc + } + } + ++/* ++ * Extension Unit (XU) has almost compatible layout with Processing Unit, but ++ * on UAC2, it has a different bmControls size (bControlSize); it's 1 byte for ++ * XU while 2 bytes for PU. 
The last iExtension field is a one-byte index as ++ * well as iProcessing field of PU. ++ */ ++static inline __u8 uac_extension_unit_bControlSize(struct uac_processing_unit_descriptor *desc, ++ int protocol) ++{ ++ switch (protocol) { ++ case UAC_VERSION_1: ++ return desc->baSourceID[desc->bNrInPins + 4]; ++ case UAC_VERSION_2: ++ return 1; /* in UAC2, this value is constant */ ++ case UAC_VERSION_3: ++ return 4; /* in UAC3, this value is constant */ ++ default: ++ return 1; ++ } ++} ++ ++static inline __u8 uac_extension_unit_iExtension(struct uac_processing_unit_descriptor *desc, ++ int protocol) ++{ ++ __u8 control_size = uac_extension_unit_bControlSize(desc, protocol); ++ ++ switch (protocol) { ++ case UAC_VERSION_1: ++ case UAC_VERSION_2: ++ default: ++ return *(uac_processing_unit_bmControls(desc, protocol) ++ + control_size); ++ case UAC_VERSION_3: ++ return 0; /* UAC3 does not have this field */ ++ } ++} ++ + /* 4.5.2 Class-Specific AS Interface Descriptor */ + struct uac1_as_header_descriptor { + __u8 bLength; /* in bytes: 7 */ +diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c +index 1e525d70f833..1defea4b2755 100644 +--- a/kernel/bpf/devmap.c ++++ b/kernel/bpf/devmap.c +@@ -186,6 +186,7 @@ static void dev_map_free(struct bpf_map *map) + if (!dev) + continue; + ++ free_percpu(dev->bulkq); + dev_put(dev->dev); + kfree(dev); + } +@@ -281,6 +282,7 @@ void __dev_map_flush(struct bpf_map *map) + unsigned long *bitmap = this_cpu_ptr(dtab->flush_needed); + u32 bit; + ++ rcu_read_lock(); + for_each_set_bit(bit, bitmap, map->max_entries) { + struct bpf_dtab_netdev *dev = READ_ONCE(dtab->netdev_map[bit]); + struct xdp_bulk_queue *bq; +@@ -291,11 +293,12 @@ void __dev_map_flush(struct bpf_map *map) + if (unlikely(!dev)) + continue; + +- __clear_bit(bit, bitmap); +- + bq = this_cpu_ptr(dev->bulkq); + bq_xmit_all(dev, bq, XDP_XMIT_FLUSH, true); ++ ++ __clear_bit(bit, bitmap); + } ++ rcu_read_unlock(); + } + + /* rcu_read_lock (from syscall and BPF contexts) ensures that if a delete and/or +@@ -388,6 +391,7 @@ static void dev_map_flush_old(struct bpf_dtab_netdev *dev) + + int cpu; + ++ rcu_read_lock(); + for_each_online_cpu(cpu) { + bitmap = per_cpu_ptr(dev->dtab->flush_needed, cpu); + __clear_bit(dev->bit, bitmap); +@@ -395,6 +399,7 @@ static void dev_map_flush_old(struct bpf_dtab_netdev *dev) + bq = per_cpu_ptr(dev->bulkq, cpu); + bq_xmit_all(dev, bq, XDP_XMIT_FLUSH, false); + } ++ rcu_read_unlock(); + } + } + +diff --git a/net/can/af_can.c b/net/can/af_can.c +index e386d654116d..04132b0b5d36 100644 +--- a/net/can/af_can.c ++++ b/net/can/af_can.c +@@ -959,6 +959,8 @@ static struct pernet_operations can_pernet_ops __read_mostly = { + + static __init int can_init(void) + { ++ int err; ++ + /* check for correct padding to be able to use the structs similarly */ + BUILD_BUG_ON(offsetof(struct can_frame, can_dlc) != + offsetof(struct canfd_frame, len) || +@@ -972,15 +974,31 @@ static __init int can_init(void) + if (!rcv_cache) + return -ENOMEM; + +- register_pernet_subsys(&can_pernet_ops); ++ err = register_pernet_subsys(&can_pernet_ops); ++ if (err) ++ goto out_pernet; + + /* protocol register */ +- sock_register(&can_family_ops); +- register_netdevice_notifier(&can_netdev_notifier); ++ err = sock_register(&can_family_ops); ++ if (err) ++ goto out_sock; ++ err = register_netdevice_notifier(&can_netdev_notifier); ++ if (err) ++ goto out_notifier; ++ + dev_add_pack(&can_packet); + dev_add_pack(&canfd_packet); + + return 0; ++ ++out_notifier: ++ sock_unregister(PF_CAN); ++out_sock: ++ 
unregister_pernet_subsys(&can_pernet_ops); ++out_pernet: ++ kmem_cache_destroy(rcv_cache); ++ ++ return err; + } + + static __exit void can_exit(void) +diff --git a/net/core/skbuff.c b/net/core/skbuff.c +index e5bfd42fd083..4ea96fbf3b49 100644 +--- a/net/core/skbuff.c ++++ b/net/core/skbuff.c +@@ -2309,6 +2309,7 @@ do_frag_list: + kv.iov_base = skb->data + offset; + kv.iov_len = slen; + memset(&msg, 0, sizeof(msg)); ++ msg.msg_flags = MSG_DONTWAIT; + + ret = kernel_sendmsg_locked(sk, &msg, &kv, 1, slen); + if (ret <= 0) +diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c +index 3de0e9b0a482..8951de8b568f 100644 +--- a/net/ipv6/netfilter/nf_conntrack_reasm.c ++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c +@@ -265,8 +265,14 @@ static int nf_ct_frag6_queue(struct frag_queue *fq, struct sk_buff *skb, + + prev = fq->q.fragments_tail; + err = inet_frag_queue_insert(&fq->q, skb, offset, end); +- if (err) ++ if (err) { ++ if (err == IPFRAG_DUP) { ++ /* No error for duplicates, pretend they got queued. */ ++ kfree_skb(skb); ++ return -EINPROGRESS; ++ } + goto insert_error; ++ } + + if (dev) + fq->iif = dev->ifindex; +@@ -293,15 +299,17 @@ static int nf_ct_frag6_queue(struct frag_queue *fq, struct sk_buff *skb, + skb->_skb_refdst = 0UL; + err = nf_ct_frag6_reasm(fq, skb, prev, dev); + skb->_skb_refdst = orefdst; +- return err; ++ ++ /* After queue has assumed skb ownership, only 0 or ++ * -EINPROGRESS must be returned. ++ */ ++ return err ? -EINPROGRESS : 0; + } + + skb_dst_drop(skb); + return -EINPROGRESS; + + insert_error: +- if (err == IPFRAG_DUP) +- goto err; + inet_frag_kill(&fq->q); + err: + skb_dst_drop(skb); +@@ -480,12 +488,6 @@ int nf_ct_frag6_gather(struct net *net, struct sk_buff *skb, u32 user) + ret = 0; + } + +- /* after queue has assumed skb ownership, only 0 or -EINPROGRESS +- * must be returned. +- */ +- if (ret) +- ret = -EINPROGRESS; +- + spin_unlock_bh(&fq->q.lock); + inet_frag_put(&fq->q); + return ret; +diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h +index c875d45f1e1d..6708c1640207 100644 +--- a/net/mac80211/ieee80211_i.h ++++ b/net/mac80211/ieee80211_i.h +@@ -1434,7 +1434,7 @@ ieee80211_get_sband(struct ieee80211_sub_if_data *sdata) + rcu_read_lock(); + chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf); + +- if (WARN_ON(!chanctx_conf)) { ++ if (WARN_ON_ONCE(!chanctx_conf)) { + rcu_read_unlock(); + return NULL; + } +@@ -2034,6 +2034,13 @@ void __ieee80211_flush_queues(struct ieee80211_local *local, + + static inline bool ieee80211_can_run_worker(struct ieee80211_local *local) + { ++ /* ++ * It's unsafe to try to do any work during reconfigure flow. ++ * When the flow ends the work will be requeued. ++ */ ++ if (local->in_reconfig) ++ return false; ++ + /* + * If quiescing is set, we are racing with __ieee80211_suspend. 
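The in_reconfig check added above follows a common gate-and-requeue pattern: workers refuse to run while a restart is in flight, and the requeue loop in the net/mac80211/util.c hunk below reschedules them once it completes. A minimal sketch with hypothetical names, not mac80211 API:

struct demo_local {
	bool in_reconfig;	/* true for the whole restart flow */
};

static bool demo_can_run_worker(struct demo_local *local)
{
	/* Refuse work during reconfigure; the caller requeues the
	 * work item after the flag is cleared.
	 */
	return !local->in_reconfig;
}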
+ * __ieee80211_suspend flushes the workers after setting quiescing, +diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c +index 766e5e5bab8a..fe44f0d98de0 100644 +--- a/net/mac80211/mesh.c ++++ b/net/mac80211/mesh.c +@@ -929,6 +929,7 @@ void ieee80211_stop_mesh(struct ieee80211_sub_if_data *sdata) + + /* flush STAs and mpaths on this iface */ + sta_info_flush(sdata); ++ ieee80211_free_keys(sdata, true); + mesh_path_flush_by_iface(sdata); + + /* stop the beacon */ +@@ -1220,7 +1221,8 @@ int ieee80211_mesh_finish_csa(struct ieee80211_sub_if_data *sdata) + ifmsh->chsw_ttl = 0; + + /* Remove the CSA and MCSP elements from the beacon */ +- tmp_csa_settings = rcu_dereference(ifmsh->csa); ++ tmp_csa_settings = rcu_dereference_protected(ifmsh->csa, ++ lockdep_is_held(&sdata->wdev.mtx)); + RCU_INIT_POINTER(ifmsh->csa, NULL); + if (tmp_csa_settings) + kfree_rcu(tmp_csa_settings, rcu_head); +@@ -1242,6 +1244,8 @@ int ieee80211_mesh_csa_beacon(struct ieee80211_sub_if_data *sdata, + struct mesh_csa_settings *tmp_csa_settings; + int ret = 0; + ++ lockdep_assert_held(&sdata->wdev.mtx); ++ + tmp_csa_settings = kmalloc(sizeof(*tmp_csa_settings), + GFP_ATOMIC); + if (!tmp_csa_settings) +diff --git a/net/mac80211/util.c b/net/mac80211/util.c +index 447a55ae9df1..3400e2da7297 100644 +--- a/net/mac80211/util.c ++++ b/net/mac80211/util.c +@@ -2442,6 +2442,10 @@ int ieee80211_reconfig(struct ieee80211_local *local) + mutex_lock(&local->mtx); + ieee80211_start_next_roc(local); + mutex_unlock(&local->mtx); ++ ++ /* Requeue all works */ ++ list_for_each_entry(sdata, &local->interfaces, list) ++ ieee80211_queue_work(&local->hw, &sdata->work); + } + + ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP, +diff --git a/net/wireless/pmsr.c b/net/wireless/pmsr.c +index 5e2ab01d325c..d06d514f0bba 100644 +--- a/net/wireless/pmsr.c ++++ b/net/wireless/pmsr.c +@@ -1,6 +1,6 @@ + /* SPDX-License-Identifier: GPL-2.0 */ + /* +- * Copyright (C) 2018 Intel Corporation ++ * Copyright (C) 2018 - 2019 Intel Corporation + */ + #ifndef __PMSR_H + #define __PMSR_H +@@ -446,7 +446,7 @@ static int nl80211_pmsr_send_result(struct sk_buff *msg, + + if (res->ap_tsf_valid && + nla_put_u64_64bit(msg, NL80211_PMSR_RESP_ATTR_AP_TSF, +- res->host_time, NL80211_PMSR_RESP_ATTR_PAD)) ++ res->ap_tsf, NL80211_PMSR_RESP_ATTR_PAD)) + goto error; + + if (res->final && nla_put_flag(msg, NL80211_PMSR_RESP_ATTR_FINAL)) +diff --git a/net/wireless/util.c b/net/wireless/util.c +index 75899b62bdc9..5ac66a571e33 100644 +--- a/net/wireless/util.c ++++ b/net/wireless/util.c +@@ -1237,7 +1237,7 @@ static u32 cfg80211_calculate_bitrate_he(struct rate_info *rate) + if (rate->he_dcm) + result /= 2; + +- return result; ++ return result / 10000; + } + + u32 cfg80211_calculate_bitrate(struct rate_info *rate) +@@ -1989,7 +1989,7 @@ int ieee80211_get_vht_max_nss(struct ieee80211_vht_cap *cap, + continue; + + if (supp >= mcs_encoding) { +- max_vht_nss = i; ++ max_vht_nss = i + 1; + break; + } + } +diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c +index 989e52386c35..2f7e2c33a812 100644 +--- a/net/xdp/xdp_umem.c ++++ b/net/xdp/xdp_umem.c +@@ -143,6 +143,9 @@ static void xdp_umem_clear_dev(struct xdp_umem *umem) + struct netdev_bpf bpf; + int err; + ++ if (!umem->dev) ++ return; ++ + if (umem->zc) { + bpf.command = XDP_SETUP_XSK_UMEM; + bpf.xsk.umem = NULL; +@@ -156,11 +159,9 @@ static void xdp_umem_clear_dev(struct xdp_umem *umem) + WARN(1, "failed to disable umem!\n"); + } + +- if (umem->dev) { +- rtnl_lock(); +- xdp_clear_umem_at_qid(umem->dev, 
umem->queue_id); +- rtnl_unlock(); +- } ++ rtnl_lock(); ++ xdp_clear_umem_at_qid(umem->dev, umem->queue_id); ++ rtnl_unlock(); + + if (umem->zc) { + dev_put(umem->dev); +diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c +index eae7b635343d..6e87cc831e84 100644 +--- a/samples/bpf/bpf_load.c ++++ b/samples/bpf/bpf_load.c +@@ -678,7 +678,7 @@ void read_trace_pipe(void) + static char buf[4096]; + ssize_t sz; + +- sz = read(trace_fd, buf, sizeof(buf)); ++ sz = read(trace_fd, buf, sizeof(buf) - 1); + if (sz > 0) { + buf[sz] = 0; + puts(buf); +diff --git a/samples/bpf/task_fd_query_user.c b/samples/bpf/task_fd_query_user.c +index aff2b4ae914e..e39938058223 100644 +--- a/samples/bpf/task_fd_query_user.c ++++ b/samples/bpf/task_fd_query_user.c +@@ -216,7 +216,7 @@ static int test_debug_fs_uprobe(char *binary_path, long offset, bool is_return) + { + const char *event_type = "uprobe"; + struct perf_event_attr attr = {}; +- char buf[256], event_alias[256]; ++ char buf[256], event_alias[sizeof("test_1234567890")]; + __u64 probe_offset, probe_addr; + __u32 len, prog_id, fd_type; + int err, res, kfd, efd; +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index ee620f39dbe3..dde9a49ded78 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -3236,6 +3236,7 @@ static void alc256_init(struct hda_codec *codec) + alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */ + alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */ + alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15); ++ alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/ + } + + static void alc256_shutup(struct hda_codec *codec) +@@ -7782,7 +7783,6 @@ static int patch_alc269(struct hda_codec *codec) + spec->shutup = alc256_shutup; + spec->init_hook = alc256_init; + spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */ +- alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/ + break; + case 0x10ec0257: + spec->codec_variant = ALC269_TYPE_ALC257; +diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c +index 53dccbfe392b..34fbecab2210 100644 +--- a/sound/usb/mixer.c ++++ b/sound/usb/mixer.c +@@ -2318,7 +2318,7 @@ static struct procunit_info extunits[] = { + */ + static int build_audio_procunit(struct mixer_build *state, int unitid, + void *raw_desc, struct procunit_info *list, +- char *name) ++ bool extension_unit) + { + struct uac_processing_unit_descriptor *desc = raw_desc; + int num_ins; +@@ -2335,6 +2335,8 @@ static int build_audio_procunit(struct mixer_build *state, int unitid, + static struct procunit_info default_info = { + 0, NULL, default_value_info + }; ++ const char *name = extension_unit ? 
++ "Extension Unit" : "Processing Unit"; + + if (desc->bLength < 13) { + usb_audio_err(state->chip, "invalid %s descriptor (id %d)\n", name, unitid); +@@ -2448,7 +2450,10 @@ static int build_audio_procunit(struct mixer_build *state, int unitid, + } else if (info->name) { + strlcpy(kctl->id.name, info->name, sizeof(kctl->id.name)); + } else { +- nameid = uac_processing_unit_iProcessing(desc, state->mixer->protocol); ++ if (extension_unit) ++ nameid = uac_extension_unit_iExtension(desc, state->mixer->protocol); ++ else ++ nameid = uac_processing_unit_iProcessing(desc, state->mixer->protocol); + len = 0; + if (nameid) + len = snd_usb_copy_string_desc(state->chip, +@@ -2481,10 +2486,10 @@ static int parse_audio_processing_unit(struct mixer_build *state, int unitid, + case UAC_VERSION_2: + default: + return build_audio_procunit(state, unitid, raw_desc, +- procunits, "Processing Unit"); ++ procunits, false); + case UAC_VERSION_3: + return build_audio_procunit(state, unitid, raw_desc, +- uac3_procunits, "Processing Unit"); ++ uac3_procunits, false); + } + } + +@@ -2495,8 +2500,7 @@ static int parse_audio_extension_unit(struct mixer_build *state, int unitid, + * Note that we parse extension units with processing unit descriptors. + * That's ok as the layout is the same. + */ +- return build_audio_procunit(state, unitid, raw_desc, +- extunits, "Extension Unit"); ++ return build_audio_procunit(state, unitid, raw_desc, extunits, true); + } + + /* +diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c +index 994a7e0d16fb..14f581b562bd 100644 +--- a/tools/bpf/bpftool/map.c ++++ b/tools/bpf/bpftool/map.c +@@ -713,12 +713,14 @@ static int dump_map_elem(int fd, void *key, void *value, + return 0; + + if (json_output) { ++ jsonw_start_object(json_wtr); + jsonw_name(json_wtr, "key"); + print_hex_data_json(key, map_info->key_size); + jsonw_name(json_wtr, "value"); + jsonw_start_object(json_wtr); + jsonw_string_field(json_wtr, "error", strerror(lookup_errno)); + jsonw_end_object(json_wtr); ++ jsonw_end_object(json_wtr); + } else { + if (errno == ENOENT) + print_entry_plain(map_info, key, NULL); +diff --git a/tools/perf/Documentation/intel-pt.txt b/tools/perf/Documentation/intel-pt.txt +index 115eaacc455f..60d99e5e7921 100644 +--- a/tools/perf/Documentation/intel-pt.txt ++++ b/tools/perf/Documentation/intel-pt.txt +@@ -88,16 +88,16 @@ smaller. + + To represent software control flow, "branches" samples are produced. By default + a branch sample is synthesized for every single branch. To get an idea what +-data is available you can use the 'perf script' tool with no parameters, which +-will list all the samples. ++data is available you can use the 'perf script' tool with all itrace sampling ++options, which will list all the samples. 
+ + perf record -e intel_pt//u ls +- perf script ++ perf script --itrace=ibxwpe + + An interesting field that is not printed by default is 'flags' which can be + displayed as follows: + +- perf script -Fcomm,tid,pid,time,cpu,event,trace,ip,sym,dso,addr,symoff,flags ++ perf script --itrace=ibxwpe -F+flags + + The flags are "bcrosyiABEx" which stand for branch, call, return, conditional, + system, asynchronous, interrupt, transaction abort, trace begin, trace end, and +@@ -713,7 +713,7 @@ Having no option is the same as + + which, in turn, is the same as + +- --itrace=ibxwpe ++ --itrace=cepwx + + The letters are: + +diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c +index fb76b6b232d4..5dd9d1893b89 100644 +--- a/tools/perf/util/auxtrace.c ++++ b/tools/perf/util/auxtrace.c +@@ -1010,7 +1010,8 @@ int itrace_parse_synth_opts(const struct option *opt, const char *str, + } + + if (!str) { +- itrace_synth_opts__set_default(synth_opts, false); ++ itrace_synth_opts__set_default(synth_opts, ++ synth_opts->default_no_sample); + return 0; + } + +diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c +index 2d2af2ac2b1e..682e3d524d3c 100644 +--- a/tools/perf/util/header.c ++++ b/tools/perf/util/header.c +@@ -3549,6 +3549,7 @@ int perf_event__synthesize_features(struct perf_tool *tool, + return -ENOMEM; + + ff.size = sz - sz_hdr; ++ ff.ph = &session->header; + + for_each_set_bit(feat, header->adds_features, HEADER_FEAT_BITS) { + if (!feat_ops[feat].synthesize) { +diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c +index 6d288237887b..03b1da6d1da4 100644 +--- a/tools/perf/util/intel-pt.c ++++ b/tools/perf/util/intel-pt.c +@@ -2588,7 +2588,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event, + } else { + itrace_synth_opts__set_default(&pt->synth_opts, + session->itrace_synth_opts->default_no_sample); +- if (use_browser != -1) { ++ if (!session->itrace_synth_opts->default_no_sample && ++ !session->itrace_synth_opts->inject) { + pt->synth_opts.branches = false; + pt->synth_opts.callchain = true; + } +diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c +index e0429f4ef335..faa8eb231e1b 100644 +--- a/tools/perf/util/pmu.c ++++ b/tools/perf/util/pmu.c +@@ -709,9 +709,7 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu) + { + int i; + struct pmu_events_map *map; +- struct pmu_event *pe; + const char *name = pmu->name; +- const char *pname; + + map = perf_pmu__find_map(pmu); + if (!map) +@@ -722,28 +720,26 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu) + */ + i = 0; + while (1) { ++ const char *cpu_name = is_arm_pmu_core(name) ? name : "cpu"; ++ struct pmu_event *pe = &map->table[i++]; ++ const char *pname = pe->pmu ? pe->pmu : cpu_name; + +- pe = &map->table[i++]; + if (!pe->name) { + if (pe->metric_group || pe->metric_name) + continue; + break; + } + +- if (!is_arm_pmu_core(name)) { +- pname = pe->pmu ? 
pe->pmu : "cpu"; +- +- /* +- * uncore alias may be from different PMU +- * with common prefix +- */ +- if (pmu_is_uncore(name) && +- !strncmp(pname, name, strlen(pname))) +- goto new_alias; ++ /* ++ * uncore alias may be from different PMU ++ * with common prefix ++ */ ++ if (pmu_is_uncore(name) && ++ !strncmp(pname, name, strlen(pname))) ++ goto new_alias; + +- if (strcmp(pname, name)) +- continue; +- } ++ if (strcmp(pname, name)) ++ continue; + + new_alias: + /* need type casts to override 'const' */ +diff --git a/tools/perf/util/thread-stack.c b/tools/perf/util/thread-stack.c +index 41942c2aaa18..402d099eddaf 100644 +--- a/tools/perf/util/thread-stack.c ++++ b/tools/perf/util/thread-stack.c +@@ -625,6 +625,23 @@ static int thread_stack__bottom(struct thread_stack *ts, + true, false); + } + ++static int thread_stack__pop_ks(struct thread *thread, struct thread_stack *ts, ++ struct perf_sample *sample, u64 ref) ++{ ++ u64 tm = sample->time; ++ int err; ++ ++ /* Return to userspace, so pop all kernel addresses */ ++ while (thread_stack__in_kernel(ts)) { ++ err = thread_stack__call_return(thread, ts, --ts->cnt, ++ tm, ref, true); ++ if (err) ++ return err; ++ } ++ ++ return 0; ++} ++ + static int thread_stack__no_call_return(struct thread *thread, + struct thread_stack *ts, + struct perf_sample *sample, +@@ -905,7 +922,18 @@ int thread_stack__process(struct thread *thread, struct comm *comm, + ts->rstate = X86_RETPOLINE_DETECTED; + + } else if (sample->flags & PERF_IP_FLAG_RETURN) { +- if (!sample->ip || !sample->addr) ++ if (!sample->addr) { ++ u32 return_from_kernel = PERF_IP_FLAG_SYSCALLRET | ++ PERF_IP_FLAG_INTERRUPT; ++ ++ if (!(sample->flags & return_from_kernel)) ++ return 0; ++ ++ /* Pop kernel stack */ ++ return thread_stack__pop_ks(thread, ts, sample, ref); ++ } ++ ++ if (!sample->ip) + return 0; + + /* x86 retpoline 'return' doesn't match the stack */ +diff --git a/tools/testing/selftests/bpf/verifier/div_overflow.c b/tools/testing/selftests/bpf/verifier/div_overflow.c +index bd3f38dbe796..acab4f00819f 100644 +--- a/tools/testing/selftests/bpf/verifier/div_overflow.c ++++ b/tools/testing/selftests/bpf/verifier/div_overflow.c +@@ -29,8 +29,11 @@ + "DIV64 overflow, check 1", + .insns = { + BPF_MOV64_IMM(BPF_REG_1, -1), +- BPF_LD_IMM64(BPF_REG_0, LLONG_MIN), +- BPF_ALU64_REG(BPF_DIV, BPF_REG_0, BPF_REG_1), ++ BPF_LD_IMM64(BPF_REG_2, LLONG_MIN), ++ BPF_ALU64_REG(BPF_DIV, BPF_REG_2, BPF_REG_1), ++ BPF_MOV32_IMM(BPF_REG_0, 0), ++ BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 1), ++ BPF_MOV32_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_SCHED_CLS, +@@ -40,8 +43,11 @@ + { + "DIV64 overflow, check 2", + .insns = { +- BPF_LD_IMM64(BPF_REG_0, LLONG_MIN), +- BPF_ALU64_IMM(BPF_DIV, BPF_REG_0, -1), ++ BPF_LD_IMM64(BPF_REG_1, LLONG_MIN), ++ BPF_ALU64_IMM(BPF_DIV, BPF_REG_1, -1), ++ BPF_MOV32_IMM(BPF_REG_0, 0), ++ BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_1, 1), ++ BPF_MOV32_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_SCHED_CLS, +diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c +index 7fc272ecae16..1b1c449ceaf4 100644 +--- a/virt/kvm/arm/arch_timer.c ++++ b/virt/kvm/arm/arch_timer.c +@@ -321,14 +321,15 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level, + } + } + ++/* Only called for a fully emulated timer */ + static void timer_emulate(struct arch_timer_context *ctx) + { + bool should_fire = kvm_timer_should_fire(ctx); + + trace_kvm_timer_emulate(ctx, should_fire); + +- if (should_fire) { +- 
kvm_timer_update_irq(ctx->vcpu, true, ctx);
++	if (should_fire != ctx->irq.level) {
++		kvm_timer_update_irq(ctx->vcpu, should_fire, ctx);
+ 		return;
+ 	}
+ 
+diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
+index 44ceaccb18cf..8c9fe831bce4 100644
+--- a/virt/kvm/arm/vgic/vgic-its.c
++++ b/virt/kvm/arm/vgic/vgic-its.c
+@@ -1734,6 +1734,7 @@ static void vgic_its_destroy(struct kvm_device *kvm_dev)
+ 
+ 	mutex_unlock(&its->its_lock);
+ 	kfree(its);
++	kfree(kvm_dev); /* allocated by kvm_ioctl_create_device, freed by .destroy */
+ }
+ 
+ static int vgic_its_has_attr_regs(struct kvm_device *dev,