From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1525277730.7cde86eb95c44a9bfb0eab9ae50e3ac566563f2c.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.16 commit in: /
Date: Wed, 2 May 2018 16:15:40 +0000 (UTC)
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1006_linux-4.16.7.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 7cde86eb95c44a9bfb0eab9ae50e3ac566563f2c
X-VCS-Branch: 4.16

commit:     7cde86eb95c44a9bfb0eab9ae50e3ac566563f2c
Author:     Mike Pagano gentoo org>
AuthorDate: Wed May  2 16:15:30 2018 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Wed May  2 16:15:30 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7cde86eb

Linux patch 4.16.7

 0000_README             |    4 +
 1006_linux-4.16.7.patch | 4737 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4741 insertions(+)

diff --git a/0000_README b/0000_README
index d4182dc..1139362 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-4.16.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.16.6
 
+Patch:  1006_linux-4.16.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.16.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1006_linux-4.16.7.patch b/1006_linux-4.16.7.patch new file mode 100644 index 0000000..4dec6c8 --- /dev/null +++ b/1006_linux-4.16.7.patch @@ -0,0 +1,4737 @@ +diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt +index d6b3ff51a14f..36187fc32ab2 100644 +--- a/Documentation/virtual/kvm/api.txt ++++ b/Documentation/virtual/kvm/api.txt +@@ -1960,6 +1960,9 @@ ARM 32-bit VFP control registers have the following id bit patterns: + ARM 64-bit FP registers have the following id bit patterns: + 0x4030 0000 0012 0 + ++ARM firmware pseudo-registers have the following bit pattern: ++ 0x4030 0000 0014 ++ + + arm64 registers are mapped using the lower 32 bits. The upper 16 of + that is the register group type, or coprocessor number: +@@ -1976,6 +1979,9 @@ arm64 CCSIDR registers are demultiplexed by CSSELR value: + arm64 system registers have the following id bit patterns: + 0x6030 0000 0013 + ++arm64 firmware pseudo-registers have the following bit pattern: ++ 0x6030 0000 0014 ++ + + MIPS registers are mapped using the lower 32 bits. The upper 16 of that is + the register group type: +@@ -2510,7 +2516,8 @@ Possible features: + and execute guest code when KVM_RUN is called. + - KVM_ARM_VCPU_EL1_32BIT: Starts the CPU in a 32bit mode. + Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only). +- - KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU. ++ - KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 (or a future revision ++ backward compatible with v0.2) for the CPU. + Depends on KVM_CAP_ARM_PSCI_0_2. + - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU. + Depends on KVM_CAP_ARM_PMU_V3. +diff --git a/Documentation/virtual/kvm/arm/psci.txt b/Documentation/virtual/kvm/arm/psci.txt +new file mode 100644 +index 000000000000..aafdab887b04 +--- /dev/null ++++ b/Documentation/virtual/kvm/arm/psci.txt +@@ -0,0 +1,30 @@ ++KVM implements the PSCI (Power State Coordination Interface) ++specification in order to provide services such as CPU on/off, reset ++and power-off to the guest. ++ ++The PSCI specification is regularly updated to provide new features, ++and KVM implements these updates if they make sense from a virtualization ++point of view. ++ ++This means that a guest booted on two different versions of KVM can ++observe two different "firmware" revisions. This could cause issues if ++a given guest is tied to a particular PSCI revision (unlikely), or if ++a migration causes a different PSCI version to be exposed out of the ++blue to an unsuspecting guest. ++ ++In order to remedy this situation, KVM exposes a set of "firmware ++pseudo-registers" that can be manipulated using the GET/SET_ONE_REG ++interface. These registers can be saved/restored by userspace, and set ++to a convenient value if required. 
++ ++The following register is defined: ++ ++* KVM_REG_ARM_PSCI_VERSION: ++ ++ - Only valid if the vcpu has the KVM_ARM_VCPU_PSCI_0_2 feature set ++ (and thus has already been initialized) ++ - Returns the current PSCI version on GET_ONE_REG (defaulting to the ++ highest PSCI version implemented by KVM and compatible with v0.2) ++ - Allows any PSCI version implemented by KVM and compatible with ++ v0.2 to be set with SET_ONE_REG ++ - Affects the whole VM (even if the register view is per-vcpu) +diff --git a/Makefile b/Makefile +index 41f07b2b7905..1c5d5d8c45e2 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 4 + PATCHLEVEL = 16 +-SUBLEVEL = 6 ++SUBLEVEL = 7 + EXTRAVERSION = + NAME = Fearless Coyote + +diff --git a/arch/arm/boot/dts/gemini-nas4220b.dts b/arch/arm/boot/dts/gemini-nas4220b.dts +index 8bbb6f85d161..4785fbcc41ed 100644 +--- a/arch/arm/boot/dts/gemini-nas4220b.dts ++++ b/arch/arm/boot/dts/gemini-nas4220b.dts +@@ -134,37 +134,37 @@ + function = "gmii"; + groups = "gmii_gmac0_grp"; + }; +- /* Settings come from OpenWRT */ ++ /* Settings come from OpenWRT, pins on SL3516 */ + conf0 { +- pins = "R8 GMAC0 RXDV", "U11 GMAC1 RXDV"; ++ pins = "V8 GMAC0 RXDV", "T10 GMAC1 RXDV"; + skew-delay = <0>; + }; + conf1 { +- pins = "T8 GMAC0 RXC", "T11 GMAC1 RXC"; ++ pins = "Y7 GMAC0 RXC", "Y11 GMAC1 RXC"; + skew-delay = <15>; + }; + conf2 { +- pins = "P8 GMAC0 TXEN", "V11 GMAC1 TXEN"; ++ pins = "T8 GMAC0 TXEN", "W11 GMAC1 TXEN"; + skew-delay = <7>; + }; + conf3 { +- pins = "V7 GMAC0 TXC"; ++ pins = "U8 GMAC0 TXC"; + skew-delay = <11>; + }; + conf4 { +- pins = "P10 GMAC1 TXC"; ++ pins = "V11 GMAC1 TXC"; + skew-delay = <10>; + }; + conf5 { + /* The data lines all have default skew */ +- pins = "U8 GMAC0 RXD0", "V8 GMAC0 RXD1", +- "P9 GMAC0 RXD2", "R9 GMAC0 RXD3", +- "U7 GMAC0 TXD0", "T7 GMAC0 TXD1", +- "R7 GMAC0 TXD2", "P7 GMAC0 TXD3", +- "R11 GMAC1 RXD0", "P11 GMAC1 RXD1", +- "V12 GMAC1 RXD2", "U12 GMAC1 RXD3", +- "R10 GMAC1 TXD0", "T10 GMAC1 TXD1", +- "U10 GMAC1 TXD2", "V10 GMAC1 TXD3"; ++ pins = "W8 GMAC0 RXD0", "V9 GMAC0 RXD1", ++ "Y8 GMAC0 RXD2", "U9 GMAC0 RXD3", ++ "T7 GMAC0 TXD0", "U6 GMAC0 TXD1", ++ "V7 GMAC0 TXD2", "U7 GMAC0 TXD3", ++ "Y12 GMAC1 RXD0", "V12 GMAC1 RXD1", ++ "T11 GMAC1 RXD2", "W12 GMAC1 RXD3", ++ "U10 GMAC1 TXD0", "Y10 GMAC1 TXD1", ++ "W10 GMAC1 TXD2", "T9 GMAC1 TXD3"; + skew-delay = <7>; + }; + /* Set up drive strength on GMAC0 to 16 mA */ +diff --git a/arch/arm/configs/socfpga_defconfig b/arch/arm/configs/socfpga_defconfig +index 2620ce790db0..371fca4e1ab7 100644 +--- a/arch/arm/configs/socfpga_defconfig ++++ b/arch/arm/configs/socfpga_defconfig +@@ -57,6 +57,7 @@ CONFIG_MTD_M25P80=y + CONFIG_MTD_NAND=y + CONFIG_MTD_NAND_DENALI_DT=y + CONFIG_MTD_SPI_NOR=y ++# CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set + CONFIG_SPI_CADENCE_QUADSPI=y + CONFIG_OF_OVERLAY=y + CONFIG_OF_CONFIGFS=y +diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h +index 248b930563e5..8b908d23c58a 100644 +--- a/arch/arm/include/asm/kvm_host.h ++++ b/arch/arm/include/asm/kvm_host.h +@@ -77,6 +77,9 @@ struct kvm_arch { + /* Interrupt controller */ + struct vgic_dist vgic; + int max_vcpus; ++ ++ /* Mandated version of PSCI */ ++ u32 psci_version; + }; + + #define KVM_NR_MEM_OBJS 40 +diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h +index 6edd177bb1c7..47dfc99f5cd0 100644 +--- a/arch/arm/include/uapi/asm/kvm.h ++++ b/arch/arm/include/uapi/asm/kvm.h +@@ -186,6 +186,12 @@ struct kvm_arch_memory_slot { 
+ #define KVM_REG_ARM_VFP_FPINST 0x1009 + #define KVM_REG_ARM_VFP_FPINST2 0x100A + ++/* KVM-as-firmware specific pseudo-registers */ ++#define KVM_REG_ARM_FW (0x0014 << KVM_REG_ARM_COPROC_SHIFT) ++#define KVM_REG_ARM_FW_REG(r) (KVM_REG_ARM | KVM_REG_SIZE_U64 | \ ++ KVM_REG_ARM_FW | ((r) & 0xffff)) ++#define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0) ++ + /* Device Control API: ARM VGIC */ + #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 + #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1 +diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c +index 1e0784ebbfd6..a18f33edc471 100644 +--- a/arch/arm/kvm/guest.c ++++ b/arch/arm/kvm/guest.c +@@ -22,6 +22,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -176,6 +177,7 @@ static unsigned long num_core_regs(void) + unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) + { + return num_core_regs() + kvm_arm_num_coproc_regs(vcpu) ++ + kvm_arm_get_fw_num_regs(vcpu) + + NUM_TIMER_REGS; + } + +@@ -196,6 +198,11 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) + uindices++; + } + ++ ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices); ++ if (ret) ++ return ret; ++ uindices += kvm_arm_get_fw_num_regs(vcpu); ++ + ret = copy_timer_indices(vcpu, uindices); + if (ret) + return ret; +@@ -214,6 +221,9 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) + if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) + return get_core_reg(vcpu, reg); + ++ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) ++ return kvm_arm_get_fw_reg(vcpu, reg); ++ + if (is_timer_reg(reg->id)) + return get_timer_reg(vcpu, reg); + +@@ -230,6 +240,9 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) + if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) + return set_core_reg(vcpu, reg); + ++ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) ++ return kvm_arm_set_fw_reg(vcpu, reg); ++ + if (is_timer_reg(reg->id)) + return set_timer_reg(vcpu, reg); + +diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h +index 596f8e414a4c..b9e355bd3b78 100644 +--- a/arch/arm64/include/asm/kvm_host.h ++++ b/arch/arm64/include/asm/kvm_host.h +@@ -75,6 +75,9 @@ struct kvm_arch { + + /* Interrupt controller */ + struct vgic_dist vgic; ++ ++ /* Mandated version of PSCI */ ++ u32 psci_version; + }; + + #define KVM_NR_MEM_OBJS 40 +diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h +index 9abbf3044654..04b3256f8e6d 100644 +--- a/arch/arm64/include/uapi/asm/kvm.h ++++ b/arch/arm64/include/uapi/asm/kvm.h +@@ -206,6 +206,12 @@ struct kvm_arch_memory_slot { + #define KVM_REG_ARM_TIMER_CNT ARM64_SYS_REG(3, 3, 14, 3, 2) + #define KVM_REG_ARM_TIMER_CVAL ARM64_SYS_REG(3, 3, 14, 0, 2) + ++/* KVM-as-firmware specific pseudo-registers */ ++#define KVM_REG_ARM_FW (0x0014 << KVM_REG_ARM_COPROC_SHIFT) ++#define KVM_REG_ARM_FW_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \ ++ KVM_REG_ARM_FW | ((r) & 0xffff)) ++#define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0) ++ + /* Device Control API: ARM VGIC */ + #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 + #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1 +diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c +index 959e50d2588c..56a0260ceb11 100644 +--- a/arch/arm64/kvm/guest.c ++++ b/arch/arm64/kvm/guest.c +@@ -25,6 +25,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -205,7 +206,7 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) + 
unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) + { + return num_core_regs() + kvm_arm_num_sys_reg_descs(vcpu) +- + NUM_TIMER_REGS; ++ + kvm_arm_get_fw_num_regs(vcpu) + NUM_TIMER_REGS; + } + + /** +@@ -225,6 +226,11 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) + uindices++; + } + ++ ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices); ++ if (ret) ++ return ret; ++ uindices += kvm_arm_get_fw_num_regs(vcpu); ++ + ret = copy_timer_indices(vcpu, uindices); + if (ret) + return ret; +@@ -243,6 +249,9 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) + if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) + return get_core_reg(vcpu, reg); + ++ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) ++ return kvm_arm_get_fw_reg(vcpu, reg); ++ + if (is_timer_reg(reg->id)) + return get_timer_reg(vcpu, reg); + +@@ -259,6 +268,9 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) + if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) + return set_core_reg(vcpu, reg); + ++ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) ++ return kvm_arm_set_fw_reg(vcpu, reg); ++ + if (is_timer_reg(reg->id)) + return set_timer_reg(vcpu, reg); + +diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c +index fe6fc63251fe..38c5b4764bfe 100644 +--- a/arch/powerpc/kernel/mce_power.c ++++ b/arch/powerpc/kernel/mce_power.c +@@ -441,7 +441,6 @@ static int mce_handle_ierror(struct pt_regs *regs, + if (pfn != ULONG_MAX) { + *phys_addr = + (pfn << PAGE_SHIFT); +- handled = 1; + } + } + } +@@ -532,9 +531,7 @@ static int mce_handle_derror(struct pt_regs *regs, + * kernel/exception-64s.h + */ + if (get_paca()->in_mce < MAX_MCE_DEPTH) +- if (!mce_find_instr_ea_and_pfn(regs, addr, +- phys_addr)) +- handled = 1; ++ mce_find_instr_ea_and_pfn(regs, addr, phys_addr); + } + found = 1; + } +@@ -572,7 +569,7 @@ static long mce_handle_error(struct pt_regs *regs, + const struct mce_ierror_table itable[]) + { + struct mce_error_info mce_err = { 0 }; +- uint64_t addr, phys_addr; ++ uint64_t addr, phys_addr = ULONG_MAX; + uint64_t srr1 = regs->msr; + long handled; + +diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c +index fe8c61149fb8..0cd9031b6b54 100644 +--- a/arch/powerpc/mm/mem.c ++++ b/arch/powerpc/mm/mem.c +@@ -143,6 +143,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, + start, start + size, rc); + return -EFAULT; + } ++ flush_inval_dcache_range(start, start + size); + + return __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock); + } +@@ -169,6 +170,7 @@ int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) + + /* Remove htab bolted mappings for this section of memory */ + start = (unsigned long)__va(start); ++ flush_inval_dcache_range(start, start + size); + ret = remove_section_mapping(start, start + size); + + /* Ensure all vmalloc mappings are flushed in case they also +diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c +index 0a253b64ac5f..e7b621f619b2 100644 +--- a/arch/powerpc/platforms/powernv/npu-dma.c ++++ b/arch/powerpc/platforms/powernv/npu-dma.c +@@ -33,6 +33,13 @@ + + #define npu_to_phb(x) container_of(x, struct pnv_phb, npu) + ++/* ++ * When an address shootdown range exceeds this threshold we invalidate the ++ * entire TLB on the GPU for the given PID rather than each specific address in ++ * the range. 
++ */ ++#define ATSD_THRESHOLD (2*1024*1024) ++ + /* + * Other types of TCE cache invalidation are not functional in the + * hardware. +@@ -627,11 +634,19 @@ static void pnv_npu2_mn_invalidate_range(struct mmu_notifier *mn, + struct npu_context *npu_context = mn_to_npu_context(mn); + unsigned long address; + +- for (address = start; address < end; address += PAGE_SIZE) +- mmio_invalidate(npu_context, 1, address, false); ++ if (end - start > ATSD_THRESHOLD) { ++ /* ++ * Just invalidate the entire PID if the address range is too ++ * large. ++ */ ++ mmio_invalidate(npu_context, 0, 0, true); ++ } else { ++ for (address = start; address < end; address += PAGE_SIZE) ++ mmio_invalidate(npu_context, 1, address, false); + +- /* Do the flush only on the final addess == end */ +- mmio_invalidate(npu_context, 1, address, true); ++ /* Do the flush only on the final addess == end */ ++ mmio_invalidate(npu_context, 1, address, true); ++ } + } + + static const struct mmu_notifier_ops nv_nmmu_notifier_ops = { +diff --git a/arch/powerpc/platforms/powernv/opal-rtc.c b/arch/powerpc/platforms/powernv/opal-rtc.c +index f8868864f373..aa2a5139462e 100644 +--- a/arch/powerpc/platforms/powernv/opal-rtc.c ++++ b/arch/powerpc/platforms/powernv/opal-rtc.c +@@ -48,10 +48,12 @@ unsigned long __init opal_get_boot_time(void) + + while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { + rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms); +- if (rc == OPAL_BUSY_EVENT) ++ if (rc == OPAL_BUSY_EVENT) { ++ mdelay(OPAL_BUSY_DELAY_MS); + opal_poll_events(NULL); +- else if (rc == OPAL_BUSY) +- mdelay(10); ++ } else if (rc == OPAL_BUSY) { ++ mdelay(OPAL_BUSY_DELAY_MS); ++ } + } + if (rc != OPAL_SUCCESS) + return 0; +diff --git a/arch/sparc/include/uapi/asm/oradax.h b/arch/sparc/include/uapi/asm/oradax.h +index 722951908b0a..4f6676fe4bcc 100644 +--- a/arch/sparc/include/uapi/asm/oradax.h ++++ b/arch/sparc/include/uapi/asm/oradax.h +@@ -3,7 +3,7 @@ + * + * This program is free software: you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by +- * the Free Software Foundation, either version 3 of the License, or ++ * the Free Software Foundation, either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, +diff --git a/arch/x86/include/uapi/asm/msgbuf.h b/arch/x86/include/uapi/asm/msgbuf.h +index 809134c644a6..90ab9a795b49 100644 +--- a/arch/x86/include/uapi/asm/msgbuf.h ++++ b/arch/x86/include/uapi/asm/msgbuf.h +@@ -1 +1,32 @@ ++/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ ++#ifndef __ASM_X64_MSGBUF_H ++#define __ASM_X64_MSGBUF_H ++ ++#if !defined(__x86_64__) || !defined(__ILP32__) + #include ++#else ++/* ++ * The msqid64_ds structure for x86 architecture with x32 ABI. ++ * ++ * On x86-32 and x86-64 we can just use the generic definition, but ++ * x32 uses the same binary layout as x86_64, which is differnet ++ * from other 32-bit architectures. 
++ */ ++ ++struct msqid64_ds { ++ struct ipc64_perm msg_perm; ++ __kernel_time_t msg_stime; /* last msgsnd time */ ++ __kernel_time_t msg_rtime; /* last msgrcv time */ ++ __kernel_time_t msg_ctime; /* last change time */ ++ __kernel_ulong_t msg_cbytes; /* current number of bytes on queue */ ++ __kernel_ulong_t msg_qnum; /* number of messages in queue */ ++ __kernel_ulong_t msg_qbytes; /* max number of bytes on queue */ ++ __kernel_pid_t msg_lspid; /* pid of last msgsnd */ ++ __kernel_pid_t msg_lrpid; /* last receive pid */ ++ __kernel_ulong_t __unused4; ++ __kernel_ulong_t __unused5; ++}; ++ ++#endif ++ ++#endif /* __ASM_GENERIC_MSGBUF_H */ +diff --git a/arch/x86/include/uapi/asm/shmbuf.h b/arch/x86/include/uapi/asm/shmbuf.h +index 83c05fc2de38..644421f3823b 100644 +--- a/arch/x86/include/uapi/asm/shmbuf.h ++++ b/arch/x86/include/uapi/asm/shmbuf.h +@@ -1 +1,43 @@ ++/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ ++#ifndef __ASM_X86_SHMBUF_H ++#define __ASM_X86_SHMBUF_H ++ ++#if !defined(__x86_64__) || !defined(__ILP32__) + #include ++#else ++/* ++ * The shmid64_ds structure for x86 architecture with x32 ABI. ++ * ++ * On x86-32 and x86-64 we can just use the generic definition, but ++ * x32 uses the same binary layout as x86_64, which is differnet ++ * from other 32-bit architectures. ++ */ ++ ++struct shmid64_ds { ++ struct ipc64_perm shm_perm; /* operation perms */ ++ size_t shm_segsz; /* size of segment (bytes) */ ++ __kernel_time_t shm_atime; /* last attach time */ ++ __kernel_time_t shm_dtime; /* last detach time */ ++ __kernel_time_t shm_ctime; /* last change time */ ++ __kernel_pid_t shm_cpid; /* pid of creator */ ++ __kernel_pid_t shm_lpid; /* pid of last operator */ ++ __kernel_ulong_t shm_nattch; /* no. of current attaches */ ++ __kernel_ulong_t __unused4; ++ __kernel_ulong_t __unused5; ++}; ++ ++struct shminfo64 { ++ __kernel_ulong_t shmmax; ++ __kernel_ulong_t shmmin; ++ __kernel_ulong_t shmmni; ++ __kernel_ulong_t shmseg; ++ __kernel_ulong_t shmall; ++ __kernel_ulong_t __unused1; ++ __kernel_ulong_t __unused2; ++ __kernel_ulong_t __unused3; ++ __kernel_ulong_t __unused4; ++}; ++ ++#endif ++ ++#endif /* __ASM_X86_SHMBUF_H */ +diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c +index 10c4fc2c91f8..77e201301528 100644 +--- a/arch/x86/kernel/cpu/microcode/core.c ++++ b/arch/x86/kernel/cpu/microcode/core.c +@@ -564,14 +564,12 @@ static int __reload_late(void *info) + apply_microcode_local(&err); + spin_unlock(&update_lock); + ++ /* siblings return UCODE_OK because their engine got updated already */ + if (err > UCODE_NFOUND) { + pr_warn("Error reloading microcode on CPU %d\n", cpu); +- return -1; +- /* siblings return UCODE_OK because their engine got updated already */ ++ ret = -1; + } else if (err == UCODE_UPDATED || err == UCODE_OK) { + ret = 1; +- } else { +- return ret; + } + + /* +diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c +index 32b8e5724f96..1c2cfa0644aa 100644 +--- a/arch/x86/kernel/cpu/microcode/intel.c ++++ b/arch/x86/kernel/cpu/microcode/intel.c +@@ -485,7 +485,6 @@ static void show_saved_mc(void) + */ + static void save_mc_for_early(u8 *mc, unsigned int size) + { +-#ifdef CONFIG_HOTPLUG_CPU + /* Synchronization during CPU hotplug. 
*/ + static DEFINE_MUTEX(x86_cpu_microcode_mutex); + +@@ -495,7 +494,6 @@ static void save_mc_for_early(u8 *mc, unsigned int size) + show_saved_mc(); + + mutex_unlock(&x86_cpu_microcode_mutex); +-#endif + } + + static bool load_builtin_intel_microcode(struct cpio_data *cp) +diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c +index ff99e2b6fc54..12599e55e040 100644 +--- a/arch/x86/kernel/smpboot.c ++++ b/arch/x86/kernel/smpboot.c +@@ -1536,6 +1536,8 @@ static inline void mwait_play_dead(void) + void *mwait_ptr; + int i; + ++ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) ++ return; + if (!this_cpu_has(X86_FEATURE_MWAIT)) + return; + if (!this_cpu_has(X86_FEATURE_CLFLUSH)) +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c +index aeca22d91101..3193b2663bed 100644 +--- a/block/bfq-iosched.c ++++ b/block/bfq-iosched.c +@@ -4911,8 +4911,16 @@ static void bfq_prepare_request(struct request *rq, struct bio *bio) + bool new_queue = false; + bool bfqq_already_existing = false, split = false; + +- if (!rq->elv.icq) ++ /* ++ * Even if we don't have an icq attached, we should still clear ++ * the scheduler pointers, as they might point to previously ++ * allocated bic/bfqq structs. ++ */ ++ if (!rq->elv.icq) { ++ rq->elv.priv[0] = rq->elv.priv[1] = NULL; + return; ++ } ++ + bic = icq_to_bic(rq->elv.icq); + + spin_lock_irq(&bfqd->lock); +diff --git a/block/blk-core.c b/block/blk-core.c +index 3b489527c8f2..b459d277d170 100644 +--- a/block/blk-core.c ++++ b/block/blk-core.c +@@ -129,6 +129,10 @@ void blk_rq_init(struct request_queue *q, struct request *rq) + rq->part = NULL; + seqcount_init(&rq->gstate_seq); + u64_stats_init(&rq->aborted_gstate_sync); ++ /* ++ * See comment of blk_mq_init_request ++ */ ++ WRITE_ONCE(rq->gstate, MQ_RQ_GEN_INC); + } + EXPORT_SYMBOL(blk_rq_init); + +@@ -825,7 +829,6 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) + + while (true) { + bool success = false; +- int ret; + + rcu_read_lock(); + if (percpu_ref_tryget_live(&q->q_usage_counter)) { +@@ -857,14 +860,12 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags) + */ + smp_rmb(); + +- ret = wait_event_interruptible(q->mq_freeze_wq, +- (atomic_read(&q->mq_freeze_depth) == 0 && +- (preempt || !blk_queue_preempt_only(q))) || +- blk_queue_dying(q)); ++ wait_event(q->mq_freeze_wq, ++ (atomic_read(&q->mq_freeze_depth) == 0 && ++ (preempt || !blk_queue_preempt_only(q))) || ++ blk_queue_dying(q)); + if (blk_queue_dying(q)) + return -ENODEV; +- if (ret) +- return ret; + } + } + +diff --git a/block/blk-mq.c b/block/blk-mq.c +index 56e0c3699f9e..96de7aa4f62a 100644 +--- a/block/blk-mq.c ++++ b/block/blk-mq.c +@@ -2076,6 +2076,13 @@ static int blk_mq_init_request(struct blk_mq_tag_set *set, struct request *rq, + + seqcount_init(&rq->gstate_seq); + u64_stats_init(&rq->aborted_gstate_sync); ++ /* ++ * start gstate with gen 1 instead of 0, otherwise it will be equal ++ * to aborted_gstate, and be identified timed out by ++ * blk_mq_terminate_expired. 
++ */ ++ WRITE_ONCE(rq->gstate, MQ_RQ_GEN_INC); ++ + return 0; + } + +diff --git a/crypto/drbg.c b/crypto/drbg.c +index 4faa2781c964..466a112a4446 100644 +--- a/crypto/drbg.c ++++ b/crypto/drbg.c +@@ -1134,8 +1134,10 @@ static inline void drbg_dealloc_state(struct drbg_state *drbg) + if (!drbg) + return; + kzfree(drbg->Vbuf); ++ drbg->Vbuf = NULL; + drbg->V = NULL; + kzfree(drbg->Cbuf); ++ drbg->Cbuf = NULL; + drbg->C = NULL; + kzfree(drbg->scratchpadbuf); + drbg->scratchpadbuf = NULL; +diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c +index 594c228d2f02..4a3ac31c07d0 100644 +--- a/drivers/amba/bus.c ++++ b/drivers/amba/bus.c +@@ -69,11 +69,12 @@ static ssize_t driver_override_show(struct device *_dev, + struct device_attribute *attr, char *buf) + { + struct amba_device *dev = to_amba_device(_dev); ++ ssize_t len; + +- if (!dev->driver_override) +- return 0; +- +- return sprintf(buf, "%s\n", dev->driver_override); ++ device_lock(_dev); ++ len = sprintf(buf, "%s\n", dev->driver_override); ++ device_unlock(_dev); ++ return len; + } + + static ssize_t driver_override_store(struct device *_dev, +@@ -81,9 +82,10 @@ static ssize_t driver_override_store(struct device *_dev, + const char *buf, size_t count) + { + struct amba_device *dev = to_amba_device(_dev); +- char *driver_override, *old = dev->driver_override, *cp; ++ char *driver_override, *old, *cp; + +- if (count > PATH_MAX) ++ /* We need to keep extra room for a newline */ ++ if (count >= (PAGE_SIZE - 1)) + return -EINVAL; + + driver_override = kstrndup(buf, count, GFP_KERNEL); +@@ -94,12 +96,15 @@ static ssize_t driver_override_store(struct device *_dev, + if (cp) + *cp = '\0'; + ++ device_lock(_dev); ++ old = dev->driver_override; + if (strlen(driver_override)) { + dev->driver_override = driver_override; + } else { + kfree(driver_override); + dev->driver_override = NULL; + } ++ device_unlock(_dev); + + kfree(old); + +diff --git a/drivers/android/binder.c b/drivers/android/binder.c +index 764b63a5aade..e578eee31589 100644 +--- a/drivers/android/binder.c ++++ b/drivers/android/binder.c +@@ -2839,6 +2839,14 @@ static void binder_transaction(struct binder_proc *proc, + else + return_error = BR_DEAD_REPLY; + mutex_unlock(&context->context_mgr_node_lock); ++ if (target_node && target_proc == proc) { ++ binder_user_error("%d:%d got transaction to context manager from process owning it\n", ++ proc->pid, thread->pid); ++ return_error = BR_FAILED_REPLY; ++ return_error_param = -EINVAL; ++ return_error_line = __LINE__; ++ goto err_invalid_target_handle; ++ } + } + if (!target_node) { + /* +diff --git a/drivers/char/random.c b/drivers/char/random.c +index 38729baed6ee..8f4e11842c60 100644 +--- a/drivers/char/random.c ++++ b/drivers/char/random.c +@@ -261,6 +261,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -438,6 +439,16 @@ static void _crng_backtrack_protect(struct crng_state *crng, + static void process_random_ready_list(void); + static void _get_random_bytes(void *buf, int nbytes); + ++static struct ratelimit_state unseeded_warning = ++ RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3); ++static struct ratelimit_state urandom_warning = ++ RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3); ++ ++static int ratelimit_disable __read_mostly; ++ ++module_param_named(ratelimit_disable, ratelimit_disable, int, 0644); ++MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); ++ + /********************************************************************** + * + * OS independent entropy 
store. Here are the functions which handle +@@ -787,6 +798,39 @@ static void crng_initialize(struct crng_state *crng) + crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1; + } + ++#ifdef CONFIG_NUMA ++static void do_numa_crng_init(struct work_struct *work) ++{ ++ int i; ++ struct crng_state *crng; ++ struct crng_state **pool; ++ ++ pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL|__GFP_NOFAIL); ++ for_each_online_node(i) { ++ crng = kmalloc_node(sizeof(struct crng_state), ++ GFP_KERNEL | __GFP_NOFAIL, i); ++ spin_lock_init(&crng->lock); ++ crng_initialize(crng); ++ pool[i] = crng; ++ } ++ mb(); ++ if (cmpxchg(&crng_node_pool, NULL, pool)) { ++ for_each_node(i) ++ kfree(pool[i]); ++ kfree(pool); ++ } ++} ++ ++static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init); ++ ++static void numa_crng_init(void) ++{ ++ schedule_work(&numa_crng_init_work); ++} ++#else ++static void numa_crng_init(void) {} ++#endif ++ + /* + * crng_fast_load() can be called by code in the interrupt service + * path. So we can't afford to dilly-dally. +@@ -893,10 +937,23 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) + spin_unlock_irqrestore(&crng->lock, flags); + if (crng == &primary_crng && crng_init < 2) { + invalidate_batched_entropy(); ++ numa_crng_init(); + crng_init = 2; + process_random_ready_list(); + wake_up_interruptible(&crng_init_wait); + pr_notice("random: crng init done\n"); ++ if (unseeded_warning.missed) { ++ pr_notice("random: %d get_random_xx warning(s) missed " ++ "due to ratelimiting\n", ++ unseeded_warning.missed); ++ unseeded_warning.missed = 0; ++ } ++ if (urandom_warning.missed) { ++ pr_notice("random: %d urandom warning(s) missed " ++ "due to ratelimiting\n", ++ urandom_warning.missed); ++ urandom_warning.missed = 0; ++ } + } + } + +@@ -1540,8 +1597,9 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, + #ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM + print_once = true; + #endif +- pr_notice("random: %s called from %pS with crng_init=%d\n", +- func_name, caller, crng_init); ++ if (__ratelimit(&unseeded_warning)) ++ pr_notice("random: %s called from %pS with crng_init=%d\n", ++ func_name, caller, crng_init); + } + + /* +@@ -1731,29 +1789,14 @@ static void init_std_data(struct entropy_store *r) + */ + static int rand_initialize(void) + { +-#ifdef CONFIG_NUMA +- int i; +- struct crng_state *crng; +- struct crng_state **pool; +-#endif +- + init_std_data(&input_pool); + init_std_data(&blocking_pool); + crng_initialize(&primary_crng); + crng_global_init_time = jiffies; +- +-#ifdef CONFIG_NUMA +- pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL|__GFP_NOFAIL); +- for_each_online_node(i) { +- crng = kmalloc_node(sizeof(struct crng_state), +- GFP_KERNEL | __GFP_NOFAIL, i); +- spin_lock_init(&crng->lock); +- crng_initialize(crng); +- pool[i] = crng; ++ if (ratelimit_disable) { ++ urandom_warning.interval = 0; ++ unseeded_warning.interval = 0; + } +- mb(); +- crng_node_pool = pool; +-#endif + return 0; + } + early_initcall(rand_initialize); +@@ -1821,9 +1864,10 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) + + if (!crng_ready() && maxwarn > 0) { + maxwarn--; +- printk(KERN_NOTICE "random: %s: uninitialized urandom read " +- "(%zd bytes read)\n", +- current->comm, nbytes); ++ if (__ratelimit(&urandom_warning)) ++ printk(KERN_NOTICE "random: %s: uninitialized " ++ "urandom read (%zd bytes read)\n", ++ current->comm, nbytes); + spin_lock_irqsave(&primary_crng.lock, flags); + crng_init_cnt = 0; + 
spin_unlock_irqrestore(&primary_crng.lock, flags); +diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c +index 468f06134012..21085515814f 100644 +--- a/drivers/char/virtio_console.c ++++ b/drivers/char/virtio_console.c +@@ -422,7 +422,7 @@ static void reclaim_dma_bufs(void) + } + } + +-static struct port_buffer *alloc_buf(struct virtqueue *vq, size_t buf_size, ++static struct port_buffer *alloc_buf(struct virtio_device *vdev, size_t buf_size, + int pages) + { + struct port_buffer *buf; +@@ -445,16 +445,16 @@ static struct port_buffer *alloc_buf(struct virtqueue *vq, size_t buf_size, + return buf; + } + +- if (is_rproc_serial(vq->vdev)) { ++ if (is_rproc_serial(vdev)) { + /* + * Allocate DMA memory from ancestor. When a virtio + * device is created by remoteproc, the DMA memory is + * associated with the grandparent device: + * vdev => rproc => platform-dev. + */ +- if (!vq->vdev->dev.parent || !vq->vdev->dev.parent->parent) ++ if (!vdev->dev.parent || !vdev->dev.parent->parent) + goto free_buf; +- buf->dev = vq->vdev->dev.parent->parent; ++ buf->dev = vdev->dev.parent->parent; + + /* Increase device refcnt to avoid freeing it */ + get_device(buf->dev); +@@ -838,7 +838,7 @@ static ssize_t port_fops_write(struct file *filp, const char __user *ubuf, + + count = min((size_t)(32 * 1024), count); + +- buf = alloc_buf(port->out_vq, count, 0); ++ buf = alloc_buf(port->portdev->vdev, count, 0); + if (!buf) + return -ENOMEM; + +@@ -957,7 +957,7 @@ static ssize_t port_fops_splice_write(struct pipe_inode_info *pipe, + if (ret < 0) + goto error_out; + +- buf = alloc_buf(port->out_vq, 0, pipe->nrbufs); ++ buf = alloc_buf(port->portdev->vdev, 0, pipe->nrbufs); + if (!buf) { + ret = -ENOMEM; + goto error_out; +@@ -1374,7 +1374,7 @@ static unsigned int fill_queue(struct virtqueue *vq, spinlock_t *lock) + + nr_added_bufs = 0; + do { +- buf = alloc_buf(vq, PAGE_SIZE, 0); ++ buf = alloc_buf(vq->vdev, PAGE_SIZE, 0); + if (!buf) + break; + +@@ -1402,7 +1402,6 @@ static int add_port(struct ports_device *portdev, u32 id) + { + char debugfs_name[16]; + struct port *port; +- struct port_buffer *buf; + dev_t devt; + unsigned int nr_added_bufs; + int err; +@@ -1513,8 +1512,6 @@ static int add_port(struct ports_device *portdev, u32 id) + return 0; + + free_inbufs: +- while ((buf = virtqueue_detach_unused_buf(port->in_vq))) +- free_buf(buf, true); + free_device: + device_destroy(pdrvdata.class, port->dev->devt); + free_cdev: +@@ -1539,34 +1536,14 @@ static void remove_port(struct kref *kref) + + static void remove_port_data(struct port *port) + { +- struct port_buffer *buf; +- + spin_lock_irq(&port->inbuf_lock); + /* Remove unused data this port might have received. */ + discard_port_data(port); + spin_unlock_irq(&port->inbuf_lock); + +- /* Remove buffers we queued up for the Host to send us data in. */ +- do { +- spin_lock_irq(&port->inbuf_lock); +- buf = virtqueue_detach_unused_buf(port->in_vq); +- spin_unlock_irq(&port->inbuf_lock); +- if (buf) +- free_buf(buf, true); +- } while (buf); +- + spin_lock_irq(&port->outvq_lock); + reclaim_consumed_buffers(port); + spin_unlock_irq(&port->outvq_lock); +- +- /* Free pending buffers from the out-queue. 
*/ +- do { +- spin_lock_irq(&port->outvq_lock); +- buf = virtqueue_detach_unused_buf(port->out_vq); +- spin_unlock_irq(&port->outvq_lock); +- if (buf) +- free_buf(buf, true); +- } while (buf); + } + + /* +@@ -1791,13 +1768,24 @@ static void control_work_handler(struct work_struct *work) + spin_unlock(&portdev->c_ivq_lock); + } + ++static void flush_bufs(struct virtqueue *vq, bool can_sleep) ++{ ++ struct port_buffer *buf; ++ unsigned int len; ++ ++ while ((buf = virtqueue_get_buf(vq, &len))) ++ free_buf(buf, can_sleep); ++} ++ + static void out_intr(struct virtqueue *vq) + { + struct port *port; + + port = find_port_by_vq(vq->vdev->priv, vq); +- if (!port) ++ if (!port) { ++ flush_bufs(vq, false); + return; ++ } + + wake_up_interruptible(&port->waitqueue); + } +@@ -1808,8 +1796,10 @@ static void in_intr(struct virtqueue *vq) + unsigned long flags; + + port = find_port_by_vq(vq->vdev->priv, vq); +- if (!port) ++ if (!port) { ++ flush_bufs(vq, false); + return; ++ } + + spin_lock_irqsave(&port->inbuf_lock, flags); + port->inbuf = get_inbuf(port); +@@ -1984,24 +1974,54 @@ static const struct file_operations portdev_fops = { + + static void remove_vqs(struct ports_device *portdev) + { ++ struct virtqueue *vq; ++ ++ virtio_device_for_each_vq(portdev->vdev, vq) { ++ struct port_buffer *buf; ++ ++ flush_bufs(vq, true); ++ while ((buf = virtqueue_detach_unused_buf(vq))) ++ free_buf(buf, true); ++ } + portdev->vdev->config->del_vqs(portdev->vdev); + kfree(portdev->in_vqs); + kfree(portdev->out_vqs); + } + +-static void remove_controlq_data(struct ports_device *portdev) ++static void virtcons_remove(struct virtio_device *vdev) + { +- struct port_buffer *buf; +- unsigned int len; ++ struct ports_device *portdev; ++ struct port *port, *port2; + +- if (!use_multiport(portdev)) +- return; ++ portdev = vdev->priv; + +- while ((buf = virtqueue_get_buf(portdev->c_ivq, &len))) +- free_buf(buf, true); ++ spin_lock_irq(&pdrvdata_lock); ++ list_del(&portdev->list); ++ spin_unlock_irq(&pdrvdata_lock); + +- while ((buf = virtqueue_detach_unused_buf(portdev->c_ivq))) +- free_buf(buf, true); ++ /* Disable interrupts for vqs */ ++ vdev->config->reset(vdev); ++ /* Finish up work that's lined up */ ++ if (use_multiport(portdev)) ++ cancel_work_sync(&portdev->control_work); ++ else ++ cancel_work_sync(&portdev->config_work); ++ ++ list_for_each_entry_safe(port, port2, &portdev->ports, list) ++ unplug_port(port); ++ ++ unregister_chrdev(portdev->chr_major, "virtio-portsdev"); ++ ++ /* ++ * When yanking out a device, we immediately lose the ++ * (device-side) queues. So there's no point in keeping the ++ * guest side around till we drop our final reference. This ++ * also means that any ports which are in an open state will ++ * have to just stop using the port, as the vqs are going ++ * away. ++ */ ++ remove_vqs(portdev); ++ kfree(portdev); + } + + /* +@@ -2070,6 +2090,7 @@ static int virtcons_probe(struct virtio_device *vdev) + + spin_lock_init(&portdev->ports_lock); + INIT_LIST_HEAD(&portdev->ports); ++ INIT_LIST_HEAD(&portdev->list); + + virtio_device_ready(portdev->vdev); + +@@ -2087,8 +2108,15 @@ static int virtcons_probe(struct virtio_device *vdev) + if (!nr_added_bufs) { + dev_err(&vdev->dev, + "Error allocating buffers for control queue\n"); +- err = -ENOMEM; +- goto free_vqs; ++ /* ++ * The host might want to notify mgmt sw about device ++ * add failure. ++ */ ++ __send_control_msg(portdev, VIRTIO_CONSOLE_BAD_ID, ++ VIRTIO_CONSOLE_DEVICE_READY, 0); ++ /* Device was functional: we need full cleanup. 
*/ ++ virtcons_remove(vdev); ++ return -ENOMEM; + } + } else { + /* +@@ -2119,11 +2147,6 @@ static int virtcons_probe(struct virtio_device *vdev) + + return 0; + +-free_vqs: +- /* The host might want to notify mgmt sw about device add failure */ +- __send_control_msg(portdev, VIRTIO_CONSOLE_BAD_ID, +- VIRTIO_CONSOLE_DEVICE_READY, 0); +- remove_vqs(portdev); + free_chrdev: + unregister_chrdev(portdev->chr_major, "virtio-portsdev"); + free: +@@ -2132,43 +2155,6 @@ static int virtcons_probe(struct virtio_device *vdev) + return err; + } + +-static void virtcons_remove(struct virtio_device *vdev) +-{ +- struct ports_device *portdev; +- struct port *port, *port2; +- +- portdev = vdev->priv; +- +- spin_lock_irq(&pdrvdata_lock); +- list_del(&portdev->list); +- spin_unlock_irq(&pdrvdata_lock); +- +- /* Disable interrupts for vqs */ +- vdev->config->reset(vdev); +- /* Finish up work that's lined up */ +- if (use_multiport(portdev)) +- cancel_work_sync(&portdev->control_work); +- else +- cancel_work_sync(&portdev->config_work); +- +- list_for_each_entry_safe(port, port2, &portdev->ports, list) +- unplug_port(port); +- +- unregister_chrdev(portdev->chr_major, "virtio-portsdev"); +- +- /* +- * When yanking out a device, we immediately lose the +- * (device-side) queues. So there's no point in keeping the +- * guest side around till we drop our final reference. This +- * also means that any ports which are in an open state will +- * have to just stop using the port, as the vqs are going +- * away. +- */ +- remove_controlq_data(portdev); +- remove_vqs(portdev); +- kfree(portdev); +-} +- + static struct virtio_device_id id_table[] = { + { VIRTIO_ID_CONSOLE, VIRTIO_DEV_ANY_ID }, + { 0 }, +@@ -2209,7 +2195,6 @@ static int virtcons_freeze(struct virtio_device *vdev) + */ + if (use_multiport(portdev)) + virtqueue_disable_cb(portdev->c_ivq); +- remove_controlq_data(portdev); + + list_for_each_entry(port, &portdev->ports, list) { + virtqueue_disable_cb(port->in_vq); +diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c +index 29cdec198657..422e1fc38b43 100644 +--- a/drivers/cpufreq/powernv-cpufreq.c ++++ b/drivers/cpufreq/powernv-cpufreq.c +@@ -679,6 +679,16 @@ void gpstate_timer_handler(struct timer_list *t) + + if (!spin_trylock(&gpstates->gpstate_lock)) + return; ++ /* ++ * If the timer has migrated to the different cpu then bring ++ * it back to one of the policy->cpus ++ */ ++ if (!cpumask_test_cpu(raw_smp_processor_id(), policy->cpus)) { ++ gpstates->timer.expires = jiffies + msecs_to_jiffies(1); ++ add_timer_on(&gpstates->timer, cpumask_first(policy->cpus)); ++ spin_unlock(&gpstates->gpstate_lock); ++ return; ++ } + + /* + * If PMCR was last updated was using fast_swtich then +@@ -718,10 +728,8 @@ void gpstate_timer_handler(struct timer_list *t) + if (gpstate_idx != gpstates->last_lpstate_idx) + queue_gpstate_timer(gpstates); + ++ set_pstate(&freq_data); + spin_unlock(&gpstates->gpstate_lock); +- +- /* Timer may get migrated to a different cpu on cpu hot unplug */ +- smp_call_function_any(policy->cpus, set_pstate, &freq_data, 1); + } + + /* +diff --git a/drivers/crypto/ccp/sp-dev.c b/drivers/crypto/ccp/sp-dev.c +index eb0da6572720..e0459002eb71 100644 +--- a/drivers/crypto/ccp/sp-dev.c ++++ b/drivers/crypto/ccp/sp-dev.c +@@ -252,12 +252,12 @@ struct sp_device *sp_get_psp_master_device(void) + goto unlock; + + list_for_each_entry(i, &sp_units, entry) { +- if (i->psp_data) ++ if (i->psp_data && i->get_psp_master_device) { ++ ret = i->get_psp_master_device(); + break; ++ } + } 
+ +- if (i->get_psp_master_device) +- ret = i->get_psp_master_device(); + unlock: + write_unlock_irqrestore(&sp_unit_lock, flags); + return ret; +diff --git a/drivers/fpga/altera-ps-spi.c b/drivers/fpga/altera-ps-spi.c +index 14f14efdf0d5..06d212a3d49d 100644 +--- a/drivers/fpga/altera-ps-spi.c ++++ b/drivers/fpga/altera-ps-spi.c +@@ -249,7 +249,7 @@ static int altera_ps_probe(struct spi_device *spi) + + conf->data = of_id->data; + conf->spi = spi; +- conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_HIGH); ++ conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_LOW); + if (IS_ERR(conf->config)) { + dev_err(&spi->dev, "Failed to get config gpio: %ld\n", + PTR_ERR(conf->config)); +diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c +index 4e694ae9f308..45cc4d572897 100644 +--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c +@@ -1459,10 +1459,11 @@ static const u32 sgpr_init_compute_shader[] = + static const u32 vgpr_init_regs[] = + { + mmCOMPUTE_STATIC_THREAD_MGMT_SE0, 0xffffffff, +- mmCOMPUTE_RESOURCE_LIMITS, 0, ++ mmCOMPUTE_RESOURCE_LIMITS, 0x1000000, /* CU_GROUP_COUNT=1 */ + mmCOMPUTE_NUM_THREAD_X, 256*4, + mmCOMPUTE_NUM_THREAD_Y, 1, + mmCOMPUTE_NUM_THREAD_Z, 1, ++ mmCOMPUTE_PGM_RSRC1, 0x100004f, /* VGPRS=15 (64 logical VGPRs), SGPRS=1 (16 SGPRs), BULKY=1 */ + mmCOMPUTE_PGM_RSRC2, 20, + mmCOMPUTE_USER_DATA_0, 0xedcedc00, + mmCOMPUTE_USER_DATA_1, 0xedcedc01, +@@ -1479,10 +1480,11 @@ static const u32 vgpr_init_regs[] = + static const u32 sgpr1_init_regs[] = + { + mmCOMPUTE_STATIC_THREAD_MGMT_SE0, 0x0f, +- mmCOMPUTE_RESOURCE_LIMITS, 0x1000000, ++ mmCOMPUTE_RESOURCE_LIMITS, 0x1000000, /* CU_GROUP_COUNT=1 */ + mmCOMPUTE_NUM_THREAD_X, 256*5, + mmCOMPUTE_NUM_THREAD_Y, 1, + mmCOMPUTE_NUM_THREAD_Z, 1, ++ mmCOMPUTE_PGM_RSRC1, 0x240, /* SGPRS=9 (80 GPRS) */ + mmCOMPUTE_PGM_RSRC2, 20, + mmCOMPUTE_USER_DATA_0, 0xedcedc00, + mmCOMPUTE_USER_DATA_1, 0xedcedc01, +@@ -1503,6 +1505,7 @@ static const u32 sgpr2_init_regs[] = + mmCOMPUTE_NUM_THREAD_X, 256*5, + mmCOMPUTE_NUM_THREAD_Y, 1, + mmCOMPUTE_NUM_THREAD_Z, 1, ++ mmCOMPUTE_PGM_RSRC1, 0x240, /* SGPRS=9 (80 GPRS) */ + mmCOMPUTE_PGM_RSRC2, 20, + mmCOMPUTE_USER_DATA_0, 0xedcedc00, + mmCOMPUTE_USER_DATA_1, 0xedcedc01, +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index 8a6e6fbc78cd..2e94881d4f7f 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -4506,6 +4506,7 @@ static int dm_update_crtcs_state(struct dc *dc, + struct amdgpu_dm_connector *aconnector = NULL; + struct drm_connector_state *new_con_state = NULL; + struct dm_connector_state *dm_conn_state = NULL; ++ struct drm_plane_state *new_plane_state = NULL; + + new_stream = NULL; + +@@ -4513,6 +4514,13 @@ static int dm_update_crtcs_state(struct dc *dc, + dm_new_crtc_state = to_dm_crtc_state(new_crtc_state); + acrtc = to_amdgpu_crtc(crtc); + ++ new_plane_state = drm_atomic_get_new_plane_state(state, new_crtc_state->crtc->primary); ++ ++ if (new_crtc_state->enable && new_plane_state && !new_plane_state->fb) { ++ ret = -EINVAL; ++ goto fail; ++ } ++ + aconnector = amdgpu_dm_find_first_crtc_matching_connector(state, crtc); + + /* TODO This hack should go away */ +@@ -4685,7 +4693,7 @@ static int dm_update_planes_state(struct dc *dc, + if (!dm_old_crtc_state->stream) + continue; + +- DRM_DEBUG_DRIVER("Disabling DRM plane: %d on DRM crtc %d\n", ++ 
DRM_DEBUG_ATOMIC("Disabling DRM plane: %d on DRM crtc %d\n", + plane->base.id, old_plane_crtc->base.id); + + if (!dc_remove_plane_from_context( +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c +index 422055080df4..54a25fb048fb 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c +@@ -400,14 +400,15 @@ void amdgpu_dm_irq_fini(struct amdgpu_device *adev) + { + int src; + struct irq_list_head *lh; ++ unsigned long irq_table_flags; + DRM_DEBUG_KMS("DM_IRQ: releasing resources.\n"); +- + for (src = 0; src < DAL_IRQ_SOURCES_NUMBER; src++) { +- ++ DM_IRQ_TABLE_LOCK(adev, irq_table_flags); + /* The handler was removed from the table, + * it means it is safe to flush all the 'work' + * (because no code can schedule a new one). */ + lh = &adev->dm.irq_handler_list_low_tab[src]; ++ DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags); + flush_work(&lh->work); + } + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +index 93421dad21bd..160933c16461 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +@@ -157,6 +157,11 @@ dm_dp_mst_connector_destroy(struct drm_connector *connector) + struct amdgpu_dm_connector *amdgpu_dm_connector = to_amdgpu_dm_connector(connector); + struct amdgpu_encoder *amdgpu_encoder = amdgpu_dm_connector->mst_encoder; + ++ if (amdgpu_dm_connector->edid) { ++ kfree(amdgpu_dm_connector->edid); ++ amdgpu_dm_connector->edid = NULL; ++ } ++ + drm_encoder_cleanup(&amdgpu_encoder->base); + kfree(amdgpu_encoder); + drm_connector_cleanup(connector); +@@ -183,28 +188,22 @@ static int dm_connector_update_modes(struct drm_connector *connector, + void dm_dp_mst_dc_sink_create(struct drm_connector *connector) + { + struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector); +- struct edid *edid; + struct dc_sink *dc_sink; + struct dc_sink_init_data init_params = { + .link = aconnector->dc_link, + .sink_signal = SIGNAL_TYPE_DISPLAY_PORT_MST }; + ++ /* FIXME none of this is safe. 
we shouldn't touch aconnector here in ++ * atomic_check ++ */ ++ + /* + * TODO: Need to further figure out why ddc.algo is NULL while MST port exists + */ + if (!aconnector->port || !aconnector->port->aux.ddc.algo) + return; + +- edid = drm_dp_mst_get_edid(connector, &aconnector->mst_port->mst_mgr, aconnector->port); +- +- if (!edid) { +- drm_mode_connector_update_edid_property( +- &aconnector->base, +- NULL); +- return; +- } +- +- aconnector->edid = edid; ++ ASSERT(aconnector->edid); + + dc_sink = dc_link_add_remote_sink( + aconnector->dc_link, +@@ -217,9 +216,6 @@ void dm_dp_mst_dc_sink_create(struct drm_connector *connector) + + amdgpu_dm_add_sink_to_freesync_module( + connector, aconnector->edid); +- +- drm_mode_connector_update_edid_property( +- &aconnector->base, aconnector->edid); + } + + static int dm_dp_mst_get_modes(struct drm_connector *connector) +@@ -426,14 +422,6 @@ static void dm_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr, + dc_sink_release(aconnector->dc_sink); + aconnector->dc_sink = NULL; + } +- if (aconnector->edid) { +- kfree(aconnector->edid); +- aconnector->edid = NULL; +- } +- +- drm_mode_connector_update_edid_property( +- &aconnector->base, +- NULL); + + aconnector->mst_connected = false; + } +diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c +index 4f751a9d71a3..2368ad0b3f4d 100644 +--- a/drivers/gpu/drm/drm_edid.c ++++ b/drivers/gpu/drm/drm_edid.c +@@ -4450,6 +4450,7 @@ drm_reset_display_info(struct drm_connector *connector) + info->max_tmds_clock = 0; + info->dvi_dual = false; + info->has_hdmi_infoframe = false; ++ memset(&info->hdmi, 0, sizeof(info->hdmi)); + + info->non_desktop = 0; + } +@@ -4461,17 +4462,11 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi + + u32 quirks = edid_get_quirks(edid); + ++ drm_reset_display_info(connector); ++ + info->width_mm = edid->width_cm * 10; + info->height_mm = edid->height_cm * 10; + +- /* driver figures it out in this case */ +- info->bpc = 0; +- info->color_formats = 0; +- info->cea_rev = 0; +- info->max_tmds_clock = 0; +- info->dvi_dual = false; +- info->has_hdmi_infoframe = false; +- + info->non_desktop = !!(quirks & EDID_QUIRK_NON_DESKTOP); + + DRM_DEBUG_KMS("non_desktop set to %d\n", info->non_desktop); +diff --git a/drivers/gpu/drm/i915/intel_cdclk.c b/drivers/gpu/drm/i915/intel_cdclk.c +index 1704c8897afd..fd58647fbff3 100644 +--- a/drivers/gpu/drm/i915/intel_cdclk.c ++++ b/drivers/gpu/drm/i915/intel_cdclk.c +@@ -1946,10 +1946,22 @@ int intel_crtc_compute_min_cdclk(const struct intel_crtc_state *crtc_state) + } + } + +- /* According to BSpec, "The CD clock frequency must be at least twice ++ /* ++ * According to BSpec, "The CD clock frequency must be at least twice + * the frequency of the Azalia BCLK." and BCLK is 96 MHz by default. ++ * ++ * FIXME: Check the actual, not default, BCLK being used. ++ * ++ * FIXME: This does not depend on ->has_audio because the higher CDCLK ++ * is required for audio probe, also when there are no audio capable ++ * displays connected at probe time. This leads to unnecessarily high ++ * CDCLK when audio is not required. ++ * ++ * FIXME: This limit is only applied when there are displays connected ++ * at probe time. If we probe without displays, we'll still end up using ++ * the platform minimum CDCLK, failing audio probe. 
+ */ +- if (crtc_state->has_audio && INTEL_GEN(dev_priv) >= 9) ++ if (INTEL_GEN(dev_priv) >= 9) + min_cdclk = max(2 * 96000, min_cdclk); + + /* +diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c +index da48af11eb6b..0cf33034a8ba 100644 +--- a/drivers/gpu/drm/i915/intel_fbdev.c ++++ b/drivers/gpu/drm/i915/intel_fbdev.c +@@ -801,7 +801,7 @@ void intel_fbdev_output_poll_changed(struct drm_device *dev) + return; + + intel_fbdev_sync(ifbdev); +- if (ifbdev->vma) ++ if (ifbdev->vma || ifbdev->helper.deferred_setup) + drm_fb_helper_hotplug_event(&ifbdev->helper); + } + +diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c +index d758da6156a8..9faee4875ddf 100644 +--- a/drivers/gpu/drm/i915/intel_runtime_pm.c ++++ b/drivers/gpu/drm/i915/intel_runtime_pm.c +@@ -624,19 +624,18 @@ void skl_enable_dc6(struct drm_i915_private *dev_priv) + + DRM_DEBUG_KMS("Enabling DC6\n"); + +- gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6); ++ /* Wa Display #1183: skl,kbl,cfl */ ++ if (IS_GEN9_BC(dev_priv)) ++ I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) | ++ SKL_SELECT_ALTERNATE_DC_EXIT); + ++ gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6); + } + + void skl_disable_dc6(struct drm_i915_private *dev_priv) + { + DRM_DEBUG_KMS("Disabling DC6\n"); + +- /* Wa Display #1183: skl,kbl,cfl */ +- if (IS_GEN9_BC(dev_priv)) +- I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) | +- SKL_SELECT_ALTERNATE_DC_EXIT); +- + gen9_set_dc_state(dev_priv, DC_STATE_DISABLE); + } + +diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c +index 9eb96fb2c147..26a2da1f712d 100644 +--- a/drivers/gpu/drm/virtio/virtgpu_vq.c ++++ b/drivers/gpu/drm/virtio/virtgpu_vq.c +@@ -291,7 +291,7 @@ static int virtio_gpu_queue_ctrl_buffer_locked(struct virtio_gpu_device *vgdev, + ret = virtqueue_add_sgs(vq, sgs, outcnt, incnt, vbuf, GFP_ATOMIC); + if (ret == -ENOSPC) { + spin_unlock(&vgdev->ctrlq.qlock); +- wait_event(vgdev->ctrlq.ack_queue, vq->num_free); ++ wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= outcnt + incnt); + spin_lock(&vgdev->ctrlq.qlock); + goto retry; + } else { +@@ -366,7 +366,7 @@ static int virtio_gpu_queue_cursor(struct virtio_gpu_device *vgdev, + ret = virtqueue_add_sgs(vq, sgs, outcnt, 0, vbuf, GFP_ATOMIC); + if (ret == -ENOSPC) { + spin_unlock(&vgdev->cursorq.qlock); +- wait_event(vgdev->cursorq.ack_queue, vq->num_free); ++ wait_event(vgdev->cursorq.ack_queue, vq->num_free >= outcnt); + spin_lock(&vgdev->cursorq.qlock); + goto retry; + } else { +diff --git a/drivers/mtd/chips/cfi_cmdset_0001.c b/drivers/mtd/chips/cfi_cmdset_0001.c +index 5e1b68cbcd0a..e1b603ca0170 100644 +--- a/drivers/mtd/chips/cfi_cmdset_0001.c ++++ b/drivers/mtd/chips/cfi_cmdset_0001.c +@@ -45,6 +45,7 @@ + #define I82802AB 0x00ad + #define I82802AC 0x00ac + #define PF38F4476 0x881c ++#define M28F00AP30 0x8963 + /* STMicroelectronics chips */ + #define M50LPW080 0x002F + #define M50FLW080A 0x0080 +@@ -375,6 +376,17 @@ static void cfi_fixup_major_minor(struct cfi_private *cfi, + extp->MinorVersion = '1'; + } + ++static int cfi_is_micron_28F00AP30(struct cfi_private *cfi, struct flchip *chip) ++{ ++ /* ++ * Micron(was Numonyx) 1Gbit bottom boot are buggy w.r.t ++ * Erase Supend for their small Erase Blocks(0x8000) ++ */ ++ if (cfi->mfr == CFI_MFR_INTEL && cfi->id == M28F00AP30) ++ return 1; ++ return 0; ++} ++ + static inline struct cfi_pri_intelext * + read_pri_intelext(struct map_info *map, __u16 adr) + { +@@ -831,21 
+843,30 @@ static int chip_ready (struct map_info *map, struct flchip *chip, unsigned long + (mode == FL_WRITING && (cfip->SuspendCmdSupport & 1)))) + goto sleep; + ++ /* Do not allow suspend iff read/write to EB address */ ++ if ((adr & chip->in_progress_block_mask) == ++ chip->in_progress_block_addr) ++ goto sleep; ++ ++ /* do not suspend small EBs, buggy Micron Chips */ ++ if (cfi_is_micron_28F00AP30(cfi, chip) && ++ (chip->in_progress_block_mask == ~(0x8000-1))) ++ goto sleep; + + /* Erase suspend */ +- map_write(map, CMD(0xB0), adr); ++ map_write(map, CMD(0xB0), chip->in_progress_block_addr); + + /* If the flash has finished erasing, then 'erase suspend' + * appears to make some (28F320) flash devices switch to + * 'read' mode. Make sure that we switch to 'read status' + * mode so we get the right data. --rmk + */ +- map_write(map, CMD(0x70), adr); ++ map_write(map, CMD(0x70), chip->in_progress_block_addr); + chip->oldstate = FL_ERASING; + chip->state = FL_ERASE_SUSPENDING; + chip->erase_suspended = 1; + for (;;) { +- status = map_read(map, adr); ++ status = map_read(map, chip->in_progress_block_addr); + if (map_word_andequal(map, status, status_OK, status_OK)) + break; + +@@ -1041,8 +1062,8 @@ static void put_chip(struct map_info *map, struct flchip *chip, unsigned long ad + sending the 0x70 (Read Status) command to an erasing + chip and expecting it to be ignored, that's what we + do. */ +- map_write(map, CMD(0xd0), adr); +- map_write(map, CMD(0x70), adr); ++ map_write(map, CMD(0xd0), chip->in_progress_block_addr); ++ map_write(map, CMD(0x70), chip->in_progress_block_addr); + chip->oldstate = FL_READY; + chip->state = FL_ERASING; + break; +@@ -1933,6 +1954,8 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip, + map_write(map, CMD(0xD0), adr); + chip->state = FL_ERASING; + chip->erase_suspended = 0; ++ chip->in_progress_block_addr = adr; ++ chip->in_progress_block_mask = ~(len - 1); + + ret = INVAL_CACHE_AND_WAIT(map, chip, adr, + adr, len, +diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c +index 56aa6b75213d..d524a64ed754 100644 +--- a/drivers/mtd/chips/cfi_cmdset_0002.c ++++ b/drivers/mtd/chips/cfi_cmdset_0002.c +@@ -816,9 +816,10 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr + (mode == FL_WRITING && (cfip->EraseSuspend & 0x2)))) + goto sleep; + +- /* We could check to see if we're trying to access the sector +- * that is currently being erased. However, no user will try +- * anything like that so we just wait for the timeout. 
*/ ++ /* Do not allow suspend iff read/write to EB address */ ++ if ((adr & chip->in_progress_block_mask) == ++ chip->in_progress_block_addr) ++ goto sleep; + + /* Erase suspend */ + /* It's harmless to issue the Erase-Suspend and Erase-Resume +@@ -2267,6 +2268,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip) + chip->state = FL_ERASING; + chip->erase_suspended = 0; + chip->in_progress_block_addr = adr; ++ chip->in_progress_block_mask = ~(map->size - 1); + + INVALIDATE_CACHE_UDELAY(map, chip, + adr, map->size, +@@ -2356,6 +2358,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip, + chip->state = FL_ERASING; + chip->erase_suspended = 0; + chip->in_progress_block_addr = adr; ++ chip->in_progress_block_mask = ~(len - 1); + + INVALIDATE_CACHE_UDELAY(map, chip, + adr, len, +diff --git a/drivers/mtd/nand/marvell_nand.c b/drivers/mtd/nand/marvell_nand.c +index 2196f2a233d6..795f868fe1f7 100644 +--- a/drivers/mtd/nand/marvell_nand.c ++++ b/drivers/mtd/nand/marvell_nand.c +@@ -2277,29 +2277,20 @@ static int marvell_nand_chip_init(struct device *dev, struct marvell_nfc *nfc, + /* + * The legacy "num-cs" property indicates the number of CS on the only + * chip connected to the controller (legacy bindings does not support +- * more than one chip). CS are only incremented one by one while the RB +- * pin is always the #0. ++ * more than one chip). The CS and RB pins are always the #0. + * + * When not using legacy bindings, a couple of "reg" and "nand-rb" + * properties must be filled. For each chip, expressed as a subnode, + * "reg" points to the CS lines and "nand-rb" to the RB line. + */ +- if (pdata) { ++ if (pdata || nfc->caps->legacy_of_bindings) { + nsels = 1; +- } else if (nfc->caps->legacy_of_bindings && +- !of_get_property(np, "num-cs", &nsels)) { +- dev_err(dev, "missing num-cs property\n"); +- return -EINVAL; +- } else if (!of_get_property(np, "reg", &nsels)) { +- dev_err(dev, "missing reg property\n"); +- return -EINVAL; +- } +- +- if (!pdata) +- nsels /= sizeof(u32); +- if (!nsels) { +- dev_err(dev, "invalid reg property size\n"); +- return -EINVAL; ++ } else { ++ nsels = of_property_count_elems_of_size(np, "reg", sizeof(u32)); ++ if (nsels <= 0) { ++ dev_err(dev, "missing/invalid reg property\n"); ++ return -EINVAL; ++ } + } + + /* Alloc the nand chip structure */ +diff --git a/drivers/mtd/nand/tango_nand.c b/drivers/mtd/nand/tango_nand.c +index c5bee00b7f5e..76761b841f1f 100644 +--- a/drivers/mtd/nand/tango_nand.c ++++ b/drivers/mtd/nand/tango_nand.c +@@ -643,7 +643,7 @@ static int tango_nand_probe(struct platform_device *pdev) + + writel_relaxed(MODE_RAW, nfc->pbus_base + PBUS_PAD_MODE); + +- clk = clk_get(&pdev->dev, NULL); ++ clk = devm_clk_get(&pdev->dev, NULL); + if (IS_ERR(clk)) + return PTR_ERR(clk); + +diff --git a/drivers/mtd/spi-nor/cadence-quadspi.c b/drivers/mtd/spi-nor/cadence-quadspi.c +index 4b8e9183489a..5872f31eaa60 100644 +--- a/drivers/mtd/spi-nor/cadence-quadspi.c ++++ b/drivers/mtd/spi-nor/cadence-quadspi.c +@@ -501,7 +501,9 @@ static int cqspi_indirect_read_execute(struct spi_nor *nor, u8 *rxbuf, + void __iomem *reg_base = cqspi->iobase; + void __iomem *ahb_base = cqspi->ahb_base; + unsigned int remaining = n_rx; ++ unsigned int mod_bytes = n_rx % 4; + unsigned int bytes_to_read = 0; ++ u8 *rxbuf_end = rxbuf + n_rx; + int ret = 0; + + writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR); +@@ -530,11 +532,24 @@ static int cqspi_indirect_read_execute(struct spi_nor *nor, u8 *rxbuf, + } + + while 
(bytes_to_read != 0) { ++ unsigned int word_remain = round_down(remaining, 4); ++ + bytes_to_read *= cqspi->fifo_width; + bytes_to_read = bytes_to_read > remaining ? + remaining : bytes_to_read; +- ioread32_rep(ahb_base, rxbuf, +- DIV_ROUND_UP(bytes_to_read, 4)); ++ bytes_to_read = round_down(bytes_to_read, 4); ++ /* Read 4 byte word chunks then single bytes */ ++ if (bytes_to_read) { ++ ioread32_rep(ahb_base, rxbuf, ++ (bytes_to_read / 4)); ++ } else if (!word_remain && mod_bytes) { ++ unsigned int temp = ioread32(ahb_base); ++ ++ bytes_to_read = mod_bytes; ++ memcpy(rxbuf, &temp, min((unsigned int) ++ (rxbuf_end - rxbuf), ++ bytes_to_read)); ++ } + rxbuf += bytes_to_read; + remaining -= bytes_to_read; + bytes_to_read = cqspi_get_rd_sram_level(cqspi); +diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c +index 84aa9d676375..6da20b9688f7 100644 +--- a/drivers/of/fdt.c ++++ b/drivers/of/fdt.c +@@ -942,7 +942,7 @@ int __init early_init_dt_scan_chosen_stdout(void) + int offset; + const char *p, *q, *options = NULL; + int l; +- const struct earlycon_id *match; ++ const struct earlycon_id **p_match; + const void *fdt = initial_boot_params; + + offset = fdt_path_offset(fdt, "/chosen"); +@@ -969,7 +969,10 @@ int __init early_init_dt_scan_chosen_stdout(void) + return 0; + } + +- for (match = __earlycon_table; match < __earlycon_table_end; match++) { ++ for (p_match = __earlycon_table; p_match < __earlycon_table_end; ++ p_match++) { ++ const struct earlycon_id *match = *p_match; ++ + if (!match->compatible[0]) + continue; + +diff --git a/drivers/pci/host/pci-aardvark.c b/drivers/pci/host/pci-aardvark.c +index b04d37b3c5de..9abf549631b4 100644 +--- a/drivers/pci/host/pci-aardvark.c ++++ b/drivers/pci/host/pci-aardvark.c +@@ -29,6 +29,7 @@ + #define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT 5 + #define PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE (0 << 11) + #define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT 12 ++#define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SZ 0x2 + #define PCIE_CORE_LINK_CTRL_STAT_REG 0xd0 + #define PCIE_CORE_LINK_L0S_ENTRY BIT(0) + #define PCIE_CORE_LINK_TRAINING BIT(5) +@@ -100,7 +101,8 @@ + #define PCIE_ISR1_MASK_REG (CONTROL_BASE_ADDR + 0x4C) + #define PCIE_ISR1_POWER_STATE_CHANGE BIT(4) + #define PCIE_ISR1_FLUSH BIT(5) +-#define PCIE_ISR1_ALL_MASK GENMASK(5, 4) ++#define PCIE_ISR1_INTX_ASSERT(val) BIT(8 + (val)) ++#define PCIE_ISR1_ALL_MASK GENMASK(11, 4) + #define PCIE_MSI_ADDR_LOW_REG (CONTROL_BASE_ADDR + 0x50) + #define PCIE_MSI_ADDR_HIGH_REG (CONTROL_BASE_ADDR + 0x54) + #define PCIE_MSI_STATUS_REG (CONTROL_BASE_ADDR + 0x58) +@@ -172,8 +174,6 @@ + #define PCIE_CONFIG_WR_TYPE0 0xa + #define PCIE_CONFIG_WR_TYPE1 0xb + +-/* PCI_BDF shifts 8bit, so we need extra 4bit shift */ +-#define PCIE_BDF(dev) (dev << 4) + #define PCIE_CONF_BUS(bus) (((bus) & 0xff) << 20) + #define PCIE_CONF_DEV(dev) (((dev) & 0x1f) << 15) + #define PCIE_CONF_FUNC(fun) (((fun) & 0x7) << 12) +@@ -296,7 +296,8 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie) + reg = PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE | + (7 << PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT) | + PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE | +- PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT; ++ (PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SZ << ++ PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT); + advk_writel(pcie, reg, PCIE_CORE_DEV_CTRL_STATS_REG); + + /* Program PCIe Control 2 to disable strict ordering */ +@@ -437,7 +438,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn, + u32 reg; + int ret; + +- if (PCI_SLOT(devfn) != 
0) { ++ if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0) { + *val = 0xffffffff; + return PCIBIOS_DEVICE_NOT_FOUND; + } +@@ -456,7 +457,7 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn, + advk_writel(pcie, reg, PIO_CTRL); + + /* Program the address registers */ +- reg = PCIE_BDF(devfn) | PCIE_CONF_REG(where); ++ reg = PCIE_CONF_ADDR(bus->number, devfn, where); + advk_writel(pcie, reg, PIO_ADDR_LS); + advk_writel(pcie, 0, PIO_ADDR_MS); + +@@ -491,7 +492,7 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn, + int offset; + int ret; + +- if (PCI_SLOT(devfn) != 0) ++ if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0) + return PCIBIOS_DEVICE_NOT_FOUND; + + if (where % size) +@@ -609,9 +610,9 @@ static void advk_pcie_irq_mask(struct irq_data *d) + irq_hw_number_t hwirq = irqd_to_hwirq(d); + u32 mask; + +- mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); +- mask |= PCIE_ISR0_INTX_ASSERT(hwirq); +- advk_writel(pcie, mask, PCIE_ISR0_MASK_REG); ++ mask = advk_readl(pcie, PCIE_ISR1_MASK_REG); ++ mask |= PCIE_ISR1_INTX_ASSERT(hwirq); ++ advk_writel(pcie, mask, PCIE_ISR1_MASK_REG); + } + + static void advk_pcie_irq_unmask(struct irq_data *d) +@@ -620,9 +621,9 @@ static void advk_pcie_irq_unmask(struct irq_data *d) + irq_hw_number_t hwirq = irqd_to_hwirq(d); + u32 mask; + +- mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); +- mask &= ~PCIE_ISR0_INTX_ASSERT(hwirq); +- advk_writel(pcie, mask, PCIE_ISR0_MASK_REG); ++ mask = advk_readl(pcie, PCIE_ISR1_MASK_REG); ++ mask &= ~PCIE_ISR1_INTX_ASSERT(hwirq); ++ advk_writel(pcie, mask, PCIE_ISR1_MASK_REG); + } + + static int advk_pcie_irq_map(struct irq_domain *h, +@@ -765,29 +766,35 @@ static void advk_pcie_handle_msi(struct advk_pcie *pcie) + + static void advk_pcie_handle_int(struct advk_pcie *pcie) + { +- u32 val, mask, status; ++ u32 isr0_val, isr0_mask, isr0_status; ++ u32 isr1_val, isr1_mask, isr1_status; + int i, virq; + +- val = advk_readl(pcie, PCIE_ISR0_REG); +- mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); +- status = val & ((~mask) & PCIE_ISR0_ALL_MASK); ++ isr0_val = advk_readl(pcie, PCIE_ISR0_REG); ++ isr0_mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); ++ isr0_status = isr0_val & ((~isr0_mask) & PCIE_ISR0_ALL_MASK); ++ ++ isr1_val = advk_readl(pcie, PCIE_ISR1_REG); ++ isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG); ++ isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK); + +- if (!status) { +- advk_writel(pcie, val, PCIE_ISR0_REG); ++ if (!isr0_status && !isr1_status) { ++ advk_writel(pcie, isr0_val, PCIE_ISR0_REG); ++ advk_writel(pcie, isr1_val, PCIE_ISR1_REG); + return; + } + + /* Process MSI interrupts */ +- if (status & PCIE_ISR0_MSI_INT_PENDING) ++ if (isr0_status & PCIE_ISR0_MSI_INT_PENDING) + advk_pcie_handle_msi(pcie); + + /* Process legacy interrupts */ + for (i = 0; i < PCI_NUM_INTX; i++) { +- if (!(status & PCIE_ISR0_INTX_ASSERT(i))) ++ if (!(isr1_status & PCIE_ISR1_INTX_ASSERT(i))) + continue; + +- advk_writel(pcie, PCIE_ISR0_INTX_ASSERT(i), +- PCIE_ISR0_REG); ++ advk_writel(pcie, PCIE_ISR1_INTX_ASSERT(i), ++ PCIE_ISR1_REG); + + virq = irq_find_mapping(pcie->irq_domain, i); + generic_handle_irq(virq); +diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c +index 3bed6beda051..eede34e5ada2 100644 +--- a/drivers/pci/pci-driver.c ++++ b/drivers/pci/pci-driver.c +@@ -945,10 +945,11 @@ static int pci_pm_freeze(struct device *dev) + * devices should not be touched during freeze/thaw transitions, + * however. 
+ */ +- if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND)) ++ if (!dev_pm_smart_suspend_and_suspended(dev)) { + pm_runtime_resume(dev); ++ pci_dev->state_saved = false; ++ } + +- pci_dev->state_saved = false; + if (pm->freeze) { + int error; + +diff --git a/drivers/rtc/rtc-opal.c b/drivers/rtc/rtc-opal.c +index 304e891e35fc..60f2250fd96b 100644 +--- a/drivers/rtc/rtc-opal.c ++++ b/drivers/rtc/rtc-opal.c +@@ -57,7 +57,7 @@ static void tm_to_opal(struct rtc_time *tm, u32 *y_m_d, u64 *h_m_s_ms) + + static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm) + { +- long rc = OPAL_BUSY; ++ s64 rc = OPAL_BUSY; + int retries = 10; + u32 y_m_d; + u64 h_m_s_ms; +@@ -66,13 +66,17 @@ static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm) + + while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { + rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms); +- if (rc == OPAL_BUSY_EVENT) ++ if (rc == OPAL_BUSY_EVENT) { ++ msleep(OPAL_BUSY_DELAY_MS); + opal_poll_events(NULL); +- else if (retries-- && (rc == OPAL_HARDWARE +- || rc == OPAL_INTERNAL_ERROR)) +- msleep(10); +- else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT) +- break; ++ } else if (rc == OPAL_BUSY) { ++ msleep(OPAL_BUSY_DELAY_MS); ++ } else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) { ++ if (retries--) { ++ msleep(10); /* Wait 10ms before retry */ ++ rc = OPAL_BUSY; /* go around again */ ++ } ++ } + } + + if (rc != OPAL_SUCCESS) +@@ -87,21 +91,26 @@ static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm) + + static int opal_set_rtc_time(struct device *dev, struct rtc_time *tm) + { +- long rc = OPAL_BUSY; ++ s64 rc = OPAL_BUSY; + int retries = 10; + u32 y_m_d = 0; + u64 h_m_s_ms = 0; + + tm_to_opal(tm, &y_m_d, &h_m_s_ms); ++ + while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { + rc = opal_rtc_write(y_m_d, h_m_s_ms); +- if (rc == OPAL_BUSY_EVENT) ++ if (rc == OPAL_BUSY_EVENT) { ++ msleep(OPAL_BUSY_DELAY_MS); + opal_poll_events(NULL); +- else if (retries-- && (rc == OPAL_HARDWARE +- || rc == OPAL_INTERNAL_ERROR)) +- msleep(10); +- else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT) +- break; ++ } else if (rc == OPAL_BUSY) { ++ msleep(OPAL_BUSY_DELAY_MS); ++ } else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) { ++ if (retries--) { ++ msleep(10); /* Wait 10ms before retry */ ++ rc = OPAL_BUSY; /* go around again */ ++ } ++ } + } + + return rc == OPAL_SUCCESS ? 0 : -EIO; +diff --git a/drivers/s390/cio/vfio_ccw_fsm.c b/drivers/s390/cio/vfio_ccw_fsm.c +index c30420c517b1..e96b85579f21 100644 +--- a/drivers/s390/cio/vfio_ccw_fsm.c ++++ b/drivers/s390/cio/vfio_ccw_fsm.c +@@ -20,12 +20,12 @@ static int fsm_io_helper(struct vfio_ccw_private *private) + int ccode; + __u8 lpm; + unsigned long flags; ++ int ret; + + sch = private->sch; + + spin_lock_irqsave(sch->lock, flags); + private->state = VFIO_CCW_STATE_BUSY; +- spin_unlock_irqrestore(sch->lock, flags); + + orb = cp_get_orb(&private->cp, (u32)(addr_t)sch, sch->lpm); + +@@ -38,10 +38,12 @@ static int fsm_io_helper(struct vfio_ccw_private *private) + * Initialize device status information + */ + sch->schib.scsw.cmd.actl |= SCSW_ACTL_START_PEND; +- return 0; ++ ret = 0; ++ break; + case 1: /* Status pending */ + case 2: /* Busy */ +- return -EBUSY; ++ ret = -EBUSY; ++ break; + case 3: /* Device/path not operational */ + { + lpm = orb->cmd.lpm; +@@ -51,13 +53,16 @@ static int fsm_io_helper(struct vfio_ccw_private *private) + sch->lpm = 0; + + if (cio_update_schib(sch)) +- return -ENODEV; +- +- return sch->lpm ? 
-EACCES : -ENODEV; ++ ret = -ENODEV; ++ else ++ ret = sch->lpm ? -EACCES : -ENODEV; ++ break; + } + default: +- return ccode; ++ ret = ccode; + } ++ spin_unlock_irqrestore(sch->lock, flags); ++ return ret; + } + + static void fsm_notoper(struct vfio_ccw_private *private, +diff --git a/drivers/sbus/char/oradax.c b/drivers/sbus/char/oradax.c +index c44d7c7ffc92..1754f55e2fac 100644 +--- a/drivers/sbus/char/oradax.c ++++ b/drivers/sbus/char/oradax.c +@@ -3,7 +3,7 @@ + * + * This program is free software: you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by +- * the Free Software Foundation, either version 3 of the License, or ++ * the Free Software Foundation, either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c +index 1fa84d6a0f8b..d19b41bcebea 100644 +--- a/drivers/scsi/sd.c ++++ b/drivers/scsi/sd.c +@@ -2121,6 +2121,8 @@ sd_spinup_disk(struct scsi_disk *sdkp) + break; /* standby */ + if (sshdr.asc == 4 && sshdr.ascq == 0xc) + break; /* unavailable */ ++ if (sshdr.asc == 4 && sshdr.ascq == 0x1b) ++ break; /* sanitize in progress */ + /* + * Issue command to spin up drive when not ready + */ +diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c +index 89cf4498f535..973a497739f0 100644 +--- a/drivers/scsi/sd_zbc.c ++++ b/drivers/scsi/sd_zbc.c +@@ -400,8 +400,10 @@ static int sd_zbc_check_capacity(struct scsi_disk *sdkp, unsigned char *buf) + * + * Check that all zones of the device are equal. The last zone can however + * be smaller. The zone size must also be a power of two number of LBAs. ++ * ++ * Returns the zone size in bytes upon success or an error code upon failure. 
+ */ +-static int sd_zbc_check_zone_size(struct scsi_disk *sdkp) ++static s64 sd_zbc_check_zone_size(struct scsi_disk *sdkp) + { + u64 zone_blocks = 0; + sector_t block = 0; +@@ -412,8 +414,6 @@ static int sd_zbc_check_zone_size(struct scsi_disk *sdkp) + int ret; + u8 same; + +- sdkp->zone_blocks = 0; +- + /* Get a buffer */ + buf = kmalloc(SD_ZBC_BUF_SIZE, GFP_KERNEL); + if (!buf) +@@ -445,16 +445,17 @@ static int sd_zbc_check_zone_size(struct scsi_disk *sdkp) + + /* Parse zone descriptors */ + while (rec < buf + buf_len) { +- zone_blocks = get_unaligned_be64(&rec[8]); +- if (sdkp->zone_blocks == 0) { +- sdkp->zone_blocks = zone_blocks; +- } else if (zone_blocks != sdkp->zone_blocks && +- (block + zone_blocks < sdkp->capacity +- || zone_blocks > sdkp->zone_blocks)) { +- zone_blocks = 0; ++ u64 this_zone_blocks = get_unaligned_be64(&rec[8]); ++ ++ if (zone_blocks == 0) { ++ zone_blocks = this_zone_blocks; ++ } else if (this_zone_blocks != zone_blocks && ++ (block + this_zone_blocks < sdkp->capacity ++ || this_zone_blocks > zone_blocks)) { ++ this_zone_blocks = 0; + goto out; + } +- block += zone_blocks; ++ block += this_zone_blocks; + rec += 64; + } + +@@ -467,8 +468,6 @@ static int sd_zbc_check_zone_size(struct scsi_disk *sdkp) + + } while (block < sdkp->capacity); + +- zone_blocks = sdkp->zone_blocks; +- + out: + if (!zone_blocks) { + if (sdkp->first_scan) +@@ -488,8 +487,7 @@ static int sd_zbc_check_zone_size(struct scsi_disk *sdkp) + "Zone size too large\n"); + ret = -ENODEV; + } else { +- sdkp->zone_blocks = zone_blocks; +- sdkp->zone_shift = ilog2(zone_blocks); ++ ret = zone_blocks; + } + + out_free: +@@ -500,21 +498,21 @@ static int sd_zbc_check_zone_size(struct scsi_disk *sdkp) + + /** + * sd_zbc_alloc_zone_bitmap - Allocate a zone bitmap (one bit per zone). +- * @sdkp: The disk of the bitmap ++ * @nr_zones: Number of zones to allocate space for. ++ * @numa_node: NUMA node to allocate the memory from. + */ +-static inline unsigned long *sd_zbc_alloc_zone_bitmap(struct scsi_disk *sdkp) ++static inline unsigned long * ++sd_zbc_alloc_zone_bitmap(u32 nr_zones, int numa_node) + { +- struct request_queue *q = sdkp->disk->queue; +- +- return kzalloc_node(BITS_TO_LONGS(sdkp->nr_zones) +- * sizeof(unsigned long), +- GFP_KERNEL, q->node); ++ return kzalloc_node(BITS_TO_LONGS(nr_zones) * sizeof(unsigned long), ++ GFP_KERNEL, numa_node); + } + + /** + * sd_zbc_get_seq_zones - Parse report zones reply to identify sequential zones + * @sdkp: disk used + * @buf: report reply buffer ++ * @zone_shift: logarithm base 2 of the number of blocks in a zone + * @seq_zone_bitamp: bitmap of sequential zones to set + * + * Parse reported zone descriptors in @buf to identify sequential zones and +@@ -524,7 +522,7 @@ static inline unsigned long *sd_zbc_alloc_zone_bitmap(struct scsi_disk *sdkp) + * Return the LBA after the last zone reported. 
+ */ + static sector_t sd_zbc_get_seq_zones(struct scsi_disk *sdkp, unsigned char *buf, +- unsigned int buflen, ++ unsigned int buflen, u32 zone_shift, + unsigned long *seq_zones_bitmap) + { + sector_t lba, next_lba = sdkp->capacity; +@@ -543,7 +541,7 @@ static sector_t sd_zbc_get_seq_zones(struct scsi_disk *sdkp, unsigned char *buf, + if (type != ZBC_ZONE_TYPE_CONV && + cond != ZBC_ZONE_COND_READONLY && + cond != ZBC_ZONE_COND_OFFLINE) +- set_bit(lba >> sdkp->zone_shift, seq_zones_bitmap); ++ set_bit(lba >> zone_shift, seq_zones_bitmap); + next_lba = lba + get_unaligned_be64(&rec[8]); + rec += 64; + } +@@ -552,12 +550,16 @@ static sector_t sd_zbc_get_seq_zones(struct scsi_disk *sdkp, unsigned char *buf, + } + + /** +- * sd_zbc_setup_seq_zones_bitmap - Initialize the disk seq zone bitmap. ++ * sd_zbc_setup_seq_zones_bitmap - Initialize a seq zone bitmap. + * @sdkp: target disk ++ * @zone_shift: logarithm base 2 of the number of blocks in a zone ++ * @nr_zones: number of zones to set up a seq zone bitmap for + * + * Allocate a zone bitmap and initialize it by identifying sequential zones. + */ +-static int sd_zbc_setup_seq_zones_bitmap(struct scsi_disk *sdkp) ++static unsigned long * ++sd_zbc_setup_seq_zones_bitmap(struct scsi_disk *sdkp, u32 zone_shift, ++ u32 nr_zones) + { + struct request_queue *q = sdkp->disk->queue; + unsigned long *seq_zones_bitmap; +@@ -565,9 +567,9 @@ static int sd_zbc_setup_seq_zones_bitmap(struct scsi_disk *sdkp) + unsigned char *buf; + int ret = -ENOMEM; + +- seq_zones_bitmap = sd_zbc_alloc_zone_bitmap(sdkp); ++ seq_zones_bitmap = sd_zbc_alloc_zone_bitmap(nr_zones, q->node); + if (!seq_zones_bitmap) +- return -ENOMEM; ++ return ERR_PTR(-ENOMEM); + + buf = kmalloc(SD_ZBC_BUF_SIZE, GFP_KERNEL); + if (!buf) +@@ -578,7 +580,7 @@ static int sd_zbc_setup_seq_zones_bitmap(struct scsi_disk *sdkp) + if (ret) + goto out; + lba = sd_zbc_get_seq_zones(sdkp, buf, SD_ZBC_BUF_SIZE, +- seq_zones_bitmap); ++ zone_shift, seq_zones_bitmap); + } + + if (lba != sdkp->capacity) { +@@ -590,12 +592,9 @@ static int sd_zbc_setup_seq_zones_bitmap(struct scsi_disk *sdkp) + kfree(buf); + if (ret) { + kfree(seq_zones_bitmap); +- return ret; ++ return ERR_PTR(ret); + } +- +- q->seq_zones_bitmap = seq_zones_bitmap; +- +- return 0; ++ return seq_zones_bitmap; + } + + static void sd_zbc_cleanup(struct scsi_disk *sdkp) +@@ -611,44 +610,64 @@ static void sd_zbc_cleanup(struct scsi_disk *sdkp) + q->nr_zones = 0; + } + +-static int sd_zbc_setup(struct scsi_disk *sdkp) ++static int sd_zbc_setup(struct scsi_disk *sdkp, u32 zone_blocks) + { + struct request_queue *q = sdkp->disk->queue; ++ u32 zone_shift = ilog2(zone_blocks); ++ u32 nr_zones; + int ret; + +- /* READ16/WRITE16 is mandatory for ZBC disks */ +- sdkp->device->use_16_for_rw = 1; +- sdkp->device->use_10_for_rw = 0; +- + /* chunk_sectors indicates the zone size */ +- blk_queue_chunk_sectors(sdkp->disk->queue, +- logical_to_sectors(sdkp->device, sdkp->zone_blocks)); +- sdkp->nr_zones = +- round_up(sdkp->capacity, sdkp->zone_blocks) >> sdkp->zone_shift; ++ blk_queue_chunk_sectors(q, ++ logical_to_sectors(sdkp->device, zone_blocks)); ++ nr_zones = round_up(sdkp->capacity, zone_blocks) >> zone_shift; + + /* + * Initialize the device request queue information if the number + * of zones changed. 
+ */ +- if (sdkp->nr_zones != q->nr_zones) { +- +- sd_zbc_cleanup(sdkp); +- +- q->nr_zones = sdkp->nr_zones; +- if (sdkp->nr_zones) { +- q->seq_zones_wlock = sd_zbc_alloc_zone_bitmap(sdkp); +- if (!q->seq_zones_wlock) { ++ if (nr_zones != sdkp->nr_zones || nr_zones != q->nr_zones) { ++ unsigned long *seq_zones_wlock = NULL, *seq_zones_bitmap = NULL; ++ size_t zone_bitmap_size; ++ ++ if (nr_zones) { ++ seq_zones_wlock = sd_zbc_alloc_zone_bitmap(nr_zones, ++ q->node); ++ if (!seq_zones_wlock) { + ret = -ENOMEM; + goto err; + } + +- ret = sd_zbc_setup_seq_zones_bitmap(sdkp); +- if (ret) { +- sd_zbc_cleanup(sdkp); ++ seq_zones_bitmap = sd_zbc_setup_seq_zones_bitmap(sdkp, ++ zone_shift, nr_zones); ++ if (IS_ERR(seq_zones_bitmap)) { ++ ret = PTR_ERR(seq_zones_bitmap); ++ kfree(seq_zones_wlock); + goto err; + } + } +- ++ zone_bitmap_size = BITS_TO_LONGS(nr_zones) * ++ sizeof(unsigned long); ++ blk_mq_freeze_queue(q); ++ if (q->nr_zones != nr_zones) { ++ /* READ16/WRITE16 is mandatory for ZBC disks */ ++ sdkp->device->use_16_for_rw = 1; ++ sdkp->device->use_10_for_rw = 0; ++ ++ sdkp->zone_blocks = zone_blocks; ++ sdkp->zone_shift = zone_shift; ++ sdkp->nr_zones = nr_zones; ++ q->nr_zones = nr_zones; ++ swap(q->seq_zones_wlock, seq_zones_wlock); ++ swap(q->seq_zones_bitmap, seq_zones_bitmap); ++ } else if (memcmp(q->seq_zones_bitmap, seq_zones_bitmap, ++ zone_bitmap_size) != 0) { ++ memcpy(q->seq_zones_bitmap, seq_zones_bitmap, ++ zone_bitmap_size); ++ } ++ blk_mq_unfreeze_queue(q); ++ kfree(seq_zones_wlock); ++ kfree(seq_zones_bitmap); + } + + return 0; +@@ -660,6 +679,7 @@ static int sd_zbc_setup(struct scsi_disk *sdkp) + + int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf) + { ++ int64_t zone_blocks; + int ret; + + if (!sd_is_zoned(sdkp)) +@@ -696,12 +716,16 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf) + * Check zone size: only devices with a constant zone size (except + * an eventual last runt zone) that is a power of 2 are supported. 
+ */ +- ret = sd_zbc_check_zone_size(sdkp); +- if (ret) ++ zone_blocks = sd_zbc_check_zone_size(sdkp); ++ ret = -EFBIG; ++ if (zone_blocks != (u32)zone_blocks) ++ goto err; ++ ret = zone_blocks; ++ if (ret < 0) + goto err; + + /* The drive satisfies the kernel restrictions: set it up */ +- ret = sd_zbc_setup(sdkp); ++ ret = sd_zbc_setup(sdkp, zone_blocks); + if (ret) + goto err; + +diff --git a/drivers/slimbus/messaging.c b/drivers/slimbus/messaging.c +index 884419c37e84..457ea1f8db30 100644 +--- a/drivers/slimbus/messaging.c ++++ b/drivers/slimbus/messaging.c +@@ -183,7 +183,7 @@ static u16 slim_slicesize(int code) + 0, 1, 2, 3, 3, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7 + }; + +- clamp(code, 1, (int)ARRAY_SIZE(sizetocode)); ++ code = clamp(code, 1, (int)ARRAY_SIZE(sizetocode)); + + return sizetocode[code - 1]; + } +diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c +index 3b3e1f6632d7..1dbe27c9946c 100644 +--- a/drivers/tty/n_gsm.c ++++ b/drivers/tty/n_gsm.c +@@ -121,6 +121,9 @@ struct gsm_dlci { + struct mutex mutex; + + /* Link layer */ ++ int mode; ++#define DLCI_MODE_ABM 0 /* Normal Asynchronous Balanced Mode */ ++#define DLCI_MODE_ADM 1 /* Asynchronous Disconnected Mode */ + spinlock_t lock; /* Protects the internal state */ + struct timer_list t1; /* Retransmit timer for SABM and UA */ + int retries; +@@ -1364,7 +1367,13 @@ static struct gsm_control *gsm_control_send(struct gsm_mux *gsm, + ctrl->data = data; + ctrl->len = clen; + gsm->pending_cmd = ctrl; +- gsm->cretries = gsm->n2; ++ ++ /* If DLCI0 is in ADM mode skip retries, it won't respond */ ++ if (gsm->dlci[0]->mode == DLCI_MODE_ADM) ++ gsm->cretries = 1; ++ else ++ gsm->cretries = gsm->n2; ++ + mod_timer(&gsm->t2_timer, jiffies + gsm->t2 * HZ / 100); + gsm_control_transmit(gsm, ctrl); + spin_unlock_irqrestore(&gsm->control_lock, flags); +@@ -1472,6 +1481,7 @@ static void gsm_dlci_t1(struct timer_list *t) + if (debug & 8) + pr_info("DLCI %d opening in ADM mode.\n", + dlci->addr); ++ dlci->mode = DLCI_MODE_ADM; + gsm_dlci_open(dlci); + } else { + gsm_dlci_close(dlci); +@@ -2861,11 +2871,22 @@ static int gsmtty_modem_update(struct gsm_dlci *dlci, u8 brk) + static int gsm_carrier_raised(struct tty_port *port) + { + struct gsm_dlci *dlci = container_of(port, struct gsm_dlci, port); ++ struct gsm_mux *gsm = dlci->gsm; ++ + /* Not yet open so no carrier info */ + if (dlci->state != DLCI_OPEN) + return 0; + if (debug & 2) + return 1; ++ ++ /* ++ * Basic mode with control channel in ADM mode may not respond ++ * to CMD_MSC at all and modem_rx is empty. 
++ */ ++ if (gsm->encoding == 0 && gsm->dlci[0]->mode == DLCI_MODE_ADM && ++ !dlci->modem_rx) ++ return 1; ++ + return dlci->modem_rx & TIOCM_CD; + } + +diff --git a/drivers/tty/serial/earlycon.c b/drivers/tty/serial/earlycon.c +index a24278380fec..22683393a0f2 100644 +--- a/drivers/tty/serial/earlycon.c ++++ b/drivers/tty/serial/earlycon.c +@@ -169,7 +169,7 @@ static int __init register_earlycon(char *buf, const struct earlycon_id *match) + */ + int __init setup_earlycon(char *buf) + { +- const struct earlycon_id *match; ++ const struct earlycon_id **p_match; + + if (!buf || !buf[0]) + return -EINVAL; +@@ -177,7 +177,9 @@ int __init setup_earlycon(char *buf) + if (early_con.flags & CON_ENABLED) + return -EALREADY; + +- for (match = __earlycon_table; match < __earlycon_table_end; match++) { ++ for (p_match = __earlycon_table; p_match < __earlycon_table_end; ++ p_match++) { ++ const struct earlycon_id *match = *p_match; + size_t len = strlen(match->name); + + if (strncmp(buf, match->name, len)) +diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c +index a100e98259d7..03d26aabb0c4 100644 +--- a/drivers/tty/serial/mvebu-uart.c ++++ b/drivers/tty/serial/mvebu-uart.c +@@ -495,7 +495,6 @@ static void mvebu_uart_set_termios(struct uart_port *port, + termios->c_iflag |= old->c_iflag & ~(INPCK | IGNPAR); + termios->c_cflag &= CREAD | CBAUD; + termios->c_cflag |= old->c_cflag & ~(CREAD | CBAUD); +- termios->c_lflag = old->c_lflag; + } + + spin_unlock_irqrestore(&port->lock, flags); +diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c +index 63114ea35ec1..7c838b90a31d 100644 +--- a/drivers/tty/tty_io.c ++++ b/drivers/tty/tty_io.c +@@ -2816,7 +2816,10 @@ struct tty_struct *alloc_tty_struct(struct tty_driver *driver, int idx) + + kref_init(&tty->kref); + tty->magic = TTY_MAGIC; +- tty_ldisc_init(tty); ++ if (tty_ldisc_init(tty)) { ++ kfree(tty); ++ return NULL; ++ } + tty->session = NULL; + tty->pgrp = NULL; + mutex_init(&tty->legacy_mutex); +diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c +index 050f4d650891..fb7329ab2b37 100644 +--- a/drivers/tty/tty_ldisc.c ++++ b/drivers/tty/tty_ldisc.c +@@ -176,12 +176,11 @@ static struct tty_ldisc *tty_ldisc_get(struct tty_struct *tty, int disc) + return ERR_CAST(ldops); + } + +- ld = kmalloc(sizeof(struct tty_ldisc), GFP_KERNEL); +- if (ld == NULL) { +- put_ldops(ldops); +- return ERR_PTR(-ENOMEM); +- } +- ++ /* ++ * There is no way to handle allocation failure of only 16 bytes. ++ * Let's simplify error handling and save more memory. 
++ */ ++ ld = kmalloc(sizeof(struct tty_ldisc), GFP_KERNEL | __GFP_NOFAIL); + ld->ops = ldops; + ld->tty = tty; + +@@ -527,19 +526,16 @@ static int tty_ldisc_failto(struct tty_struct *tty, int ld) + static void tty_ldisc_restore(struct tty_struct *tty, struct tty_ldisc *old) + { + /* There is an outstanding reference here so this is safe */ +- old = tty_ldisc_get(tty, old->ops->num); +- WARN_ON(IS_ERR(old)); +- tty->ldisc = old; +- tty_set_termios_ldisc(tty, old->ops->num); +- if (tty_ldisc_open(tty, old) < 0) { +- tty_ldisc_put(old); ++ if (tty_ldisc_failto(tty, old->ops->num) < 0) { ++ const char *name = tty_name(tty); ++ ++ pr_warn("Falling back ldisc for %s.\n", name); + /* The traditional behaviour is to fall back to N_TTY, we + want to avoid falling back to N_NULL unless we have no + choice to avoid the risk of breaking anything */ + if (tty_ldisc_failto(tty, N_TTY) < 0 && + tty_ldisc_failto(tty, N_NULL) < 0) +- panic("Couldn't open N_NULL ldisc for %s.", +- tty_name(tty)); ++ panic("Couldn't open N_NULL ldisc for %s.", name); + } + } + +@@ -824,12 +820,13 @@ EXPORT_SYMBOL_GPL(tty_ldisc_release); + * the tty structure is not completely set up when this call is made. + */ + +-void tty_ldisc_init(struct tty_struct *tty) ++int tty_ldisc_init(struct tty_struct *tty) + { + struct tty_ldisc *ld = tty_ldisc_get(tty, N_TTY); + if (IS_ERR(ld)) +- panic("n_tty: init_tty"); ++ return PTR_ERR(ld); + tty->ldisc = ld; ++ return 0; + } + + /** +diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c +index fc32391a34d5..15736b462c55 100644 +--- a/drivers/usb/core/hcd.c ++++ b/drivers/usb/core/hcd.c +@@ -2365,6 +2365,7 @@ void usb_hcd_resume_root_hub (struct usb_hcd *hcd) + + spin_lock_irqsave (&hcd_root_hub_lock, flags); + if (hcd->rh_registered) { ++ pm_wakeup_event(&hcd->self.root_hub->dev, 0); + set_bit(HCD_FLAG_WAKEUP_PENDING, &hcd->flags); + queue_work(pm_wq, &hcd->wakeup_work); + } +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index c5c1f6cf3228..83c58a20d16f 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -653,12 +653,17 @@ void usb_wakeup_notification(struct usb_device *hdev, + unsigned int portnum) + { + struct usb_hub *hub; ++ struct usb_port *port_dev; + + if (!hdev) + return; + + hub = usb_hub_to_struct_hub(hdev); + if (hub) { ++ port_dev = hub->ports[portnum - 1]; ++ if (port_dev && port_dev->child) ++ pm_wakeup_event(&port_dev->child->dev, 0); ++ + set_bit(portnum, hub->wakeup_bits); + kick_hub_wq(hub); + } +@@ -3430,8 +3435,11 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg) + + /* Skip the initial Clear-Suspend step for a remote wakeup */ + status = hub_port_status(hub, port1, &portstatus, &portchange); +- if (status == 0 && !port_is_suspended(hub, portstatus)) ++ if (status == 0 && !port_is_suspended(hub, portstatus)) { ++ if (portchange & USB_PORT_STAT_C_SUSPEND) ++ pm_wakeup_event(&udev->dev, 0); + goto SuspendCleared; ++ } + + /* see 7.1.7.7; affects power usage, but not budgeting */ + if (hub_is_superspeed(hub->hdev)) +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c +index 54b019e267c5..9f5f78b7bb55 100644 +--- a/drivers/usb/core/quirks.c ++++ b/drivers/usb/core/quirks.c +@@ -40,6 +40,9 @@ static const struct usb_device_id usb_quirk_list[] = { + { USB_DEVICE(0x03f0, 0x0701), .driver_info = + USB_QUIRK_STRING_FETCH_255 }, + ++ /* HP v222w 16GB Mini USB Drive */ ++ { USB_DEVICE(0x03f0, 0x3f40), .driver_info = USB_QUIRK_DELAY_INIT }, ++ + /* Creative SB Audigy 2 NX */ + { USB_DEVICE(0x041e, 0x3020), 
.driver_info = USB_QUIRK_RESET_RESUME }, + +diff --git a/drivers/usb/host/xhci-dbgtty.c b/drivers/usb/host/xhci-dbgtty.c +index 75f0b92694ba..50203e77c925 100644 +--- a/drivers/usb/host/xhci-dbgtty.c ++++ b/drivers/usb/host/xhci-dbgtty.c +@@ -320,9 +320,11 @@ int xhci_dbc_tty_register_driver(struct xhci_hcd *xhci) + + void xhci_dbc_tty_unregister_driver(void) + { +- tty_unregister_driver(dbc_tty_driver); +- put_tty_driver(dbc_tty_driver); +- dbc_tty_driver = NULL; ++ if (dbc_tty_driver) { ++ tty_unregister_driver(dbc_tty_driver); ++ put_tty_driver(dbc_tty_driver); ++ dbc_tty_driver = NULL; ++ } + } + + static void dbc_rx_push(unsigned long _port) +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c +index d9f831b67e57..93ce34bce7b5 100644 +--- a/drivers/usb/host/xhci-pci.c ++++ b/drivers/usb/host/xhci-pci.c +@@ -126,7 +126,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) + if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info()) + xhci->quirks |= XHCI_AMD_PLL_FIX; + +- if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x43bb) ++ if (pdev->vendor == PCI_VENDOR_ID_AMD && ++ (pdev->device == 0x15e0 || ++ pdev->device == 0x15e1 || ++ pdev->device == 0x43bb)) + xhci->quirks |= XHCI_SUSPEND_DELAY; + + if (pdev->vendor == PCI_VENDOR_ID_AMD) +diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c +index 6652e2d5bd2e..c435df29cdb8 100644 +--- a/drivers/usb/host/xhci-plat.c ++++ b/drivers/usb/host/xhci-plat.c +@@ -419,7 +419,6 @@ MODULE_DEVICE_TABLE(acpi, usb_xhci_acpi_match); + static struct platform_driver usb_xhci_driver = { + .probe = xhci_plat_probe, + .remove = xhci_plat_remove, +- .shutdown = usb_hcd_platform_shutdown, + .driver = { + .name = "xhci-hcd", + .pm = &xhci_plat_pm_ops, +diff --git a/drivers/usb/serial/Kconfig b/drivers/usb/serial/Kconfig +index a646820f5a78..533f127c30ad 100644 +--- a/drivers/usb/serial/Kconfig ++++ b/drivers/usb/serial/Kconfig +@@ -62,6 +62,7 @@ config USB_SERIAL_SIMPLE + - Fundamental Software dongle. 
+ - Google USB serial devices + - HP4x calculators ++ - Libtransistor USB console + - a number of Motorola phones + - Motorola Tetra devices + - Novatel Wireless GPS receivers +diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c +index de1e759dd512..eb6c26cbe579 100644 +--- a/drivers/usb/serial/cp210x.c ++++ b/drivers/usb/serial/cp210x.c +@@ -214,6 +214,7 @@ static const struct usb_device_id id_table[] = { + { USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */ + { USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */ + { USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */ ++ { USB_DEVICE(0x3923, 0x7A0B) }, /* National Instruments USB Serial Console */ + { USB_DEVICE(0x413C, 0x9500) }, /* DW700 GPS USB interface */ + { } /* Terminating Entry */ + }; +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c +index 87202ad5a50d..7ea221d42dba 100644 +--- a/drivers/usb/serial/ftdi_sio.c ++++ b/drivers/usb/serial/ftdi_sio.c +@@ -1898,7 +1898,8 @@ static int ftdi_8u2232c_probe(struct usb_serial *serial) + return ftdi_jtag_probe(serial); + + if (udev->product && +- (!strcmp(udev->product, "BeagleBone/XDS100V2") || ++ (!strcmp(udev->product, "Arrow USB Blaster") || ++ !strcmp(udev->product, "BeagleBone/XDS100V2") || + !strcmp(udev->product, "SNAP Connect E10"))) + return ftdi_jtag_probe(serial); + +diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c +index 4ef79e29cb26..40864c2bd9dc 100644 +--- a/drivers/usb/serial/usb-serial-simple.c ++++ b/drivers/usb/serial/usb-serial-simple.c +@@ -63,6 +63,11 @@ DEVICE(flashloader, FLASHLOADER_IDS); + 0x01) } + DEVICE(google, GOOGLE_IDS); + ++/* Libtransistor USB console */ ++#define LIBTRANSISTOR_IDS() \ ++ { USB_DEVICE(0x1209, 0x8b00) } ++DEVICE(libtransistor, LIBTRANSISTOR_IDS); ++ + /* ViVOpay USB Serial Driver */ + #define VIVOPAY_IDS() \ + { USB_DEVICE(0x1d5f, 0x1004) } /* ViVOpay 8800 */ +@@ -110,6 +115,7 @@ static struct usb_serial_driver * const serial_drivers[] = { + &funsoft_device, + &flashloader_device, + &google_device, ++ &libtransistor_device, + &vivopay_device, + &moto_modem_device, + &motorola_tetra_device, +@@ -126,6 +132,7 @@ static const struct usb_device_id id_table[] = { + FUNSOFT_IDS(), + FLASHLOADER_IDS(), + GOOGLE_IDS(), ++ LIBTRANSISTOR_IDS(), + VIVOPAY_IDS(), + MOTO_IDS(), + MOTOROLA_TETRA_IDS(), +diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c +index 79046fe66426..8d95b3a168d2 100644 +--- a/drivers/usb/typec/ucsi/ucsi.c ++++ b/drivers/usb/typec/ucsi/ucsi.c +@@ -28,7 +28,7 @@ + * difficult to estimate the time it takes for the system to process the command + * before it is actually passed to the PPM. 
+ */ +-#define UCSI_TIMEOUT_MS 1000 ++#define UCSI_TIMEOUT_MS 5000 + + /* + * UCSI_SWAP_TIMEOUT_MS - Timeout for role swap requests +diff --git a/drivers/usb/usbip/stub_main.c b/drivers/usb/usbip/stub_main.c +index c31c8402a0c5..d41d0cdeec0f 100644 +--- a/drivers/usb/usbip/stub_main.c ++++ b/drivers/usb/usbip/stub_main.c +@@ -186,7 +186,12 @@ static ssize_t rebind_store(struct device_driver *dev, const char *buf, + if (!bid) + return -ENODEV; + ++ /* device_attach() callers should hold parent lock for USB */ ++ if (bid->udev->dev.parent) ++ device_lock(bid->udev->dev.parent); + ret = device_attach(&bid->udev->dev); ++ if (bid->udev->dev.parent) ++ device_unlock(bid->udev->dev.parent); + if (ret < 0) { + dev_err(&bid->udev->dev, "rebind failed\n"); + return ret; +diff --git a/drivers/usb/usbip/usbip_common.h b/drivers/usb/usbip/usbip_common.h +index 473fb8a87289..bf8afe9b5883 100644 +--- a/drivers/usb/usbip/usbip_common.h ++++ b/drivers/usb/usbip/usbip_common.h +@@ -243,7 +243,7 @@ enum usbip_side { + #define VUDC_EVENT_ERROR_USB (USBIP_EH_SHUTDOWN | USBIP_EH_UNUSABLE) + #define VUDC_EVENT_ERROR_MALLOC (USBIP_EH_SHUTDOWN | USBIP_EH_UNUSABLE) + +-#define VDEV_EVENT_REMOVED (USBIP_EH_SHUTDOWN | USBIP_EH_BYE) ++#define VDEV_EVENT_REMOVED (USBIP_EH_SHUTDOWN | USBIP_EH_RESET | USBIP_EH_BYE) + #define VDEV_EVENT_DOWN (USBIP_EH_SHUTDOWN | USBIP_EH_RESET) + #define VDEV_EVENT_ERROR_TCP (USBIP_EH_SHUTDOWN | USBIP_EH_RESET) + #define VDEV_EVENT_ERROR_MALLOC (USBIP_EH_SHUTDOWN | USBIP_EH_UNUSABLE) +diff --git a/drivers/usb/usbip/usbip_event.c b/drivers/usb/usbip/usbip_event.c +index 5b4c0864ad92..5d88917c9631 100644 +--- a/drivers/usb/usbip/usbip_event.c ++++ b/drivers/usb/usbip/usbip_event.c +@@ -91,10 +91,6 @@ static void event_handler(struct work_struct *work) + unset_event(ud, USBIP_EH_UNUSABLE); + } + +- /* Stop the error handler. 
*/ +- if (ud->event & USBIP_EH_BYE) +- usbip_dbg_eh("removed %p\n", ud); +- + wake_up(&ud->eh_waitq); + } + } +diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c +index 20e3d4609583..d11f3f8dad40 100644 +--- a/drivers/usb/usbip/vhci_hcd.c ++++ b/drivers/usb/usbip/vhci_hcd.c +@@ -354,6 +354,8 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, + usbip_dbg_vhci_rh(" ClearHubFeature\n"); + break; + case ClearPortFeature: ++ if (rhport < 0) ++ goto error; + switch (wValue) { + case USB_PORT_FEAT_SUSPEND: + if (hcd->speed == HCD_USB3) { +@@ -511,11 +513,16 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, + goto error; + } + ++ if (rhport < 0) ++ goto error; ++ + vhci_hcd->port_status[rhport] |= USB_PORT_STAT_SUSPEND; + break; + case USB_PORT_FEAT_POWER: + usbip_dbg_vhci_rh( + " SetPortFeature: USB_PORT_FEAT_POWER\n"); ++ if (rhport < 0) ++ goto error; + if (hcd->speed == HCD_USB3) + vhci_hcd->port_status[rhport] |= USB_SS_PORT_STAT_POWER; + else +@@ -524,6 +531,8 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, + case USB_PORT_FEAT_BH_PORT_RESET: + usbip_dbg_vhci_rh( + " SetPortFeature: USB_PORT_FEAT_BH_PORT_RESET\n"); ++ if (rhport < 0) ++ goto error; + /* Applicable only for USB3.0 hub */ + if (hcd->speed != HCD_USB3) { + pr_err("USB_PORT_FEAT_BH_PORT_RESET req not " +@@ -534,6 +543,8 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, + case USB_PORT_FEAT_RESET: + usbip_dbg_vhci_rh( + " SetPortFeature: USB_PORT_FEAT_RESET\n"); ++ if (rhport < 0) ++ goto error; + /* if it's already enabled, disable */ + if (hcd->speed == HCD_USB3) { + vhci_hcd->port_status[rhport] = 0; +@@ -554,6 +565,8 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, + default: + usbip_dbg_vhci_rh(" SetPortFeature: default %d\n", + wValue); ++ if (rhport < 0) ++ goto error; + if (hcd->speed == HCD_USB3) { + if ((vhci_hcd->port_status[rhport] & + USB_SS_PORT_STAT_POWER) != 0) { +diff --git a/drivers/virt/vboxguest/vboxguest_core.c b/drivers/virt/vboxguest/vboxguest_core.c +index 190dbf8cfcb5..7411a535fda2 100644 +--- a/drivers/virt/vboxguest/vboxguest_core.c ++++ b/drivers/virt/vboxguest/vboxguest_core.c +@@ -114,7 +114,7 @@ static void vbg_guest_mappings_init(struct vbg_dev *gdev) + } + + out: +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + kfree(pages); + } + +@@ -144,7 +144,7 @@ static void vbg_guest_mappings_exit(struct vbg_dev *gdev) + + rc = vbg_req_perform(gdev, req); + +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + + if (rc < 0) { + vbg_err("%s error: %d\n", __func__, rc); +@@ -214,8 +214,8 @@ static int vbg_report_guest_info(struct vbg_dev *gdev) + ret = vbg_status_code_to_errno(rc); + + out_free: +- kfree(req2); +- kfree(req1); ++ vbg_req_free(req2, sizeof(*req2)); ++ vbg_req_free(req1, sizeof(*req1)); + return ret; + } + +@@ -245,7 +245,7 @@ static int vbg_report_driver_status(struct vbg_dev *gdev, bool active) + if (rc == VERR_NOT_IMPLEMENTED) /* Compatibility with older hosts. 
*/ + rc = VINF_SUCCESS; + +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + + return vbg_status_code_to_errno(rc); + } +@@ -431,7 +431,7 @@ static int vbg_heartbeat_host_config(struct vbg_dev *gdev, bool enabled) + rc = vbg_req_perform(gdev, req); + do_div(req->interval_ns, 1000000); /* ns -> ms */ + gdev->heartbeat_interval_ms = req->interval_ns; +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + + return vbg_status_code_to_errno(rc); + } +@@ -454,12 +454,6 @@ static int vbg_heartbeat_init(struct vbg_dev *gdev) + if (ret < 0) + return ret; + +- /* +- * Preallocate the request to use it from the timer callback because: +- * 1) on Windows vbg_req_alloc must be called at IRQL <= APC_LEVEL +- * and the timer callback runs at DISPATCH_LEVEL; +- * 2) avoid repeated allocations. +- */ + gdev->guest_heartbeat_req = vbg_req_alloc( + sizeof(*gdev->guest_heartbeat_req), + VMMDEVREQ_GUEST_HEARTBEAT); +@@ -481,8 +475,8 @@ static void vbg_heartbeat_exit(struct vbg_dev *gdev) + { + del_timer_sync(&gdev->heartbeat_timer); + vbg_heartbeat_host_config(gdev, false); +- kfree(gdev->guest_heartbeat_req); +- ++ vbg_req_free(gdev->guest_heartbeat_req, ++ sizeof(*gdev->guest_heartbeat_req)); + } + + /** +@@ -543,7 +537,7 @@ static int vbg_reset_host_event_filter(struct vbg_dev *gdev, + if (rc < 0) + vbg_err("%s error, rc: %d\n", __func__, rc); + +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + return vbg_status_code_to_errno(rc); + } + +@@ -617,7 +611,7 @@ static int vbg_set_session_event_filter(struct vbg_dev *gdev, + + out: + mutex_unlock(&gdev->session_mutex); +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + + return ret; + } +@@ -642,7 +636,7 @@ static int vbg_reset_host_capabilities(struct vbg_dev *gdev) + if (rc < 0) + vbg_err("%s error, rc: %d\n", __func__, rc); + +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + return vbg_status_code_to_errno(rc); + } + +@@ -712,7 +706,7 @@ static int vbg_set_session_capabilities(struct vbg_dev *gdev, + + out: + mutex_unlock(&gdev->session_mutex); +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + + return ret; + } +@@ -749,7 +743,7 @@ static int vbg_query_host_version(struct vbg_dev *gdev) + } + + out: +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + return ret; + } + +@@ -847,11 +841,16 @@ int vbg_core_init(struct vbg_dev *gdev, u32 fixed_events) + return 0; + + err_free_reqs: +- kfree(gdev->mouse_status_req); +- kfree(gdev->ack_events_req); +- kfree(gdev->cancel_req); +- kfree(gdev->mem_balloon.change_req); +- kfree(gdev->mem_balloon.get_req); ++ vbg_req_free(gdev->mouse_status_req, ++ sizeof(*gdev->mouse_status_req)); ++ vbg_req_free(gdev->ack_events_req, ++ sizeof(*gdev->ack_events_req)); ++ vbg_req_free(gdev->cancel_req, ++ sizeof(*gdev->cancel_req)); ++ vbg_req_free(gdev->mem_balloon.change_req, ++ sizeof(*gdev->mem_balloon.change_req)); ++ vbg_req_free(gdev->mem_balloon.get_req, ++ sizeof(*gdev->mem_balloon.get_req)); + return ret; + } + +@@ -872,11 +871,16 @@ void vbg_core_exit(struct vbg_dev *gdev) + vbg_reset_host_capabilities(gdev); + vbg_core_set_mouse_status(gdev, 0); + +- kfree(gdev->mouse_status_req); +- kfree(gdev->ack_events_req); +- kfree(gdev->cancel_req); +- kfree(gdev->mem_balloon.change_req); +- kfree(gdev->mem_balloon.get_req); ++ vbg_req_free(gdev->mouse_status_req, ++ sizeof(*gdev->mouse_status_req)); ++ vbg_req_free(gdev->ack_events_req, ++ sizeof(*gdev->ack_events_req)); ++ vbg_req_free(gdev->cancel_req, ++ sizeof(*gdev->cancel_req)); ++ vbg_req_free(gdev->mem_balloon.change_req, ++ 
sizeof(*gdev->mem_balloon.change_req)); ++ vbg_req_free(gdev->mem_balloon.get_req, ++ sizeof(*gdev->mem_balloon.get_req)); + } + + /** +@@ -1415,7 +1419,7 @@ static int vbg_ioctl_write_core_dump(struct vbg_dev *gdev, + req->flags = dump->u.in.flags; + dump->hdr.rc = vbg_req_perform(gdev, req); + +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + return 0; + } + +@@ -1513,7 +1517,7 @@ int vbg_core_set_mouse_status(struct vbg_dev *gdev, u32 features) + if (rc < 0) + vbg_err("%s error, rc: %d\n", __func__, rc); + +- kfree(req); ++ vbg_req_free(req, sizeof(*req)); + return vbg_status_code_to_errno(rc); + } + +diff --git a/drivers/virt/vboxguest/vboxguest_core.h b/drivers/virt/vboxguest/vboxguest_core.h +index 6c784bf4fa6d..7ad9ec45bfa9 100644 +--- a/drivers/virt/vboxguest/vboxguest_core.h ++++ b/drivers/virt/vboxguest/vboxguest_core.h +@@ -171,4 +171,13 @@ irqreturn_t vbg_core_isr(int irq, void *dev_id); + + void vbg_linux_mouse_event(struct vbg_dev *gdev); + ++/* Private (non exported) functions form vboxguest_utils.c */ ++void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type); ++void vbg_req_free(void *req, size_t len); ++int vbg_req_perform(struct vbg_dev *gdev, void *req); ++int vbg_hgcm_call32( ++ struct vbg_dev *gdev, u32 client_id, u32 function, u32 timeout_ms, ++ struct vmmdev_hgcm_function_parameter32 *parm32, u32 parm_count, ++ int *vbox_status); ++ + #endif +diff --git a/drivers/virt/vboxguest/vboxguest_linux.c b/drivers/virt/vboxguest/vboxguest_linux.c +index 82e280d38cc2..398d22693234 100644 +--- a/drivers/virt/vboxguest/vboxguest_linux.c ++++ b/drivers/virt/vboxguest/vboxguest_linux.c +@@ -87,6 +87,7 @@ static long vbg_misc_device_ioctl(struct file *filp, unsigned int req, + struct vbg_session *session = filp->private_data; + size_t returned_size, size; + struct vbg_ioctl_hdr hdr; ++ bool is_vmmdev_req; + int ret = 0; + void *buf; + +@@ -106,8 +107,17 @@ static long vbg_misc_device_ioctl(struct file *filp, unsigned int req, + if (size > SZ_16M) + return -E2BIG; + +- /* __GFP_DMA32 because IOCTL_VMMDEV_REQUEST passes this to the host */ +- buf = kmalloc(size, GFP_KERNEL | __GFP_DMA32); ++ /* ++ * IOCTL_VMMDEV_REQUEST needs the buffer to be below 4G to avoid ++ * the need for a bounce-buffer and another copy later on. 
++ */ ++ is_vmmdev_req = (req & ~IOCSIZE_MASK) == VBG_IOCTL_VMMDEV_REQUEST(0) || ++ req == VBG_IOCTL_VMMDEV_REQUEST_BIG; ++ ++ if (is_vmmdev_req) ++ buf = vbg_req_alloc(size, VBG_IOCTL_HDR_TYPE_DEFAULT); ++ else ++ buf = kmalloc(size, GFP_KERNEL); + if (!buf) + return -ENOMEM; + +@@ -132,7 +142,10 @@ static long vbg_misc_device_ioctl(struct file *filp, unsigned int req, + ret = -EFAULT; + + out: +- kfree(buf); ++ if (is_vmmdev_req) ++ vbg_req_free(buf, size); ++ else ++ kfree(buf); + + return ret; + } +diff --git a/drivers/virt/vboxguest/vboxguest_utils.c b/drivers/virt/vboxguest/vboxguest_utils.c +index 0f0dab8023cf..bf4474214b4d 100644 +--- a/drivers/virt/vboxguest/vboxguest_utils.c ++++ b/drivers/virt/vboxguest/vboxguest_utils.c +@@ -65,8 +65,9 @@ VBG_LOG(vbg_debug, pr_debug); + void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type) + { + struct vmmdev_request_header *req; ++ int order = get_order(PAGE_ALIGN(len)); + +- req = kmalloc(len, GFP_KERNEL | __GFP_DMA32); ++ req = (void *)__get_free_pages(GFP_KERNEL | GFP_DMA32, order); + if (!req) + return NULL; + +@@ -82,6 +83,14 @@ void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type) + return req; + } + ++void vbg_req_free(void *req, size_t len) ++{ ++ if (!req) ++ return; ++ ++ free_pages((unsigned long)req, get_order(PAGE_ALIGN(len))); ++} ++ + /* Note this function returns a VBox status code, not a negative errno!! */ + int vbg_req_perform(struct vbg_dev *gdev, void *req) + { +@@ -137,7 +146,7 @@ int vbg_hgcm_connect(struct vbg_dev *gdev, + rc = hgcm_connect->header.result; + } + +- kfree(hgcm_connect); ++ vbg_req_free(hgcm_connect, sizeof(*hgcm_connect)); + + *vbox_status = rc; + return 0; +@@ -166,7 +175,7 @@ int vbg_hgcm_disconnect(struct vbg_dev *gdev, u32 client_id, int *vbox_status) + if (rc >= 0) + rc = hgcm_disconnect->header.result; + +- kfree(hgcm_disconnect); ++ vbg_req_free(hgcm_disconnect, sizeof(*hgcm_disconnect)); + + *vbox_status = rc; + return 0; +@@ -623,7 +632,7 @@ int vbg_hgcm_call(struct vbg_dev *gdev, u32 client_id, u32 function, + } + + if (!leak_it) +- kfree(call); ++ vbg_req_free(call, size); + + free_bounce_bufs: + if (bounce_bufs) { +diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c +index 9ceebf30eb22..a82f91d75f29 100644 +--- a/fs/cifs/cifssmb.c ++++ b/fs/cifs/cifssmb.c +@@ -453,6 +453,9 @@ cifs_enable_signing(struct TCP_Server_Info *server, bool mnt_sign_required) + server->sign = true; + } + ++ if (cifs_rdma_enabled(server) && server->sign) ++ cifs_dbg(VFS, "Signing is enabled, and RDMA read/write will be disabled"); ++ + return 0; + } + +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c +index dfd6fb02b7a3..1c1940d90c96 100644 +--- a/fs/cifs/smb2ops.c ++++ b/fs/cifs/smb2ops.c +@@ -252,9 +252,14 @@ smb2_negotiate_wsize(struct cifs_tcon *tcon, struct smb_vol *volume_info) + wsize = volume_info->wsize ? volume_info->wsize : CIFS_DEFAULT_IOSIZE; + wsize = min_t(unsigned int, wsize, server->max_write); + #ifdef CONFIG_CIFS_SMB_DIRECT +- if (server->rdma) +- wsize = min_t(unsigned int, ++ if (server->rdma) { ++ if (server->sign) ++ wsize = min_t(unsigned int, ++ wsize, server->smbd_conn->max_fragmented_send_size); ++ else ++ wsize = min_t(unsigned int, + wsize, server->smbd_conn->max_readwrite_size); ++ } + #endif + if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU)) + wsize = min_t(unsigned int, wsize, SMB2_MAX_BUFFER_SIZE); +@@ -272,9 +277,14 @@ smb2_negotiate_rsize(struct cifs_tcon *tcon, struct smb_vol *volume_info) + rsize = volume_info->rsize ? 
volume_info->rsize : CIFS_DEFAULT_IOSIZE; + rsize = min_t(unsigned int, rsize, server->max_read); + #ifdef CONFIG_CIFS_SMB_DIRECT +- if (server->rdma) +- rsize = min_t(unsigned int, ++ if (server->rdma) { ++ if (server->sign) ++ rsize = min_t(unsigned int, ++ rsize, server->smbd_conn->max_fragmented_recv_size); ++ else ++ rsize = min_t(unsigned int, + rsize, server->smbd_conn->max_readwrite_size); ++ } + #endif + + if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU)) +diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c +index af62c75b17c4..8ae6a089489c 100644 +--- a/fs/cifs/smb2pdu.c ++++ b/fs/cifs/smb2pdu.c +@@ -2479,7 +2479,7 @@ smb2_new_read_req(void **buf, unsigned int *total_len, + * If we want to do a RDMA write, fill in and append + * smbd_buffer_descriptor_v1 to the end of read request + */ +- if (server->rdma && rdata && ++ if (server->rdma && rdata && !server->sign && + rdata->bytes >= server->smbd_conn->rdma_readwrite_threshold) { + + struct smbd_buffer_descriptor_v1 *v1; +@@ -2857,7 +2857,7 @@ smb2_async_writev(struct cifs_writedata *wdata, + * If we want to do a server RDMA read, fill in and append + * smbd_buffer_descriptor_v1 to the end of write request + */ +- if (server->rdma && wdata->bytes >= ++ if (server->rdma && !server->sign && wdata->bytes >= + server->smbd_conn->rdma_readwrite_threshold) { + + struct smbd_buffer_descriptor_v1 *v1; +diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c +index 34be5c5d027f..608ce9abd240 100644 +--- a/fs/cifs/smbdirect.c ++++ b/fs/cifs/smbdirect.c +@@ -2086,7 +2086,7 @@ int smbd_send(struct smbd_connection *info, struct smb_rqst *rqst) + int start, i, j; + int max_iov_size = + info->max_send_size - sizeof(struct smbd_data_transfer); +- struct kvec iov[SMBDIRECT_MAX_SGE]; ++ struct kvec *iov; + int rc; + + info->smbd_send_pending++; +@@ -2096,32 +2096,20 @@ int smbd_send(struct smbd_connection *info, struct smb_rqst *rqst) + } + + /* +- * This usually means a configuration error +- * We use RDMA read/write for packet size > rdma_readwrite_threshold +- * as long as it's properly configured we should never get into this +- * situation +- */ +- if (rqst->rq_nvec + rqst->rq_npages > SMBDIRECT_MAX_SGE) { +- log_write(ERR, "maximum send segment %x exceeding %x\n", +- rqst->rq_nvec + rqst->rq_npages, SMBDIRECT_MAX_SGE); +- rc = -EINVAL; +- goto done; +- } +- +- /* +- * Remove the RFC1002 length defined in MS-SMB2 section 2.1 +- * It is used only for TCP transport ++ * Skip the RFC1002 length defined in MS-SMB2 section 2.1 ++ * It is used only for TCP transport in the iov[0] + * In future we may want to add a transport layer under protocol + * layer so this will only be issued to TCP transport + */ +- iov[0].iov_base = (char *)rqst->rq_iov[0].iov_base + 4; +- iov[0].iov_len = rqst->rq_iov[0].iov_len - 4; +- buflen += iov[0].iov_len; ++ ++ if (rqst->rq_iov[0].iov_len != 4) { ++ log_write(ERR, "expected the pdu length in 1st iov, but got %zu\n", rqst->rq_iov[0].iov_len); ++ return -EINVAL; ++ } ++ iov = &rqst->rq_iov[1]; + + /* total up iov array first */ +- for (i = 1; i < rqst->rq_nvec; i++) { +- iov[i].iov_base = rqst->rq_iov[i].iov_base; +- iov[i].iov_len = rqst->rq_iov[i].iov_len; ++ for (i = 0; i < rqst->rq_nvec-1; i++) { + buflen += iov[i].iov_len; + } + +@@ -2194,14 +2182,14 @@ int smbd_send(struct smbd_connection *info, struct smb_rqst *rqst) + goto done; + } + i++; +- if (i == rqst->rq_nvec) ++ if (i == rqst->rq_nvec-1) + break; + } + start = i; + buflen = 0; + } else { + i++; +- if (i == rqst->rq_nvec) { ++ if (i == 
rqst->rq_nvec-1) { + /* send out all remaining vecs */ + remaining_data_length -= buflen; + log_write(INFO, +diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c +index 665661464067..1b5cd3b8617c 100644 +--- a/fs/cifs/transport.c ++++ b/fs/cifs/transport.c +@@ -753,7 +753,7 @@ cifs_send_recv(const unsigned int xid, struct cifs_ses *ses, + goto out; + + #ifdef CONFIG_CIFS_SMB311 +- if (ses->status == CifsNew) ++ if ((ses->status == CifsNew) || (optype & CIFS_NEG_OP)) + smb311_update_preauth_hash(ses, rqst->rq_iov+1, + rqst->rq_nvec-1); + #endif +@@ -797,7 +797,7 @@ cifs_send_recv(const unsigned int xid, struct cifs_ses *ses, + *resp_buf_type = CIFS_SMALL_BUFFER; + + #ifdef CONFIG_CIFS_SMB311 +- if (ses->status == CifsNew) { ++ if ((ses->status == CifsNew) || (optype & CIFS_NEG_OP)) { + struct kvec iov = { + .iov_base = buf + 4, + .iov_len = get_rfc1002_length(buf) +diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c +index f82c4966f4ce..508b905d744d 100644 +--- a/fs/ext4/balloc.c ++++ b/fs/ext4/balloc.c +@@ -321,6 +321,7 @@ static ext4_fsblk_t ext4_valid_block_bitmap(struct super_block *sb, + struct ext4_sb_info *sbi = EXT4_SB(sb); + ext4_grpblk_t offset; + ext4_grpblk_t next_zero_bit; ++ ext4_grpblk_t max_bit = EXT4_CLUSTERS_PER_GROUP(sb); + ext4_fsblk_t blk; + ext4_fsblk_t group_first_block; + +@@ -338,20 +339,25 @@ static ext4_fsblk_t ext4_valid_block_bitmap(struct super_block *sb, + /* check whether block bitmap block number is set */ + blk = ext4_block_bitmap(sb, desc); + offset = blk - group_first_block; +- if (!ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data)) ++ if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit || ++ !ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data)) + /* bad block bitmap */ + return blk; + + /* check whether the inode bitmap block number is set */ + blk = ext4_inode_bitmap(sb, desc); + offset = blk - group_first_block; +- if (!ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data)) ++ if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit || ++ !ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data)) + /* bad block bitmap */ + return blk; + + /* check whether the inode table block number is set */ + blk = ext4_inode_table(sb, desc); + offset = blk - group_first_block; ++ if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit || ++ EXT4_B2C(sbi, offset + sbi->s_itb_per_group) >= max_bit) ++ return blk; + next_zero_bit = ext4_find_next_zero_bit(bh->b_data, + EXT4_B2C(sbi, offset + sbi->s_itb_per_group), + EXT4_B2C(sbi, offset)); +@@ -417,6 +423,7 @@ struct buffer_head * + ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group) + { + struct ext4_group_desc *desc; ++ struct ext4_sb_info *sbi = EXT4_SB(sb); + struct buffer_head *bh; + ext4_fsblk_t bitmap_blk; + int err; +@@ -425,6 +432,12 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group) + if (!desc) + return ERR_PTR(-EFSCORRUPTED); + bitmap_blk = ext4_block_bitmap(sb, desc); ++ if ((bitmap_blk <= le32_to_cpu(sbi->s_es->s_first_data_block)) || ++ (bitmap_blk >= ext4_blocks_count(sbi->s_es))) { ++ ext4_error(sb, "Invalid block bitmap block %llu in " ++ "block_group %u", bitmap_blk, block_group); ++ return ERR_PTR(-EFSCORRUPTED); ++ } + bh = sb_getblk(sb, bitmap_blk); + if (unlikely(!bh)) { + ext4_error(sb, "Cannot get buffer for block bitmap - " +diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c +index 054416e9d827..a7ca193a7480 100644 +--- a/fs/ext4/extents.c ++++ b/fs/ext4/extents.c +@@ -5334,8 +5334,9 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle, + stop = 
le32_to_cpu(extent->ee_block); + + /* +- * In case of left shift, Don't start shifting extents until we make +- * sure the hole is big enough to accommodate the shift. ++ * For left shifts, make sure the hole on the left is big enough to ++ * accommodate the shift. For right shifts, make sure the last extent ++ * won't be shifted beyond EXT_MAX_BLOCKS. + */ + if (SHIFT == SHIFT_LEFT) { + path = ext4_find_extent(inode, start - 1, &path, +@@ -5355,9 +5356,14 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle, + + if ((start == ex_start && shift > ex_start) || + (shift > start - ex_end)) { +- ext4_ext_drop_refs(path); +- kfree(path); +- return -EINVAL; ++ ret = -EINVAL; ++ goto out; ++ } ++ } else { ++ if (shift > EXT_MAX_BLOCKS - ++ (stop + ext4_ext_get_actual_len(extent))) { ++ ret = -EINVAL; ++ goto out; + } + } + +diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c +index 3fa93665b4a3..df92e3ec9913 100644 +--- a/fs/ext4/ialloc.c ++++ b/fs/ext4/ialloc.c +@@ -122,6 +122,7 @@ static struct buffer_head * + ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group) + { + struct ext4_group_desc *desc; ++ struct ext4_sb_info *sbi = EXT4_SB(sb); + struct buffer_head *bh = NULL; + ext4_fsblk_t bitmap_blk; + int err; +@@ -131,6 +132,12 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group) + return ERR_PTR(-EFSCORRUPTED); + + bitmap_blk = ext4_inode_bitmap(sb, desc); ++ if ((bitmap_blk <= le32_to_cpu(sbi->s_es->s_first_data_block)) || ++ (bitmap_blk >= ext4_blocks_count(sbi->s_es))) { ++ ext4_error(sb, "Invalid inode bitmap blk %llu in " ++ "block_group %u", bitmap_blk, block_group); ++ return ERR_PTR(-EFSCORRUPTED); ++ } + bh = sb_getblk(sb, bitmap_blk); + if (unlikely(!bh)) { + ext4_error(sb, "Cannot read inode bitmap - " +diff --git a/fs/ext4/super.c b/fs/ext4/super.c +index 192c5ad09d71..b8dace7abe09 100644 +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -5868,5 +5868,6 @@ static void __exit ext4_exit_fs(void) + MODULE_AUTHOR("Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others"); + MODULE_DESCRIPTION("Fourth Extended Filesystem"); + MODULE_LICENSE("GPL"); ++MODULE_SOFTDEP("pre: crc32c"); + module_init(ext4_init_fs) + module_exit(ext4_exit_fs) +diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c +index ac311037d7a5..8aa453784402 100644 +--- a/fs/jbd2/transaction.c ++++ b/fs/jbd2/transaction.c +@@ -532,6 +532,7 @@ int jbd2_journal_start_reserved(handle_t *handle, unsigned int type, + */ + ret = start_this_handle(journal, handle, GFP_NOFS); + if (ret < 0) { ++ handle->h_journal = journal; + jbd2_journal_free_reserved(handle); + return ret; + } +diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h +index 1ab0e520d6fc..e17de55c2542 100644 +--- a/include/asm-generic/vmlinux.lds.h ++++ b/include/asm-generic/vmlinux.lds.h +@@ -179,7 +179,7 @@ + #endif + + #ifdef CONFIG_SERIAL_EARLYCON +-#define EARLYCON_TABLE() STRUCT_ALIGN(); \ ++#define EARLYCON_TABLE() . 
= ALIGN(8); \ + VMLINUX_SYMBOL(__earlycon_table) = .; \ + KEEP(*(__earlycon_table)) \ + VMLINUX_SYMBOL(__earlycon_table_end) = .; +diff --git a/include/kvm/arm_psci.h b/include/kvm/arm_psci.h +index e518e4e3dfb5..4b1548129fa2 100644 +--- a/include/kvm/arm_psci.h ++++ b/include/kvm/arm_psci.h +@@ -37,10 +37,15 @@ static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm) + * Our PSCI implementation stays the same across versions from + * v0.2 onward, only adding the few mandatory functions (such + * as FEATURES with 1.0) that are required by newer +- * revisions. It is thus safe to return the latest. ++ * revisions. It is thus safe to return the latest, unless ++ * userspace has instructed us otherwise. + */ +- if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) ++ if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) { ++ if (vcpu->kvm->arch.psci_version) ++ return vcpu->kvm->arch.psci_version; ++ + return KVM_ARM_PSCI_LATEST; ++ } + + return KVM_ARM_PSCI_0_1; + } +@@ -48,4 +53,11 @@ static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm) + + int kvm_hvc_call_handler(struct kvm_vcpu *vcpu); + ++struct kvm_one_reg; ++ ++int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu); ++int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices); ++int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); ++int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); ++ + #endif /* __KVM_ARM_PSCI_H__ */ +diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h +index ed63f3b69c12..c9e601dce06f 100644 +--- a/include/linux/blkdev.h ++++ b/include/linux/blkdev.h +@@ -605,6 +605,11 @@ struct request_queue { + * initialized by the low level device driver (e.g. scsi/sd.c). + * Stacking drivers (device mappers) may or may not initialize + * these fields. ++ * ++ * Reads of this information must be protected with blk_queue_enter() / ++ * blk_queue_exit(). Modifying this information is only allowed while ++ * no requests are being processed. See also blk_mq_freeze_queue() and ++ * blk_mq_unfreeze_queue(). 
+ */ + unsigned int nr_zones; + unsigned long *seq_zones_bitmap; +diff --git a/include/linux/mtd/flashchip.h b/include/linux/mtd/flashchip.h +index b63fa457febd..3529683f691e 100644 +--- a/include/linux/mtd/flashchip.h ++++ b/include/linux/mtd/flashchip.h +@@ -85,6 +85,7 @@ struct flchip { + unsigned int write_suspended:1; + unsigned int erase_suspended:1; + unsigned long in_progress_block_addr; ++ unsigned long in_progress_block_mask; + + struct mutex mutex; + wait_queue_head_t wq; /* Wait on here when we're waiting for the chip +diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h +index b32df49a3bd5..c4219b9cbb70 100644 +--- a/include/linux/serial_core.h ++++ b/include/linux/serial_core.h +@@ -351,10 +351,10 @@ struct earlycon_id { + char name[16]; + char compatible[128]; + int (*setup)(struct earlycon_device *, const char *options); +-} __aligned(32); ++}; + +-extern const struct earlycon_id __earlycon_table[]; +-extern const struct earlycon_id __earlycon_table_end[]; ++extern const struct earlycon_id *__earlycon_table[]; ++extern const struct earlycon_id *__earlycon_table_end[]; + + #if defined(CONFIG_SERIAL_EARLYCON) && !defined(MODULE) + #define EARLYCON_USED_OR_UNUSED __used +@@ -362,12 +362,19 @@ extern const struct earlycon_id __earlycon_table_end[]; + #define EARLYCON_USED_OR_UNUSED __maybe_unused + #endif + +-#define OF_EARLYCON_DECLARE(_name, compat, fn) \ +- static const struct earlycon_id __UNIQUE_ID(__earlycon_##_name) \ +- EARLYCON_USED_OR_UNUSED __section(__earlycon_table) \ ++#define _OF_EARLYCON_DECLARE(_name, compat, fn, unique_id) \ ++ static const struct earlycon_id unique_id \ ++ EARLYCON_USED_OR_UNUSED __initconst \ + = { .name = __stringify(_name), \ + .compatible = compat, \ +- .setup = fn } ++ .setup = fn }; \ ++ static const struct earlycon_id EARLYCON_USED_OR_UNUSED \ ++ __section(__earlycon_table) \ ++ * const __PASTE(__p, unique_id) = &unique_id ++ ++#define OF_EARLYCON_DECLARE(_name, compat, fn) \ ++ _OF_EARLYCON_DECLARE(_name, compat, fn, \ ++ __UNIQUE_ID(__earlycon_##_name)) + + #define EARLYCON_DECLARE(_name, fn) OF_EARLYCON_DECLARE(_name, "", fn) + +diff --git a/include/linux/tty.h b/include/linux/tty.h +index 47f8af22f216..1dd587ba6d88 100644 +--- a/include/linux/tty.h ++++ b/include/linux/tty.h +@@ -701,7 +701,7 @@ extern int tty_unregister_ldisc(int disc); + extern int tty_set_ldisc(struct tty_struct *tty, int disc); + extern int tty_ldisc_setup(struct tty_struct *tty, struct tty_struct *o_tty); + extern void tty_ldisc_release(struct tty_struct *tty); +-extern void tty_ldisc_init(struct tty_struct *tty); ++extern int __must_check tty_ldisc_init(struct tty_struct *tty); + extern void tty_ldisc_deinit(struct tty_struct *tty); + extern int tty_ldisc_receive_buf(struct tty_ldisc *ld, const unsigned char *p, + char *f, int count); +diff --git a/include/linux/vbox_utils.h b/include/linux/vbox_utils.h +index c71def6b310f..a240ed2a0372 100644 +--- a/include/linux/vbox_utils.h ++++ b/include/linux/vbox_utils.h +@@ -24,24 +24,6 @@ __printf(1, 2) void vbg_debug(const char *fmt, ...); + #define vbg_debug pr_debug + #endif + +-/** +- * Allocate memory for generic request and initialize the request header. +- * +- * Return: the allocated memory +- * @len: Size of memory block required for the request. +- * @req_type: The generic request type. +- */ +-void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type); +- +-/** +- * Perform a generic request. +- * +- * Return: VBox status code +- * @gdev: The Guest extension device. 
+- * @req: Pointer to the request structure. +- */ +-int vbg_req_perform(struct vbg_dev *gdev, void *req); +- + int vbg_hgcm_connect(struct vbg_dev *gdev, + struct vmmdev_hgcm_service_location *loc, + u32 *client_id, int *vbox_status); +@@ -52,11 +34,6 @@ int vbg_hgcm_call(struct vbg_dev *gdev, u32 client_id, u32 function, + u32 timeout_ms, struct vmmdev_hgcm_function_parameter *parms, + u32 parm_count, int *vbox_status); + +-int vbg_hgcm_call32( +- struct vbg_dev *gdev, u32 client_id, u32 function, u32 timeout_ms, +- struct vmmdev_hgcm_function_parameter32 *parm32, u32 parm_count, +- int *vbox_status); +- + /** + * Convert a VirtualBox status code to a standard Linux kernel return value. + * Return: 0 or negative errno value. +diff --git a/include/linux/virtio.h b/include/linux/virtio.h +index 988c7355bc22..fa1b5da2804e 100644 +--- a/include/linux/virtio.h ++++ b/include/linux/virtio.h +@@ -157,6 +157,9 @@ int virtio_device_freeze(struct virtio_device *dev); + int virtio_device_restore(struct virtio_device *dev); + #endif + ++#define virtio_device_for_each_vq(vdev, vq) \ ++ list_for_each_entry(vq, &vdev->vqs, list) ++ + /** + * virtio_driver - operations for a virtio I/O driver + * @driver: underlying device driver (populate name and owner). +diff --git a/include/sound/control.h b/include/sound/control.h +index ca13a44ae9d4..6011a58d3e20 100644 +--- a/include/sound/control.h ++++ b/include/sound/control.h +@@ -23,6 +23,7 @@ + */ + + #include ++#include + #include + + #define snd_kcontrol_chip(kcontrol) ((kcontrol)->private_data) +@@ -148,12 +149,14 @@ int snd_ctl_get_preferred_subdevice(struct snd_card *card, int type); + + static inline unsigned int snd_ctl_get_ioffnum(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id) + { +- return id->numid - kctl->id.numid; ++ unsigned int ioff = id->numid - kctl->id.numid; ++ return array_index_nospec(ioff, kctl->count); + } + + static inline unsigned int snd_ctl_get_ioffidx(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id) + { +- return id->index - kctl->id.index; ++ unsigned int ioff = id->index - kctl->id.index; ++ return array_index_nospec(ioff, kctl->count); + } + + static inline unsigned int snd_ctl_get_ioff(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id) +diff --git a/kernel/module.c b/kernel/module.c +index e42764acedb4..bbb45c038321 100644 +--- a/kernel/module.c ++++ b/kernel/module.c +@@ -1472,7 +1472,8 @@ static ssize_t module_sect_show(struct module_attribute *mattr, + { + struct module_sect_attr *sattr = + container_of(mattr, struct module_sect_attr, mattr); +- return sprintf(buf, "0x%pK\n", (void *)sattr->address); ++ return sprintf(buf, "0x%px\n", kptr_restrict < 2 ? 
++ (void *)sattr->address : NULL); + } + + static void free_sect_attrs(struct module_sect_attrs *sect_attrs) +diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c +index 29a5733eff83..741eadbeba58 100644 +--- a/kernel/time/tick-sched.c ++++ b/kernel/time/tick-sched.c +@@ -797,12 +797,13 @@ static ktime_t tick_nohz_stop_sched_tick(struct tick_sched *ts, + goto out; + } + +- hrtimer_set_expires(&ts->sched_timer, tick); +- +- if (ts->nohz_mode == NOHZ_MODE_HIGHRES) +- hrtimer_start_expires(&ts->sched_timer, HRTIMER_MODE_ABS_PINNED); +- else ++ if (ts->nohz_mode == NOHZ_MODE_HIGHRES) { ++ hrtimer_start(&ts->sched_timer, tick, HRTIMER_MODE_ABS_PINNED); ++ } else { ++ hrtimer_set_expires(&ts->sched_timer, tick); + tick_program_event(tick, 1); ++ } ++ + out: + /* + * Update the estimated sleep length until the next timer +diff --git a/lib/kobject.c b/lib/kobject.c +index afd5a3fc6123..d20a97a7e168 100644 +--- a/lib/kobject.c ++++ b/lib/kobject.c +@@ -232,14 +232,12 @@ static int kobject_add_internal(struct kobject *kobj) + + /* be noisy on error issues */ + if (error == -EEXIST) +- WARN(1, "%s failed for %s with " +- "-EEXIST, don't try to register things with " +- "the same name in the same directory.\n", +- __func__, kobject_name(kobj)); ++ pr_err("%s failed for %s with -EEXIST, don't try to register things with the same name in the same directory.\n", ++ __func__, kobject_name(kobj)); + else +- WARN(1, "%s failed for %s (error: %d parent: %s)\n", +- __func__, kobject_name(kobj), error, +- parent ? kobject_name(parent) : "'none'"); ++ pr_err("%s failed for %s (error: %d parent: %s)\n", ++ __func__, kobject_name(kobj), error, ++ parent ? kobject_name(parent) : "'none'"); + } else + kobj->state_in_sysfs = 1; + +diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c +index 8a4d3758030b..02572130a77a 100644 +--- a/net/ceph/messenger.c ++++ b/net/ceph/messenger.c +@@ -2531,6 +2531,11 @@ static int try_write(struct ceph_connection *con) + int ret = 1; + + dout("try_write start %p state %lu\n", con, con->state); ++ if (con->state != CON_STATE_PREOPEN && ++ con->state != CON_STATE_CONNECTING && ++ con->state != CON_STATE_NEGOTIATING && ++ con->state != CON_STATE_OPEN) ++ return 0; + + more: + dout("try_write out_kvec_bytes %d\n", con->out_kvec_bytes); +@@ -2556,6 +2561,8 @@ static int try_write(struct ceph_connection *con) + } + + more_kvec: ++ BUG_ON(!con->sock); ++ + /* kvec data queued? */ + if (con->out_kvec_left) { + ret = write_partial_kvec(con); +diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c +index 1547107f4854..4887443f52dd 100644 +--- a/net/ceph/mon_client.c ++++ b/net/ceph/mon_client.c +@@ -209,6 +209,14 @@ static void reopen_session(struct ceph_mon_client *monc) + __open_session(monc); + } + ++static void un_backoff(struct ceph_mon_client *monc) ++{ ++ monc->hunt_mult /= 2; /* reduce by 50% */ ++ if (monc->hunt_mult < 1) ++ monc->hunt_mult = 1; ++ dout("%s hunt_mult now %d\n", __func__, monc->hunt_mult); ++} ++ + /* + * Reschedule delayed work timer. 
+ */ +@@ -963,6 +971,7 @@ static void delayed_work(struct work_struct *work) + if (!monc->hunting) { + ceph_con_keepalive(&monc->con); + __validate_auth(monc); ++ un_backoff(monc); + } + + if (is_auth && +@@ -1123,9 +1132,8 @@ static void finish_hunting(struct ceph_mon_client *monc) + dout("%s found mon%d\n", __func__, monc->cur_mon); + monc->hunting = false; + monc->had_a_connection = true; +- monc->hunt_mult /= 2; /* reduce by 50% */ +- if (monc->hunt_mult < 1) +- monc->hunt_mult = 1; ++ un_backoff(monc); ++ __schedule_delayed(monc); + } + } + +diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c +index b719d0bd833e..06d7c40af570 100644 +--- a/sound/core/pcm_compat.c ++++ b/sound/core/pcm_compat.c +@@ -27,10 +27,11 @@ static int snd_pcm_ioctl_delay_compat(struct snd_pcm_substream *substream, + s32 __user *src) + { + snd_pcm_sframes_t delay; ++ int err; + +- delay = snd_pcm_delay(substream); +- if (delay < 0) +- return delay; ++ err = snd_pcm_delay(substream, &delay); ++ if (err) ++ return err; + if (put_user(delay, src)) + return -EFAULT; + return 0; +diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c +index d18b3982548b..5ea0c1a3bbe6 100644 +--- a/sound/core/pcm_native.c ++++ b/sound/core/pcm_native.c +@@ -2687,7 +2687,8 @@ static int snd_pcm_hwsync(struct snd_pcm_substream *substream) + return err; + } + +-static snd_pcm_sframes_t snd_pcm_delay(struct snd_pcm_substream *substream) ++static int snd_pcm_delay(struct snd_pcm_substream *substream, ++ snd_pcm_sframes_t *delay) + { + struct snd_pcm_runtime *runtime = substream->runtime; + int err; +@@ -2703,7 +2704,9 @@ static snd_pcm_sframes_t snd_pcm_delay(struct snd_pcm_substream *substream) + n += runtime->delay; + } + snd_pcm_stream_unlock_irq(substream); +- return err < 0 ? 
err : n; ++ if (!err) ++ *delay = n; ++ return err; + } + + static int snd_pcm_sync_ptr(struct snd_pcm_substream *substream, +@@ -2746,6 +2749,7 @@ static int snd_pcm_sync_ptr(struct snd_pcm_substream *substream, + sync_ptr.s.status.hw_ptr = status->hw_ptr; + sync_ptr.s.status.tstamp = status->tstamp; + sync_ptr.s.status.suspended_state = status->suspended_state; ++ sync_ptr.s.status.audio_tstamp = status->audio_tstamp; + snd_pcm_stream_unlock_irq(substream); + if (copy_to_user(_sync_ptr, &sync_ptr, sizeof(sync_ptr))) + return -EFAULT; +@@ -2911,11 +2915,13 @@ static int snd_pcm_common_ioctl(struct file *file, + return snd_pcm_hwsync(substream); + case SNDRV_PCM_IOCTL_DELAY: + { +- snd_pcm_sframes_t delay = snd_pcm_delay(substream); ++ snd_pcm_sframes_t delay; + snd_pcm_sframes_t __user *res = arg; ++ int err; + +- if (delay < 0) +- return delay; ++ err = snd_pcm_delay(substream, &delay); ++ if (err) ++ return err; + if (put_user(delay, res)) + return -EFAULT; + return 0; +@@ -3003,13 +3009,7 @@ int snd_pcm_kernel_ioctl(struct snd_pcm_substream *substream, + case SNDRV_PCM_IOCTL_DROP: + return snd_pcm_drop(substream); + case SNDRV_PCM_IOCTL_DELAY: +- { +- result = snd_pcm_delay(substream); +- if (result < 0) +- return result; +- *frames = result; +- return 0; +- } ++ return snd_pcm_delay(substream, frames); + default: + return -EINVAL; + } +diff --git a/sound/core/seq/oss/seq_oss_event.c b/sound/core/seq/oss/seq_oss_event.c +index c3908862bc8b..86ca584c27b2 100644 +--- a/sound/core/seq/oss/seq_oss_event.c ++++ b/sound/core/seq/oss/seq_oss_event.c +@@ -26,6 +26,7 @@ + #include + #include "seq_oss_readq.h" + #include "seq_oss_writeq.h" ++#include + + + /* +@@ -287,10 +288,10 @@ note_on_event(struct seq_oss_devinfo *dp, int dev, int ch, int note, int vel, st + { + struct seq_oss_synthinfo *info; + +- if (!snd_seq_oss_synth_is_valid(dp, dev)) ++ info = snd_seq_oss_synth_info(dp, dev); ++ if (!info) + return -ENXIO; + +- info = &dp->synths[dev]; + switch (info->arg.event_passing) { + case SNDRV_SEQ_OSS_PROCESS_EVENTS: + if (! info->ch || ch < 0 || ch >= info->nr_voices) { +@@ -298,6 +299,7 @@ note_on_event(struct seq_oss_devinfo *dp, int dev, int ch, int note, int vel, st + return set_note_event(dp, dev, SNDRV_SEQ_EVENT_NOTEON, ch, note, vel, ev); + } + ++ ch = array_index_nospec(ch, info->nr_voices); + if (note == 255 && info->ch[ch].note >= 0) { + /* volume control */ + int type; +@@ -347,10 +349,10 @@ note_off_event(struct seq_oss_devinfo *dp, int dev, int ch, int note, int vel, s + { + struct seq_oss_synthinfo *info; + +- if (!snd_seq_oss_synth_is_valid(dp, dev)) ++ info = snd_seq_oss_synth_info(dp, dev); ++ if (!info) + return -ENXIO; + +- info = &dp->synths[dev]; + switch (info->arg.event_passing) { + case SNDRV_SEQ_OSS_PROCESS_EVENTS: + if (! info->ch || ch < 0 || ch >= info->nr_voices) { +@@ -358,6 +360,7 @@ note_off_event(struct seq_oss_devinfo *dp, int dev, int ch, int note, int vel, s + return set_note_event(dp, dev, SNDRV_SEQ_EVENT_NOTEON, ch, note, vel, ev); + } + ++ ch = array_index_nospec(ch, info->nr_voices); + if (info->ch[ch].note >= 0) { + note = info->ch[ch].note; + info->ch[ch].vel = 0; +@@ -381,7 +384,7 @@ note_off_event(struct seq_oss_devinfo *dp, int dev, int ch, int note, int vel, s + static int + set_note_event(struct seq_oss_devinfo *dp, int dev, int type, int ch, int note, int vel, struct snd_seq_event *ev) + { +- if (! 
snd_seq_oss_synth_is_valid(dp, dev)) ++ if (!snd_seq_oss_synth_info(dp, dev)) + return -ENXIO; + + ev->type = type; +@@ -399,7 +402,7 @@ set_note_event(struct seq_oss_devinfo *dp, int dev, int type, int ch, int note, + static int + set_control_event(struct seq_oss_devinfo *dp, int dev, int type, int ch, int param, int val, struct snd_seq_event *ev) + { +- if (! snd_seq_oss_synth_is_valid(dp, dev)) ++ if (!snd_seq_oss_synth_info(dp, dev)) + return -ENXIO; + + ev->type = type; +diff --git a/sound/core/seq/oss/seq_oss_midi.c b/sound/core/seq/oss/seq_oss_midi.c +index b30b2139e3f0..9debd1b8fd28 100644 +--- a/sound/core/seq/oss/seq_oss_midi.c ++++ b/sound/core/seq/oss/seq_oss_midi.c +@@ -29,6 +29,7 @@ + #include "../seq_lock.h" + #include + #include ++#include + + + /* +@@ -315,6 +316,7 @@ get_mididev(struct seq_oss_devinfo *dp, int dev) + { + if (dev < 0 || dev >= dp->max_mididev) + return NULL; ++ dev = array_index_nospec(dev, dp->max_mididev); + return get_mdev(dev); + } + +diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c +index cd0e0ebbfdb1..278ebb993122 100644 +--- a/sound/core/seq/oss/seq_oss_synth.c ++++ b/sound/core/seq/oss/seq_oss_synth.c +@@ -26,6 +26,7 @@ + #include + #include + #include ++#include + + /* + * constants +@@ -339,17 +340,13 @@ snd_seq_oss_synth_cleanup(struct seq_oss_devinfo *dp) + dp->max_synthdev = 0; + } + +-/* +- * check if the specified device is MIDI mapped device +- */ +-static int +-is_midi_dev(struct seq_oss_devinfo *dp, int dev) ++static struct seq_oss_synthinfo * ++get_synthinfo_nospec(struct seq_oss_devinfo *dp, int dev) + { + if (dev < 0 || dev >= dp->max_synthdev) +- return 0; +- if (dp->synths[dev].is_midi) +- return 1; +- return 0; ++ return NULL; ++ dev = array_index_nospec(dev, SNDRV_SEQ_OSS_MAX_SYNTH_DEVS); ++ return &dp->synths[dev]; + } + + /* +@@ -359,14 +356,20 @@ static struct seq_oss_synth * + get_synthdev(struct seq_oss_devinfo *dp, int dev) + { + struct seq_oss_synth *rec; +- if (dev < 0 || dev >= dp->max_synthdev) +- return NULL; +- if (! dp->synths[dev].opened) ++ struct seq_oss_synthinfo *info = get_synthinfo_nospec(dp, dev); ++ ++ if (!info) + return NULL; +- if (dp->synths[dev].is_midi) +- return &midi_synth_dev; +- if ((rec = get_sdev(dev)) == NULL) ++ if (!info->opened) + return NULL; ++ if (info->is_midi) { ++ rec = &midi_synth_dev; ++ snd_use_lock_use(&rec->use_lock); ++ } else { ++ rec = get_sdev(dev); ++ if (!rec) ++ return NULL; ++ } + if (! rec->opened) { + snd_use_lock_free(&rec->use_lock); + return NULL; +@@ -402,10 +405,8 @@ snd_seq_oss_synth_reset(struct seq_oss_devinfo *dp, int dev) + struct seq_oss_synth *rec; + struct seq_oss_synthinfo *info; + +- if (snd_BUG_ON(dev < 0 || dev >= dp->max_synthdev)) +- return; +- info = &dp->synths[dev]; +- if (! 
info->opened) ++ info = get_synthinfo_nospec(dp, dev); ++ if (!info || !info->opened) + return; + if (info->sysex) + info->sysex->len = 0; /* reset sysex */ +@@ -454,12 +455,14 @@ snd_seq_oss_synth_load_patch(struct seq_oss_devinfo *dp, int dev, int fmt, + const char __user *buf, int p, int c) + { + struct seq_oss_synth *rec; ++ struct seq_oss_synthinfo *info; + int rc; + +- if (dev < 0 || dev >= dp->max_synthdev) ++ info = get_synthinfo_nospec(dp, dev); ++ if (!info) + return -ENXIO; + +- if (is_midi_dev(dp, dev)) ++ if (info->is_midi) + return 0; + if ((rec = get_synthdev(dp, dev)) == NULL) + return -ENXIO; +@@ -467,24 +470,25 @@ snd_seq_oss_synth_load_patch(struct seq_oss_devinfo *dp, int dev, int fmt, + if (rec->oper.load_patch == NULL) + rc = -ENXIO; + else +- rc = rec->oper.load_patch(&dp->synths[dev].arg, fmt, buf, p, c); ++ rc = rec->oper.load_patch(&info->arg, fmt, buf, p, c); + snd_use_lock_free(&rec->use_lock); + return rc; + } + + /* +- * check if the device is valid synth device ++ * check if the device is valid synth device and return the synth info + */ +-int +-snd_seq_oss_synth_is_valid(struct seq_oss_devinfo *dp, int dev) ++struct seq_oss_synthinfo * ++snd_seq_oss_synth_info(struct seq_oss_devinfo *dp, int dev) + { + struct seq_oss_synth *rec; ++ + rec = get_synthdev(dp, dev); + if (rec) { + snd_use_lock_free(&rec->use_lock); +- return 1; ++ return get_synthinfo_nospec(dp, dev); + } +- return 0; ++ return NULL; + } + + +@@ -499,16 +503,18 @@ snd_seq_oss_synth_sysex(struct seq_oss_devinfo *dp, int dev, unsigned char *buf, + int i, send; + unsigned char *dest; + struct seq_oss_synth_sysex *sysex; ++ struct seq_oss_synthinfo *info; + +- if (! snd_seq_oss_synth_is_valid(dp, dev)) ++ info = snd_seq_oss_synth_info(dp, dev); ++ if (!info) + return -ENXIO; + +- sysex = dp->synths[dev].sysex; ++ sysex = info->sysex; + if (sysex == NULL) { + sysex = kzalloc(sizeof(*sysex), GFP_KERNEL); + if (sysex == NULL) + return -ENOMEM; +- dp->synths[dev].sysex = sysex; ++ info->sysex = sysex; + } + + send = 0; +@@ -553,10 +559,12 @@ snd_seq_oss_synth_sysex(struct seq_oss_devinfo *dp, int dev, unsigned char *buf, + int + snd_seq_oss_synth_addr(struct seq_oss_devinfo *dp, int dev, struct snd_seq_event *ev) + { +- if (! snd_seq_oss_synth_is_valid(dp, dev)) ++ struct seq_oss_synthinfo *info = snd_seq_oss_synth_info(dp, dev); ++ ++ if (!info) + return -EINVAL; +- snd_seq_oss_fill_addr(dp, ev, dp->synths[dev].arg.addr.client, +- dp->synths[dev].arg.addr.port); ++ snd_seq_oss_fill_addr(dp, ev, info->arg.addr.client, ++ info->arg.addr.port); + return 0; + } + +@@ -568,16 +576,18 @@ int + snd_seq_oss_synth_ioctl(struct seq_oss_devinfo *dp, int dev, unsigned int cmd, unsigned long addr) + { + struct seq_oss_synth *rec; ++ struct seq_oss_synthinfo *info; + int rc; + +- if (is_midi_dev(dp, dev)) ++ info = get_synthinfo_nospec(dp, dev); ++ if (!info || info->is_midi) + return -ENXIO; + if ((rec = get_synthdev(dp, dev)) == NULL) + return -ENXIO; + if (rec->oper.ioctl == NULL) + rc = -ENXIO; + else +- rc = rec->oper.ioctl(&dp->synths[dev].arg, cmd, addr); ++ rc = rec->oper.ioctl(&info->arg, cmd, addr); + snd_use_lock_free(&rec->use_lock); + return rc; + } +@@ -589,7 +599,10 @@ snd_seq_oss_synth_ioctl(struct seq_oss_devinfo *dp, int dev, unsigned int cmd, u + int + snd_seq_oss_synth_raw_event(struct seq_oss_devinfo *dp, int dev, unsigned char *data, struct snd_seq_event *ev) + { +- if (! 
snd_seq_oss_synth_is_valid(dp, dev) || is_midi_dev(dp, dev)) ++ struct seq_oss_synthinfo *info; ++ ++ info = snd_seq_oss_synth_info(dp, dev); ++ if (!info || info->is_midi) + return -ENXIO; + ev->type = SNDRV_SEQ_EVENT_OSS; + memcpy(ev->data.raw8.d, data, 8); +diff --git a/sound/core/seq/oss/seq_oss_synth.h b/sound/core/seq/oss/seq_oss_synth.h +index 74ac55f166b6..a63f9e22974d 100644 +--- a/sound/core/seq/oss/seq_oss_synth.h ++++ b/sound/core/seq/oss/seq_oss_synth.h +@@ -37,7 +37,8 @@ void snd_seq_oss_synth_cleanup(struct seq_oss_devinfo *dp); + void snd_seq_oss_synth_reset(struct seq_oss_devinfo *dp, int dev); + int snd_seq_oss_synth_load_patch(struct seq_oss_devinfo *dp, int dev, int fmt, + const char __user *buf, int p, int c); +-int snd_seq_oss_synth_is_valid(struct seq_oss_devinfo *dp, int dev); ++struct seq_oss_synthinfo *snd_seq_oss_synth_info(struct seq_oss_devinfo *dp, ++ int dev); + int snd_seq_oss_synth_sysex(struct seq_oss_devinfo *dp, int dev, unsigned char *buf, + struct snd_seq_event *ev); + int snd_seq_oss_synth_addr(struct seq_oss_devinfo *dp, int dev, struct snd_seq_event *ev); +diff --git a/sound/drivers/opl3/opl3_synth.c b/sound/drivers/opl3/opl3_synth.c +index ddcc1a325a61..42920a243328 100644 +--- a/sound/drivers/opl3/opl3_synth.c ++++ b/sound/drivers/opl3/opl3_synth.c +@@ -21,6 +21,7 @@ + + #include + #include ++#include + #include + #include + +@@ -448,7 +449,7 @@ static int snd_opl3_set_voice(struct snd_opl3 * opl3, struct snd_dm_fm_voice * v + { + unsigned short reg_side; + unsigned char op_offset; +- unsigned char voice_offset; ++ unsigned char voice_offset, voice_op; + + unsigned short opl3_reg; + unsigned char reg_val; +@@ -473,7 +474,9 @@ static int snd_opl3_set_voice(struct snd_opl3 * opl3, struct snd_dm_fm_voice * v + voice_offset = voice->voice - MAX_OPL2_VOICES; + } + /* Get register offset of operator */ +- op_offset = snd_opl3_regmap[voice_offset][voice->op]; ++ voice_offset = array_index_nospec(voice_offset, MAX_OPL2_VOICES); ++ voice_op = array_index_nospec(voice->op, 4); ++ op_offset = snd_opl3_regmap[voice_offset][voice_op]; + + reg_val = 0x00; + /* Set amplitude modulation (tremolo) effect */ +diff --git a/sound/firewire/dice/dice-stream.c b/sound/firewire/dice/dice-stream.c +index 8573289c381e..928a255bfc35 100644 +--- a/sound/firewire/dice/dice-stream.c ++++ b/sound/firewire/dice/dice-stream.c +@@ -435,7 +435,7 @@ int snd_dice_stream_init_duplex(struct snd_dice *dice) + err = init_stream(dice, AMDTP_IN_STREAM, i); + if (err < 0) { + for (; i >= 0; i--) +- destroy_stream(dice, AMDTP_OUT_STREAM, i); ++ destroy_stream(dice, AMDTP_IN_STREAM, i); + goto end; + } + } +diff --git a/sound/firewire/dice/dice.c b/sound/firewire/dice/dice.c +index 4ddb4cdd054b..96bb01b6b751 100644 +--- a/sound/firewire/dice/dice.c ++++ b/sound/firewire/dice/dice.c +@@ -14,7 +14,7 @@ MODULE_LICENSE("GPL v2"); + #define OUI_WEISS 0x001c6a + #define OUI_LOUD 0x000ff2 + #define OUI_FOCUSRITE 0x00130e +-#define OUI_TCELECTRONIC 0x001486 ++#define OUI_TCELECTRONIC 0x000166 + + #define DICE_CATEGORY_ID 0x04 + #define WEISS_CATEGORY_ID 0x00 +diff --git a/sound/pci/asihpi/hpimsginit.c b/sound/pci/asihpi/hpimsginit.c +index 7eb617175fde..a31a70dccecf 100644 +--- a/sound/pci/asihpi/hpimsginit.c ++++ b/sound/pci/asihpi/hpimsginit.c +@@ -23,6 +23,7 @@ + + #include "hpi_internal.h" + #include "hpimsginit.h" ++#include + + /* The actual message size for each object type */ + static u16 msg_size[HPI_OBJ_MAXINDEX + 1] = HPI_MESSAGE_SIZE_BY_OBJECT; +@@ -39,10 +40,12 @@ static void 
hpi_init_message(struct hpi_message *phm, u16 object, + { + u16 size; + +- if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) ++ if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) { ++ object = array_index_nospec(object, HPI_OBJ_MAXINDEX + 1); + size = msg_size[object]; +- else ++ } else { + size = sizeof(*phm); ++ } + + memset(phm, 0, size); + phm->size = size; +@@ -66,10 +69,12 @@ void hpi_init_response(struct hpi_response *phr, u16 object, u16 function, + { + u16 size; + +- if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) ++ if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) { ++ object = array_index_nospec(object, HPI_OBJ_MAXINDEX + 1); + size = res_size[object]; +- else ++ } else { + size = sizeof(*phr); ++ } + + memset(phr, 0, sizeof(*phr)); + phr->size = size; +diff --git a/sound/pci/asihpi/hpioctl.c b/sound/pci/asihpi/hpioctl.c +index 5badd08e1d69..b1a2a7ea4172 100644 +--- a/sound/pci/asihpi/hpioctl.c ++++ b/sound/pci/asihpi/hpioctl.c +@@ -33,6 +33,7 @@ + #include + #include + #include ++#include + + #ifdef MODULE_FIRMWARE + MODULE_FIRMWARE("asihpi/dsp5000.bin"); +@@ -186,7 +187,8 @@ long asihpi_hpi_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + struct hpi_adapter *pa = NULL; + + if (hm->h.adapter_index < ARRAY_SIZE(adapters)) +- pa = &adapters[hm->h.adapter_index]; ++ pa = &adapters[array_index_nospec(hm->h.adapter_index, ++ ARRAY_SIZE(adapters))]; + + if (!pa || !pa->adapter || !pa->adapter->type) { + hpi_init_response(&hr->r0, hm->h.object, +diff --git a/sound/pci/hda/hda_hwdep.c b/sound/pci/hda/hda_hwdep.c +index 57df06e76968..cc009a4a3d1d 100644 +--- a/sound/pci/hda/hda_hwdep.c ++++ b/sound/pci/hda/hda_hwdep.c +@@ -21,6 +21,7 @@ + #include + #include + #include ++#include + #include + #include "hda_codec.h" + #include "hda_local.h" +@@ -51,7 +52,16 @@ static int get_wcap_ioctl(struct hda_codec *codec, + + if (get_user(verb, &arg->verb)) + return -EFAULT; +- res = get_wcaps(codec, verb >> 24); ++ /* open-code get_wcaps(verb>>24) with nospec */ ++ verb >>= 24; ++ if (verb < codec->core.start_nid || ++ verb >= codec->core.start_nid + codec->core.num_nodes) { ++ res = 0; ++ } else { ++ verb -= codec->core.start_nid; ++ verb = array_index_nospec(verb, codec->core.num_nodes); ++ res = codec->wcaps[verb]; ++ } + if (put_user(res, &arg->res)) + return -EFAULT; + return 0; +diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c +index b4f1b6e88305..7d7eb1354eee 100644 +--- a/sound/pci/hda/patch_hdmi.c ++++ b/sound/pci/hda/patch_hdmi.c +@@ -1383,6 +1383,8 @@ static void hdmi_pcm_setup_pin(struct hdmi_spec *spec, + pcm = get_pcm_rec(spec, per_pin->pcm_idx); + else + return; ++ if (!pcm->pcm) ++ return; + if (!test_bit(per_pin->pcm_idx, &spec->pcm_in_use)) + return; + +@@ -2151,8 +2153,13 @@ static int generic_hdmi_build_controls(struct hda_codec *codec) + int dev, err; + int pin_idx, pcm_idx; + +- + for (pcm_idx = 0; pcm_idx < spec->pcm_used; pcm_idx++) { ++ if (!get_pcm_rec(spec, pcm_idx)->pcm) { ++ /* no PCM: mark this for skipping permanently */ ++ set_bit(pcm_idx, &spec->pcm_bitmap); ++ continue; ++ } ++ + err = generic_hdmi_build_jack(codec, pcm_idx); + if (err < 0) + return err; +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index fc77bf7a1544..8c238e51bb5a 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -331,6 +331,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec) + /* fallthrough */ + case 0x10ec0215: + case 0x10ec0233: ++ case 0x10ec0235: + case 0x10ec0236: + case 
0x10ec0255: + case 0x10ec0256: +@@ -6575,6 +6576,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), + SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), + SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), ++ SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), + SND_PCI_QUIRK(0x17aa, 0x3138, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), + SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), + SND_PCI_QUIRK(0x17aa, 0x3112, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), +@@ -7160,8 +7162,11 @@ static int patch_alc269(struct hda_codec *codec) + case 0x10ec0298: + spec->codec_variant = ALC269_TYPE_ALC298; + break; ++ case 0x10ec0235: + case 0x10ec0255: + spec->codec_variant = ALC269_TYPE_ALC255; ++ spec->shutup = alc256_shutup; ++ spec->init_hook = alc256_init; + break; + case 0x10ec0236: + case 0x10ec0256: +diff --git a/sound/pci/rme9652/hdspm.c b/sound/pci/rme9652/hdspm.c +index 4c59983158e0..11b5b5e0e058 100644 +--- a/sound/pci/rme9652/hdspm.c ++++ b/sound/pci/rme9652/hdspm.c +@@ -137,6 +137,7 @@ + #include + #include + #include ++#include + + #include + #include +@@ -5698,40 +5699,43 @@ static int snd_hdspm_channel_info(struct snd_pcm_substream *substream, + struct snd_pcm_channel_info *info) + { + struct hdspm *hdspm = snd_pcm_substream_chip(substream); ++ unsigned int channel = info->channel; + + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { +- if (snd_BUG_ON(info->channel >= hdspm->max_channels_out)) { ++ if (snd_BUG_ON(channel >= hdspm->max_channels_out)) { + dev_info(hdspm->card->dev, + "snd_hdspm_channel_info: output channel out of range (%d)\n", +- info->channel); ++ channel); + return -EINVAL; + } + +- if (hdspm->channel_map_out[info->channel] < 0) { ++ channel = array_index_nospec(channel, hdspm->max_channels_out); ++ if (hdspm->channel_map_out[channel] < 0) { + dev_info(hdspm->card->dev, + "snd_hdspm_channel_info: output channel %d mapped out\n", +- info->channel); ++ channel); + return -EINVAL; + } + +- info->offset = hdspm->channel_map_out[info->channel] * ++ info->offset = hdspm->channel_map_out[channel] * + HDSPM_CHANNEL_BUFFER_BYTES; + } else { +- if (snd_BUG_ON(info->channel >= hdspm->max_channels_in)) { ++ if (snd_BUG_ON(channel >= hdspm->max_channels_in)) { + dev_info(hdspm->card->dev, + "snd_hdspm_channel_info: input channel out of range (%d)\n", +- info->channel); ++ channel); + return -EINVAL; + } + +- if (hdspm->channel_map_in[info->channel] < 0) { ++ channel = array_index_nospec(channel, hdspm->max_channels_in); ++ if (hdspm->channel_map_in[channel] < 0) { + dev_info(hdspm->card->dev, + "snd_hdspm_channel_info: input channel %d mapped out\n", +- info->channel); ++ channel); + return -EINVAL; + } + +- info->offset = hdspm->channel_map_in[info->channel] * ++ info->offset = hdspm->channel_map_in[channel] * + HDSPM_CHANNEL_BUFFER_BYTES; + } + +diff --git a/sound/pci/rme9652/rme9652.c b/sound/pci/rme9652/rme9652.c +index df648b1d9217..edd765e22377 100644 +--- a/sound/pci/rme9652/rme9652.c ++++ b/sound/pci/rme9652/rme9652.c +@@ -26,6 +26,7 @@ + #include + #include + #include ++#include + + #include + #include +@@ -2071,9 +2072,10 @@ static int snd_rme9652_channel_info(struct snd_pcm_substream *substream, + if (snd_BUG_ON(info->channel >= RME9652_NCHANNELS)) + return -EINVAL; + +- if ((chn = 
rme9652->channel_map[info->channel]) < 0) { ++ chn = rme9652->channel_map[array_index_nospec(info->channel, ++ RME9652_NCHANNELS)]; ++ if (chn < 0) + return -EINVAL; +- } + + info->offset = chn * RME9652_CHANNEL_BUFFER_BYTES; + info->first = 0; +diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c +index cef79a1a620b..81268760b7a9 100644 +--- a/sound/soc/fsl/fsl_esai.c ++++ b/sound/soc/fsl/fsl_esai.c +@@ -144,6 +144,13 @@ static int fsl_esai_divisor_cal(struct snd_soc_dai *dai, bool tx, u32 ratio, + + psr = ratio <= 256 * maxfp ? ESAI_xCCR_xPSR_BYPASS : ESAI_xCCR_xPSR_DIV8; + ++ /* Do not loop-search if PM (1 ~ 256) alone can serve the ratio */ ++ if (ratio <= 256) { ++ pm = ratio; ++ fp = 1; ++ goto out; ++ } ++ + /* Set the max fluctuation -- 0.1% of the max devisor */ + savesub = (psr ? 1 : 8) * 256 * maxfp / 1000; + +diff --git a/sound/soc/omap/omap-dmic.c b/sound/soc/omap/omap-dmic.c +index 09db2aec12a3..b2f5d2fa354d 100644 +--- a/sound/soc/omap/omap-dmic.c ++++ b/sound/soc/omap/omap-dmic.c +@@ -281,7 +281,7 @@ static int omap_dmic_dai_trigger(struct snd_pcm_substream *substream, + static int omap_dmic_select_fclk(struct omap_dmic *dmic, int clk_id, + unsigned int freq) + { +- struct clk *parent_clk; ++ struct clk *parent_clk, *mux; + char *parent_clk_name; + int ret = 0; + +@@ -329,14 +329,21 @@ static int omap_dmic_select_fclk(struct omap_dmic *dmic, int clk_id, + return -ENODEV; + } + ++ mux = clk_get_parent(dmic->fclk); ++ if (IS_ERR(mux)) { ++ dev_err(dmic->dev, "can't get fck mux parent\n"); ++ clk_put(parent_clk); ++ return -ENODEV; ++ } ++ + mutex_lock(&dmic->mutex); + if (dmic->active) { + /* disable clock while reparenting */ + pm_runtime_put_sync(dmic->dev); +- ret = clk_set_parent(dmic->fclk, parent_clk); ++ ret = clk_set_parent(mux, parent_clk); + pm_runtime_get_sync(dmic->dev); + } else { +- ret = clk_set_parent(dmic->fclk, parent_clk); ++ ret = clk_set_parent(mux, parent_clk); + } + mutex_unlock(&dmic->mutex); + +@@ -349,6 +356,7 @@ static int omap_dmic_select_fclk(struct omap_dmic *dmic, int clk_id, + dmic->fclk_freq = freq; + + err_busy: ++ clk_put(mux); + clk_put(parent_clk); + + return ret; +diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c +index 9038b2e7df73..eaa03acd4686 100644 +--- a/sound/usb/mixer_maps.c ++++ b/sound/usb/mixer_maps.c +@@ -353,8 +353,11 @@ static struct usbmix_name_map bose_companion5_map[] = { + /* + * Dell usb dock with ALC4020 codec had a firmware problem where it got + * screwed up when zero volume is passed; just skip it as a workaround ++ * ++ * Also the extension unit gives an access error, so skip it as well. 
+ */ + static const struct usbmix_name_map dell_alc4020_map[] = { ++ { 4, NULL }, /* extension unit */ + { 16, NULL }, + { 19, NULL }, + { 0 } +diff --git a/tools/lib/str_error_r.c b/tools/lib/str_error_r.c +index d6d65537b0d9..6aad8308a0ac 100644 +--- a/tools/lib/str_error_r.c ++++ b/tools/lib/str_error_r.c +@@ -22,6 +22,6 @@ char *str_error_r(int errnum, char *buf, size_t buflen) + { + int err = strerror_r(errnum, buf, buflen); + if (err) +- snprintf(buf, buflen, "INTERNAL ERROR: strerror_r(%d, %p, %zd)=%d", errnum, buf, buflen, err); ++ snprintf(buf, buflen, "INTERNAL ERROR: strerror_r(%d, [buf], %zd)=%d", errnum, buflen, err); + return buf; + } +diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c +index 53572304843b..a6483b5576fd 100644 +--- a/virt/kvm/arm/arm.c ++++ b/virt/kvm/arm/arm.c +@@ -63,7 +63,7 @@ static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu); + static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1); + static u32 kvm_next_vmid; + static unsigned int kvm_vmid_bits __read_mostly; +-static DEFINE_SPINLOCK(kvm_vmid_lock); ++static DEFINE_RWLOCK(kvm_vmid_lock); + + static bool vgic_present; + +@@ -470,11 +470,16 @@ static void update_vttbr(struct kvm *kvm) + { + phys_addr_t pgd_phys; + u64 vmid; ++ bool new_gen; + +- if (!need_new_vmid_gen(kvm)) ++ read_lock(&kvm_vmid_lock); ++ new_gen = need_new_vmid_gen(kvm); ++ read_unlock(&kvm_vmid_lock); ++ ++ if (!new_gen) + return; + +- spin_lock(&kvm_vmid_lock); ++ write_lock(&kvm_vmid_lock); + + /* + * We need to re-check the vmid_gen here to ensure that if another vcpu +@@ -482,7 +487,7 @@ static void update_vttbr(struct kvm *kvm) + * use the same vmid. + */ + if (!need_new_vmid_gen(kvm)) { +- spin_unlock(&kvm_vmid_lock); ++ write_unlock(&kvm_vmid_lock); + return; + } + +@@ -516,7 +521,7 @@ static void update_vttbr(struct kvm *kvm) + vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits); + kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid; + +- spin_unlock(&kvm_vmid_lock); ++ write_unlock(&kvm_vmid_lock); + } + + static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) +diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c +index 6919352cbf15..c4762bef13c6 100644 +--- a/virt/kvm/arm/psci.c ++++ b/virt/kvm/arm/psci.c +@@ -18,6 +18,7 @@ + #include + #include + #include ++#include + #include + + #include +@@ -427,3 +428,62 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu) + smccc_set_retval(vcpu, val, 0, 0, 0); + return 1; + } ++ ++int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu) ++{ ++ return 1; /* PSCI version */ ++} ++ ++int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) ++{ ++ if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices)) ++ return -EFAULT; ++ ++ return 0; ++} ++ ++int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) ++{ ++ if (reg->id == KVM_REG_ARM_PSCI_VERSION) { ++ void __user *uaddr = (void __user *)(long)reg->addr; ++ u64 val; ++ ++ val = kvm_psci_version(vcpu, vcpu->kvm); ++ if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id))) ++ return -EFAULT; ++ ++ return 0; ++ } ++ ++ return -EINVAL; ++} ++ ++int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) ++{ ++ if (reg->id == KVM_REG_ARM_PSCI_VERSION) { ++ void __user *uaddr = (void __user *)(long)reg->addr; ++ bool wants_02; ++ u64 val; ++ ++ if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id))) ++ return -EFAULT; ++ ++ wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features); ++ ++ switch (val) { ++ case KVM_ARM_PSCI_0_1: ++ if (wants_02) ++ return 
-EINVAL; ++ vcpu->kvm->arch.psci_version = val; ++ return 0; ++ case KVM_ARM_PSCI_0_2: ++ case KVM_ARM_PSCI_1_0: ++ if (!wants_02) ++ return -EINVAL; ++ vcpu->kvm->arch.psci_version = val; ++ return 0; ++ } ++ } ++ ++ return -EINVAL; ++}
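
The final hunk above routes KVM_REG_ARM_PSCI_VERSION through kvm_arm_get_fw_reg()/kvm_arm_set_fw_reg(), so userspace can save and restore the exposed PSCI revision with the regular ONE_REG ioctls. The fragment below is only a minimal sketch of that flow, not part of the patch: it assumes a vcpu_fd already obtained through KVM_CREATE_VM/KVM_CREATE_VCPU and kernel headers new enough to define KVM_REG_ARM_PSCI_VERSION (introduced by this very series).

/*
 * Sketch only: read the PSCI version pseudo-register from a vcpu and
 * write a value back (e.g. on the destination side of a migration).
 * vcpu_fd is assumed to be an open KVM vcpu file descriptor.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int get_psci_version(int vcpu_fd, uint64_t *ver)
{
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM_PSCI_VERSION,
		.addr = (uintptr_t)ver,
	};

	/* 0 on success, -1 with errno set on failure */
	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}

static int set_psci_version(int vcpu_fd, uint64_t ver)
{
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM_PSCI_VERSION,
		.addr = (uintptr_t)&ver,
	};

	/* kvm_arm_set_fw_reg() above rejects values it does not recognise */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}

As the set_fw_reg hunk shows, only revisions the implementation knows about are accepted, and which ones are valid depends on whether the vcpu was created with the KVM_ARM_VCPU_PSCI_0_2 feature bit set; anything else returns -EINVAL.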