From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <gentoo-commits+bounces-1286463-garchives=archives.gentoo.org@lists.gentoo.org>
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
    (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
    (No client certificate requested)
    by finch.gentoo.org (Postfix) with ESMTPS id 92DCC1382C5
    for <garchives@archives.gentoo.org>; Wed, 26 May 2021 12:08:30 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
    by pigeon.gentoo.org (Postfix) with SMTP id 76593E1469;
    Wed, 26 May 2021 12:08:23 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183])
    (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
    (No client certificate requested)
    by pigeon.gentoo.org (Postfix) with ESMTPS id 29086E1469
    for <gentoo-commits@lists.gentoo.org>; Wed, 26 May 2021 12:08:23 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
    (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
    (No client certificate requested)
    by smtp.gentoo.org (Postfix) with ESMTPS id 7C952340E71
    for <gentoo-commits@lists.gentoo.org>; Wed, 26 May 2021 12:08:21 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
    by oystercatcher.gentoo.org (Postfix) with ESMTP id D3901581
    for <gentoo-commits@lists.gentoo.org>; Wed, 26 May 2021 12:08:19 +0000 (UTC)
From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" <mpagano@gentoo.org>
Message-ID: <1622030886.3e880daf8c1c2089abdf0b092cb6143b35fddb3c.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.12 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1006_linux-5.12.7.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 3e880daf8c1c2089abdf0b092cb6143b35fddb3c
X-VCS-Branch: 5.12
Date: Wed, 26 May 2021 12:08:19 +0000 (UTC)
Precedence: bulk
List-Post: <mailto:gentoo-commits@lists.gentoo.org>
List-Help: <mailto:gentoo-commits+help@lists.gentoo.org>
List-Unsubscribe: <mailto:gentoo-commits+unsubscribe@lists.gentoo.org>
List-Subscribe: <mailto:gentoo-commits+subscribe@lists.gentoo.org>
List-Id: Gentoo Linux mail <gentoo-commits.gentoo.org>
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 78ccc6ca-441e-453f-90f0-912635381813
X-Archives-Hash: f7ea534cdb485e1376411cfbab3540d5

commit:     3e880daf8c1c2089abdf0b092cb6143b35fddb3c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 26 12:08:06 2021 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 26 12:08:06 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3e880daf

Linux patch 5.12.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1006_linux-5.12.7.patch | 5240 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5244 insertions(+)

diff --git a/0000_README b/0000_README
index 528cc11..22c40ca 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-5.12.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.12.6
 
+Patch:  1006_linux-5.12.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.12.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1006_linux-5.12.7.patch b/1006_linux-5.12.7.patch new file mode 100644 index 0000000..6bf34f2 --- /dev/null +++ b/1006_linux-5.12.7.patch @@ -0,0 +1,5240 @@ +diff --git a/Documentation/powerpc/syscall64-abi.rst b/Documentation/powerpc/syscall64-abi.rst +index dabee3729e5a5..56490c4c0c07a 100644 +--- a/Documentation/powerpc/syscall64-abi.rst ++++ b/Documentation/powerpc/syscall64-abi.rst +@@ -109,6 +109,16 @@ auxiliary vector. + + scv 0 syscalls will always behave as PPC_FEATURE2_HTM_NOSC. + ++ptrace ++------ ++When ptracing system calls (PTRACE_SYSCALL), the pt_regs.trap value contains ++the system call type that can be used to distinguish between sc and scv 0 ++system calls, and the different register conventions can be accounted for. ++ ++If the value of (pt_regs.trap & 0xfff0) is 0xc00 then the system call was ++performed with the sc instruction, if it is 0x3000 then the system call was ++performed with the scv 0 instruction. ++ + vsyscall + ======== + +diff --git a/Makefile b/Makefile +index dd021135838b8..6a73dee7c2219 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 12 +-SUBLEVEL = 6 ++SUBLEVEL = 7 + EXTRAVERSION = + NAME = Frozen Wasteland + +diff --git a/arch/openrisc/kernel/setup.c b/arch/openrisc/kernel/setup.c +index 2416a9f915330..c6f9e7b9f7cb2 100644 +--- a/arch/openrisc/kernel/setup.c ++++ b/arch/openrisc/kernel/setup.c +@@ -278,6 +278,8 @@ void calibrate_delay(void) + pr_cont("%lu.%02lu BogoMIPS (lpj=%lu)\n", + loops_per_jiffy / (500000 / HZ), + (loops_per_jiffy / (5000 / HZ)) % 100, loops_per_jiffy); ++ ++ of_node_put(cpu); + } + + void __init setup_arch(char **cmdline_p) +diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c +index bf9b2310fc936..f3fa02b8838af 100644 +--- a/arch/openrisc/mm/init.c ++++ b/arch/openrisc/mm/init.c +@@ -75,7 +75,6 @@ static void __init map_ram(void) + /* These mark extents of read-only kernel pages... 
+ * ...from vmlinux.lds.S + */ +- struct memblock_region *region; + + v = PAGE_OFFSET; + +@@ -121,7 +120,7 @@ static void __init map_ram(void) + } + + printk(KERN_INFO "%s: Memory: 0x%x-0x%x\n", __func__, +- region->base, region->base + region->size); ++ start, end); + } + } + +diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h +index ed6086d57b22e..0c92b01a3c3c1 100644 +--- a/arch/powerpc/include/asm/hvcall.h ++++ b/arch/powerpc/include/asm/hvcall.h +@@ -446,6 +446,9 @@ + */ + long plpar_hcall_norets(unsigned long opcode, ...); + ++/* Variant which does not do hcall tracing */ ++long plpar_hcall_norets_notrace(unsigned long opcode, ...); ++ + /** + * plpar_hcall: - Make a pseries hypervisor call + * @opcode: The hypervisor call to make. +diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h +index 5d1726bb28e79..bcb7b5f917be6 100644 +--- a/arch/powerpc/include/asm/paravirt.h ++++ b/arch/powerpc/include/asm/paravirt.h +@@ -28,19 +28,35 @@ static inline u32 yield_count_of(int cpu) + return be32_to_cpu(yield_count); + } + ++/* ++ * Spinlock code confers and prods, so don't trace the hcalls because the ++ * tracing code takes spinlocks which can cause recursion deadlocks. ++ * ++ * These calls are made while the lock is not held: the lock slowpath yields if ++ * it can not acquire the lock, and unlock slow path might prod if a waiter has ++ * yielded). So this may not be a problem for simple spin locks because the ++ * tracing does not technically recurse on the lock, but we avoid it anyway. 
++ * ++ * However the queued spin lock contended path is more strictly ordered: the ++ * H_CONFER hcall is made after the task has queued itself on the lock, so then ++ * recursing on that lock will cause the task to then queue up again behind the ++ * first instance (or worse: queued spinlocks use tricks that assume a context ++ * never waits on more than one spinlock, so such recursion may cause random ++ * corruption in the lock code). ++ */ + static inline void yield_to_preempted(int cpu, u32 yield_count) + { +- plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count); ++ plpar_hcall_norets_notrace(H_CONFER, get_hard_smp_processor_id(cpu), yield_count); + } + + static inline void prod_cpu(int cpu) + { +- plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu)); ++ plpar_hcall_norets_notrace(H_PROD, get_hard_smp_processor_id(cpu)); + } + + static inline void yield_to_any(void) + { +- plpar_hcall_norets(H_CONFER, -1, 0); ++ plpar_hcall_norets_notrace(H_CONFER, -1, 0); + } + #else + static inline bool is_shared_processor(void) +diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h +index 1499e928ea6a6..5d8d397e928a0 100644 +--- a/arch/powerpc/include/asm/ptrace.h ++++ b/arch/powerpc/include/asm/ptrace.h +@@ -19,6 +19,7 @@ + #ifndef _ASM_POWERPC_PTRACE_H + #define _ASM_POWERPC_PTRACE_H + ++#include <linux/err.h> + #include <uapi/asm/ptrace.h> + #include <asm/asm-const.h> + +@@ -152,25 +153,6 @@ extern unsigned long profile_pc(struct pt_regs *regs); + long do_syscall_trace_enter(struct pt_regs *regs); + void do_syscall_trace_leave(struct pt_regs *regs); + +-#define kernel_stack_pointer(regs) ((regs)->gpr[1]) +-static inline int is_syscall_success(struct pt_regs *regs) +-{ +- return !(regs->ccr & 0x10000000); +-} +- +-static inline long regs_return_value(struct pt_regs *regs) +-{ +- if (is_syscall_success(regs)) +- return regs->gpr[3]; +- else +- return -regs->gpr[3]; +-} +- +-static inline void 
regs_set_return_value(struct pt_regs *regs, unsigned long rc) +-{ +- regs->gpr[3] = rc; +-} +- + #ifdef __powerpc64__ + #define user_mode(regs) ((((regs)->msr) >> MSR_PR_LG) & 0x1) + #else +@@ -252,6 +234,31 @@ static inline void set_trap_norestart(struct pt_regs *regs) + regs->trap |= 0x10; + } + ++#define kernel_stack_pointer(regs) ((regs)->gpr[1]) ++static inline int is_syscall_success(struct pt_regs *regs) ++{ ++ if (trap_is_scv(regs)) ++ return !IS_ERR_VALUE((unsigned long)regs->gpr[3]); ++ else ++ return !(regs->ccr & 0x10000000); ++} ++ ++static inline long regs_return_value(struct pt_regs *regs) ++{ ++ if (trap_is_scv(regs)) ++ return regs->gpr[3]; ++ ++ if (is_syscall_success(regs)) ++ return regs->gpr[3]; ++ else ++ return -regs->gpr[3]; ++} ++ ++static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc) ++{ ++ regs->gpr[3] = rc; ++} ++ + #define arch_has_single_step() (1) + #define arch_has_block_step() (true) + #define ARCH_HAS_USER_SINGLE_STEP_REPORT +diff --git a/arch/powerpc/include/asm/syscall.h b/arch/powerpc/include/asm/syscall.h +index fd1b518eed17c..ba0f88f3a30da 100644 +--- a/arch/powerpc/include/asm/syscall.h ++++ b/arch/powerpc/include/asm/syscall.h +@@ -41,11 +41,17 @@ static inline void syscall_rollback(struct task_struct *task, + static inline long syscall_get_error(struct task_struct *task, + struct pt_regs *regs) + { +- /* +- * If the system call failed, +- * regs->gpr[3] contains a positive ERRORCODE. +- */ +- return (regs->ccr & 0x10000000UL) ? -regs->gpr[3] : 0; ++ if (trap_is_scv(regs)) { ++ unsigned long error = regs->gpr[3]; ++ ++ return IS_ERR_VALUE(error) ? error : 0; ++ } else { ++ /* ++ * If the system call failed, ++ * regs->gpr[3] contains a positive ERRORCODE. ++ */ ++ return (regs->ccr & 0x10000000UL) ? 
-regs->gpr[3] : 0; ++ } + } + + static inline long syscall_get_return_value(struct task_struct *task, +@@ -58,18 +64,22 @@ static inline void syscall_set_return_value(struct task_struct *task, + struct pt_regs *regs, + int error, long val) + { +- /* +- * In the general case it's not obvious that we must deal with CCR +- * here, as the syscall exit path will also do that for us. However +- * there are some places, eg. the signal code, which check ccr to +- * decide if the value in r3 is actually an error. +- */ +- if (error) { +- regs->ccr |= 0x10000000L; +- regs->gpr[3] = error; ++ if (trap_is_scv(regs)) { ++ regs->gpr[3] = (long) error ?: val; + } else { +- regs->ccr &= ~0x10000000L; +- regs->gpr[3] = val; ++ /* ++ * In the general case it's not obvious that we must deal with ++ * CCR here, as the syscall exit path will also do that for us. ++ * However there are some places, eg. the signal code, which ++ * check ccr to decide if the value in r3 is actually an error. ++ */ ++ if (error) { ++ regs->ccr |= 0x10000000L; ++ regs->gpr[3] = error; ++ } else { ++ regs->ccr &= ~0x10000000L; ++ regs->gpr[3] = val; ++ } + } + } + +diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c +index 830fee91b2d99..c914fe8a2c67f 100644 +--- a/arch/powerpc/kernel/setup_64.c ++++ b/arch/powerpc/kernel/setup_64.c +@@ -369,11 +369,11 @@ void __init early_setup(unsigned long dt_ptr) + apply_feature_fixups(); + setup_feature_keys(); + +- early_ioremap_setup(); +- + /* Initialize the hash table or TLB handling */ + early_init_mmu(); + ++ early_ioremap_setup(); ++ + /* + * After firmware and early platform setup code has set things up, + * we note the SPR values for configurable control/performance +diff --git a/arch/powerpc/platforms/pseries/hvCall.S b/arch/powerpc/platforms/pseries/hvCall.S +index 2136e42833af3..8a2b8d64265bc 100644 +--- a/arch/powerpc/platforms/pseries/hvCall.S ++++ b/arch/powerpc/platforms/pseries/hvCall.S +@@ -102,6 +102,16 @@ END_FTR_SECTION(0, 
1); \ + #define HCALL_BRANCH(LABEL) + #endif + ++_GLOBAL_TOC(plpar_hcall_norets_notrace) ++ HMT_MEDIUM ++ ++ mfcr r0 ++ stw r0,8(r1) ++ HVSC /* invoke the hypervisor */ ++ lwz r0,8(r1) ++ mtcrf 0xff,r0 ++ blr /* return r3 = status */ ++ + _GLOBAL_TOC(plpar_hcall_norets) + HMT_MEDIUM + +diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c +index cd38bd421f381..d4aa6a46e1fa6 100644 +--- a/arch/powerpc/platforms/pseries/lpar.c ++++ b/arch/powerpc/platforms/pseries/lpar.c +@@ -1830,8 +1830,7 @@ void hcall_tracepoint_unregfunc(void) + + /* + * Since the tracing code might execute hcalls we need to guard against +- * recursion. One example of this are spinlocks calling H_YIELD on +- * shared processor partitions. ++ * recursion. + */ + static DEFINE_PER_CPU(unsigned int, hcall_trace_depth); + +diff --git a/arch/x86/Makefile b/arch/x86/Makefile +index 78faf9c7e3aed..1f2e5bfb9bb03 100644 +--- a/arch/x86/Makefile ++++ b/arch/x86/Makefile +@@ -170,11 +170,6 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1) + KBUILD_CFLAGS += $(call cc-option,-maccumulate-outgoing-args,) + endif + +-ifdef CONFIG_LTO_CLANG +-KBUILD_LDFLAGS += -plugin-opt=-code-model=kernel \ +- -plugin-opt=-stack-alignment=$(if $(CONFIG_X86_32),4,8) +-endif +- + # Workaround for a gcc prelease that unfortunately was shipped in a suse release + KBUILD_CFLAGS += -Wno-sign-compare + # +@@ -194,7 +189,12 @@ ifdef CONFIG_RETPOLINE + endif + endif + +-KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE) ++KBUILD_LDFLAGS += -m elf_$(UTS_MACHINE) ++ ++ifdef CONFIG_LTO_CLANG ++KBUILD_LDFLAGS += -plugin-opt=-code-model=kernel \ ++ -plugin-opt=-stack-alignment=$(if $(CONFIG_X86_32),4,8) ++endif + + ifdef CONFIG_X86_NEED_RELOCS + LDFLAGS_vmlinux := --emit-relocs --discard-none +diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S +index e94874f4bbc1d..ae1fe558a2d88 100644 +--- a/arch/x86/boot/compressed/head_64.S ++++ b/arch/x86/boot/compressed/head_64.S +@@ -172,11 +172,21 
@@ SYM_FUNC_START(startup_32) + */ + call get_sev_encryption_bit + xorl %edx, %edx ++#ifdef CONFIG_AMD_MEM_ENCRYPT + testl %eax, %eax + jz 1f + subl $32, %eax /* Encryption bit is always above bit 31 */ + bts %eax, %edx /* Set encryption mask for page tables */ ++ /* ++ * Mark SEV as active in sev_status so that startup32_check_sev_cbit() ++ * will do a check. The sev_status memory will be fully initialized ++ * with the contents of MSR_AMD_SEV_STATUS later in ++ * set_sev_encryption_mask(). For now it is sufficient to know that SEV ++ * is active. ++ */ ++ movl $1, rva(sev_status)(%ebp) + 1: ++#endif + + /* Initialize Page tables to 0 */ + leal rva(pgtable)(%ebx), %edi +@@ -261,6 +271,9 @@ SYM_FUNC_START(startup_32) + movl %esi, %edx + 1: + #endif ++ /* Check if the C-bit position is correct when SEV is active */ ++ call startup32_check_sev_cbit ++ + pushl $__KERNEL_CS + pushl %eax + +@@ -786,6 +799,78 @@ SYM_DATA_START_LOCAL(loaded_image_proto) + SYM_DATA_END(loaded_image_proto) + #endif + ++/* ++ * Check for the correct C-bit position when the startup_32 boot-path is used. ++ * ++ * The check makes use of the fact that all memory is encrypted when paging is ++ * disabled. The function creates 64 bits of random data using the RDRAND ++ * instruction. RDRAND is mandatory for SEV guests, so always available. If the ++ * hypervisor violates that the kernel will crash right here. ++ * ++ * The 64 bits of random data are stored to a memory location and at the same ++ * time kept in the %eax and %ebx registers. Since encryption is always active ++ * when paging is off the random data will be stored encrypted in main memory. ++ * ++ * Then paging is enabled. When the C-bit position is correct all memory is ++ * still mapped encrypted and comparing the register values with memory will ++ * succeed. An incorrect C-bit position will map all memory unencrypted, so that ++ * the compare will use the encrypted random data and fail. 
++ */ ++ __HEAD ++ .code32 ++SYM_FUNC_START(startup32_check_sev_cbit) ++#ifdef CONFIG_AMD_MEM_ENCRYPT ++ pushl %eax ++ pushl %ebx ++ pushl %ecx ++ pushl %edx ++ ++ /* Check for non-zero sev_status */ ++ movl rva(sev_status)(%ebp), %eax ++ testl %eax, %eax ++ jz 4f ++ ++ /* ++ * Get two 32-bit random values - Don't bail out if RDRAND fails ++ * because it is better to prevent forward progress if no random value ++ * can be gathered. ++ */ ++1: rdrand %eax ++ jnc 1b ++2: rdrand %ebx ++ jnc 2b ++ ++ /* Store to memory and keep it in the registers */ ++ movl %eax, rva(sev_check_data)(%ebp) ++ movl %ebx, rva(sev_check_data+4)(%ebp) ++ ++ /* Enable paging to see if encryption is active */ ++ movl %cr0, %edx /* Backup %cr0 in %edx */ ++ movl $(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */ ++ movl %ecx, %cr0 ++ ++ cmpl %eax, rva(sev_check_data)(%ebp) ++ jne 3f ++ cmpl %ebx, rva(sev_check_data+4)(%ebp) ++ jne 3f ++ ++ movl %edx, %cr0 /* Restore previous %cr0 */ ++ ++ jmp 4f ++ ++3: /* Check failed - hlt the machine */ ++ hlt ++ jmp 3b ++ ++4: ++ popl %edx ++ popl %ecx ++ popl %ebx ++ popl %eax ++#endif ++ ret ++SYM_FUNC_END(startup32_check_sev_cbit) ++ + /* + * Stack and heap for uncompression + */ +diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c +index c57ec8e279078..4c18e7fb58f58 100644 +--- a/arch/x86/events/intel/core.c ++++ b/arch/x86/events/intel/core.c +@@ -5741,7 +5741,7 @@ __init int intel_pmu_init(void) + * Check all LBT MSR here. + * Disable LBR access if any LBR MSRs can not be accessed. 
+ */ +- if (x86_pmu.lbr_nr && !check_msr(x86_pmu.lbr_tos, 0x3UL)) ++ if (x86_pmu.lbr_tos && !check_msr(x86_pmu.lbr_tos, 0x3UL)) + x86_pmu.lbr_nr = 0; + for (i = 0; i < x86_pmu.lbr_nr; i++) { + if (!(check_msr(x86_pmu.lbr_from + i, 0xffffUL) && +diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c +index 387b716698187..ecb20b17b7df6 100644 +--- a/arch/x86/kernel/sev-es-shared.c ++++ b/arch/x86/kernel/sev-es-shared.c +@@ -63,6 +63,7 @@ static bool sev_es_negotiate_protocol(void) + + static __always_inline void vc_ghcb_invalidate(struct ghcb *ghcb) + { ++ ghcb->save.sw_exit_code = 0; + memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap)); + } + +diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c +index 04a780abb512d..e0cdab7cb632b 100644 +--- a/arch/x86/kernel/sev-es.c ++++ b/arch/x86/kernel/sev-es.c +@@ -191,8 +191,18 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state) + if (unlikely(data->ghcb_active)) { + /* GHCB is already in use - save its contents */ + +- if (unlikely(data->backup_ghcb_active)) +- return NULL; ++ if (unlikely(data->backup_ghcb_active)) { ++ /* ++ * Backup-GHCB is also already in use. There is no way ++ * to continue here so just kill the machine. To make ++ * panic() work, mark GHCBs inactive so that messages ++ * can be printed out. ++ */ ++ data->ghcb_active = false; ++ data->backup_ghcb_active = false; ++ ++ panic("Unable to handle #VC exception! 
GHCB and Backup GHCB are already in use"); ++ } + + /* Mark backup_ghcb active before writing to it */ + data->backup_ghcb_active = true; +@@ -209,24 +219,6 @@ static __always_inline struct ghcb *sev_es_get_ghcb(struct ghcb_state *state) + return ghcb; + } + +-static __always_inline void sev_es_put_ghcb(struct ghcb_state *state) +-{ +- struct sev_es_runtime_data *data; +- struct ghcb *ghcb; +- +- data = this_cpu_read(runtime_data); +- ghcb = &data->ghcb_page; +- +- if (state->ghcb) { +- /* Restore GHCB from Backup */ +- *ghcb = *state->ghcb; +- data->backup_ghcb_active = false; +- state->ghcb = NULL; +- } else { +- data->ghcb_active = false; +- } +-} +- + /* Needed in vc_early_forward_exception */ + void do_early_exception(struct pt_regs *regs, int trapnr); + +@@ -296,31 +288,44 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt, + u16 d2; + u8 d1; + +- /* If instruction ran in kernel mode and the I/O buffer is in kernel space */ +- if (!user_mode(ctxt->regs) && !access_ok(target, size)) { +- memcpy(dst, buf, size); +- return ES_OK; +- } +- ++ /* ++ * This function uses __put_user() independent of whether kernel or user ++ * memory is accessed. This works fine because __put_user() does no ++ * sanity checks of the pointer being accessed. All that it does is ++ * to report when the access failed. ++ * ++ * Also, this function runs in atomic context, so __put_user() is not ++ * allowed to sleep. The page-fault handler detects that it is running ++ * in atomic context and will not try to take mmap_sem and handle the ++ * fault, so additional pagefault_enable()/disable() calls are not ++ * needed. ++ * ++ * The access can't be done via copy_to_user() here because ++ * vc_write_mem() must not use string instructions to access unsafe ++ * memory. The reason is that MOVS is emulated by the #VC handler by ++ * splitting the move up into a read and a write and taking a nested #VC ++ * exception on whatever of them is the MMIO access. 
Using string ++ * instructions here would cause infinite nesting. ++ */ + switch (size) { + case 1: + memcpy(&d1, buf, 1); +- if (put_user(d1, target)) ++ if (__put_user(d1, target)) + goto fault; + break; + case 2: + memcpy(&d2, buf, 2); +- if (put_user(d2, target)) ++ if (__put_user(d2, target)) + goto fault; + break; + case 4: + memcpy(&d4, buf, 4); +- if (put_user(d4, target)) ++ if (__put_user(d4, target)) + goto fault; + break; + case 8: + memcpy(&d8, buf, 8); +- if (put_user(d8, target)) ++ if (__put_user(d8, target)) + goto fault; + break; + default: +@@ -351,30 +356,43 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt, + u16 d2; + u8 d1; + +- /* If instruction ran in kernel mode and the I/O buffer is in kernel space */ +- if (!user_mode(ctxt->regs) && !access_ok(s, size)) { +- memcpy(buf, src, size); +- return ES_OK; +- } +- ++ /* ++ * This function uses __get_user() independent of whether kernel or user ++ * memory is accessed. This works fine because __get_user() does no ++ * sanity checks of the pointer being accessed. All that it does is ++ * to report when the access failed. ++ * ++ * Also, this function runs in atomic context, so __get_user() is not ++ * allowed to sleep. The page-fault handler detects that it is running ++ * in atomic context and will not try to take mmap_sem and handle the ++ * fault, so additional pagefault_enable()/disable() calls are not ++ * needed. ++ * ++ * The access can't be done via copy_from_user() here because ++ * vc_read_mem() must not use string instructions to access unsafe ++ * memory. The reason is that MOVS is emulated by the #VC handler by ++ * splitting the move up into a read and a write and taking a nested #VC ++ * exception on whatever of them is the MMIO access. Using string ++ * instructions here would cause infinite nesting. 
++ */ + switch (size) { + case 1: +- if (get_user(d1, s)) ++ if (__get_user(d1, s)) + goto fault; + memcpy(buf, &d1, 1); + break; + case 2: +- if (get_user(d2, s)) ++ if (__get_user(d2, s)) + goto fault; + memcpy(buf, &d2, 2); + break; + case 4: +- if (get_user(d4, s)) ++ if (__get_user(d4, s)) + goto fault; + memcpy(buf, &d4, 4); + break; + case 8: +- if (get_user(d8, s)) ++ if (__get_user(d8, s)) + goto fault; + memcpy(buf, &d8, 8); + break; +@@ -434,6 +452,29 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt + /* Include code shared with pre-decompression boot stage */ + #include "sev-es-shared.c" + ++static __always_inline void sev_es_put_ghcb(struct ghcb_state *state) ++{ ++ struct sev_es_runtime_data *data; ++ struct ghcb *ghcb; ++ ++ data = this_cpu_read(runtime_data); ++ ghcb = &data->ghcb_page; ++ ++ if (state->ghcb) { ++ /* Restore GHCB from Backup */ ++ *ghcb = *state->ghcb; ++ data->backup_ghcb_active = false; ++ state->ghcb = NULL; ++ } else { ++ /* ++ * Invalidate the GHCB so a VMGEXIT instruction issued ++ * from userspace won't appear to be valid. 
++ */ ++ vc_ghcb_invalidate(ghcb); ++ data->ghcb_active = false; ++ } ++} ++ + void noinstr __sev_es_nmi_complete(void) + { + struct ghcb_state state; +@@ -1228,6 +1269,10 @@ static __always_inline void vc_forward_exception(struct es_em_ctxt *ctxt) + case X86_TRAP_UD: + exc_invalid_op(ctxt->regs); + break; ++ case X86_TRAP_PF: ++ write_cr2(ctxt->fi.cr2); ++ exc_page_fault(ctxt->regs, error_code); ++ break; + case X86_TRAP_AC: + exc_alignment_check(ctxt->regs, error_code); + break; +@@ -1257,7 +1302,6 @@ static __always_inline bool on_vc_fallback_stack(struct pt_regs *regs) + */ + DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication) + { +- struct sev_es_runtime_data *data = this_cpu_read(runtime_data); + irqentry_state_t irq_state; + struct ghcb_state state; + struct es_em_ctxt ctxt; +@@ -1283,16 +1327,6 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication) + */ + + ghcb = sev_es_get_ghcb(&state); +- if (!ghcb) { +- /* +- * Mark GHCBs inactive so that panic() is able to print the +- * message. +- */ +- data->ghcb_active = false; +- data->backup_ghcb_active = false; +- +- panic("Unable to handle #VC exception! GHCB and Backup GHCB are already in use"); +- } + + vc_ghcb_invalidate(ghcb); + result = vc_init_em_ctxt(&ctxt, regs, error_code); +diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c +index dc0a337f985b6..8183ddb3700c4 100644 +--- a/arch/x86/xen/enlighten_pv.c ++++ b/arch/x86/xen/enlighten_pv.c +@@ -1276,16 +1276,16 @@ asmlinkage __visible void __init xen_start_kernel(void) + /* Get mfn list */ + xen_build_dynamic_phys_to_machine(); + ++ /* Work out if we support NX */ ++ get_cpu_cap(&boot_cpu_data); ++ x86_configure_nx(); ++ + /* + * Set up kernel GDT and segment registers, mainly so that + * -fstack-protector code can be executed. 
+ */ + xen_setup_gdt(0); + +- /* Work out if we support NX */ +- get_cpu_cap(&boot_cpu_data); +- x86_configure_nx(); +- + /* Determine virtual and physical address sizes */ + get_cpu_address_sizes(&boot_cpu_data); + +diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c +index 9874fc1c815b5..1831099306aa9 100644 +--- a/drivers/cdrom/gdrom.c ++++ b/drivers/cdrom/gdrom.c +@@ -743,6 +743,13 @@ static const struct blk_mq_ops gdrom_mq_ops = { + static int probe_gdrom(struct platform_device *devptr) + { + int err; ++ ++ /* ++ * Ensure our "one" device is initialized properly in case of previous ++ * usages of it ++ */ ++ memset(&gd, 0, sizeof(gd)); ++ + /* Start the device */ + if (gdrom_execute_diagnostic() != 1) { + pr_warn("ATA Probe for GDROM failed\n"); +@@ -831,6 +838,8 @@ static int remove_gdrom(struct platform_device *devptr) + if (gdrom_major) + unregister_blkdev(gdrom_major, GDROM_DEV_NAME); + unregister_cdrom(gd.cd_info); ++ kfree(gd.cd_info); ++ kfree(gd.toc); + + return 0; + } +@@ -846,7 +855,7 @@ static struct platform_driver gdrom_driver = { + static int __init init_gdrom(void) + { + int rc; +- gd.toc = NULL; ++ + rc = platform_driver_register(&gdrom_driver); + if (rc) + return rc; +@@ -862,8 +871,6 @@ static void __exit exit_gdrom(void) + { + platform_device_unregister(pd); + platform_driver_unregister(&gdrom_driver); +- kfree(gd.toc); +- kfree(gd.cd_info); + } + + module_init(init_gdrom); +diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c +index f264b70c383eb..eadd1eaa2fb54 100644 +--- a/drivers/dma-buf/dma-buf.c ++++ b/drivers/dma-buf/dma-buf.c +@@ -760,7 +760,7 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, + + if (dma_buf_is_dynamic(attach->dmabuf)) { + dma_resv_lock(attach->dmabuf->resv, NULL); +- ret = dma_buf_pin(attach); ++ ret = dmabuf->ops->pin(attach); + if (ret) + goto err_unlock; + } +@@ -786,7 +786,7 @@ err_attach: + + err_unpin: + if (dma_buf_is_dynamic(attach->dmabuf)) +- dma_buf_unpin(attach); 
++ dmabuf->ops->unpin(attach);
+
+ err_unlock:
+ if (dma_buf_is_dynamic(attach->dmabuf))
+@@ -843,7 +843,7 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
+ __unmap_dma_buf(attach, attach->sgt, attach->dir);
+
+ if (dma_buf_is_dynamic(attach->dmabuf)) {
+- dma_buf_unpin(attach);
++ dmabuf->ops->unpin(attach);
+ dma_resv_unlock(attach->dmabuf->resv);
+ }
+ }
+@@ -956,7 +956,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
+ if (dma_buf_is_dynamic(attach->dmabuf)) {
+ dma_resv_assert_held(attach->dmabuf->resv);
+ if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) {
+- r = dma_buf_pin(attach);
++ r = attach->dmabuf->ops->pin(attach);
+ if (r)
+ return ERR_PTR(r);
+ }
+@@ -968,7 +968,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
+
+ if (IS_ERR(sg_table) && dma_buf_is_dynamic(attach->dmabuf) &&
+ !IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY))
+- dma_buf_unpin(attach);
++ attach->dmabuf->ops->unpin(attach);
+
+ if (!IS_ERR(sg_table) && attach->dmabuf->ops->cache_sgt_mapping) {
+ attach->sgt = sg_table;
+diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c
+index d0dee37ad5228..4ceba5ef78958 100644
+--- a/drivers/firmware/arm_scpi.c
++++ b/drivers/firmware/arm_scpi.c
+@@ -552,8 +552,10 @@ static unsigned long scpi_clk_get_val(u16 clk_id)
+
+ ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id,
+ sizeof(le_clk_id), &rate, sizeof(rate));
++ if (ret)
++ return 0;
+
+- return ret ? ret : le32_to_cpu(rate);
++ return le32_to_cpu(rate);
+ }
+
+ static int scpi_clk_set_val(u16 clk_id, unsigned long rate)
+diff --git a/drivers/gpio/gpio-tegra186.c b/drivers/gpio/gpio-tegra186.c
+index 1bd9e44df7184..05974b760796b 100644
+--- a/drivers/gpio/gpio-tegra186.c
++++ b/drivers/gpio/gpio-tegra186.c
+@@ -444,16 +444,6 @@ static int tegra186_irq_set_wake(struct irq_data *data, unsigned int on)
+ return 0;
+ }
+
+-static int tegra186_irq_set_affinity(struct irq_data *data,
+- const struct cpumask *dest,
+- bool force)
+-{
+- if (data->parent_data)
+- return irq_chip_set_affinity_parent(data, dest, force);
+-
+- return -EINVAL;
+-}
+-
+ static void tegra186_gpio_irq(struct irq_desc *desc)
+ {
+ struct tegra_gpio *gpio = irq_desc_get_handler_data(desc);
+@@ -700,7 +690,6 @@ static int tegra186_gpio_probe(struct platform_device *pdev)
+ gpio->intc.irq_unmask = tegra186_irq_unmask;
+ gpio->intc.irq_set_type = tegra186_irq_set_type;
+ gpio->intc.irq_set_wake = tegra186_irq_set_wake;
+- gpio->intc.irq_set_affinity = tegra186_irq_set_affinity;
+
+ irq = &gpio->gpio.irq;
+ irq->chip = &gpio->intc;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 383c178cf0746..6b14626c148ee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -267,7 +267,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
+ *addr += offset & ~PAGE_MASK;
+
+ num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
+- num_bytes = num_pages * 8;
++ num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
+
+ r = amdgpu_job_alloc_with_ib(adev, num_dw * 4 + num_bytes,
+ AMDGPU_IB_POOL_DELAYED, &job);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 63691deb7df3c..2342c5d216f9b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -1391,9 +1391,10 @@ static const struct soc15_reg_golden golden_settings_gc_10_1_2[] =
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG, 0xffffffff, 0x20000000),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG2, 0xffffffff, 0x00000420),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0xffffffff, 0x00000200),
+- SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x04800000),
++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x04900000),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DFSM_TILES_IN_FLIGHT, 0x0000ffff, 0x0000003f),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_LAST_OF_BURST_CONFIG, 0xffffffff, 0x03860204),
++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x0c1800ff, 0x00000044),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff0ffff, 0x00000500),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmGE_PRIV_CONTROL, 0x00007fff, 0x000001fe),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0xffffffff, 0xe4e4e4e4),
+@@ -1411,12 +1412,13 @@ static const struct soc15_reg_golden golden_settings_gc_10_1_2[] =
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_ENHANCE_2, 0x00000820, 0x00000820),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmPA_SC_LINE_STIPPLE_STATE, 0x0000ff0f, 0x00000000),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmRMI_SPARE, 0xffffffff, 0xffff3101),
++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_CONFIG_CNTL_1, 0x001f0000, 0x00070104),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_ALU_CLK_CTRL, 0xffffffff, 0xffffffff),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_ARB_CONFIG, 0x00000133, 0x00000130),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSQ_LDS_CLK_CTRL, 0xffffffff, 0xffffffff),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmTA_CNTL_AUX, 0xfff7ffff, 0x01030000),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CNTL, 0xffdf80ff, 0x479c0010),
+- SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffffffff, 0x00800000)
++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmUTCL1_CTRL, 0xffffffff, 0x00c00000)
+ };
+
+ static void gfx_v10_rlcg_wreg(struct amdgpu_device *adev, u32 offset, u32 v)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 65db88bb6cbcd..d2c020a91c0be 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -4864,7 +4864,7 @@ static void gfx_v9_0_update_3d_clock_gating(struct amdgpu_device *adev,
+ amdgpu_gfx_rlc_enter_safe_mode(adev);
+
+ /* Enable 3D CGCG/CGLS */
+- if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)) {
++ if (enable) {
+ /* write cmd to clear cgcg/cgls ov */
+ def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
+ /* unset CGCG override */
+@@ -4876,8 +4876,12 @@ static void gfx_v9_0_update_3d_clock_gating(struct amdgpu_device *adev,
+ /* enable 3Dcgcg FSM(0x0000363f) */
+ def = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D);
+
+- data = (0x36 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+- RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
++ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)
++ data = (0x36 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
++ RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
++ else
++ data = 0x0 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT;
++
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGLS)
+ data |= (0x000F << RLC_CGCG_CGLS_CTRL_3D__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
+ RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK;
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+index d345e324837dd..2a27fe26232b6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+@@ -123,6 +123,10 @@ static const struct soc15_reg_golden golden_settings_sdma_nv14[] = {
+
+ static const struct soc15_reg_golden golden_settings_sdma_nv12[] = {
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_RLC3_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_GB_ADDR_CONFIG, 0x001877ff, 0x00000044),
++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x001877ff, 0x00000044),
++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_GB_ADDR_CONFIG, 0x001877ff, 0x00000044),
++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x001877ff, 0x00000044),
+ SOC15_REG_GOLDEN_VALUE(GC, 0, mmSDMA1_RLC3_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000),
+ };
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 1221aa6b40a9f..d1045a9b37d98 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -1151,7 +1151,6 @@ static int soc15_common_early_init(void *handle)
+ adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
+ AMD_CG_SUPPORT_GFX_MGLS |
+ AMD_CG_SUPPORT_GFX_CP_LS |
+- AMD_CG_SUPPORT_GFX_3D_CGCG |
+ AMD_CG_SUPPORT_GFX_3D_CGLS |
+ AMD_CG_SUPPORT_GFX_CGCG |
+ AMD_CG_SUPPORT_GFX_CGLS |
+@@ -1170,7 +1169,6 @@ static int soc15_common_early_init(void *handle)
+ AMD_CG_SUPPORT_GFX_MGLS |
+ AMD_CG_SUPPORT_GFX_RLC_LS |
+ AMD_CG_SUPPORT_GFX_CP_LS |
+- AMD_CG_SUPPORT_GFX_3D_CGCG |
+ AMD_CG_SUPPORT_GFX_3D_CGLS |
+ AMD_CG_SUPPORT_GFX_CGCG |
+ AMD_CG_SUPPORT_GFX_CGLS |
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+index 71e2d5e025710..9b33182f3abd5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+@@ -826,10 +826,11 @@ static const struct dc_plane_cap plane_cap = {
+ .fp16 = 16000
+ },
+
++ /* 6:1 downscaling ratio: 1000/6 = 166.666 */
+ .max_downscale_factor = {
+- .argb8888 = 600,
+- .nv12 = 600,
+- .fp16 = 600
++ .argb8888 = 167,
++ .nv12 = 167,
++ .fp16 = 167
+ }
+ };
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+index c494235016e09..00f066f1da0c7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
+@@ -843,10 +843,11 @@ static const struct dc_plane_cap plane_cap = {
+ .fp16 = 16000
+ },
+
++ /* 6:1 downscaling ratio: 1000/6 = 166.666 */
+ .max_downscale_factor = {
+- .argb8888 = 600,
+- .nv12 = 600,
+- .fp16 = 600
++ .argb8888 = 167,
++ .nv12 = 167,
++ .fp16 = 167
+ },
+ 64,
+ 64
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c b/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c
+index d03b1975e4178..7d9d591de411b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c
+@@ -282,10 +282,11 @@ static const struct dc_plane_cap plane_cap = {
+ .nv12 = 16000,
+ .fp16 = 16000
+ },
++ /* 6:1 downscaling ratio: 1000/6 = 166.666 */
+ .max_downscale_factor = {
+- .argb8888 = 600,
+- .nv12 = 600,
+- .fp16 = 600
++ .argb8888 = 167,
++ .nv12 = 167,
++ .fp16 = 167
+ },
+ 16,
+ 16
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+index 43028f3539a6d..76574e2459161 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+@@ -63,6 +63,8 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
+ i915->quirks & QUIRK_PIN_SWIZZLED_PAGES) {
+ GEM_BUG_ON(i915_gem_object_has_tiling_quirk(obj));
+ i915_gem_object_set_tiling_quirk(obj);
++ GEM_BUG_ON(!list_empty(&obj->mm.link));
++ atomic_inc(&obj->mm.shrink_pin);
+ shrinkable = false;
+ }
+
+diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+index de575fdb033f5..21f08e53889c3 100644
+--- a/drivers/gpu/drm/i915/gt/gen7_renderclear.c
++++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c
+@@ -397,7 +397,10 @@ static void emit_batch(struct i915_vma * const vma,
+ gen7_emit_pipeline_invalidate(&cmds);
+ batch_add(&cmds, MI_LOAD_REGISTER_IMM(2));
+ batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_0_GEN7));
+- batch_add(&cmds, 0xffff0000);
++ batch_add(&cmds, 0xffff0000 |
++ ((IS_IVB_GT1(i915) || IS_VALLEYVIEW(i915)) ?
++ HIZ_RAW_STALL_OPT_DISABLE :
++ 0));
+ batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_1));
+ batch_add(&cmds, 0xffff0000 | PIXEL_SUBSPAN_COLLECT_OPT_DISABLE);
+ gen7_emit_pipeline_invalidate(&cmds);
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index aa44909344694..19351addb68c4 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -972,12 +972,11 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
+ obj->mm.madv = args->madv;
+
+ if (i915_gem_object_has_pages(obj)) {
+- struct list_head *list;
++ unsigned long flags;
+
+- if (i915_gem_object_is_shrinkable(obj)) {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&i915->mm.obj_lock, flags);
++ spin_lock_irqsave(&i915->mm.obj_lock, flags);
++ if (!list_empty(&obj->mm.link)) {
++ struct list_head *list;
+
+ if (obj->mm.madv != I915_MADV_WILLNEED)
+ list = &i915->mm.purge_list;
+@@ -985,8 +984,8 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
+ list = &i915->mm.shrink_list;
+ list_move_tail(&obj->mm.link, list);
+
+- spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
+ }
++ spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
+ }
+
+ /* if the object is no longer attached, discard its backing storage */
+diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
+index 3808a753127bc..04109a2a6fd76 100644
+--- a/drivers/gpu/drm/radeon/radeon_gart.c
++++ b/drivers/gpu/drm/radeon/radeon_gart.c
+@@ -301,7 +301,8 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
+ p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
+
+ for (i = 0; i < pages; i++, p++) {
+- rdev->gart.pages[p] = pagelist[i];
++ rdev->gart.pages[p] = pagelist ? pagelist[i] :
++ rdev->dummy_page.page;
+ page_base = dma_addr[i];
+ for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
+ page_entry = radeon_gart_get_page_entry(page_base, flags);
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index 101a68dc615b6..799ec7a7caa4d 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -153,6 +153,8 @@ void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
+
+ swap = &ttm_bo_glob.swap_lru[bo->priority];
+ list_move_tail(&bo->swap, swap);
++ } else {
++ list_del_init(&bo->swap);
+ }
+
+ if (bdev->driver->del_from_lru_notify)
+diff --git a/drivers/hwmon/lm80.c b/drivers/hwmon/lm80.c
+index ac4adb44b224d..97ab491d2922c 100644
+--- a/drivers/hwmon/lm80.c
++++ b/drivers/hwmon/lm80.c
+@@ -596,7 +596,6 @@ static int lm80_probe(struct i2c_client *client)
+ struct device *dev = &client->dev;
+ struct device *hwmon_dev;
+ struct lm80_data *data;
+- int rv;
+
+ data = devm_kzalloc(dev, sizeof(struct lm80_data), GFP_KERNEL);
+ if (!data)
+@@ -609,14 +608,8 @@ static int lm80_probe(struct i2c_client *client)
+ lm80_init_client(client);
+
+ /* A few vars need to be filled upon startup */
+- rv = lm80_read_value(client, LM80_REG_FAN_MIN(1));
+- if (rv < 0)
+- return rv;
+- data->fan[f_min][0] = rv;
+- rv = lm80_read_value(client, LM80_REG_FAN_MIN(2));
+- if (rv < 0)
+- return rv;
+- data->fan[f_min][1] = rv;
++ data->fan[f_min][0] = lm80_read_value(client, LM80_REG_FAN_MIN(1));
++ data->fan[f_min][1] = lm80_read_value(client, LM80_REG_FAN_MIN(2));
+
+ hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name,
+ data, lm80_groups);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 6ac07911a17bd..5b9022a8c9ece 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -482,6 +482,7 @@ static void cma_release_dev(struct rdma_id_private *id_priv)
+ list_del(&id_priv->list);
+ cma_dev_put(id_priv->cma_dev);
+ id_priv->cma_dev = NULL;
++ id_priv->id.device = NULL;
+ if (id_priv->id.route.addr.dev_addr.sgid_attr) {
+ rdma_put_gid_attr(id_priv->id.route.addr.dev_addr.sgid_attr);
+ id_priv->id.route.addr.dev_addr.sgid_attr = NULL;
+@@ -1864,6 +1865,7 @@ static void _destroy_id(struct rdma_id_private *id_priv,
+ iw_destroy_cm_id(id_priv->cm_id.iw);
+ }
+ cma_leave_mc_groups(id_priv);
++ rdma_restrack_del(&id_priv->res);
+ cma_release_dev(id_priv);
+ }
+
+@@ -1877,7 +1879,6 @@ static void _destroy_id(struct rdma_id_private *id_priv,
+ kfree(id_priv->id.route.path_rec);
+
+ put_net(id_priv->id.route.addr.dev_addr.net);
+- rdma_restrack_del(&id_priv->res);
+ kfree(id_priv);
+ }
+
+@@ -3740,7 +3741,7 @@ int rdma_listen(struct rdma_cm_id *id, int backlog)
+ }
+
+ id_priv->backlog = backlog;
+- if (id->device) {
++ if (id_priv->cma_dev) {
+ if (rdma_cap_ib_cm(id->device, 1)) {
+ ret = cma_ib_listen(id_priv);
+ if (ret)
+diff --git a/drivers/infiniband/core/uverbs_std_types_device.c b/drivers/infiniband/core/uverbs_std_types_device.c
+index 9ec6971056fa8..049684880ae03 100644
+--- a/drivers/infiniband/core/uverbs_std_types_device.c
++++ b/drivers/infiniband/core/uverbs_std_types_device.c
+@@ -117,8 +117,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_INFO_HANDLES)(
+ return ret;
+
+ uapi_object = uapi_get_object(attrs->ufile->device->uapi, object_id);
+- if (!uapi_object)
+- return -EINVAL;
++ if (IS_ERR(uapi_object))
++ return PTR_ERR(uapi_object);
+
+ handles = gather_objects_handle(attrs->ufile, uapi_object, attrs,
+ out_len, &total);
+@@ -331,6 +331,9 @@ static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_GID_TABLE)(
+ if (ret)
+ return ret;
+
++ if (!user_entry_size)
++ return -EINVAL;
++
+ max_entries = uverbs_attr_ptr_get_array_size(
+ attrs, UVERBS_ATTR_QUERY_GID_TABLE_RESP_ENTRIES,
+ user_entry_size);
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 07b8350929cd6..81276b4247f8e 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -630,9 +630,8 @@ static bool devx_is_valid_obj_id(struct uverbs_attr_bundle *attrs,
+ case UVERBS_OBJECT_QP:
+ {
+ struct mlx5_ib_qp *qp = to_mqp(uobj->object);
+- enum ib_qp_type qp_type = qp->ibqp.qp_type;
+
+- if (qp_type == IB_QPT_RAW_PACKET ||
++ if (qp->type == IB_QPT_RAW_PACKET ||
+ (qp->flags & IB_QP_CREATE_SOURCE_QPN)) {
+ struct mlx5_ib_raw_packet_qp *raw_packet_qp =
+ &qp->raw_packet_qp;
+@@ -649,10 +648,9 @@ static bool devx_is_valid_obj_id(struct uverbs_attr_bundle *attrs,
+ sq->tisn) == obj_id);
+ }
+
+- if (qp_type == MLX5_IB_QPT_DCT)
++ if (qp->type == MLX5_IB_QPT_DCT)
+ return get_enc_obj_id(MLX5_CMD_OP_CREATE_DCT,
+ qp->dct.mdct.mqp.qpn) == obj_id;
+-
+ return get_enc_obj_id(MLX5_CMD_OP_CREATE_QP,
+ qp->ibqp.qp_num) == obj_id;
+ }
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 4be7bccefaa40..59ffbbdda3179 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -4655,6 +4655,7 @@ static int mlx5r_mp_probe(struct auxiliary_device *adev,
+
+ if (bound) {
+ rdma_roce_rescan_device(&dev->ib_dev);
++ mpi->ibdev->ib_active = true;
+ break;
+ }
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
+index 17a361b8dbb16..06b556169867a 100644
+--- a/drivers/infiniband/sw/rxe/rxe_comp.c
++++ b/drivers/infiniband/sw/rxe/rxe_comp.c
+@@ -345,14 +345,16 @@ static inline enum comp_state do_read(struct rxe_qp *qp,
+
+ ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
+ &wqe->dma, payload_addr(pkt),
+- payload_size(pkt), to_mem_obj, NULL);
+- if (ret)
++ payload_size(pkt), to_mr_obj, NULL);
++ if (ret) {
++ wqe->status = IB_WC_LOC_PROT_ERR;
+ return COMPST_ERROR;
++ }
+
+ if (wqe->dma.resid == 0 && (pkt->mask & RXE_END_MASK))
+ return COMPST_COMP_ACK;
+- else
+- return COMPST_UPDATE_COMP;
++
++ return COMPST_UPDATE_COMP;
+ }
+
+ static inline enum comp_state do_atomic(struct rxe_qp *qp,
+@@ -365,11 +367,13 @@ static inline enum comp_state do_atomic(struct rxe_qp *qp,
+
+ ret = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE,
+ &wqe->dma, &atomic_orig,
+- sizeof(u64), to_mem_obj, NULL);
+- if (ret)
++ sizeof(u64), to_mr_obj, NULL);
++ if (ret) {
++ wqe->status = IB_WC_LOC_PROT_ERR;
+ return COMPST_ERROR;
+- else
+- return COMPST_COMP_ACK;
++ }
++
++ return COMPST_COMP_ACK;
+ }
+
+ static void make_send_cqe(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
+index 0d758760b9ae7..08e21fa9ec97e 100644
+--- a/drivers/infiniband/sw/rxe/rxe_loc.h
++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
+@@ -72,40 +72,37 @@ int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
+
+ /* rxe_mr.c */
+ enum copy_direction {
+- to_mem_obj,
+- from_mem_obj,
++ to_mr_obj,
++ from_mr_obj,
+ };
+
+-void rxe_mem_init_dma(struct rxe_pd *pd,
+- int access, struct rxe_mem *mem);
++void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr);
+
+-int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
+- u64 length, u64 iova, int access, struct ib_udata *udata,
+- struct rxe_mem *mr);
++int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
++ int access, struct ib_udata *udata, struct rxe_mr *mr);
+
+-int rxe_mem_init_fast(struct rxe_pd *pd,
+- int max_pages, struct rxe_mem *mem);
++int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr);
+
+-int rxe_mem_copy(struct rxe_mem *mem, u64 iova, void *addr,
+- int length, enum copy_direction dir, u32 *crcp);
++int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
++ enum copy_direction dir, u32 *crcp);
+
+ int copy_data(struct rxe_pd *pd, int access,
+ struct rxe_dma_info *dma, void *addr, int length,
+ enum copy_direction dir, u32 *crcp);
+
+-void *iova_to_vaddr(struct rxe_mem *mem, u64 iova, int length);
++void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length);
+
+ enum lookup_type {
+ lookup_local,
+ lookup_remote,
+ };
+
+-struct rxe_mem *lookup_mem(struct rxe_pd *pd, int access, u32 key,
+- enum lookup_type type);
++struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
++ enum lookup_type type);
+
+-int mem_check_range(struct rxe_mem *mem, u64 iova, size_t length);
++int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length);
+
+-void rxe_mem_cleanup(struct rxe_pool_entry *arg);
++void rxe_mr_cleanup(struct rxe_pool_entry *arg);
+
+ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length);
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
+index 6e8c41567ba08..9f63947bab123 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mr.c
++++ b/drivers/infiniband/sw/rxe/rxe_mr.c
+@@ -24,16 +24,15 @@ static u8 rxe_get_key(void)
+ return key;
+ }
+
+-int mem_check_range(struct rxe_mem *mem, u64 iova, size_t length)
++int mr_check_range(struct rxe_mr *mr, u64 iova, size_t length)
+ {
+- switch (mem->type) {
+- case RXE_MEM_TYPE_DMA:
++ switch (mr->type) {
++ case RXE_MR_TYPE_DMA:
+ return 0;
+
+- case RXE_MEM_TYPE_MR:
+- if (iova < mem->iova ||
+- length > mem->length ||
+- iova > mem->iova + mem->length - length)
++ case RXE_MR_TYPE_MR:
++ if (iova < mr->iova || length > mr->length ||
++ iova > mr->iova + mr->length - length)
+ return -EFAULT;
+ return 0;
+
+@@ -46,85 +45,83 @@ int mem_check_range(struct rxe_mem *mem, u64 iova, size_t length)
+ | IB_ACCESS_REMOTE_WRITE \
+ | IB_ACCESS_REMOTE_ATOMIC)
+
+-static void rxe_mem_init(int access, struct rxe_mem *mem)
++static void rxe_mr_init(int access, struct rxe_mr *mr)
+ {
+- u32 lkey = mem->pelem.index << 8 | rxe_get_key();
++ u32 lkey = mr->pelem.index << 8 | rxe_get_key();
+ u32 rkey = (access & IB_ACCESS_REMOTE) ? lkey : 0;
+
+- mem->ibmr.lkey = lkey;
+- mem->ibmr.rkey = rkey;
+- mem->state = RXE_MEM_STATE_INVALID;
+- mem->type = RXE_MEM_TYPE_NONE;
+- mem->map_shift = ilog2(RXE_BUF_PER_MAP);
++ mr->ibmr.lkey = lkey;
++ mr->ibmr.rkey = rkey;
++ mr->state = RXE_MR_STATE_INVALID;
++ mr->type = RXE_MR_TYPE_NONE;
++ mr->map_shift = ilog2(RXE_BUF_PER_MAP);
+ }
+
+-void rxe_mem_cleanup(struct rxe_pool_entry *arg)
++void rxe_mr_cleanup(struct rxe_pool_entry *arg)
+ {
+- struct rxe_mem *mem = container_of(arg, typeof(*mem), pelem);
++ struct rxe_mr *mr = container_of(arg, typeof(*mr), pelem);
+ int i;
+
+- ib_umem_release(mem->umem);
++ ib_umem_release(mr->umem);
+
+- if (mem->map) {
+- for (i = 0; i < mem->num_map; i++)
+- kfree(mem->map[i]);
++ if (mr->map) {
++ for (i = 0; i < mr->num_map; i++)
++ kfree(mr->map[i]);
+
+- kfree(mem->map);
++ kfree(mr->map);
+ }
+ }
+
+-static int rxe_mem_alloc(struct rxe_mem *mem, int num_buf)
++static int rxe_mr_alloc(struct rxe_mr *mr, int num_buf)
+ {
+ int i;
+ int num_map;
+- struct rxe_map **map = mem->map;
++ struct rxe_map **map = mr->map;
+
+ num_map = (num_buf + RXE_BUF_PER_MAP - 1) / RXE_BUF_PER_MAP;
+
+- mem->map = kmalloc_array(num_map, sizeof(*map), GFP_KERNEL);
+- if (!mem->map)
++ mr->map = kmalloc_array(num_map, sizeof(*map), GFP_KERNEL);
++ if (!mr->map)
+ goto err1;
+
+ for (i = 0; i < num_map; i++) {
+- mem->map[i] = kmalloc(sizeof(**map), GFP_KERNEL);
+- if (!mem->map[i])
++ mr->map[i] = kmalloc(sizeof(**map), GFP_KERNEL);
++ if (!mr->map[i])
+ goto err2;
+ }
+
+ BUILD_BUG_ON(!is_power_of_2(RXE_BUF_PER_MAP));
+
+- mem->map_shift = ilog2(RXE_BUF_PER_MAP);
+- mem->map_mask = RXE_BUF_PER_MAP - 1;
++ mr->map_shift = ilog2(RXE_BUF_PER_MAP);
++ mr->map_mask = RXE_BUF_PER_MAP - 1;
+
+- mem->num_buf = num_buf;
+- mem->num_map = num_map;
+- mem->max_buf = num_map * RXE_BUF_PER_MAP;
++ mr->num_buf = num_buf;
++ mr->num_map = num_map;
++ mr->max_buf = num_map * RXE_BUF_PER_MAP;
+
+ return 0;
+
+ err2:
+ for (i--; i >= 0; i--)
+- kfree(mem->map[i]);
++ kfree(mr->map[i]);
+
+- kfree(mem->map);
++ kfree(mr->map);
+ err1:
+ return -ENOMEM;
+ }
+
+-void rxe_mem_init_dma(struct rxe_pd *pd,
+- int access, struct rxe_mem *mem)
++void rxe_mr_init_dma(struct rxe_pd *pd, int access, struct rxe_mr *mr)
+ {
+- rxe_mem_init(access, mem);
++ rxe_mr_init(access, mr);
+
+- mem->ibmr.pd = &pd->ibpd;
+- mem->access = access;
+- mem->state = RXE_MEM_STATE_VALID;
+- mem->type = RXE_MEM_TYPE_DMA;
++ mr->ibmr.pd = &pd->ibpd;
++ mr->access = access;
++ mr->state = RXE_MR_STATE_VALID;
++ mr->type = RXE_MR_TYPE_DMA;
+ }
+
+-int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
+- u64 length, u64 iova, int access, struct ib_udata *udata,
+- struct rxe_mem *mem)
++int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
++ int access, struct ib_udata *udata, struct rxe_mr *mr)
+ {
+ struct rxe_map **map;
+ struct rxe_phys_buf *buf = NULL;
+@@ -142,23 +139,23 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
+ goto err1;
+ }
+
+- mem->umem = umem;
++ mr->umem = umem;
+ num_buf = ib_umem_num_pages(umem);
+
+- rxe_mem_init(access, mem);
++ rxe_mr_init(access, mr);
+
+- err = rxe_mem_alloc(mem, num_buf);
++ err = rxe_mr_alloc(mr, num_buf);
+ if (err) {
+- pr_warn("err %d from rxe_mem_alloc\n", err);
++ pr_warn("err %d from rxe_mr_alloc\n", err);
+ ib_umem_release(umem);
+ goto err1;
+ }
+
+- mem->page_shift = PAGE_SHIFT;
+- mem->page_mask = PAGE_SIZE - 1;
++ mr->page_shift = PAGE_SHIFT;
++ mr->page_mask = PAGE_SIZE - 1;
+
+ num_buf = 0;
+- map = mem->map;
++ map = mr->map;
+ if (length > 0) {
+ buf = map[0]->buf;
+
+@@ -185,15 +182,15 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
+ }
+ }
+
+- mem->ibmr.pd = &pd->ibpd;
+- mem->umem = umem;
+- mem->access = access;
+- mem->length = length;
+- mem->iova = iova;
+- mem->va = start;
+- mem->offset = ib_umem_offset(umem);
+- mem->state = RXE_MEM_STATE_VALID;
+- mem->type = RXE_MEM_TYPE_MR;
++ mr->ibmr.pd = &pd->ibpd;
++ mr->umem = umem;
++ mr->access = access;
++ mr->length = length;
++ mr->iova = iova;
++ mr->va = start;
++ mr->offset = ib_umem_offset(umem);
++ mr->state = RXE_MR_STATE_VALID;
++ mr->type = RXE_MR_TYPE_MR;
+
+ return 0;
+
+@@ -201,24 +198,23 @@ err1:
+ return err;
+ }
+
+-int rxe_mem_init_fast(struct rxe_pd *pd,
+- int max_pages, struct rxe_mem *mem)
++int rxe_mr_init_fast(struct rxe_pd *pd, int max_pages, struct rxe_mr *mr)
+ {
+ int err;
+
+- rxe_mem_init(0, mem);
++ rxe_mr_init(0, mr);
+
+ /* In fastreg, we also set the rkey */
+- mem->ibmr.rkey = mem->ibmr.lkey;
++ mr->ibmr.rkey = mr->ibmr.lkey;
+
+- err = rxe_mem_alloc(mem, max_pages);
++ err = rxe_mr_alloc(mr, max_pages);
+ if (err)
+ goto err1;
+
+- mem->ibmr.pd = &pd->ibpd;
+- mem->max_buf = max_pages;
+- mem->state = RXE_MEM_STATE_FREE;
+- mem->type = RXE_MEM_TYPE_MR;
++ mr->ibmr.pd = &pd->ibpd;
++ mr->max_buf = max_pages;
++ mr->state = RXE_MR_STATE_FREE;
++ mr->type = RXE_MR_TYPE_MR;
+
+ return 0;
+
+@@ -226,28 +222,24 @@ err1:
+ return err;
+ }
+
+-static void lookup_iova(
+- struct rxe_mem *mem,
+- u64 iova,
+- int *m_out,
+- int *n_out,
+- size_t *offset_out)
++static void lookup_iova(struct rxe_mr *mr, u64 iova, int *m_out, int *n_out,
++ size_t *offset_out)
+ {
+- size_t offset = iova - mem->iova + mem->offset;
++ size_t offset = iova - mr->iova + mr->offset;
+ int map_index;
+ int buf_index;
+ u64 length;
+
+- if (likely(mem->page_shift)) {
+- *offset_out = offset & mem->page_mask;
+- offset >>= mem->page_shift;
+- *n_out = offset & mem->map_mask;
+- *m_out = offset >> mem->map_shift;
++ if (likely(mr->page_shift)) {
++ *offset_out = offset & mr->page_mask;
++ offset >>= mr->page_shift;
++ *n_out = offset & mr->map_mask;
++ *m_out = offset >> mr->map_shift;
+ } else {
+ map_index = 0;
+ buf_index = 0;
+
+- length = mem->map[map_index]->buf[buf_index].size;
++ length = mr->map[map_index]->buf[buf_index].size;
+
+ while (offset >= length) {
+ offset -= length;
+@@ -257,7 +249,7 @@ static void lookup_iova(
+ map_index++;
+ buf_index = 0;
+ }
+- length = mem->map[map_index]->buf[buf_index].size;
++ length = mr->map[map_index]->buf[buf_index].size;
+ }
+
+ *m_out = map_index;
+@@ -266,49 +258,49 @@ static void lookup_iova(
+ }
+ }
+
+-void *iova_to_vaddr(struct rxe_mem *mem, u64 iova, int length)
++void *iova_to_vaddr(struct rxe_mr *mr, u64 iova, int length)
+ {
+ size_t offset;
+ int m, n;
+ void *addr;
+
+- if (mem->state != RXE_MEM_STATE_VALID) {
+- pr_warn("mem not in valid state\n");
++ if (mr->state != RXE_MR_STATE_VALID) {
++ pr_warn("mr not in valid state\n");
+ addr = NULL;
+ goto out;
+ }
+
+- if (!mem->map) {
++ if (!mr->map) {
+ addr = (void *)(uintptr_t)iova;
+ goto out;
+ }
+
+- if (mem_check_range(mem, iova, length)) {
++ if (mr_check_range(mr, iova, length)) {
+ pr_warn("range violation\n");
+ addr = NULL;
+ goto out;
+ }
+
+- lookup_iova(mem, iova, &m, &n, &offset);
++ lookup_iova(mr, iova, &m, &n, &offset);
+
+- if (offset + length > mem->map[m]->buf[n].size) {
++ if (offset + length > mr->map[m]->buf[n].size) {
+ pr_warn("crosses page boundary\n");
+ addr = NULL;
+ goto out;
+ }
+
+- addr = (void *)(uintptr_t)mem->map[m]->buf[n].addr + offset;
++ addr = (void *)(uintptr_t)mr->map[m]->buf[n].addr + offset;
+
+ out:
+ return addr;
+ }
+
+ /* copy data from a range (vaddr, vaddr+length-1) to or from
+- * a mem object starting at iova. Compute incremental value of
+- * crc32 if crcp is not zero. caller must hold a reference to mem
++ * a mr object starting at iova. Compute incremental value of
++ * crc32 if crcp is not zero. caller must hold a reference to mr
+ */
+-int rxe_mem_copy(struct rxe_mem *mem, u64 iova, void *addr, int length,
+- enum copy_direction dir, u32 *crcp)
++int rxe_mr_copy(struct rxe_mr *mr, u64 iova, void *addr, int length,
++ enum copy_direction dir, u32 *crcp)
+ {
+ int err;
+ int bytes;
+@@ -323,43 +315,41 @@ int rxe_mem_copy(struct rxe_mem *mem, u64 iova, void *addr, int length,
+ if (length == 0)
+ return 0;
+
+- if (mem->type == RXE_MEM_TYPE_DMA) {
++ if (mr->type == RXE_MR_TYPE_DMA) {
+ u8 *src, *dest;
+
+- src = (dir == to_mem_obj) ?
+- addr : ((void *)(uintptr_t)iova);
++ src = (dir == to_mr_obj) ? addr : ((void *)(uintptr_t)iova);
+
+- dest = (dir == to_mem_obj) ?
+- ((void *)(uintptr_t)iova) : addr;
++ dest = (dir == to_mr_obj) ? ((void *)(uintptr_t)iova) : addr;
+
+ memcpy(dest, src, length);
+
+ if (crcp)
+- *crcp = rxe_crc32(to_rdev(mem->ibmr.device),
+- *crcp, dest, length);
++ *crcp = rxe_crc32(to_rdev(mr->ibmr.device), *crcp, dest,
++ length);
+
+ return 0;
+ }
+
+- WARN_ON_ONCE(!mem->map);
++ WARN_ON_ONCE(!mr->map);
+
+- err = mem_check_range(mem, iova, length);
++ err = mr_check_range(mr, iova, length);
+ if (err) {
+ err = -EFAULT;
+ goto err1;
+ }
+
+- lookup_iova(mem, iova, &m, &i, &offset);
++ lookup_iova(mr, iova, &m, &i, &offset);
+
+- map = mem->map + m;
++ map = mr->map + m;
+ buf = map[0]->buf + i;
+
+ while (length > 0) {
+ u8 *src, *dest;
+
+ va = (u8 *)(uintptr_t)buf->addr + offset;
+- src = (dir == to_mem_obj) ? addr : va;
+- dest = (dir == to_mem_obj) ? va : addr;
++ src = (dir == to_mr_obj) ? addr : va;
++ dest = (dir == to_mr_obj) ? va : addr;
+
+ bytes = buf->size - offset;
+
+@@ -369,8 +359,8 @@ int rxe_mem_copy(struct rxe_mem *mem, u64 iova, void *addr, int length,
+ memcpy(dest, src, bytes);
+
+ if (crcp)
+- crc = rxe_crc32(to_rdev(mem->ibmr.device),
+- crc, dest, bytes);
++ crc = rxe_crc32(to_rdev(mr->ibmr.device), crc, dest,
++ bytes);
+
+ length -= bytes;
+ addr += bytes;
+@@ -411,7 +401,7 @@ int copy_data(
+ struct rxe_sge *sge = &dma->sge[dma->cur_sge];
+ int offset = dma->sge_offset;
+ int resid = dma->resid;
+- struct rxe_mem *mem = NULL;
++ struct rxe_mr *mr = NULL;
+ u64 iova;
+ int err;
+
+@@ -424,8 +414,8 @@ int copy_data(
+ }
+
+ if (sge->length && (offset < sge->length)) {
+- mem = lookup_mem(pd, access, sge->lkey, lookup_local);
+- if (!mem) {
++ mr = lookup_mr(pd, access, sge->lkey, lookup_local);
++ if (!mr) {
+ err = -EINVAL;
+ goto err1;
+ }
+@@ -435,9 +425,9 @@ int copy_data(
+ bytes = length;
+
+ if (offset >= sge->length) {
+- if (mem) {
+- rxe_drop_ref(mem);
+- mem = NULL;
++ if (mr) {
++ rxe_drop_ref(mr);
++ mr = NULL;
+ }
+ sge++;
+ dma->cur_sge++;
+@@ -449,9 +439,9 @@ int copy_data(
+ }
+
+ if (sge->length) {
+- mem = lookup_mem(pd, access, sge->lkey,
+- lookup_local);
+- if (!mem) {
++ mr = lookup_mr(pd, access, sge->lkey,
++ lookup_local);
++ if (!mr) {
+ err = -EINVAL;
+ goto err1;
+ }
+@@ -466,7 +456,7 @@ int copy_data(
+ if (bytes > 0) {
+ iova = sge->addr + offset;
+
+- err = rxe_mem_copy(mem, iova, addr, bytes, dir, crcp);
++ err = rxe_mr_copy(mr, iova, addr, bytes, dir, crcp);
+ if (err)
+ goto err2;
+
+@@ -480,14 +470,14 @@ int copy_data(
+ dma->sge_offset = offset;
+ dma->resid = resid;
+
+- if (mem)
+- rxe_drop_ref(mem);
++ if (mr)
++ rxe_drop_ref(mr);
+
+ return 0;
+
+ err2:
+- if (mem)
+- rxe_drop_ref(mem);
++ if (mr)
++ rxe_drop_ref(mr);
+ err1:
+ return err;
+ }
+@@ -525,31 +515,30 @@ int advance_dma_data(struct rxe_dma_info *dma, unsigned int length)
+ return 0;
+ }
+
+-/* (1) find the mem (mr or mw) corresponding to lkey/rkey
++/* (1) find the mr corresponding to lkey/rkey
+ * depending on lookup_type
+- * (2) verify that the (qp) pd matches the mem pd
+- * (3) verify that the mem can support the requested access
+- * (4) verify that mem state is valid
++ * (2) verify that the (qp) pd matches the mr pd
++ * (3) verify that the mr can support the requested access
++ * (4) verify that mr state is valid
+ */
+-struct rxe_mem *lookup_mem(struct rxe_pd *pd, int access, u32 key,
+- enum lookup_type type)
++struct rxe_mr *lookup_mr(struct rxe_pd *pd, int access, u32 key,
++ enum lookup_type type)
+ {
+- struct rxe_mem *mem;
++ struct rxe_mr *mr;
+ struct rxe_dev *rxe = to_rdev(pd->ibpd.device);
+ int index = key >> 8;
+
+- mem = rxe_pool_get_index(&rxe->mr_pool, index);
+- if (!mem)
++ mr = rxe_pool_get_index(&rxe->mr_pool, index);
++ if (!mr)
+ return NULL;
+
+- if (unlikely((type == lookup_local && mr_lkey(mem) != key) ||
+- (type == lookup_remote && mr_rkey(mem) != key) ||
+- mr_pd(mem) != pd ||
+- (access && !(access & mem->access)) ||
+- mem->state != RXE_MEM_STATE_VALID)) {
+- rxe_drop_ref(mem);
+- mem = NULL;
++ if (unlikely((type == lookup_local && mr_lkey(mr) != key) ||
++ (type == lookup_remote && mr_rkey(mr) != key) ||
++ mr_pd(mr) != pd || (access && !(access & mr->access)) ||
++ mr->state != RXE_MR_STATE_VALID)) {
++ rxe_drop_ref(mr);
++ mr = NULL;
+ }
+
+- return mem;
++ return mr;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
+index 307d8986e7c9b..d24901f2af3fb 100644
+--- a/drivers/infiniband/sw/rxe/rxe_pool.c
++++ b/drivers/infiniband/sw/rxe/rxe_pool.c
+@@ -8,8 +8,6 @@
+ #include "rxe_loc.h"
+
+ /* info about object pools
+- * note that mr and mw share a single index space
+- * so that one can map an lkey to the correct type of object
+ */
+ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
+ [RXE_TYPE_UC] = {
+@@ -56,18 +54,18 @@ struct rxe_type_info rxe_type_info[RXE_NUM_TYPES] = {
+ },
+ [RXE_TYPE_MR] = {
+ .name = "rxe-mr",
+- .size = sizeof(struct rxe_mem),
+- .elem_offset = offsetof(struct rxe_mem, pelem),
+- .cleanup = rxe_mem_cleanup,
++ .size = sizeof(struct rxe_mr),
++ .elem_offset = offsetof(struct rxe_mr, pelem),
++ .cleanup = rxe_mr_cleanup,
+ .flags = RXE_POOL_INDEX,
+ .max_index = RXE_MAX_MR_INDEX,
+ .min_index = RXE_MIN_MR_INDEX,
+ },
+ [RXE_TYPE_MW] = {
+ .name = "rxe-mw",
+- .size = sizeof(struct rxe_mem),
+- .elem_offset = offsetof(struct rxe_mem, pelem),
+- .flags = RXE_POOL_INDEX,
++ .size = sizeof(struct rxe_mw),
++ .elem_offset = offsetof(struct rxe_mw, pelem),
++ .flags = RXE_POOL_INDEX | RXE_POOL_NO_ALLOC,
+ .max_index = RXE_MAX_MW_INDEX,
+ .min_index = RXE_MIN_MW_INDEX,
+ },
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 34ae957a315ca..b0f350d674fdb 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -242,6 +242,7 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
+ if (err) {
+ vfree(qp->sq.queue->buf);
+ kfree(qp->sq.queue);
++ qp->sq.queue = NULL;
+ return err;
+ }
+
+@@ -295,6 +296,7 @@ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
+ if (err) {
+ vfree(qp->rq.queue->buf);
+ kfree(qp->rq.queue);
++ qp->rq.queue = NULL;
+ return err;
+ }
+ }
+@@ -355,6 +357,11 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
+ err2:
+ rxe_queue_cleanup(qp->sq.queue);
+ err1:
++ qp->pd = NULL;
++ qp->rcq = NULL;
++ qp->scq = NULL;
++ qp->srq = NULL;
++
+ if (srq)
+ rxe_drop_ref(srq);
+ rxe_drop_ref(scq);
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 889290793d75b..3664cdae7e1f4 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -464,7 +464,7 @@ static int fill_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ } else {
+ err = copy_data(qp->pd, 0, &wqe->dma,
+ payload_addr(pkt), paylen,
+- from_mem_obj,
++ from_mr_obj,
+ &crc);
+ if
(err) + return err; +@@ -596,7 +596,7 @@ next_wqe: + if (wqe->mask & WR_REG_MASK) { + if (wqe->wr.opcode == IB_WR_LOCAL_INV) { + struct rxe_dev *rxe = to_rdev(qp->ibqp.device); +- struct rxe_mem *rmr; ++ struct rxe_mr *rmr; + + rmr = rxe_pool_get_index(&rxe->mr_pool, + wqe->wr.ex.invalidate_rkey >> 8); +@@ -607,14 +607,14 @@ next_wqe: + wqe->status = IB_WC_MW_BIND_ERR; + goto exit; + } +- rmr->state = RXE_MEM_STATE_FREE; ++ rmr->state = RXE_MR_STATE_FREE; + rxe_drop_ref(rmr); + wqe->state = wqe_state_done; + wqe->status = IB_WC_SUCCESS; + } else if (wqe->wr.opcode == IB_WR_REG_MR) { +- struct rxe_mem *rmr = to_rmr(wqe->wr.wr.reg.mr); ++ struct rxe_mr *rmr = to_rmr(wqe->wr.wr.reg.mr); + +- rmr->state = RXE_MEM_STATE_VALID; ++ rmr->state = RXE_MR_STATE_VALID; + rmr->access = wqe->wr.wr.reg.access; + rmr->ibmr.lkey = wqe->wr.wr.reg.key; + rmr->ibmr.rkey = wqe->wr.wr.reg.key; +diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c +index 142f3d8014d83..8e237b623b316 100644 +--- a/drivers/infiniband/sw/rxe/rxe_resp.c ++++ b/drivers/infiniband/sw/rxe/rxe_resp.c +@@ -391,7 +391,7 @@ static enum resp_states check_length(struct rxe_qp *qp, + static enum resp_states check_rkey(struct rxe_qp *qp, + struct rxe_pkt_info *pkt) + { +- struct rxe_mem *mem = NULL; ++ struct rxe_mr *mr = NULL; + u64 va; + u32 rkey; + u32 resid; +@@ -430,18 +430,18 @@ static enum resp_states check_rkey(struct rxe_qp *qp, + resid = qp->resp.resid; + pktlen = payload_size(pkt); + +- mem = lookup_mem(qp->pd, access, rkey, lookup_remote); +- if (!mem) { ++ mr = lookup_mr(qp->pd, access, rkey, lookup_remote); ++ if (!mr) { + state = RESPST_ERR_RKEY_VIOLATION; + goto err; + } + +- if (unlikely(mem->state == RXE_MEM_STATE_FREE)) { ++ if (unlikely(mr->state == RXE_MR_STATE_FREE)) { + state = RESPST_ERR_RKEY_VIOLATION; + goto err; + } + +- if (mem_check_range(mem, va, resid)) { ++ if (mr_check_range(mr, va, resid)) { + state = RESPST_ERR_RKEY_VIOLATION; + goto err; + } +@@ 
-469,12 +469,12 @@ static enum resp_states check_rkey(struct rxe_qp *qp, + + WARN_ON_ONCE(qp->resp.mr); + +- qp->resp.mr = mem; ++ qp->resp.mr = mr; + return RESPST_EXECUTE; + + err: +- if (mem) +- rxe_drop_ref(mem); ++ if (mr) ++ rxe_drop_ref(mr); + return state; + } + +@@ -484,7 +484,7 @@ static enum resp_states send_data_in(struct rxe_qp *qp, void *data_addr, + int err; + + err = copy_data(qp->pd, IB_ACCESS_LOCAL_WRITE, &qp->resp.wqe->dma, +- data_addr, data_len, to_mem_obj, NULL); ++ data_addr, data_len, to_mr_obj, NULL); + if (unlikely(err)) + return (err == -ENOSPC) ? RESPST_ERR_LENGTH + : RESPST_ERR_MALFORMED_WQE; +@@ -499,8 +499,8 @@ static enum resp_states write_data_in(struct rxe_qp *qp, + int err; + int data_len = payload_size(pkt); + +- err = rxe_mem_copy(qp->resp.mr, qp->resp.va, payload_addr(pkt), +- data_len, to_mem_obj, NULL); ++ err = rxe_mr_copy(qp->resp.mr, qp->resp.va, payload_addr(pkt), data_len, ++ to_mr_obj, NULL); + if (err) { + rc = RESPST_ERR_RKEY_VIOLATION; + goto out; +@@ -522,9 +522,9 @@ static enum resp_states process_atomic(struct rxe_qp *qp, + u64 iova = atmeth_va(pkt); + u64 *vaddr; + enum resp_states ret; +- struct rxe_mem *mr = qp->resp.mr; ++ struct rxe_mr *mr = qp->resp.mr; + +- if (mr->state != RXE_MEM_STATE_VALID) { ++ if (mr->state != RXE_MR_STATE_VALID) { + ret = RESPST_ERR_RKEY_VIOLATION; + goto out; + } +@@ -700,8 +700,8 @@ static enum resp_states read_reply(struct rxe_qp *qp, + if (!skb) + return RESPST_ERR_RNR; + +- err = rxe_mem_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt), +- payload, from_mem_obj, &icrc); ++ err = rxe_mr_copy(res->read.mr, res->read.va, payload_addr(&ack_pkt), ++ payload, from_mr_obj, &icrc); + if (err) + pr_err("Failed copying memory\n"); + +@@ -883,7 +883,7 @@ static enum resp_states do_complete(struct rxe_qp *qp, + } + + if (pkt->mask & RXE_IETH_MASK) { +- struct rxe_mem *rmr; ++ struct rxe_mr *rmr; + + wc->wc_flags |= IB_WC_WITH_INVALIDATE; + wc->ex.invalidate_rkey = ieth_rkey(pkt); +@@ 
-895,7 +895,7 @@ static enum resp_states do_complete(struct rxe_qp *qp, + wc->ex.invalidate_rkey); + return RESPST_ERROR; + } +- rmr->state = RXE_MEM_STATE_FREE; ++ rmr->state = RXE_MR_STATE_FREE; + rxe_drop_ref(rmr); + } + +diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c +index dee5e0e919d28..38249c1a76a88 100644 +--- a/drivers/infiniband/sw/rxe/rxe_verbs.c ++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c +@@ -865,7 +865,7 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access) + { + struct rxe_dev *rxe = to_rdev(ibpd->device); + struct rxe_pd *pd = to_rpd(ibpd); +- struct rxe_mem *mr; ++ struct rxe_mr *mr; + + mr = rxe_alloc(&rxe->mr_pool); + if (!mr) +@@ -873,7 +873,7 @@ static struct ib_mr *rxe_get_dma_mr(struct ib_pd *ibpd, int access) + + rxe_add_index(mr); + rxe_add_ref(pd); +- rxe_mem_init_dma(pd, access, mr); ++ rxe_mr_init_dma(pd, access, mr); + + return &mr->ibmr; + } +@@ -887,7 +887,7 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, + int err; + struct rxe_dev *rxe = to_rdev(ibpd->device); + struct rxe_pd *pd = to_rpd(ibpd); +- struct rxe_mem *mr; ++ struct rxe_mr *mr; + + mr = rxe_alloc(&rxe->mr_pool); + if (!mr) { +@@ -899,8 +899,7 @@ static struct ib_mr *rxe_reg_user_mr(struct ib_pd *ibpd, + + rxe_add_ref(pd); + +- err = rxe_mem_init_user(pd, start, length, iova, +- access, udata, mr); ++ err = rxe_mr_init_user(pd, start, length, iova, access, udata, mr); + if (err) + goto err3; + +@@ -916,9 +915,9 @@ err2: + + static int rxe_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) + { +- struct rxe_mem *mr = to_rmr(ibmr); ++ struct rxe_mr *mr = to_rmr(ibmr); + +- mr->state = RXE_MEM_STATE_ZOMBIE; ++ mr->state = RXE_MR_STATE_ZOMBIE; + rxe_drop_ref(mr_pd(mr)); + rxe_drop_index(mr); + rxe_drop_ref(mr); +@@ -930,7 +929,7 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, + { + struct rxe_dev *rxe = to_rdev(ibpd->device); + struct rxe_pd *pd = to_rpd(ibpd); +- 
struct rxe_mem *mr; ++ struct rxe_mr *mr; + int err; + + if (mr_type != IB_MR_TYPE_MEM_REG) +@@ -946,7 +945,7 @@ static struct ib_mr *rxe_alloc_mr(struct ib_pd *ibpd, enum ib_mr_type mr_type, + + rxe_add_ref(pd); + +- err = rxe_mem_init_fast(pd, max_num_sg, mr); ++ err = rxe_mr_init_fast(pd, max_num_sg, mr); + if (err) + goto err2; + +@@ -962,7 +961,7 @@ err1: + + static int rxe_set_page(struct ib_mr *ibmr, u64 addr) + { +- struct rxe_mem *mr = to_rmr(ibmr); ++ struct rxe_mr *mr = to_rmr(ibmr); + struct rxe_map *map; + struct rxe_phys_buf *buf; + +@@ -982,7 +981,7 @@ static int rxe_set_page(struct ib_mr *ibmr, u64 addr) + static int rxe_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, + int sg_nents, unsigned int *sg_offset) + { +- struct rxe_mem *mr = to_rmr(ibmr); ++ struct rxe_mr *mr = to_rmr(ibmr); + int n; + + mr->nbuf = 0; +@@ -1110,6 +1109,7 @@ static const struct ib_device_ops rxe_dev_ops = { + INIT_RDMA_OBJ_SIZE(ib_pd, rxe_pd, ibpd), + INIT_RDMA_OBJ_SIZE(ib_srq, rxe_srq, ibsrq), + INIT_RDMA_OBJ_SIZE(ib_ucontext, rxe_ucontext, ibuc), ++ INIT_RDMA_OBJ_SIZE(ib_mw, rxe_mw, ibmw), + }; + + int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name) +diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h +index 79e0a5a878da3..11eba7a3ba8f4 100644 +--- a/drivers/infiniband/sw/rxe/rxe_verbs.h ++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h +@@ -156,7 +156,7 @@ struct resp_res { + struct sk_buff *skb; + } atomic; + struct { +- struct rxe_mem *mr; ++ struct rxe_mr *mr; + u64 va_org; + u32 rkey; + u32 length; +@@ -183,7 +183,7 @@ struct rxe_resp_info { + + /* RDMA read / atomic only */ + u64 va; +- struct rxe_mem *mr; ++ struct rxe_mr *mr; + u32 resid; + u32 rkey; + u32 length; +@@ -262,18 +262,18 @@ struct rxe_qp { + struct execute_work cleanup_work; + }; + +-enum rxe_mem_state { +- RXE_MEM_STATE_ZOMBIE, +- RXE_MEM_STATE_INVALID, +- RXE_MEM_STATE_FREE, +- RXE_MEM_STATE_VALID, ++enum rxe_mr_state { ++ 
RXE_MR_STATE_ZOMBIE, ++ RXE_MR_STATE_INVALID, ++ RXE_MR_STATE_FREE, ++ RXE_MR_STATE_VALID, + }; + +-enum rxe_mem_type { +- RXE_MEM_TYPE_NONE, +- RXE_MEM_TYPE_DMA, +- RXE_MEM_TYPE_MR, +- RXE_MEM_TYPE_MW, ++enum rxe_mr_type { ++ RXE_MR_TYPE_NONE, ++ RXE_MR_TYPE_DMA, ++ RXE_MR_TYPE_MR, ++ RXE_MR_TYPE_MW, + }; + + #define RXE_BUF_PER_MAP (PAGE_SIZE / sizeof(struct rxe_phys_buf)) +@@ -287,17 +287,14 @@ struct rxe_map { + struct rxe_phys_buf buf[RXE_BUF_PER_MAP]; + }; + +-struct rxe_mem { ++struct rxe_mr { + struct rxe_pool_entry pelem; +- union { +- struct ib_mr ibmr; +- struct ib_mw ibmw; +- }; ++ struct ib_mr ibmr; + + struct ib_umem *umem; + +- enum rxe_mem_state state; +- enum rxe_mem_type type; ++ enum rxe_mr_state state; ++ enum rxe_mr_type type; + u64 va; + u64 iova; + size_t length; +@@ -318,6 +315,17 @@ struct rxe_mem { + struct rxe_map **map; + }; + ++enum rxe_mw_state { ++ RXE_MW_STATE_INVALID = RXE_MR_STATE_INVALID, ++ RXE_MW_STATE_FREE = RXE_MR_STATE_FREE, ++ RXE_MW_STATE_VALID = RXE_MR_STATE_VALID, ++}; ++ ++struct rxe_mw { ++ struct ib_mw ibmw; ++ struct rxe_pool_entry pelem; ++}; ++ + struct rxe_mc_grp { + struct rxe_pool_entry pelem; + spinlock_t mcg_lock; /* guard group */ +@@ -422,27 +430,27 @@ static inline struct rxe_cq *to_rcq(struct ib_cq *cq) + return cq ? container_of(cq, struct rxe_cq, ibcq) : NULL; + } + +-static inline struct rxe_mem *to_rmr(struct ib_mr *mr) ++static inline struct rxe_mr *to_rmr(struct ib_mr *mr) + { +- return mr ? container_of(mr, struct rxe_mem, ibmr) : NULL; ++ return mr ? container_of(mr, struct rxe_mr, ibmr) : NULL; + } + +-static inline struct rxe_mem *to_rmw(struct ib_mw *mw) ++static inline struct rxe_mw *to_rmw(struct ib_mw *mw) + { +- return mw ? container_of(mw, struct rxe_mem, ibmw) : NULL; ++ return mw ? 
container_of(mw, struct rxe_mw, ibmw) : NULL; + } + +-static inline struct rxe_pd *mr_pd(struct rxe_mem *mr) ++static inline struct rxe_pd *mr_pd(struct rxe_mr *mr) + { + return to_rpd(mr->ibmr.pd); + } + +-static inline u32 mr_lkey(struct rxe_mem *mr) ++static inline u32 mr_lkey(struct rxe_mr *mr) + { + return mr->ibmr.lkey; + } + +-static inline u32 mr_rkey(struct rxe_mem *mr) ++static inline u32 mr_rkey(struct rxe_mr *mr) + { + return mr->ibmr.rkey; + } +diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c +index e389d44e5591d..8a00c06e5f56f 100644 +--- a/drivers/infiniband/sw/siw/siw_verbs.c ++++ b/drivers/infiniband/sw/siw/siw_verbs.c +@@ -300,7 +300,6 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd, + struct siw_ucontext *uctx = + rdma_udata_to_drv_context(udata, struct siw_ucontext, + base_ucontext); +- struct siw_cq *scq = NULL, *rcq = NULL; + unsigned long flags; + int num_sqe, num_rqe, rv = 0; + size_t length; +@@ -343,10 +342,8 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd, + rv = -EINVAL; + goto err_out; + } +- scq = to_siw_cq(attrs->send_cq); +- rcq = to_siw_cq(attrs->recv_cq); + +- if (!scq || (!rcq && !attrs->srq)) { ++ if (!attrs->send_cq || (!attrs->recv_cq && !attrs->srq)) { + siw_dbg(base_dev, "send CQ or receive CQ invalid\n"); + rv = -EINVAL; + goto err_out; +@@ -378,7 +375,7 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd, + else { + /* Zero sized SQ is not supported */ + rv = -EINVAL; +- goto err_out; ++ goto err_out_xa; + } + if (num_rqe) + num_rqe = roundup_pow_of_two(num_rqe); +@@ -401,8 +398,8 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd, + } + } + qp->pd = pd; +- qp->scq = scq; +- qp->rcq = rcq; ++ qp->scq = to_siw_cq(attrs->send_cq); ++ qp->rcq = to_siw_cq(attrs->recv_cq); + + if (attrs->srq) { + /* +diff --git a/drivers/leds/leds-lp5523.c b/drivers/leds/leds-lp5523.c +index fc433e63b1dc0..b1590cb4a1887 100644 +--- a/drivers/leds/leds-lp5523.c ++++ b/drivers/leds/leds-lp5523.c +@@ -307,7 +307,7 
@@ static int lp5523_init_program_engine(struct lp55xx_chip *chip) + usleep_range(3000, 6000); + ret = lp55xx_read(chip, LP5523_REG_STATUS, &status); + if (ret) +- return ret; ++ goto out; + status &= LP5523_ENG_STATUS_MASK; + + if (status != LP5523_ENG_STATUS_MASK) { +diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c +index 11890db71f3fe..962f7df0691ef 100644 +--- a/drivers/md/dm-snap.c ++++ b/drivers/md/dm-snap.c +@@ -1408,6 +1408,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) + + if (!s->store->chunk_size) { + ti->error = "Chunk size not set"; ++ r = -EINVAL; + goto bad_read_metadata; + } + +diff --git a/drivers/media/platform/rcar_drif.c b/drivers/media/platform/rcar_drif.c +index 83bd9a412a560..1e3b68a8743af 100644 +--- a/drivers/media/platform/rcar_drif.c ++++ b/drivers/media/platform/rcar_drif.c +@@ -915,7 +915,6 @@ static int rcar_drif_g_fmt_sdr_cap(struct file *file, void *priv, + { + struct rcar_drif_sdr *sdr = video_drvdata(file); + +- memset(f->fmt.sdr.reserved, 0, sizeof(f->fmt.sdr.reserved)); + f->fmt.sdr.pixelformat = sdr->fmt->pixelformat; + f->fmt.sdr.buffersize = sdr->fmt->buffersize; + +diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c +index 926408b41270c..7a6f01ace78ac 100644 +--- a/drivers/misc/eeprom/at24.c ++++ b/drivers/misc/eeprom/at24.c +@@ -763,7 +763,8 @@ static int at24_probe(struct i2c_client *client) + at24->nvmem = devm_nvmem_register(dev, &nvmem_config); + if (IS_ERR(at24->nvmem)) { + pm_runtime_disable(dev); +- regulator_disable(at24->vcc_reg); ++ if (!pm_runtime_status_suspended(dev)) ++ regulator_disable(at24->vcc_reg); + return PTR_ERR(at24->nvmem); + } + +@@ -774,7 +775,8 @@ static int at24_probe(struct i2c_client *client) + err = at24_read(at24, 0, &test_byte, 1); + if (err) { + pm_runtime_disable(dev); +- regulator_disable(at24->vcc_reg); ++ if (!pm_runtime_status_suspended(dev)) ++ regulator_disable(at24->vcc_reg); + return -ENODEV; + } + +diff --git 
a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c +index 9152242778f5e..ecdedd87f8ccf 100644 +--- a/drivers/misc/habanalabs/gaudi/gaudi.c ++++ b/drivers/misc/habanalabs/gaudi/gaudi.c +@@ -5546,6 +5546,7 @@ static int gaudi_memset_device_memory(struct hl_device *hdev, u64 addr, + struct hl_cs_job *job; + u32 cb_size, ctl, err_cause; + struct hl_cb *cb; ++ u64 id; + int rc; + + cb = hl_cb_kernel_create(hdev, PAGE_SIZE, false); +@@ -5612,8 +5613,9 @@ static int gaudi_memset_device_memory(struct hl_device *hdev, u64 addr, + } + + release_cb: ++ id = cb->id; + hl_cb_put(cb); +- hl_cb_destroy(hdev, &hdev->kernel_cb_mgr, cb->id << PAGE_SHIFT); ++ hl_cb_destroy(hdev, &hdev->kernel_cb_mgr, id << PAGE_SHIFT); + + return rc; + } +diff --git a/drivers/misc/ics932s401.c b/drivers/misc/ics932s401.c +index 2bdf560ee681b..0f9ea75b0b189 100644 +--- a/drivers/misc/ics932s401.c ++++ b/drivers/misc/ics932s401.c +@@ -134,7 +134,7 @@ static struct ics932s401_data *ics932s401_update_device(struct device *dev) + for (i = 0; i < NUM_MIRRORED_REGS; i++) { + temp = i2c_smbus_read_word_data(client, regs_to_copy[i]); + if (temp < 0) +- data->regs[regs_to_copy[i]] = 0; ++ temp = 0; + data->regs[regs_to_copy[i]] = temp >> 8; + } + +diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c +index b8b771b643cc8..016a6106151a5 100644 +--- a/drivers/mmc/host/meson-gx-mmc.c ++++ b/drivers/mmc/host/meson-gx-mmc.c +@@ -236,7 +236,8 @@ static void meson_mmc_get_transfer_mode(struct mmc_host *mmc, + if (host->dram_access_quirk) + return; + +- if (data->blocks > 1) { ++ /* SD_IO_RW_EXTENDED (CMD53) can also use block mode under the hood */ ++ if (data->blocks > 1 || mrq->cmd->opcode == SD_IO_RW_EXTENDED) { + /* + * In block mode DMA descriptor format, "length" field indicates + * number of blocks and there is no way to pass DMA size that +@@ -258,7 +259,9 @@ static void meson_mmc_get_transfer_mode(struct mmc_host *mmc, + for_each_sg(data->sg, sg, 
data->sg_len, i) { + /* check for 8 byte alignment */ + if (sg->offset % 8) { +- WARN_ONCE(1, "unaligned scatterlist buffer\n"); ++ dev_warn_once(mmc_dev(mmc), ++ "unaligned sg offset %u, disabling descriptor DMA for transfer\n", ++ sg->offset); + return; + } + } +diff --git a/drivers/mmc/host/sdhci-pci-gli.c b/drivers/mmc/host/sdhci-pci-gli.c +index 4a0f69b97a78f..7572119225068 100644 +--- a/drivers/mmc/host/sdhci-pci-gli.c ++++ b/drivers/mmc/host/sdhci-pci-gli.c +@@ -587,8 +587,13 @@ static void sdhci_gli_voltage_switch(struct sdhci_host *host) + * + * Wait 5ms after set 1.8V signal enable in Host Control 2 register + * to ensure 1.8V signal enable bit is set by GL9750/GL9755. ++ * ++ * ...however, the controller in the NUC10i3FNK4 (a 9755) requires ++ * slightly longer than 5ms before the control register reports that ++ * 1.8V is ready, and far longer still before the card will actually ++ * work reliably. + */ +- usleep_range(5000, 5500); ++ usleep_range(100000, 110000); + } + + static void sdhci_gl9750_reset(struct sdhci_host *host, u8 mask) +diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c +index d8a3ecaed3fc6..d8f0863b39342 100644 +--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c ++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c +@@ -1048,7 +1048,7 @@ int qlcnic_do_lb_test(struct qlcnic_adapter *adapter, u8 mode) + for (i = 0; i < QLCNIC_NUM_ILB_PKT; i++) { + skb = netdev_alloc_skb(adapter->netdev, QLCNIC_ILB_PKT_SIZE); + if (!skb) +- break; ++ goto error; + qlcnic_create_loopback_buff(skb->data, adapter->mac_addr); + skb_put(skb, QLCNIC_ILB_PKT_SIZE); + adapter->ahw->diag_cnt = 0; +@@ -1072,6 +1072,7 @@ int qlcnic_do_lb_test(struct qlcnic_adapter *adapter, u8 mode) + cnt++; + } + if (cnt != i) { ++error: + dev_err(&adapter->pdev->dev, + "LB Test: failed, TX[%d], RX[%d]\n", i, cnt); + if (mode != QLCNIC_ILB_MODE) +diff --git 
a/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c +index 0e1ca2cba3c7c..e18dee7fe6876 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c +@@ -30,7 +30,7 @@ struct sunxi_priv_data { + static int sun7i_gmac_init(struct platform_device *pdev, void *priv) + { + struct sunxi_priv_data *gmac = priv; +- int ret; ++ int ret = 0; + + if (gmac->regulator) { + ret = regulator_enable(gmac->regulator); +@@ -51,11 +51,11 @@ static int sun7i_gmac_init(struct platform_device *pdev, void *priv) + } else { + clk_set_rate(gmac->tx_clk, SUN7I_GMAC_MII_RATE); + ret = clk_prepare(gmac->tx_clk); +- if (ret) +- return ret; ++ if (ret && gmac->regulator) ++ regulator_disable(gmac->regulator); + } + +- return 0; ++ return ret; + } + + static void sun7i_gmac_exit(struct platform_device *pdev, void *priv) +diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c +index 707ccdd03b19e..74e748662ec01 100644 +--- a/drivers/net/ethernet/sun/niu.c ++++ b/drivers/net/ethernet/sun/niu.c +@@ -8144,10 +8144,10 @@ static int niu_pci_vpd_scan_props(struct niu *np, u32 start, u32 end) + "VPD_SCAN: Reading in property [%s] len[%d]\n", + namebuf, prop_len); + for (i = 0; i < prop_len; i++) { +- err = niu_pci_eeprom_read(np, off + i); +- if (err >= 0) +- *prop_buf = err; +- ++prop_buf; ++ err = niu_pci_eeprom_read(np, off + i); ++ if (err < 0) ++ return err; ++ *prop_buf++ = err; + } + } + +@@ -8158,14 +8158,14 @@ static int niu_pci_vpd_scan_props(struct niu *np, u32 start, u32 end) + } + + /* ESPC_PIO_EN_ENABLE must be set */ +-static void niu_pci_vpd_fetch(struct niu *np, u32 start) ++static int niu_pci_vpd_fetch(struct niu *np, u32 start) + { + u32 offset; + int err; + + err = niu_pci_eeprom_read16_swp(np, start + 1); + if (err < 0) +- return; ++ return err; + + offset = err + 3; + +@@ -8174,12 +8174,14 @@ static void niu_pci_vpd_fetch(struct niu *np, u32 start) + 
u32 end; + + err = niu_pci_eeprom_read(np, here); ++ if (err < 0) ++ return err; + if (err != 0x90) +- return; ++ return -EINVAL; + + err = niu_pci_eeprom_read16_swp(np, here + 1); + if (err < 0) +- return; ++ return err; + + here = start + offset + 3; + end = start + offset + err; +@@ -8187,9 +8189,12 @@ static void niu_pci_vpd_fetch(struct niu *np, u32 start) + offset += err; + + err = niu_pci_vpd_scan_props(np, here, end); +- if (err < 0 || err == 1) +- return; ++ if (err < 0) ++ return err; ++ if (err == 1) ++ return -EINVAL; + } ++ return 0; + } + + /* ESPC_PIO_EN_ENABLE must be set */ +@@ -9280,8 +9285,11 @@ static int niu_get_invariants(struct niu *np) + offset = niu_pci_vpd_offset(np); + netif_printk(np, probe, KERN_DEBUG, np->dev, + "%s() VPD offset [%08x]\n", __func__, offset); +- if (offset) +- niu_pci_vpd_fetch(np, offset); ++ if (offset) { ++ err = niu_pci_vpd_fetch(np, offset); ++ if (err < 0) ++ return err; ++ } + nw64(ESPC_PIO_EN, 0); + + if (np->flags & NIU_FLAGS_VPD_VALID) { +diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c +index 6e8bd99e8911d..1866f6c2acab1 100644 +--- a/drivers/net/wireless/realtek/rtlwifi/base.c ++++ b/drivers/net/wireless/realtek/rtlwifi/base.c +@@ -440,9 +440,14 @@ static void rtl_watchdog_wq_callback(struct work_struct *work); + static void rtl_fwevt_wq_callback(struct work_struct *work); + static void rtl_c2hcmd_wq_callback(struct work_struct *work); + +-static void _rtl_init_deferred_work(struct ieee80211_hw *hw) ++static int _rtl_init_deferred_work(struct ieee80211_hw *hw) + { + struct rtl_priv *rtlpriv = rtl_priv(hw); ++ struct workqueue_struct *wq; ++ ++ wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name); ++ if (!wq) ++ return -ENOMEM; + + /* <1> timer */ + timer_setup(&rtlpriv->works.watchdog_timer, +@@ -451,11 +456,7 @@ static void _rtl_init_deferred_work(struct ieee80211_hw *hw) + rtl_easy_concurrent_retrytimer_callback, 0); + /* <2> work queue */ + 
rtlpriv->works.hw = hw; +- rtlpriv->works.rtl_wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name); +- if (unlikely(!rtlpriv->works.rtl_wq)) { +- pr_err("Failed to allocate work queue\n"); +- return; +- } ++ rtlpriv->works.rtl_wq = wq; + + INIT_DELAYED_WORK(&rtlpriv->works.watchdog_wq, + rtl_watchdog_wq_callback); +@@ -466,6 +467,7 @@ static void _rtl_init_deferred_work(struct ieee80211_hw *hw) + rtl_swlps_rfon_wq_callback); + INIT_DELAYED_WORK(&rtlpriv->works.fwevt_wq, rtl_fwevt_wq_callback); + INIT_DELAYED_WORK(&rtlpriv->works.c2hcmd_wq, rtl_c2hcmd_wq_callback); ++ return 0; + } + + void rtl_deinit_deferred_work(struct ieee80211_hw *hw, bool ips_wq) +@@ -565,9 +567,7 @@ int rtl_init_core(struct ieee80211_hw *hw) + rtlmac->link_state = MAC80211_NOLINK; + + /* <6> init deferred work */ +- _rtl_init_deferred_work(hw); +- +- return 0; ++ return _rtl_init_deferred_work(hw); + } + EXPORT_SYMBOL_GPL(rtl_init_core); + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index d5d7e0cdd78d8..091b2e77d39ba 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -3190,7 +3190,7 @@ int nvme_init_identify(struct nvme_ctrl *ctrl) + ctrl->hmmaxd = le16_to_cpu(id->hmmaxd); + } + +- ret = nvme_mpath_init(ctrl, id); ++ ret = nvme_mpath_init_identify(ctrl, id); + kfree(id); + + if (ret < 0) +@@ -4580,6 +4580,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev, + min(default_ps_max_latency_us, (unsigned long)S32_MAX)); + + nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device)); ++ nvme_mpath_init_ctrl(ctrl); + + return 0; + out_free_name: +diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c +index 6ffa8de2a0d77..5eee603bc2493 100644 +--- a/drivers/nvme/host/fc.c ++++ b/drivers/nvme/host/fc.c +@@ -2460,6 +2460,18 @@ nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved) + static void + __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues) + { ++ int q; ++ ++ /* ++ * if 
aborting io, the queues are no longer good, mark them ++ * all as not live. ++ */ ++ if (ctrl->ctrl.queue_count > 1) { ++ for (q = 1; q < ctrl->ctrl.queue_count; q++) ++ clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[q].flags); ++ } ++ clear_bit(NVME_FC_Q_LIVE, &ctrl->queues[0].flags); ++ + /* + * If io queues are present, stop them and terminate all outstanding + * ios on them. As FC allocates FC exchange for each io, the +diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c +index ec1e454848e58..56852e6edd81a 100644 +--- a/drivers/nvme/host/multipath.c ++++ b/drivers/nvme/host/multipath.c +@@ -709,9 +709,18 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head) + put_disk(head->disk); + } + +-int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) ++void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl) + { +- int error; ++ mutex_init(&ctrl->ana_lock); ++ timer_setup(&ctrl->anatt_timer, nvme_anatt_timeout, 0); ++ INIT_WORK(&ctrl->ana_work, nvme_ana_work); ++} ++ ++int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) ++{ ++ size_t max_transfer_size = ctrl->max_hw_sectors << SECTOR_SHIFT; ++ size_t ana_log_size; ++ int error = 0; + + /* check if multipath is enabled and we have the capability */ + if (!multipath || !ctrl->subsys || +@@ -723,37 +732,31 @@ int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) + ctrl->nanagrpid = le32_to_cpu(id->nanagrpid); + ctrl->anagrpmax = le32_to_cpu(id->anagrpmax); + +- mutex_init(&ctrl->ana_lock); +- timer_setup(&ctrl->anatt_timer, nvme_anatt_timeout, 0); +- ctrl->ana_log_size = sizeof(struct nvme_ana_rsp_hdr) + +- ctrl->nanagrpid * sizeof(struct nvme_ana_group_desc); +- ctrl->ana_log_size += ctrl->max_namespaces * sizeof(__le32); +- +- if (ctrl->ana_log_size > ctrl->max_hw_sectors << SECTOR_SHIFT) { ++ ana_log_size = sizeof(struct nvme_ana_rsp_hdr) + ++ ctrl->nanagrpid * sizeof(struct nvme_ana_group_desc) + ++ ctrl->max_namespaces * sizeof(__le32); ++ if 
(ana_log_size > max_transfer_size) { + dev_err(ctrl->device, +- "ANA log page size (%zd) larger than MDTS (%d).\n", +- ctrl->ana_log_size, +- ctrl->max_hw_sectors << SECTOR_SHIFT); ++ "ANA log page size (%zd) larger than MDTS (%zd).\n", ++ ana_log_size, max_transfer_size); + dev_err(ctrl->device, "disabling ANA support.\n"); +- return 0; ++ goto out_uninit; + } +- +- INIT_WORK(&ctrl->ana_work, nvme_ana_work); +- kfree(ctrl->ana_log_buf); +- ctrl->ana_log_buf = kmalloc(ctrl->ana_log_size, GFP_KERNEL); +- if (!ctrl->ana_log_buf) { +- error = -ENOMEM; +- goto out; ++ if (ana_log_size > ctrl->ana_log_size) { ++ nvme_mpath_stop(ctrl); ++ kfree(ctrl->ana_log_buf); ++ ctrl->ana_log_buf = kmalloc(ana_log_size, GFP_KERNEL); ++ if (!ctrl->ana_log_buf) ++ return -ENOMEM; + } +- ++ ctrl->ana_log_size = ana_log_size; + error = nvme_read_ana_log(ctrl); + if (error) +- goto out_free_ana_log_buf; ++ goto out_uninit; + return 0; +-out_free_ana_log_buf: +- kfree(ctrl->ana_log_buf); +- ctrl->ana_log_buf = NULL; +-out: ++ ++out_uninit: ++ nvme_mpath_uninit(ctrl); + return error; + } + +diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h +index 07b34175c6ce6..447b0720aef5e 100644 +--- a/drivers/nvme/host/nvme.h ++++ b/drivers/nvme/host/nvme.h +@@ -668,7 +668,8 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl); + int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl,struct nvme_ns_head *head); + void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id); + void nvme_mpath_remove_disk(struct nvme_ns_head *head); +-int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id); ++int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id); ++void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl); + void nvme_mpath_uninit(struct nvme_ctrl *ctrl); + void nvme_mpath_stop(struct nvme_ctrl *ctrl); + bool nvme_mpath_clear_current_path(struct nvme_ns *ns); +@@ -742,7 +743,10 @@ static inline void nvme_mpath_check_last_path(struct nvme_ns *ns) + static 
inline void nvme_trace_bio_complete(struct request *req) + { + } +-static inline int nvme_mpath_init(struct nvme_ctrl *ctrl, ++static inline void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl) ++{ ++} ++static inline int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, + struct nvme_id_ctrl *id) + { + if (ctrl->subsys->cmic & (1 << 3)) +diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c +index d7d7c81d07014..8c2ae6284c3b2 100644 +--- a/drivers/nvme/host/tcp.c ++++ b/drivers/nvme/host/tcp.c +@@ -940,7 +940,6 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req) + if (ret <= 0) + return ret; + +- nvme_tcp_advance_req(req, ret); + if (queue->data_digest) + nvme_tcp_ddgst_update(queue->snd_hash, page, + offset, ret); +@@ -957,6 +956,7 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req) + } + return 1; + } ++ nvme_tcp_advance_req(req, ret); + } + return -EAGAIN; + } +@@ -1137,7 +1137,8 @@ static void nvme_tcp_io_work(struct work_struct *w) + pending = true; + else if (unlikely(result < 0)) + break; +- } ++ } else ++ pending = !llist_empty(&queue->req_list); + + result = nvme_tcp_try_recv(queue); + if (result > 0) +diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c +index a027433b8be84..348057fdc568f 100644 +--- a/drivers/nvme/target/core.c ++++ b/drivers/nvme/target/core.c +@@ -1371,7 +1371,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn, + goto out_free_changed_ns_list; + + if (subsys->cntlid_min > subsys->cntlid_max) +- goto out_free_changed_ns_list; ++ goto out_free_sqs; + + ret = ida_simple_get(&cntlid_ida, + subsys->cntlid_min, subsys->cntlid_max, +diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c +index 715d4376c9979..7fdbdc496597d 100644 +--- a/drivers/nvme/target/io-cmd-file.c ++++ b/drivers/nvme/target/io-cmd-file.c +@@ -49,9 +49,11 @@ int nvmet_file_ns_enable(struct nvmet_ns *ns) + + ns->file = filp_open(ns->device_path, flags, 0); + if 
(IS_ERR(ns->file)) { +- pr_err("failed to open file %s: (%ld)\n", +- ns->device_path, PTR_ERR(ns->file)); +- return PTR_ERR(ns->file); ++ ret = PTR_ERR(ns->file); ++ pr_err("failed to open file %s: (%d)\n", ++ ns->device_path, ret); ++ ns->file = NULL; ++ return ret; + } + + ret = nvmet_file_ns_revalidate(ns); +diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c +index 3e189e753bcf5..14913a4588ecc 100644 +--- a/drivers/nvme/target/loop.c ++++ b/drivers/nvme/target/loop.c +@@ -588,8 +588,10 @@ static struct nvme_ctrl *nvme_loop_create_ctrl(struct device *dev, + + ret = nvme_init_ctrl(&ctrl->ctrl, dev, &nvme_loop_ctrl_ops, + 0 /* no quirks, we're perfect! */); +- if (ret) ++ if (ret) { ++ kfree(ctrl); + goto out; ++ } + + if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) + WARN_ON_ONCE(1); +diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c +index bbc4e71a16ff8..38800e86ed8ad 100644 +--- a/drivers/platform/mellanox/mlxbf-tmfifo.c ++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c +@@ -294,6 +294,9 @@ mlxbf_tmfifo_get_next_desc(struct mlxbf_tmfifo_vring *vring) + if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx)) + return NULL; + ++ /* Make sure 'avail->idx' is visible already. */ ++ virtio_rmb(false); ++ + idx = vring->next_avail % vr->num; + head = virtio16_to_cpu(vdev, vr->avail->ring[idx]); + if (WARN_ON(head >= vr->num)) +@@ -322,7 +325,7 @@ static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring, + * done or not. Add a memory barrier here to make sure the update above + * completes before updating the idx. + */ +- mb(); ++ virtio_mb(false); + vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1); + } + +@@ -733,6 +736,12 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring, + desc = NULL; + fifo->vring[is_rx] = NULL; + ++ /* ++ * Make sure the load/store are in order before ++ * returning back to virtio. 
++ */ ++ virtio_mb(false); ++ + /* Notify upper layer that packet is done. */ + spin_lock_irqsave(&fifo->spin_lock[is_rx], flags); + vring_interrupt(0, vring->vq); +diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig +index 461ec61530ebf..205a096e9ceee 100644 +--- a/drivers/platform/x86/Kconfig ++++ b/drivers/platform/x86/Kconfig +@@ -688,7 +688,7 @@ config INTEL_HID_EVENT + + config INTEL_INT0002_VGPIO + tristate "Intel ACPI INT0002 Virtual GPIO driver" +- depends on GPIOLIB && ACPI ++ depends on GPIOLIB && ACPI && PM_SLEEP + select GPIOLIB_IRQCHIP + help + Some peripherals on Bay Trail and Cherry Trail platforms signal a +diff --git a/drivers/platform/x86/dell/dell-smbios-wmi.c b/drivers/platform/x86/dell/dell-smbios-wmi.c +index 27a298b7c541b..c97bd4a452422 100644 +--- a/drivers/platform/x86/dell/dell-smbios-wmi.c ++++ b/drivers/platform/x86/dell/dell-smbios-wmi.c +@@ -271,7 +271,8 @@ int init_dell_smbios_wmi(void) + + void exit_dell_smbios_wmi(void) + { +- wmi_driver_unregister(&dell_smbios_wmi_driver); ++ if (wmi_supported) ++ wmi_driver_unregister(&dell_smbios_wmi_driver); + } + + MODULE_DEVICE_TABLE(wmi, dell_smbios_wmi_id_table); +diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c +index 6cb5ad4be231d..3878172909219 100644 +--- a/drivers/platform/x86/ideapad-laptop.c ++++ b/drivers/platform/x86/ideapad-laptop.c +@@ -57,8 +57,8 @@ enum { + }; + + enum { +- SMBC_CONSERVATION_ON = 3, +- SMBC_CONSERVATION_OFF = 5, ++ SBMC_CONSERVATION_ON = 3, ++ SBMC_CONSERVATION_OFF = 5, + }; + + enum { +@@ -182,9 +182,9 @@ static int eval_gbmd(acpi_handle handle, unsigned long *res) + return eval_int(handle, "GBMD", res); + } + +-static int exec_smbc(acpi_handle handle, unsigned long arg) ++static int exec_sbmc(acpi_handle handle, unsigned long arg) + { +- return exec_simple_method(handle, "SMBC", arg); ++ return exec_simple_method(handle, "SBMC", arg); + } + + static int eval_hals(acpi_handle handle, unsigned long 
*res) +@@ -477,7 +477,7 @@ static ssize_t conservation_mode_store(struct device *dev, + if (err) + return err; + +- err = exec_smbc(priv->adev->handle, state ? SMBC_CONSERVATION_ON : SMBC_CONSERVATION_OFF); ++ err = exec_sbmc(priv->adev->handle, state ? SBMC_CONSERVATION_ON : SBMC_CONSERVATION_OFF); + if (err) + return err; + +@@ -809,6 +809,7 @@ static int dytc_profile_set(struct platform_profile_handler *pprof, + { + struct ideapad_dytc_priv *dytc = container_of(pprof, struct ideapad_dytc_priv, pprof); + struct ideapad_private *priv = dytc->priv; ++ unsigned long output; + int err; + + err = mutex_lock_interruptible(&dytc->mutex); +@@ -829,7 +830,7 @@ static int dytc_profile_set(struct platform_profile_handler *pprof, + + /* Determine if we are in CQL mode. This alters the commands we do */ + err = dytc_cql_command(priv, DYTC_SET_COMMAND(DYTC_FUNCTION_MMC, perfmode, 1), +- NULL); ++ &output); + if (err) + goto unlock; + } +diff --git a/drivers/platform/x86/intel_int0002_vgpio.c b/drivers/platform/x86/intel_int0002_vgpio.c +index 289c6655d425d..569342aa8926e 100644 +--- a/drivers/platform/x86/intel_int0002_vgpio.c ++++ b/drivers/platform/x86/intel_int0002_vgpio.c +@@ -51,6 +51,12 @@ + #define GPE0A_STS_PORT 0x420 + #define GPE0A_EN_PORT 0x428 + ++struct int0002_data { ++ struct gpio_chip chip; ++ int parent_irq; ++ int wake_enable_count; ++}; ++ + /* + * As this is not a real GPIO at all, but just a hack to model an event in + * ACPI the get / set functions are dummy functions. 
+@@ -98,14 +104,16 @@ static void int0002_irq_mask(struct irq_data *data) + static int int0002_irq_set_wake(struct irq_data *data, unsigned int on) + { + struct gpio_chip *chip = irq_data_get_irq_chip_data(data); +- struct platform_device *pdev = to_platform_device(chip->parent); +- int irq = platform_get_irq(pdev, 0); ++ struct int0002_data *int0002 = container_of(chip, struct int0002_data, chip); + +- /* Propagate to parent irq */ ++ /* ++ * Applying of the wakeup flag to our parent IRQ is delayed till system ++ * suspend, because we only want to do this when using s2idle. ++ */ + if (on) +- enable_irq_wake(irq); ++ int0002->wake_enable_count++; + else +- disable_irq_wake(irq); ++ int0002->wake_enable_count--; + + return 0; + } +@@ -135,7 +143,7 @@ static bool int0002_check_wake(void *data) + return (gpe_sts_reg & GPE0A_PME_B0_STS_BIT); + } + +-static struct irq_chip int0002_byt_irqchip = { ++static struct irq_chip int0002_irqchip = { + .name = DRV_NAME, + .irq_ack = int0002_irq_ack, + .irq_mask = int0002_irq_mask, +@@ -143,21 +151,9 @@ static struct irq_chip int0002_byt_irqchip = { + .irq_set_wake = int0002_irq_set_wake, + }; + +-static struct irq_chip int0002_cht_irqchip = { +- .name = DRV_NAME, +- .irq_ack = int0002_irq_ack, +- .irq_mask = int0002_irq_mask, +- .irq_unmask = int0002_irq_unmask, +- /* +- * No set_wake, on CHT the IRQ is typically shared with the ACPI SCI +- * and we don't want to mess with the ACPI SCI irq settings. 
+- */ +- .flags = IRQCHIP_SKIP_SET_WAKE, +-}; +- + static const struct x86_cpu_id int0002_cpu_ids[] = { +- X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT, &int0002_byt_irqchip), +- X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT, &int0002_cht_irqchip), ++ X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT, NULL), ++ X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT, NULL), + {} + }; + +@@ -172,8 +168,9 @@ static int int0002_probe(struct platform_device *pdev) + { + struct device *dev = &pdev->dev; + const struct x86_cpu_id *cpu_id; +- struct gpio_chip *chip; ++ struct int0002_data *int0002; + struct gpio_irq_chip *girq; ++ struct gpio_chip *chip; + int irq, ret; + + /* Menlow has a different INT0002 device? <sigh> */ +@@ -185,10 +182,13 @@ static int int0002_probe(struct platform_device *pdev) + if (irq < 0) + return irq; + +- chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL); +- if (!chip) ++ int0002 = devm_kzalloc(dev, sizeof(*int0002), GFP_KERNEL); ++ if (!int0002) + return -ENOMEM; + ++ int0002->parent_irq = irq; ++ ++ chip = &int0002->chip; + chip->label = DRV_NAME; + chip->parent = dev; + chip->owner = THIS_MODULE; +@@ -214,7 +214,7 @@ static int int0002_probe(struct platform_device *pdev) + } + + girq = &chip->irq; +- girq->chip = (struct irq_chip *)cpu_id->driver_data; ++ girq->chip = &int0002_irqchip; + /* This let us handle the parent IRQ in the driver */ + girq->parent_handler = NULL; + girq->num_parents = 0; +@@ -230,6 +230,7 @@ static int int0002_probe(struct platform_device *pdev) + + acpi_register_wakeup_handler(irq, int0002_check_wake, NULL); + device_init_wakeup(dev, true); ++ dev_set_drvdata(dev, int0002); + return 0; + } + +@@ -240,6 +241,36 @@ static int int0002_remove(struct platform_device *pdev) + return 0; + } + ++static int int0002_suspend(struct device *dev) ++{ ++ struct int0002_data *int0002 = dev_get_drvdata(dev); ++ ++ /* ++ * The INT0002 parent IRQ is often shared with the ACPI GPE IRQ, don't ++ * muck with it when firmware based suspend is used, otherwise we 
may ++ * cause spurious wakeups from firmware managed suspend. ++ */ ++ if (!pm_suspend_via_firmware() && int0002->wake_enable_count) ++ enable_irq_wake(int0002->parent_irq); ++ ++ return 0; ++} ++ ++static int int0002_resume(struct device *dev) ++{ ++ struct int0002_data *int0002 = dev_get_drvdata(dev); ++ ++ if (!pm_suspend_via_firmware() && int0002->wake_enable_count) ++ disable_irq_wake(int0002->parent_irq); ++ ++ return 0; ++} ++ ++static const struct dev_pm_ops int0002_pm_ops = { ++ .suspend = int0002_suspend, ++ .resume = int0002_resume, ++}; ++ + static const struct acpi_device_id int0002_acpi_ids[] = { + { "INT0002", 0 }, + { }, +@@ -250,6 +281,7 @@ static struct platform_driver int0002_driver = { + .driver = { + .name = DRV_NAME, + .acpi_match_table = int0002_acpi_ids, ++ .pm = &int0002_pm_ops, + }, + .probe = int0002_probe, + .remove = int0002_remove, +diff --git a/drivers/rapidio/rio_cm.c b/drivers/rapidio/rio_cm.c +index 50ec53d67a4c0..db4c265287ae6 100644 +--- a/drivers/rapidio/rio_cm.c ++++ b/drivers/rapidio/rio_cm.c +@@ -2127,6 +2127,14 @@ static int riocm_add_mport(struct device *dev, + return -ENODEV; + } + ++ cm->rx_wq = create_workqueue(DRV_NAME "/rxq"); ++ if (!cm->rx_wq) { ++ rio_release_inb_mbox(mport, cmbox); ++ rio_release_outb_mbox(mport, cmbox); ++ kfree(cm); ++ return -ENOMEM; ++ } ++ + /* + * Allocate and register inbound messaging buffers to be ready + * to receive channel and system management requests +@@ -2137,15 +2145,6 @@ static int riocm_add_mport(struct device *dev, + cm->rx_slots = RIOCM_RX_RING_SIZE; + mutex_init(&cm->rx_lock); + riocm_rx_fill(cm, RIOCM_RX_RING_SIZE); +- cm->rx_wq = create_workqueue(DRV_NAME "/rxq"); +- if (!cm->rx_wq) { +- riocm_error("failed to allocate IBMBOX_%d on %s", +- cmbox, mport->name); +- rio_release_outb_mbox(mport, cmbox); +- kfree(cm); +- return -ENOMEM; +- } +- + INIT_WORK(&cm->rx_work, rio_ibmsg_handler); + + cm->tx_slot = 0; +diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c 
+index aef6c1ee8bb0e..82becae142299 100644 +--- a/drivers/rtc/rtc-pcf85063.c ++++ b/drivers/rtc/rtc-pcf85063.c +@@ -478,6 +478,7 @@ static struct clk *pcf85063_clkout_register_clk(struct pcf85063 *pcf85063) + { + struct clk *clk; + struct clk_init_data init; ++ struct device_node *node = pcf85063->rtc->dev.parent->of_node; + + init.name = "pcf85063-clkout"; + init.ops = &pcf85063_clkout_ops; +@@ -487,15 +488,13 @@ static struct clk *pcf85063_clkout_register_clk(struct pcf85063 *pcf85063) + pcf85063->clkout_hw.init = &init; + + /* optional override of the clockname */ +- of_property_read_string(pcf85063->rtc->dev.of_node, +- "clock-output-names", &init.name); ++ of_property_read_string(node, "clock-output-names", &init.name); + + /* register the clock */ + clk = devm_clk_register(&pcf85063->rtc->dev, &pcf85063->clkout_hw); + + if (!IS_ERR(clk)) +- of_clk_add_provider(pcf85063->rtc->dev.of_node, +- of_clk_src_simple_get, clk); ++ of_clk_add_provider(node, of_clk_src_simple_get, clk); + + return clk; + } +diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c +index cec27f2ef70d7..e5076f09d5ed4 100644 +--- a/drivers/scsi/qedf/qedf_main.c ++++ b/drivers/scsi/qedf/qedf_main.c +@@ -536,7 +536,9 @@ static void qedf_update_link_speed(struct qedf_ctx *qedf, + if (linkmode_intersects(link->supported_caps, sup_caps)) + lport->link_supported_speeds |= FC_PORTSPEED_20GBIT; + +- fc_host_supported_speeds(lport->host) = lport->link_supported_speeds; ++ if (lport->host && lport->host->shost_data) ++ fc_host_supported_speeds(lport->host) = ++ lport->link_supported_speeds; + } + + static void qedf_bw_update(void *dev) +diff --git a/drivers/scsi/qla2xxx/qla_nx.c b/drivers/scsi/qla2xxx/qla_nx.c +index 0677295957bc5..615e44af1ca60 100644 +--- a/drivers/scsi/qla2xxx/qla_nx.c ++++ b/drivers/scsi/qla2xxx/qla_nx.c +@@ -1063,7 +1063,8 @@ qla82xx_write_flash_dword(struct qla_hw_data *ha, uint32_t flashaddr, + return ret; + } + +- if (qla82xx_flash_set_write_enable(ha)) ++ 
ret = qla82xx_flash_set_write_enable(ha); ++ if (ret < 0) + goto done_write; + + qla82xx_wr_32(ha, QLA82XX_ROMUSB_ROM_WDATA, data); +diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c +index 0aa58131e7915..d0626773eb386 100644 +--- a/drivers/scsi/ufs/ufs-hisi.c ++++ b/drivers/scsi/ufs/ufs-hisi.c +@@ -467,21 +467,24 @@ static int ufs_hisi_init_common(struct ufs_hba *hba) + host->hba = hba; + ufshcd_set_variant(hba, host); + +- host->rst = devm_reset_control_get(dev, "rst"); ++ host->rst = devm_reset_control_get(dev, "rst"); + if (IS_ERR(host->rst)) { + dev_err(dev, "%s: failed to get reset control\n", __func__); +- return PTR_ERR(host->rst); ++ err = PTR_ERR(host->rst); ++ goto error; + } + + ufs_hisi_set_pm_lvl(hba); + + err = ufs_hisi_get_resource(host); +- if (err) { +- ufshcd_set_variant(hba, NULL); +- return err; +- } ++ if (err) ++ goto error; + + return 0; ++ ++error: ++ ufshcd_set_variant(hba, NULL); ++ return err; + } + + static int ufs_hi3660_init(struct ufs_hba *hba) +diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c +index 0c71a159d08f1..e1e510882ff42 100644 +--- a/drivers/scsi/ufs/ufshcd.c ++++ b/drivers/scsi/ufs/ufshcd.c +@@ -2849,7 +2849,7 @@ static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba, + * ufshcd_exec_dev_cmd - API for sending device management requests + * @hba: UFS hba + * @cmd_type: specifies the type (NOP, Query...) +- * @timeout: time in seconds ++ * @timeout: timeout in milliseconds + * + * NOTE: Since there is only one available tag for device management commands, + * it is expected you hold the hba->dev_cmd.lock mutex. +@@ -2879,6 +2879,9 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba, + } + tag = req->tag; + WARN_ON_ONCE(!ufshcd_valid_tag(hba, tag)); ++ /* Set the timeout such that the SCSI error handler is not activated. 
*/ ++ req->timeout = msecs_to_jiffies(2 * timeout); ++ blk_mq_start_request(req); + + init_completion(&wait); + lrbp = &hba->lrb[tag]; +diff --git a/drivers/tee/amdtee/amdtee_private.h b/drivers/tee/amdtee/amdtee_private.h +index 337c8d82f74eb..6d0f7062bb870 100644 +--- a/drivers/tee/amdtee/amdtee_private.h ++++ b/drivers/tee/amdtee/amdtee_private.h +@@ -21,6 +21,7 @@ + #define TEEC_SUCCESS 0x00000000 + #define TEEC_ERROR_GENERIC 0xFFFF0000 + #define TEEC_ERROR_BAD_PARAMETERS 0xFFFF0006 ++#define TEEC_ERROR_OUT_OF_MEMORY 0xFFFF000C + #define TEEC_ERROR_COMMUNICATION 0xFFFF000E + + #define TEEC_ORIGIN_COMMS 0x00000002 +@@ -93,6 +94,18 @@ struct amdtee_shm_data { + u32 buf_id; + }; + ++/** ++ * struct amdtee_ta_data - Keeps track of all TAs loaded in AMD Secure ++ * Processor ++ * @ta_handle: Handle to TA loaded in TEE ++ * @refcount: Reference count for the loaded TA ++ */ ++struct amdtee_ta_data { ++ struct list_head list_node; ++ u32 ta_handle; ++ u32 refcount; ++}; ++ + #define LOWER_TWO_BYTE_MASK 0x0000FFFF + + /** +diff --git a/drivers/tee/amdtee/call.c b/drivers/tee/amdtee/call.c +index 096dd4d92d39c..07f36ac834c88 100644 +--- a/drivers/tee/amdtee/call.c ++++ b/drivers/tee/amdtee/call.c +@@ -121,15 +121,69 @@ static int amd_params_to_tee_params(struct tee_param *tee, u32 count, + return ret; + } + ++static DEFINE_MUTEX(ta_refcount_mutex); ++static struct list_head ta_list = LIST_HEAD_INIT(ta_list); ++ ++static u32 get_ta_refcount(u32 ta_handle) ++{ ++ struct amdtee_ta_data *ta_data; ++ u32 count = 0; ++ ++ /* Caller must hold a mutex */ ++ list_for_each_entry(ta_data, &ta_list, list_node) ++ if (ta_data->ta_handle == ta_handle) ++ return ++ta_data->refcount; ++ ++ ta_data = kzalloc(sizeof(*ta_data), GFP_KERNEL); ++ if (ta_data) { ++ ta_data->ta_handle = ta_handle; ++ ta_data->refcount = 1; ++ count = ta_data->refcount; ++ list_add(&ta_data->list_node, &ta_list); ++ } ++ ++ return count; ++} ++ ++static u32 put_ta_refcount(u32 ta_handle) ++{ ++ struct 
amdtee_ta_data *ta_data; ++ u32 count = 0; ++ ++ /* Caller must hold a mutex */ ++ list_for_each_entry(ta_data, &ta_list, list_node) ++ if (ta_data->ta_handle == ta_handle) { ++ count = --ta_data->refcount; ++ if (count == 0) { ++ list_del(&ta_data->list_node); ++ kfree(ta_data); ++ break; ++ } ++ } ++ ++ return count; ++} ++ + int handle_unload_ta(u32 ta_handle) + { + struct tee_cmd_unload_ta cmd = {0}; +- u32 status; ++ u32 status, count; + int ret; + + if (!ta_handle) + return -EINVAL; + ++ mutex_lock(&ta_refcount_mutex); ++ ++ count = put_ta_refcount(ta_handle); ++ ++ if (count) { ++ pr_debug("unload ta: not unloading %u count %u\n", ++ ta_handle, count); ++ ret = -EBUSY; ++ goto unlock; ++ } ++ + cmd.ta_handle = ta_handle; + + ret = psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA, (void *)&cmd, +@@ -137,8 +191,12 @@ int handle_unload_ta(u32 ta_handle) + if (!ret && status != 0) { + pr_err("unload ta: status = 0x%x\n", status); + ret = -EBUSY; ++ } else { ++ pr_debug("unloaded ta handle %u\n", ta_handle); + } + ++unlock: ++ mutex_unlock(&ta_refcount_mutex); + return ret; + } + +@@ -340,7 +398,8 @@ int handle_open_session(struct tee_ioctl_open_session_arg *arg, u32 *info, + + int handle_load_ta(void *data, u32 size, struct tee_ioctl_open_session_arg *arg) + { +- struct tee_cmd_load_ta cmd = {0}; ++ struct tee_cmd_unload_ta unload_cmd = {}; ++ struct tee_cmd_load_ta load_cmd = {}; + phys_addr_t blob; + int ret; + +@@ -353,21 +412,36 @@ int handle_load_ta(void *data, u32 size, struct tee_ioctl_open_session_arg *arg) + return -EINVAL; + } + +- cmd.hi_addr = upper_32_bits(blob); +- cmd.low_addr = lower_32_bits(blob); +- cmd.size = size; ++ load_cmd.hi_addr = upper_32_bits(blob); ++ load_cmd.low_addr = lower_32_bits(blob); ++ load_cmd.size = size; + +- ret = psp_tee_process_cmd(TEE_CMD_ID_LOAD_TA, (void *)&cmd, +- sizeof(cmd), &arg->ret); ++ mutex_lock(&ta_refcount_mutex); ++ ++ ret = psp_tee_process_cmd(TEE_CMD_ID_LOAD_TA, (void *)&load_cmd, ++ sizeof(load_cmd), 
&arg->ret); + if (ret) { + arg->ret_origin = TEEC_ORIGIN_COMMS; + arg->ret = TEEC_ERROR_COMMUNICATION; +- } else { +- set_session_id(cmd.ta_handle, 0, &arg->session); ++ } else if (arg->ret == TEEC_SUCCESS) { ++ ret = get_ta_refcount(load_cmd.ta_handle); ++ if (!ret) { ++ arg->ret_origin = TEEC_ORIGIN_COMMS; ++ arg->ret = TEEC_ERROR_OUT_OF_MEMORY; ++ ++ /* Unload the TA on error */ ++ unload_cmd.ta_handle = load_cmd.ta_handle; ++ psp_tee_process_cmd(TEE_CMD_ID_UNLOAD_TA, ++ (void *)&unload_cmd, ++ sizeof(unload_cmd), &ret); ++ } else { ++ set_session_id(load_cmd.ta_handle, 0, &arg->session); ++ } + } ++ mutex_unlock(&ta_refcount_mutex); + + pr_debug("load TA: TA handle = 0x%x, RO = 0x%x, ret = 0x%x\n", +- cmd.ta_handle, arg->ret_origin, arg->ret); ++ load_cmd.ta_handle, arg->ret_origin, arg->ret); + + return 0; + } +diff --git a/drivers/tee/amdtee/core.c b/drivers/tee/amdtee/core.c +index 8a6a8f30bb427..da6b88e80dc07 100644 +--- a/drivers/tee/amdtee/core.c ++++ b/drivers/tee/amdtee/core.c +@@ -59,10 +59,9 @@ static void release_session(struct amdtee_session *sess) + continue; + + handle_close_session(sess->ta_handle, sess->session_info[i]); ++ handle_unload_ta(sess->ta_handle); + } + +- /* Unload Trusted Application once all sessions are closed */ +- handle_unload_ta(sess->ta_handle); + kfree(sess); + } + +@@ -224,8 +223,6 @@ static void destroy_session(struct kref *ref) + struct amdtee_session *sess = container_of(ref, struct amdtee_session, + refcount); + +- /* Unload the TA from TEE */ +- handle_unload_ta(sess->ta_handle); + mutex_lock(&session_list_mutex); + list_del(&sess->list_node); + mutex_unlock(&session_list_mutex); +@@ -238,7 +235,7 @@ int amdtee_open_session(struct tee_context *ctx, + { + struct amdtee_context_data *ctxdata = ctx->data; + struct amdtee_session *sess = NULL; +- u32 session_info; ++ u32 session_info, ta_handle; + size_t ta_size; + int rc, i; + void *ta; +@@ -259,11 +256,14 @@ int amdtee_open_session(struct tee_context *ctx, + if (arg->ret 
!= TEEC_SUCCESS) + goto out; + ++ ta_handle = get_ta_handle(arg->session); ++ + mutex_lock(&session_list_mutex); + sess = alloc_session(ctxdata, arg->session); + mutex_unlock(&session_list_mutex); + + if (!sess) { ++ handle_unload_ta(ta_handle); + rc = -ENOMEM; + goto out; + } +@@ -277,6 +277,7 @@ int amdtee_open_session(struct tee_context *ctx, + + if (i >= TEE_NUM_SESSIONS) { + pr_err("reached maximum session count %d\n", TEE_NUM_SESSIONS); ++ handle_unload_ta(ta_handle); + kref_put(&sess->refcount, destroy_session); + rc = -ENOMEM; + goto out; +@@ -289,12 +290,13 @@ int amdtee_open_session(struct tee_context *ctx, + spin_lock(&sess->lock); + clear_bit(i, sess->sess_mask); + spin_unlock(&sess->lock); ++ handle_unload_ta(ta_handle); + kref_put(&sess->refcount, destroy_session); + goto out; + } + + sess->session_info[i] = session_info; +- set_session_id(sess->ta_handle, i, &arg->session); ++ set_session_id(ta_handle, i, &arg->session); + out: + free_pages((u64)ta, get_order(ta_size)); + return rc; +@@ -329,6 +331,7 @@ int amdtee_close_session(struct tee_context *ctx, u32 session) + + /* Close the session */ + handle_close_session(ta_handle, session_info); ++ handle_unload_ta(ta_handle); + + kref_put(&sess->refcount, destroy_session); + +diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c +index e0c00a1b07639..51b0ecabf2ec9 100644 +--- a/drivers/tty/serial/mvebu-uart.c ++++ b/drivers/tty/serial/mvebu-uart.c +@@ -818,9 +818,6 @@ static int mvebu_uart_probe(struct platform_device *pdev) + return -EINVAL; + } + +- if (!match) +- return -ENODEV; +- + /* Assume that all UART ports have a DT alias or none has */ + id = of_alias_get_id(pdev->dev.of_node, "serial"); + if (!pdev->dev.of_node || id < 0) +diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c +index 0cc360da5426a..53cbf2c3f0330 100644 +--- a/drivers/tty/vt/vt.c ++++ b/drivers/tty/vt/vt.c +@@ -1171,7 +1171,7 @@ static inline int resize_screen(struct vc_data *vc, int width, int height, 
+ /* Resizes the resolution of the display adapater */ + int err = 0; + +- if (vc->vc_mode != KD_GRAPHICS && vc->vc_sw->con_resize) ++ if (vc->vc_sw->con_resize) + err = vc->vc_sw->con_resize(vc, width, height, user); + + return err; +diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c +index 89aeaf3c1bca6..0e0cd9e9e589e 100644 +--- a/drivers/tty/vt/vt_ioctl.c ++++ b/drivers/tty/vt/vt_ioctl.c +@@ -671,21 +671,58 @@ static int vt_resizex(struct vc_data *vc, struct vt_consize __user *cs) + if (copy_from_user(&v, cs, sizeof(struct vt_consize))) + return -EFAULT; + +- if (v.v_vlin) +- pr_info_once("\"struct vt_consize\"->v_vlin is ignored. Please report if you need this.\n"); +- if (v.v_clin) +- pr_info_once("\"struct vt_consize\"->v_clin is ignored. Please report if you need this.\n"); ++ /* FIXME: Should check the copies properly */ ++ if (!v.v_vlin) ++ v.v_vlin = vc->vc_scan_lines; ++ ++ if (v.v_clin) { ++ int rows = v.v_vlin / v.v_clin; ++ if (v.v_rows != rows) { ++ if (v.v_rows) /* Parameters don't add up */ ++ return -EINVAL; ++ v.v_rows = rows; ++ } ++ } ++ ++ if (v.v_vcol && v.v_ccol) { ++ int cols = v.v_vcol / v.v_ccol; ++ if (v.v_cols != cols) { ++ if (v.v_cols) ++ return -EINVAL; ++ v.v_cols = cols; ++ } ++ } ++ ++ if (v.v_clin > 32) ++ return -EINVAL; + +- console_lock(); + for (i = 0; i < MAX_NR_CONSOLES; i++) { +- vc = vc_cons[i].d; ++ struct vc_data *vcp; + +- if (vc) { +- vc->vc_resize_user = 1; +- vc_resize(vc, v.v_cols, v.v_rows); ++ if (!vc_cons[i].d) ++ continue; ++ console_lock(); ++ vcp = vc_cons[i].d; ++ if (vcp) { ++ int ret; ++ int save_scan_lines = vcp->vc_scan_lines; ++ int save_cell_height = vcp->vc_cell_height; ++ ++ if (v.v_vlin) ++ vcp->vc_scan_lines = v.v_vlin; ++ if (v.v_clin) ++ vcp->vc_cell_height = v.v_clin; ++ vcp->vc_resize_user = 1; ++ ret = vc_resize(vcp, v.v_cols, v.v_rows); ++ if (ret) { ++ vcp->vc_scan_lines = save_scan_lines; ++ vcp->vc_cell_height = save_cell_height; ++ console_unlock(); ++ return ret; ++ } + 
} ++ console_unlock(); + } +- console_unlock(); + + return 0; + } +diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c +index 0330ba99730e2..652fe25475878 100644 +--- a/drivers/uio/uio_hv_generic.c ++++ b/drivers/uio/uio_hv_generic.c +@@ -291,13 +291,15 @@ hv_uio_probe(struct hv_device *dev, + pdata->recv_buf = vzalloc(RECV_BUFFER_SIZE); + if (pdata->recv_buf == NULL) { + ret = -ENOMEM; +- goto fail_close; ++ goto fail_free_ring; + } + + ret = vmbus_establish_gpadl(channel, pdata->recv_buf, + RECV_BUFFER_SIZE, &pdata->recv_gpadl); +- if (ret) ++ if (ret) { ++ vfree(pdata->recv_buf); + goto fail_close; ++ } + + /* put Global Physical Address Label in name */ + snprintf(pdata->recv_name, sizeof(pdata->recv_name), +@@ -316,8 +318,10 @@ hv_uio_probe(struct hv_device *dev, + + ret = vmbus_establish_gpadl(channel, pdata->send_buf, + SEND_BUFFER_SIZE, &pdata->send_gpadl); +- if (ret) ++ if (ret) { ++ vfree(pdata->send_buf); + goto fail_close; ++ } + + snprintf(pdata->send_name, sizeof(pdata->send_name), + "send:%u", pdata->send_gpadl); +@@ -347,6 +351,8 @@ hv_uio_probe(struct hv_device *dev, + + fail_close: + hv_uio_cleanup(dev, pdata); ++fail_free_ring: ++ vmbus_free_ring(dev->channel); + + return ret; + } +diff --git a/drivers/uio/uio_pci_generic.c b/drivers/uio/uio_pci_generic.c +index c7d681fef198d..3bb0b00754679 100644 +--- a/drivers/uio/uio_pci_generic.c ++++ b/drivers/uio/uio_pci_generic.c +@@ -82,7 +82,7 @@ static int probe(struct pci_dev *pdev, + } + + if (pdev->irq && !pci_intx_mask_supported(pdev)) +- return -ENOMEM; ++ return -ENODEV; + + gdev = devm_kzalloc(&pdev->dev, sizeof(struct uio_pci_generic_dev), GFP_KERNEL); + if (!gdev) +diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c +index 962c12be97741..631eb918f8e14 100644 +--- a/drivers/video/console/vgacon.c ++++ b/drivers/video/console/vgacon.c +@@ -383,7 +383,7 @@ static void vgacon_init(struct vc_data *c, int init) + vc_resize(c, vga_video_num_columns, 
vga_video_num_lines); + + c->vc_scan_lines = vga_scan_lines; +- c->vc_font.height = vga_video_font_height; ++ c->vc_font.height = c->vc_cell_height = vga_video_font_height; + c->vc_complement_mask = 0x7700; + if (vga_512_chars) + c->vc_hi_font_mask = 0x0800; +@@ -518,32 +518,32 @@ static void vgacon_cursor(struct vc_data *c, int mode) + switch (CUR_SIZE(c->vc_cursor_type)) { + case CUR_UNDERLINE: + vgacon_set_cursor_size(c->state.x, +- c->vc_font.height - +- (c->vc_font.height < ++ c->vc_cell_height - ++ (c->vc_cell_height < + 10 ? 2 : 3), +- c->vc_font.height - +- (c->vc_font.height < ++ c->vc_cell_height - ++ (c->vc_cell_height < + 10 ? 1 : 2)); + break; + case CUR_TWO_THIRDS: + vgacon_set_cursor_size(c->state.x, +- c->vc_font.height / 3, +- c->vc_font.height - +- (c->vc_font.height < ++ c->vc_cell_height / 3, ++ c->vc_cell_height - ++ (c->vc_cell_height < + 10 ? 1 : 2)); + break; + case CUR_LOWER_THIRD: + vgacon_set_cursor_size(c->state.x, +- (c->vc_font.height * 2) / 3, +- c->vc_font.height - +- (c->vc_font.height < ++ (c->vc_cell_height * 2) / 3, ++ c->vc_cell_height - ++ (c->vc_cell_height < + 10 ? 1 : 2)); + break; + case CUR_LOWER_HALF: + vgacon_set_cursor_size(c->state.x, +- c->vc_font.height / 2, +- c->vc_font.height - +- (c->vc_font.height < ++ c->vc_cell_height / 2, ++ c->vc_cell_height - ++ (c->vc_cell_height < + 10 ? 
1 : 2)); + break; + case CUR_NONE: +@@ -554,7 +554,7 @@ static void vgacon_cursor(struct vc_data *c, int mode) + break; + default: + vgacon_set_cursor_size(c->state.x, 1, +- c->vc_font.height); ++ c->vc_cell_height); + break; + } + break; +@@ -565,13 +565,13 @@ static int vgacon_doresize(struct vc_data *c, + unsigned int width, unsigned int height) + { + unsigned long flags; +- unsigned int scanlines = height * c->vc_font.height; ++ unsigned int scanlines = height * c->vc_cell_height; + u8 scanlines_lo = 0, r7 = 0, vsync_end = 0, mode, max_scan; + + raw_spin_lock_irqsave(&vga_lock, flags); + + vgacon_xres = width * VGA_FONTWIDTH; +- vgacon_yres = height * c->vc_font.height; ++ vgacon_yres = height * c->vc_cell_height; + if (vga_video_type >= VIDEO_TYPE_VGAC) { + outb_p(VGA_CRTC_MAX_SCAN, vga_video_port_reg); + max_scan = inb_p(vga_video_port_val); +@@ -626,9 +626,9 @@ static int vgacon_doresize(struct vc_data *c, + static int vgacon_switch(struct vc_data *c) + { + int x = c->vc_cols * VGA_FONTWIDTH; +- int y = c->vc_rows * c->vc_font.height; ++ int y = c->vc_rows * c->vc_cell_height; + int rows = screen_info.orig_video_lines * vga_default_font_height/ +- c->vc_font.height; ++ c->vc_cell_height; + /* + * We need to save screen size here as it's the only way + * we can spot the screen has been resized and we need to +@@ -1041,7 +1041,7 @@ static int vgacon_adjust_height(struct vc_data *vc, unsigned fontheight) + cursor_size_lastto = 0; + c->vc_sw->con_cursor(c, CM_DRAW); + } +- c->vc_font.height = fontheight; ++ c->vc_font.height = c->vc_cell_height = fontheight; + vc_resize(c, 0, rows); /* Adjust console size */ + } + } +@@ -1089,12 +1089,20 @@ static int vgacon_resize(struct vc_data *c, unsigned int width, + if ((width << 1) * height > vga_vram_size) + return -EINVAL; + ++ if (user) { ++ /* ++ * Ho ho! Someone (svgatextmode, eh?) may have reprogrammed ++ * the video mode! Set the new defaults then and go away. 
++ */ ++ screen_info.orig_video_cols = width; ++ screen_info.orig_video_lines = height; ++ vga_default_font_height = c->vc_cell_height; ++ return 0; ++ } + if (width % 2 || width > screen_info.orig_video_cols || + height > (screen_info.orig_video_lines * vga_default_font_height)/ +- c->vc_font.height) +- /* let svgatextmode tinker with video timings and +- return success */ +- return (user) ? 0 : -EINVAL; ++ c->vc_cell_height) ++ return -EINVAL; + + if (con_is_visible(c) && !vga_is_gfx) /* who knows */ + vgacon_doresize(c, width, height); +diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c +index 3406067985b1f..22bb3892f6bd1 100644 +--- a/drivers/video/fbdev/core/fbcon.c ++++ b/drivers/video/fbdev/core/fbcon.c +@@ -2019,7 +2019,7 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width, + return -EINVAL; + + pr_debug("resize now %ix%i\n", var.xres, var.yres); +- if (con_is_visible(vc)) { ++ if (con_is_visible(vc) && vc->vc_mode == KD_TEXT) { + var.activate = FB_ACTIVATE_NOW | + FB_ACTIVATE_FORCE; + fb_set_var(info, &var); +diff --git a/drivers/video/fbdev/hgafb.c b/drivers/video/fbdev/hgafb.c +index 8bbac7182ad32..bd3d07aa4f0ec 100644 +--- a/drivers/video/fbdev/hgafb.c ++++ b/drivers/video/fbdev/hgafb.c +@@ -286,7 +286,7 @@ static int hga_card_detect(void) + + hga_vram = ioremap(0xb0000, hga_vram_len); + if (!hga_vram) +- goto error; ++ return -ENOMEM; + + if (request_region(0x3b0, 12, "hgafb")) + release_io_ports = 1; +@@ -346,13 +346,18 @@ static int hga_card_detect(void) + hga_type_name = "Hercules"; + break; + } +- return 1; ++ return 0; + error: + if (release_io_ports) + release_region(0x3b0, 12); + if (release_io_port) + release_region(0x3bf, 1); +- return 0; ++ ++ iounmap(hga_vram); ++ ++ pr_err("hgafb: HGA card not detected.\n"); ++ ++ return -EINVAL; + } + + /** +@@ -550,13 +555,11 @@ static const struct fb_ops hgafb_ops = { + static int hgafb_probe(struct platform_device *pdev) + { + struct fb_info *info; ++ int ret; 
+ +- if (! hga_card_detect()) { +- printk(KERN_INFO "hgafb: HGA card not detected.\n"); +- if (hga_vram) +- iounmap(hga_vram); +- return -EINVAL; +- } ++ ret = hga_card_detect(); ++ if (ret) ++ return ret; + + printk(KERN_INFO "hgafb: %s with %ldK of memory detected.\n", + hga_type_name, hga_vram_len/1024); +diff --git a/drivers/video/fbdev/imsttfb.c b/drivers/video/fbdev/imsttfb.c +index 3ac053b884958..e04411701ec85 100644 +--- a/drivers/video/fbdev/imsttfb.c ++++ b/drivers/video/fbdev/imsttfb.c +@@ -1512,11 +1512,6 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + info->fix.smem_start = addr; + info->screen_base = (__u8 *)ioremap(addr, par->ramdac == IBM ? + 0x400000 : 0x800000); +- if (!info->screen_base) { +- release_mem_region(addr, size); +- framebuffer_release(info); +- return -ENOMEM; +- } + info->fix.mmio_start = addr + 0x800000; + par->dc_regs = ioremap(addr + 0x800000, 0x1000); + par->cmap_regs_phys = addr + 0x840000; +diff --git a/drivers/xen/xen-pciback/vpci.c b/drivers/xen/xen-pciback/vpci.c +index 5447b5ab7c766..1221cfd914cb0 100644 +--- a/drivers/xen/xen-pciback/vpci.c ++++ b/drivers/xen/xen-pciback/vpci.c +@@ -70,7 +70,7 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev, + struct pci_dev *dev, int devid, + publish_pci_dev_cb publish_cb) + { +- int err = 0, slot, func = -1; ++ int err = 0, slot, func = PCI_FUNC(dev->devfn); + struct pci_dev_entry *t, *dev_entry; + struct vpci_dev_data *vpci_dev = pdev->pci_dev_data; + +@@ -95,22 +95,25 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev, + + /* + * Keep multi-function devices together on the virtual PCI bus, except +- * virtual functions. ++ * that we want to keep virtual functions at func 0 on their own. They ++ * aren't multi-function devices and hence their presence at func 0 ++ * may cause guests to not scan the other functions. 
+ */ +- if (!dev->is_virtfn) { ++ if (!dev->is_virtfn || func) { + for (slot = 0; slot < PCI_SLOT_MAX; slot++) { + if (list_empty(&vpci_dev->dev_list[slot])) + continue; + + t = list_entry(list_first(&vpci_dev->dev_list[slot]), + struct pci_dev_entry, list); ++ if (t->dev->is_virtfn && !PCI_FUNC(t->dev->devfn)) ++ continue; + + if (match_slot(dev, t->dev)) { + dev_info(&dev->dev, "vpci: assign to virtual slot %d func %d\n", +- slot, PCI_FUNC(dev->devfn)); ++ slot, func); + list_add_tail(&dev_entry->list, + &vpci_dev->dev_list[slot]); +- func = PCI_FUNC(dev->devfn); + goto unlock; + } + } +@@ -123,7 +126,6 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev, + slot); + list_add_tail(&dev_entry->list, + &vpci_dev->dev_list[slot]); +- func = dev->is_virtfn ? 0 : PCI_FUNC(dev->devfn); + goto unlock; + } + } +diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c +index 5188f02e75fb3..c09c7ebd6968d 100644 +--- a/drivers/xen/xen-pciback/xenbus.c ++++ b/drivers/xen/xen-pciback/xenbus.c +@@ -359,7 +359,8 @@ out: + return err; + } + +-static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev) ++static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev, ++ enum xenbus_state state) + { + int err = 0; + int num_devs; +@@ -373,9 +374,7 @@ static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev) + dev_dbg(&pdev->xdev->dev, "Reconfiguring device ...\n"); + + mutex_lock(&pdev->dev_lock); +- /* Make sure we only reconfigure once */ +- if (xenbus_read_driver_state(pdev->xdev->nodename) != +- XenbusStateReconfiguring) ++ if (xenbus_read_driver_state(pdev->xdev->nodename) != state) + goto out; + + err = xenbus_scanf(XBT_NIL, pdev->xdev->nodename, "num_devs", "%d", +@@ -500,6 +499,10 @@ static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev) + } + } + ++ if (state != XenbusStateReconfiguring) ++ /* Make sure we only reconfigure once. 
*/ ++ goto out; ++ + err = xenbus_switch_state(pdev->xdev, XenbusStateReconfigured); + if (err) { + xenbus_dev_fatal(pdev->xdev, err, +@@ -525,7 +528,7 @@ static void xen_pcibk_frontend_changed(struct xenbus_device *xdev, + break; + + case XenbusStateReconfiguring: +- xen_pcibk_reconfigure(pdev); ++ xen_pcibk_reconfigure(pdev, XenbusStateReconfiguring); + break; + + case XenbusStateConnected: +@@ -664,6 +667,15 @@ static void xen_pcibk_be_watch(struct xenbus_watch *watch, + xen_pcibk_setup_backend(pdev); + break; + ++ case XenbusStateInitialised: ++ /* ++ * We typically move to Initialised when the first device was ++ * added. Hence subsequent devices getting added may need ++ * reconfiguring. ++ */ ++ xen_pcibk_reconfigure(pdev, XenbusStateInitialised); ++ break; ++ + default: + break; + } +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 8c4d2eaa5d58b..81b93c9c659b7 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -3253,6 +3253,7 @@ void btrfs_run_delayed_iputs(struct btrfs_fs_info *fs_info) + inode = list_first_entry(&fs_info->delayed_iputs, + struct btrfs_inode, delayed_iput); + run_delayed_iput_locked(fs_info, inode); ++ cond_resched_lock(&fs_info->delayed_iput_lock); + } + spin_unlock(&fs_info->delayed_iput_lock); + } +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 47e76e79b3d6b..53624fca0747a 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -6457,6 +6457,24 @@ void btrfs_log_new_name(struct btrfs_trans_handle *trans, + (!old_dir || old_dir->logged_trans < trans->transid)) + return; + ++ /* ++ * If we are doing a rename (old_dir is not NULL) from a directory that ++ * was previously logged, make sure the next log attempt on the directory ++ * is not skipped and logs the inode again. 
This is because the log may ++ * not currently be authoritative for a range including the old ++ * BTRFS_DIR_ITEM_KEY and BTRFS_DIR_INDEX_KEY keys, so we want to make ++ * sure after a log replay we do not end up with both the new and old ++ * dentries around (in case the inode is a directory we would have a ++ * directory with two hard links and 2 inode references for different ++ * parents). The next log attempt of old_dir will happen at ++ * btrfs_log_all_parents(), called through btrfs_log_inode_parent() ++ * below, because we have previously set inode->last_unlink_trans to the ++ * current transaction ID, either here or at btrfs_record_unlink_dir() in ++ * case inode is a directory. ++ */ ++ if (old_dir) ++ old_dir->logged_trans = 0; ++ + btrfs_init_log_ctx(&ctx, &inode->vfs_inode); + ctx.logging_new_name = true; + /* +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c +index 5df6daacc230b..e9a530da4255c 100644 +--- a/fs/cifs/smb2ops.c ++++ b/fs/cifs/smb2ops.c +@@ -1822,6 +1822,8 @@ smb2_copychunk_range(const unsigned int xid, + cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk)); + + /* Request server copy to target from src identified by key */ ++ kfree(retbuf); ++ retbuf = NULL; + rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid, + trgtfile->fid.volatile_fid, FSCTL_SRV_COPYCHUNK_WRITE, + true /* is_fsctl */, (char *)pcchunk, +diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c +index 943e523f4c9df..3d8623139538b 100644 +--- a/fs/ecryptfs/crypto.c ++++ b/fs/ecryptfs/crypto.c +@@ -296,10 +296,8 @@ static int crypt_scatterlist(struct ecryptfs_crypt_stat *crypt_stat, + struct extent_crypt_result ecr; + int rc = 0; + +- if (!crypt_stat || !crypt_stat->tfm +- || !(crypt_stat->flags & ECRYPTFS_STRUCT_INITIALIZED)) +- return -EINVAL; +- ++ BUG_ON(!crypt_stat || !crypt_stat->tfm ++ || !(crypt_stat->flags & ECRYPTFS_STRUCT_INITIALIZED)); + if (unlikely(ecryptfs_verbosity > 0)) { + ecryptfs_printk(KERN_DEBUG, "Key size [%zd]; key:\n", + 
crypt_stat->key_size); +diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c +index 99df69b848224..c63d0a7f7ba4f 100644 +--- a/fs/hugetlbfs/inode.c ++++ b/fs/hugetlbfs/inode.c +@@ -532,7 +532,7 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart, + * the subpool and global reserve usage count can need + * to be adjusted. + */ +- VM_BUG_ON(PagePrivate(page)); ++ VM_BUG_ON(HPageRestoreReserve(page)); + remove_huge_page(page); + freed++; + if (!truncate_op) { +diff --git a/fs/namespace.c b/fs/namespace.c +index 56bb5a5fdc0d0..4d2e827ddb598 100644 +--- a/fs/namespace.c ++++ b/fs/namespace.c +@@ -3853,8 +3853,12 @@ static int can_idmap_mount(const struct mount_kattr *kattr, struct mount *mnt) + if (!(m->mnt_sb->s_type->fs_flags & FS_ALLOW_IDMAP)) + return -EINVAL; + ++ /* Don't yet support filesystem mountable in user namespaces. */ ++ if (m->mnt_sb->s_user_ns != &init_user_ns) ++ return -EINVAL; ++ + /* We're not controlling the superblock. */ +- if (!ns_capable(m->mnt_sb->s_user_ns, CAP_SYS_ADMIN)) ++ if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + + /* Mount has already been visible in the filesystem hierarchy. */ +diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h +index 153734816b49c..d5b9c8d40c18e 100644 +--- a/include/linux/console_struct.h ++++ b/include/linux/console_struct.h +@@ -101,6 +101,7 @@ struct vc_data { + unsigned int vc_rows; + unsigned int vc_size_row; /* Bytes per row */ + unsigned int vc_scan_lines; /* # of scan lines */ ++ unsigned int vc_cell_height; /* CRTC character cell height */ + unsigned long vc_origin; /* [!] Start of real screen */ + unsigned long vc_scr_end; /* [!] End of real screen */ + unsigned long vc_visible_origin; /* [!] 
Top of visible window */ +diff --git a/ipc/mqueue.c b/ipc/mqueue.c +index 8031464ed4ae2..4e4e61111500c 100644 +--- a/ipc/mqueue.c ++++ b/ipc/mqueue.c +@@ -1004,12 +1004,14 @@ static inline void __pipelined_op(struct wake_q_head *wake_q, + struct mqueue_inode_info *info, + struct ext_wait_queue *this) + { ++ struct task_struct *task; ++ + list_del(&this->list); +- get_task_struct(this->task); ++ task = get_task_struct(this->task); + + /* see MQ_BARRIER for purpose/pairing */ + smp_store_release(&this->state, STATE_READY); +- wake_q_add_safe(wake_q, this->task); ++ wake_q_add_safe(wake_q, task); + } + + /* pipelined_send() - send a message directly to the task waiting in +diff --git a/ipc/msg.c b/ipc/msg.c +index acd1bc7af55a2..6e6c8e0c9380e 100644 +--- a/ipc/msg.c ++++ b/ipc/msg.c +@@ -251,11 +251,13 @@ static void expunge_all(struct msg_queue *msq, int res, + struct msg_receiver *msr, *t; + + list_for_each_entry_safe(msr, t, &msq->q_receivers, r_list) { +- get_task_struct(msr->r_tsk); ++ struct task_struct *r_tsk; ++ ++ r_tsk = get_task_struct(msr->r_tsk); + + /* see MSG_BARRIER for purpose/pairing */ + smp_store_release(&msr->r_msg, ERR_PTR(res)); +- wake_q_add_safe(wake_q, msr->r_tsk); ++ wake_q_add_safe(wake_q, r_tsk); + } + } + +diff --git a/ipc/sem.c b/ipc/sem.c +index f6c30a85dadf9..7d9c06b0ad6e2 100644 +--- a/ipc/sem.c ++++ b/ipc/sem.c +@@ -784,12 +784,14 @@ would_block: + static inline void wake_up_sem_queue_prepare(struct sem_queue *q, int error, + struct wake_q_head *wake_q) + { +- get_task_struct(q->sleeper); ++ struct task_struct *sleeper; ++ ++ sleeper = get_task_struct(q->sleeper); + + /* see SEM_BARRIER_2 for purpuse/pairing */ + smp_store_release(&q->status, error); + +- wake_q_add_safe(wake_q, q->sleeper); ++ wake_q_add_safe(wake_q, sleeper); + } + + static void unlink_queue(struct sem_array *sma, struct sem_queue *q) +diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c +index 209ad8dcfcecf..62a52be8f6ba9 100644 +--- 
a/kernel/kcsan/debugfs.c ++++ b/kernel/kcsan/debugfs.c +@@ -261,9 +261,10 @@ static const struct file_operations debugfs_ops = + .release = single_release + }; + +-static void __init kcsan_debugfs_init(void) ++static int __init kcsan_debugfs_init(void) + { + debugfs_create_file("kcsan", 0644, NULL, NULL, &debugfs_ops); ++ return 0; + } + + late_initcall(kcsan_debugfs_init); +diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c +index f160f1c97ca1e..f39c383c71804 100644 +--- a/kernel/locking/lockdep.c ++++ b/kernel/locking/lockdep.c +@@ -5731,7 +5731,7 @@ void lock_contended(struct lockdep_map *lock, unsigned long ip) + { + unsigned long flags; + +- trace_lock_acquired(lock, ip); ++ trace_lock_contended(lock, ip); + + if (unlikely(!lock_stat || !lockdep_enabled())) + return; +@@ -5749,7 +5749,7 @@ void lock_acquired(struct lockdep_map *lock, unsigned long ip) + { + unsigned long flags; + +- trace_lock_contended(lock, ip); ++ trace_lock_acquired(lock, ip); + + if (unlikely(!lock_stat || !lockdep_enabled())) + return; +diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c +index a7276aaf2abc0..db9301591e3fc 100644 +--- a/kernel/locking/mutex-debug.c ++++ b/kernel/locking/mutex-debug.c +@@ -57,7 +57,7 @@ void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter, + task->blocked_on = waiter; + } + +-void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter, ++void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter, + struct task_struct *task) + { + DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list)); +@@ -65,7 +65,7 @@ void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter, + DEBUG_LOCKS_WARN_ON(task->blocked_on != waiter); + task->blocked_on = NULL; + +- list_del_init(&waiter->list); ++ INIT_LIST_HEAD(&waiter->list); + waiter->task = NULL; + } + +diff --git a/kernel/locking/mutex-debug.h b/kernel/locking/mutex-debug.h +index 1edd3f45a4ecb..53e631e1d76da 100644 +--- 
a/kernel/locking/mutex-debug.h ++++ b/kernel/locking/mutex-debug.h +@@ -22,7 +22,7 @@ extern void debug_mutex_free_waiter(struct mutex_waiter *waiter); + extern void debug_mutex_add_waiter(struct mutex *lock, + struct mutex_waiter *waiter, + struct task_struct *task); +-extern void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter, ++extern void debug_mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter, + struct task_struct *task); + extern void debug_mutex_unlock(struct mutex *lock); + extern void debug_mutex_init(struct mutex *lock, const char *name, +diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c +index 622ebdfcd083b..3899157c13b10 100644 +--- a/kernel/locking/mutex.c ++++ b/kernel/locking/mutex.c +@@ -194,7 +194,7 @@ static inline bool __mutex_waiter_is_first(struct mutex *lock, struct mutex_wait + * Add @waiter to a given location in the lock wait_list and set the + * FLAG_WAITERS flag if it's the first waiter. + */ +-static void __sched ++static void + __mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter, + struct list_head *list) + { +@@ -205,6 +205,16 @@ __mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter, + __mutex_set_flag(lock, MUTEX_FLAG_WAITERS); + } + ++static void ++__mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter) ++{ ++ list_del(&waiter->list); ++ if (likely(list_empty(&lock->wait_list))) ++ __mutex_clear_flag(lock, MUTEX_FLAGS); ++ ++ debug_mutex_remove_waiter(lock, waiter, current); ++} ++ + /* + * Give up ownership to a specific task, when @task = NULL, this is equivalent + * to a regular unlock. 
Sets PICKUP on a handoff, clears HANDOF, preserves +@@ -1061,9 +1071,7 @@ acquired: + __ww_mutex_check_waiters(lock, ww_ctx); + } + +- mutex_remove_waiter(lock, &waiter, current); +- if (likely(list_empty(&lock->wait_list))) +- __mutex_clear_flag(lock, MUTEX_FLAGS); ++ __mutex_remove_waiter(lock, &waiter); + + debug_mutex_free_waiter(&waiter); + +@@ -1080,7 +1088,7 @@ skip_wait: + + err: + __set_current_state(TASK_RUNNING); +- mutex_remove_waiter(lock, &waiter, current); ++ __mutex_remove_waiter(lock, &waiter); + err_early_kill: + spin_unlock(&lock->wait_lock); + debug_mutex_free_waiter(&waiter); +diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h +index 1c2287d3fa719..f0c710b1d1927 100644 +--- a/kernel/locking/mutex.h ++++ b/kernel/locking/mutex.h +@@ -10,12 +10,10 @@ + * !CONFIG_DEBUG_MUTEXES case. Most of them are NOPs: + */ + +-#define mutex_remove_waiter(lock, waiter, task) \ +- __list_del((waiter)->list.prev, (waiter)->list.next) +- + #define debug_mutex_wake_waiter(lock, waiter) do { } while (0) + #define debug_mutex_free_waiter(waiter) do { } while (0) + #define debug_mutex_add_waiter(lock, waiter, ti) do { } while (0) ++#define debug_mutex_remove_waiter(lock, waiter, ti) do { } while (0) + #define debug_mutex_unlock(lock) do { } while (0) + #define debug_mutex_init(lock, name, key) do { } while (0) + +diff --git a/kernel/ptrace.c b/kernel/ptrace.c +index 61db50f7ca866..5f50fdd1d855e 100644 +--- a/kernel/ptrace.c ++++ b/kernel/ptrace.c +@@ -169,6 +169,21 @@ void __ptrace_unlink(struct task_struct *child) + spin_unlock(&child->sighand->siglock); + } + ++static bool looks_like_a_spurious_pid(struct task_struct *task) ++{ ++ if (task->exit_code != ((PTRACE_EVENT_EXEC << 8) | SIGTRAP)) ++ return false; ++ ++ if (task_pid_vnr(task) == task->ptrace_message) ++ return false; ++ /* ++ * The tracee changed its pid but the PTRACE_EVENT_EXEC event ++ * was not wait()'ed, most probably debugger targets the old ++ * leader which was destroyed in de_thread(). 
++ */ ++ return true; ++} ++ + /* Ensure that nothing can wake it up, even SIGKILL */ + static bool ptrace_freeze_traced(struct task_struct *task) + { +@@ -179,7 +194,8 @@ static bool ptrace_freeze_traced(struct task_struct *task) + return ret; + + spin_lock_irq(&task->sighand->siglock); +- if (task_is_traced(task) && !__fatal_signal_pending(task)) { ++ if (task_is_traced(task) && !looks_like_a_spurious_pid(task) && ++ !__fatal_signal_pending(task)) { + task->state = __TASK_TRACED; + ret = true; + } +diff --git a/mm/gup.c b/mm/gup.c +index 333f5dfd89423..4164a70160e31 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1535,10 +1535,6 @@ struct page *get_dump_page(unsigned long addr) + FOLL_FORCE | FOLL_DUMP | FOLL_GET); + if (locked) + mmap_read_unlock(mm); +- +- if (ret == 1 && is_page_poisoned(page)) +- return NULL; +- + return (ret == 1) ? page : NULL; + } + #endif /* CONFIG_ELF_CORE */ +diff --git a/mm/internal.h b/mm/internal.h +index cb3c5e0a7799f..1432feec62df0 100644 +--- a/mm/internal.h ++++ b/mm/internal.h +@@ -97,26 +97,6 @@ static inline void set_page_refcounted(struct page *page) + set_page_count(page, 1); + } + +-/* +- * When kernel touch the user page, the user page may be have been marked +- * poison but still mapped in user space, if without this page, the kernel +- * can guarantee the data integrity and operation success, the kernel is +- * better to check the posion status and avoid touching it, be good not to +- * panic, coredump for process fatal signal is a sample case matching this +- * scenario. Or if kernel can't guarantee the data integrity, it's better +- * not to call this function, let kernel touch the poison page and get to +- * panic. 
+- */ +-static inline bool is_page_poisoned(struct page *page) +-{ +- if (PageHWPoison(page)) +- return true; +- else if (PageHuge(page) && PageHWPoison(compound_head(page))) +- return true; +- +- return false; +-} +- + extern unsigned long highest_memmap_pfn; + + /* +diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c +index 9a3d451402d7b..28b2314f1cf12 100644 +--- a/mm/userfaultfd.c ++++ b/mm/userfaultfd.c +@@ -362,38 +362,38 @@ out: + * If a reservation for the page existed in the reservation + * map of a private mapping, the map was modified to indicate + * the reservation was consumed when the page was allocated. +- * We clear the PagePrivate flag now so that the global ++ * We clear the HPageRestoreReserve flag now so that the global + * reserve count will not be incremented in free_huge_page. + * The reservation map will still indicate the reservation + * was consumed and possibly prevent later page allocation. + * This is better than leaking a global reservation. If no +- * reservation existed, it is still safe to clear PagePrivate +- * as no adjustments to reservation counts were made during +- * allocation. ++ * reservation existed, it is still safe to clear ++ * HPageRestoreReserve as no adjustments to reservation counts ++ * were made during allocation. + * + * The reservation map for shared mappings indicates which + * pages have reservations. When a huge page is allocated + * for an address with a reservation, no change is made to +- * the reserve map. In this case PagePrivate will be set +- * to indicate that the global reservation count should be ++ * the reserve map. In this case HPageRestoreReserve will be ++ * set to indicate that the global reservation count should be + * incremented when the page is freed. This is the desired + * behavior. However, when a huge page is allocated for an + * address without a reservation a reservation entry is added +- * to the reservation map, and PagePrivate will not be set. 
+- * When the page is freed, the global reserve count will NOT +- * be incremented and it will appear as though we have leaked +- * reserved page. In this case, set PagePrivate so that the +- * global reserve count will be incremented to match the +- * reservation map entry which was created. ++ * to the reservation map, and HPageRestoreReserve will not be ++ * set. When the page is freed, the global reserve count will ++ * NOT be incremented and it will appear as though we have ++ * leaked reserved page. In this case, set HPageRestoreReserve ++ * so that the global reserve count will be incremented to ++ * match the reservation map entry which was created. + * + * Note that vm_alloc_shared is based on the flags of the vma + * for which the page was originally allocated. dst_vma could + * be different or NULL on error. + */ + if (vm_alloc_shared) +- SetPagePrivate(page); ++ SetHPageRestoreReserve(page); + else +- ClearPagePrivate(page); ++ ClearHPageRestoreReserve(page); + put_page(page); + } + BUG_ON(copied < 0); +diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c +index b0c1ee110eff9..e03cc284161c4 100644 +--- a/net/bluetooth/smp.c ++++ b/net/bluetooth/smp.c +@@ -2732,6 +2732,15 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb) + if (skb->len < sizeof(*key)) + return SMP_INVALID_PARAMS; + ++ /* Check if remote and local public keys are the same and debug key is ++ * not in use. 
++ */ ++ if (!test_bit(SMP_FLAG_DEBUG_KEY, &smp->flags) && ++ !crypto_memneq(key, smp->local_pk, 64)) { ++ bt_dev_err(hdev, "Remote and local public keys are identical"); ++ return SMP_UNSPECIFIED; ++ } ++ + memcpy(smp->remote_pk, key, 64); + + if (test_bit(SMP_FLAG_REMOTE_OOB, &smp->flags)) { +diff --git a/sound/firewire/Kconfig b/sound/firewire/Kconfig +index 25778765cbfe9..9897bd26a4388 100644 +--- a/sound/firewire/Kconfig ++++ b/sound/firewire/Kconfig +@@ -38,7 +38,7 @@ config SND_OXFW + * Mackie(Loud) Onyx 1640i (former model) + * Mackie(Loud) Onyx Satellite + * Mackie(Loud) Tapco Link.Firewire +- * Mackie(Loud) d.2 pro/d.4 pro ++ * Mackie(Loud) d.4 pro + * Mackie(Loud) U.420/U.420d + * TASCAM FireOne + * Stanton Controllers & Systems 1 Deck/Mixer +@@ -84,7 +84,7 @@ config SND_BEBOB + * PreSonus FIREBOX/FIREPOD/FP10/Inspire1394 + * BridgeCo RDAudio1/Audio5 + * Mackie Onyx 1220/1620/1640 (FireWire I/O Card) +- * Mackie d.2 (FireWire Option) ++ * Mackie d.2 (FireWire Option) and d.2 Pro + * Stanton FinalScratch 2 (ScratchAmp) + * Tascam IF-FW/DM + * Behringer XENIX UFX 1204/1604 +diff --git a/sound/firewire/amdtp-stream-trace.h b/sound/firewire/amdtp-stream-trace.h +index 26e7cb555d3c5..aa53c13b89d34 100644 +--- a/sound/firewire/amdtp-stream-trace.h ++++ b/sound/firewire/amdtp-stream-trace.h +@@ -14,8 +14,8 @@ + #include <linux/tracepoint.h> + + TRACE_EVENT(amdtp_packet, +- TP_PROTO(const struct amdtp_stream *s, u32 cycles, const __be32 *cip_header, unsigned int payload_length, unsigned int data_blocks, unsigned int data_block_counter, unsigned int index), +- TP_ARGS(s, cycles, cip_header, payload_length, data_blocks, data_block_counter, index), ++ TP_PROTO(const struct amdtp_stream *s, u32 cycles, const __be32 *cip_header, unsigned int payload_length, unsigned int data_blocks, unsigned int data_block_counter, unsigned int packet_index, unsigned int index), ++ TP_ARGS(s, cycles, cip_header, payload_length, data_blocks, data_block_counter, packet_index, index), + 
TP_STRUCT__entry( + __field(unsigned int, second) + __field(unsigned int, cycle) +@@ -48,7 +48,7 @@ TRACE_EVENT(amdtp_packet, + __entry->payload_quadlets = payload_length / sizeof(__be32); + __entry->data_blocks = data_blocks; + __entry->data_block_counter = data_block_counter, +- __entry->packet_index = s->packet_index; ++ __entry->packet_index = packet_index; + __entry->irq = !!in_interrupt(); + __entry->index = index; + ), +diff --git a/sound/firewire/amdtp-stream.c b/sound/firewire/amdtp-stream.c +index 4e2f2bb7879fb..e0faa6601966c 100644 +--- a/sound/firewire/amdtp-stream.c ++++ b/sound/firewire/amdtp-stream.c +@@ -526,7 +526,7 @@ static void build_it_pkt_header(struct amdtp_stream *s, unsigned int cycle, + } + + trace_amdtp_packet(s, cycle, cip_header, payload_length, data_blocks, +- data_block_counter, index); ++ data_block_counter, s->packet_index, index); + } + + static int check_cip_header(struct amdtp_stream *s, const __be32 *buf, +@@ -630,21 +630,27 @@ static int parse_ir_ctx_header(struct amdtp_stream *s, unsigned int cycle, + unsigned int *payload_length, + unsigned int *data_blocks, + unsigned int *data_block_counter, +- unsigned int *syt, unsigned int index) ++ unsigned int *syt, unsigned int packet_index, unsigned int index) + { + const __be32 *cip_header; ++ unsigned int cip_header_size; + int err; + + *payload_length = be32_to_cpu(ctx_header[0]) >> ISO_DATA_LENGTH_SHIFT; +- if (*payload_length > s->ctx_data.tx.ctx_header_size + +- s->ctx_data.tx.max_ctx_payload_length) { ++ ++ if (!(s->flags & CIP_NO_HEADER)) ++ cip_header_size = 8; ++ else ++ cip_header_size = 0; ++ ++ if (*payload_length > cip_header_size + s->ctx_data.tx.max_ctx_payload_length) { + dev_err(&s->unit->device, + "Detect jumbo payload: %04x %04x\n", +- *payload_length, s->ctx_data.tx.max_ctx_payload_length); ++ *payload_length, cip_header_size + s->ctx_data.tx.max_ctx_payload_length); + return -EIO; + } + +- if (!(s->flags & CIP_NO_HEADER)) { ++ if (cip_header_size > 0) { + 
cip_header = ctx_header + 2; + err = check_cip_header(s, cip_header, *payload_length, + data_blocks, data_block_counter, syt); +@@ -662,7 +668,7 @@ static int parse_ir_ctx_header(struct amdtp_stream *s, unsigned int cycle, + } + + trace_amdtp_packet(s, cycle, cip_header, *payload_length, *data_blocks, +- *data_block_counter, index); ++ *data_block_counter, packet_index, index); + + return err; + } +@@ -701,12 +707,13 @@ static int generate_device_pkt_descs(struct amdtp_stream *s, + unsigned int packets) + { + unsigned int dbc = s->data_block_counter; ++ unsigned int packet_index = s->packet_index; ++ unsigned int queue_size = s->queue_size; + int i; + int err; + + for (i = 0; i < packets; ++i) { + struct pkt_desc *desc = descs + i; +- unsigned int index = (s->packet_index + i) % s->queue_size; + unsigned int cycle; + unsigned int payload_length; + unsigned int data_blocks; +@@ -715,7 +722,7 @@ static int generate_device_pkt_descs(struct amdtp_stream *s, + cycle = compute_cycle_count(ctx_header[1]); + + err = parse_ir_ctx_header(s, cycle, ctx_header, &payload_length, +- &data_blocks, &dbc, &syt, i); ++ &data_blocks, &dbc, &syt, packet_index, i); + if (err < 0) + return err; + +@@ -723,13 +730,15 @@ static int generate_device_pkt_descs(struct amdtp_stream *s, + desc->syt = syt; + desc->data_blocks = data_blocks; + desc->data_block_counter = dbc; +- desc->ctx_payload = s->buffer.packets[index].buffer; ++ desc->ctx_payload = s->buffer.packets[packet_index].buffer; + + if (!(s->flags & CIP_DBC_IS_END_EVENT)) + dbc = (dbc + desc->data_blocks) & 0xff; + + ctx_header += + s->ctx_data.tx.ctx_header_size / sizeof(*ctx_header); ++ ++ packet_index = (packet_index + 1) % queue_size; + } + + s->data_block_counter = dbc; +@@ -1065,23 +1074,22 @@ static int amdtp_stream_start(struct amdtp_stream *s, int channel, int speed, + s->data_block_counter = 0; + } + +- /* initialize packet buffer */ ++ // initialize packet buffer. 
++ max_ctx_payload_size = amdtp_stream_get_max_payload(s); + if (s->direction == AMDTP_IN_STREAM) { + dir = DMA_FROM_DEVICE; + type = FW_ISO_CONTEXT_RECEIVE; +- if (!(s->flags & CIP_NO_HEADER)) ++ if (!(s->flags & CIP_NO_HEADER)) { ++ max_ctx_payload_size -= 8; + ctx_header_size = IR_CTX_HEADER_SIZE_CIP; +- else ++ } else { + ctx_header_size = IR_CTX_HEADER_SIZE_NO_CIP; +- +- max_ctx_payload_size = amdtp_stream_get_max_payload(s) - +- ctx_header_size; ++ } + } else { + dir = DMA_TO_DEVICE; + type = FW_ISO_CONTEXT_TRANSMIT; + ctx_header_size = 0; // No effect for IT context. + +- max_ctx_payload_size = amdtp_stream_get_max_payload(s); + if (!(s->flags & CIP_NO_HEADER)) + max_ctx_payload_size -= IT_PKT_HEADER_SIZE_CIP; + } +diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c +index 2c8e3392a4903..daeecfa8b9aac 100644 +--- a/sound/firewire/bebob/bebob.c ++++ b/sound/firewire/bebob/bebob.c +@@ -387,7 +387,7 @@ static const struct ieee1394_device_id bebob_id_table[] = { + SND_BEBOB_DEV_ENTRY(VEN_BRIDGECO, 0x00010049, &spec_normal), + /* Mackie, Onyx 1220/1620/1640 (Firewire I/O Card) */ + SND_BEBOB_DEV_ENTRY(VEN_MACKIE2, 0x00010065, &spec_normal), +- /* Mackie, d.2 (Firewire Option) */ ++ // Mackie, d.2 (Firewire option card) and d.2 Pro (the card is built-in). + SND_BEBOB_DEV_ENTRY(VEN_MACKIE1, 0x00010067, &spec_normal), + /* Stanton, ScratchAmp */ + SND_BEBOB_DEV_ENTRY(VEN_STANTON, 0x00000001, &spec_normal), +diff --git a/sound/firewire/dice/dice-alesis.c b/sound/firewire/dice/dice-alesis.c +index 0916864511d50..27c13b9cc9efd 100644 +--- a/sound/firewire/dice/dice-alesis.c ++++ b/sound/firewire/dice/dice-alesis.c +@@ -16,7 +16,7 @@ alesis_io14_tx_pcm_chs[MAX_STREAMS][SND_DICE_RATE_MODE_COUNT] = { + static const unsigned int + alesis_io26_tx_pcm_chs[MAX_STREAMS][SND_DICE_RATE_MODE_COUNT] = { + {10, 10, 4}, /* Tx0 = Analog + S/PDIF. */ +- {16, 8, 0}, /* Tx1 = ADAT1 + ADAT2. */ ++ {16, 4, 0}, /* Tx1 = ADAT1 + ADAT2 (available at low rate). 
*/ + }; + + int snd_dice_detect_alesis_formats(struct snd_dice *dice) +diff --git a/sound/firewire/dice/dice-tcelectronic.c b/sound/firewire/dice/dice-tcelectronic.c +index a8875d24ba2aa..43a3bcb15b3d1 100644 +--- a/sound/firewire/dice/dice-tcelectronic.c ++++ b/sound/firewire/dice/dice-tcelectronic.c +@@ -38,8 +38,8 @@ static const struct dice_tc_spec konnekt_24d = { + }; + + static const struct dice_tc_spec konnekt_live = { +- .tx_pcm_chs = {{16, 16, 16}, {0, 0, 0} }, +- .rx_pcm_chs = {{16, 16, 16}, {0, 0, 0} }, ++ .tx_pcm_chs = {{16, 16, 6}, {0, 0, 0} }, ++ .rx_pcm_chs = {{16, 16, 6}, {0, 0, 0} }, + .has_midi = true, + }; + +diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c +index 1f1e3236efb8e..9eea25c46dc7e 100644 +--- a/sound/firewire/oxfw/oxfw.c ++++ b/sound/firewire/oxfw/oxfw.c +@@ -355,7 +355,6 @@ static const struct ieee1394_device_id oxfw_id_table[] = { + * Onyx-i series (former models): 0x081216 + * Mackie Onyx Satellite: 0x00200f + * Tapco LINK.firewire 4x6: 0x000460 +- * d.2 pro: Unknown + * d.4 pro: Unknown + * U.420: Unknown + * U.420d: Unknown +diff --git a/sound/isa/sb/sb8.c b/sound/isa/sb/sb8.c +index 6c9d534ce8b61..95290ffe5c6e7 100644 +--- a/sound/isa/sb/sb8.c ++++ b/sound/isa/sb/sb8.c +@@ -95,10 +95,6 @@ static int snd_sb8_probe(struct device *pdev, unsigned int dev) + + /* block the 0x388 port to avoid PnP conflicts */ + acard->fm_res = request_region(0x388, 4, "SoundBlaster FM"); +- if (!acard->fm_res) { +- err = -EBUSY; +- goto _err; +- } + + if (port[dev] != SNDRV_AUTO_PORT) { + if ((err = snd_sbdsp_create(card, port[dev], irq[dev], +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 1fe70f2fe4fe8..43a63db4ab6ad 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -395,7 +395,6 @@ static void alc_fill_eapd_coef(struct hda_codec *codec) + case 0x10ec0282: + case 0x10ec0283: + case 0x10ec0286: +- case 0x10ec0287: + case 0x10ec0288: + case 0x10ec0285: + case 
0x10ec0298:
+@@ -406,6 +405,10 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ case 0x10ec0275:
+ alc_update_coef_idx(codec, 0xe, 0, 1<<0);
+ break;
++ case 0x10ec0287:
++ alc_update_coef_idx(codec, 0x10, 1<<9, 0);
++ alc_write_coef_idx(codec, 0x8, 0x4ab7);
++ break;
+ case 0x10ec0293:
+ alc_update_coef_idx(codec, 0xa, 1<<13, 0);
+ break;
+@@ -5717,6 +5720,18 @@ static void alc_fixup_tpt470_dacs(struct hda_codec *codec,
+ spec->gen.preferred_dacs = preferred_pairs;
+ }
+
++static void alc295_fixup_asus_dacs(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ static const hda_nid_t preferred_pairs[] = {
++ 0x17, 0x02, 0x21, 0x03, 0
++ };
++ struct alc_spec *spec = codec->spec;
++
++ if (action == HDA_FIXUP_ACT_PRE_PROBE)
++ spec->gen.preferred_dacs = preferred_pairs;
++}
++
+ static void alc_shutup_dell_xps13(struct hda_codec *codec)
+ {
+ struct alc_spec *spec = codec->spec;
+@@ -6232,6 +6247,35 @@ static void alc294_fixup_gx502_hp(struct hda_codec *codec,
+ }
+ }
+
++static void alc294_gu502_toggle_output(struct hda_codec *codec,
++ struct hda_jack_callback *cb)
++{
++ /* Windows sets 0x10 to 0x8420 for Node 0x20 which is
++ * responsible from changes between speakers and headphones
++ */
++ if (snd_hda_jack_detect_state(codec, 0x21) == HDA_JACK_PRESENT)
++ alc_write_coef_idx(codec, 0x10, 0x8420);
++ else
++ alc_write_coef_idx(codec, 0x10, 0x0a20);
++}
++
++static void alc294_fixup_gu502_hp(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ if (!is_jack_detectable(codec, 0x21))
++ return;
++
++ switch (action) {
++ case HDA_FIXUP_ACT_PRE_PROBE:
++ snd_hda_jack_detect_enable_callback(codec, 0x21,
++ alc294_gu502_toggle_output);
++ break;
++ case HDA_FIXUP_ACT_INIT:
++ alc294_gu502_toggle_output(codec, NULL);
++ break;
++ }
++}
++
+ static void alc285_fixup_hp_gpio_amp_init(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -6449,6 +6493,9 @@ enum {
+ ALC294_FIXUP_ASUS_GX502_HP,
+ ALC294_FIXUP_ASUS_GX502_PINS,
+ ALC294_FIXUP_ASUS_GX502_VERBS,
++ ALC294_FIXUP_ASUS_GU502_HP,
++ ALC294_FIXUP_ASUS_GU502_PINS,
++ ALC294_FIXUP_ASUS_GU502_VERBS,
+ ALC285_FIXUP_HP_GPIO_LED,
+ ALC285_FIXUP_HP_MUTE_LED,
+ ALC236_FIXUP_HP_GPIO_LED,
+@@ -6485,6 +6532,9 @@ enum {
+ ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST,
+ ALC256_FIXUP_ACER_HEADSET_MIC,
+ ALC285_FIXUP_IDEAPAD_S740_COEF,
++ ALC295_FIXUP_ASUS_DACS,
++ ALC295_FIXUP_HP_OMEN,
++ ALC285_FIXUP_HP_SPECTRE_X360,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7687,6 +7737,35 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc294_fixup_gx502_hp,
+ },
++ [ALC294_FIXUP_ASUS_GU502_PINS] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x01a11050 }, /* rear HP mic */
++ { 0x1a, 0x01a11830 }, /* rear external mic */
++ { 0x21, 0x012110f0 }, /* rear HP out */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC294_FIXUP_ASUS_GU502_VERBS
++ },
++ [ALC294_FIXUP_ASUS_GU502_VERBS] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ /* set 0x15 to HP-OUT ctrl */
++ { 0x15, AC_VERB_SET_PIN_WIDGET_CONTROL, 0xc0 },
++ /* unmute the 0x15 amp */
++ { 0x15, AC_VERB_SET_AMP_GAIN_MUTE, 0xb000 },
++ /* set 0x1b to HP-OUT */
++ { 0x1b, AC_VERB_SET_PIN_WIDGET_CONTROL, 0x24 },
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC294_FIXUP_ASUS_GU502_HP
++ },
++ [ALC294_FIXUP_ASUS_GU502_HP] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc294_fixup_gu502_hp,
++ },
+ [ALC294_FIXUP_ASUS_COEF_1B] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -7983,6 +8062,39 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_THINKPAD_ACPI,
+ },
++ [ALC295_FIXUP_ASUS_DACS] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc295_fixup_asus_dacs,
++ },
++ [ALC295_FIXUP_HP_OMEN] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x12, 0xb7a60130 },
++ { 0x13, 0x40000000 },
++ { 0x14, 0x411111f0 },
++ { 0x16, 0x411111f0 },
++ { 0x17, 0x90170110 },
++ { 0x18, 0x411111f0 },
++ { 0x19, 0x02a11030 },
++ { 0x1a, 0x411111f0 },
++ { 0x1b, 0x04a19030 },
++ { 0x1d, 0x40600001 },
++ { 0x1e, 0x411111f0 },
++ { 0x21, 0x03211020 },
++ {}
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_HP_LINE1_MIC1_LED,
++ },
++ [ALC285_FIXUP_HP_SPECTRE_X360] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x14, 0x90170110 }, /* enable top speaker */
++ {}
++ },
++ .chained = true,
++ .chain_id = ALC285_FIXUP_SPEAKER2_TO_DAC1,
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8141,7 +8253,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++ SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN),
+ SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++ SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
+ SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+@@ -8181,6 +8295,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
++ SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
+ SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
+ SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -8198,6 +8313,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
++ SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+@@ -8254,12 +8370,19 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1558, 0x50b8, "Clevo NK50SZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x50d5, "Clevo NP50D5", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x50f0, "Clevo NH50A[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x50f2, "Clevo NH50E[PR]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x50f3, "Clevo NH58DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x50f5, "Clevo NH55EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x50f6, "Clevo NH55DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x5101, "Clevo S510WU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x70f2, "Clevo NH79EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x70f3, "Clevo NH77DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -8277,9 +8400,17 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1558, 0x8a51, "Clevo NH70RCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8d50, "Clevo NH55RCQ-M", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x951d, "Clevo N950T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x9600, "Clevo N960K[PR]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x961d, "Clevo N960S[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x971d, "Clevo N970T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0xa500, "Clevo NL53RU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0xa600, "Clevo NL5XNU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0xb022, "Clevo NH77D[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0xc018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0xc019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0xc022, "Clevo NH77[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS),
+ SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
+@@ -8544,6 +8675,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
+ {.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},
+ {.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
++ {.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"},
++ {.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"},
+ {}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index 35903d1a1cbd2..5b124c4ad5725 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -331,6 +331,7 @@ struct ichdev {
+ unsigned int ali_slot; /* ALI DMA slot */
+ struct ac97_pcm *pcm;
+ int pcm_open_flag;
++ unsigned int prepared:1;
+ unsigned int suspended: 1;
+ };
+
+@@ -691,6 +692,9 @@ static inline void snd_intel8x0_update(struct intel8x0 *chip, struct ichdev *ich
+ int status, civ, i, step;
+ int ack = 0;
+
++ if (!ichdev->prepared || ichdev->suspended)
++ return;
++
+ spin_lock_irqsave(&chip->reg_lock, flags);
+ status = igetbyte(chip, port + ichdev->roff_sr);
+ civ = igetbyte(chip, port + ICH_REG_OFF_CIV);
+@@ -881,6 +885,7 @@ static int snd_intel8x0_hw_params(struct snd_pcm_substream *substream,
+ if (ichdev->pcm_open_flag) {
+ snd_ac97_pcm_close(ichdev->pcm);
+ ichdev->pcm_open_flag = 0;
++ ichdev->prepared = 0;
+ }
+ err = snd_ac97_pcm_open(ichdev->pcm, params_rate(hw_params),
+ params_channels(hw_params),
+@@ -902,6 +907,7 @@ static int snd_intel8x0_hw_free(struct snd_pcm_substream *substream)
+ if (ichdev->pcm_open_flag) {
+ snd_ac97_pcm_close(ichdev->pcm);
+ ichdev->pcm_open_flag = 0;
++ ichdev->prepared = 0;
+ }
+ return 0;
+ }
+@@ -976,6 +982,7 @@ static int snd_intel8x0_pcm_prepare(struct snd_pcm_substream *substream)
+ ichdev->pos_shift = (runtime->sample_bits > 16) ? 2 : 1;
+ }
+ snd_intel8x0_setup_periods(chip, ichdev);
++ ichdev->prepared = 1;
+ return 0;
+ }
+
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index a030dd65eb280..9602929b7de90 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -699,6 +699,10 @@ static int line6_init_cap_control(struct usb_line6 *line6)
+ line6->buffer_message = kmalloc(LINE6_MIDI_MESSAGE_MAXLEN, GFP_KERNEL);
+ if (!line6->buffer_message)
+ return -ENOMEM;
++
++ ret = line6_init_midi(line6);
++ if (ret < 0)
++ return ret;
+ } else {
+ ret = line6_hwdep_init(line6);
+ if (ret < 0)
+diff --git a/sound/usb/line6/pod.c b/sound/usb/line6/pod.c
+index cd44cb5f1310c..16e644330c4d6 100644
+--- a/sound/usb/line6/pod.c
++++ b/sound/usb/line6/pod.c
+@@ -376,11 +376,6 @@ static int pod_init(struct usb_line6 *line6,
+ if (err < 0)
+ return err;
+
+- /* initialize MIDI subsystem: */
+- err = line6_init_midi(line6);
+- if (err < 0)
+- return err;
+-
+ /* initialize PCM subsystem: */
+ err = line6_init_pcm(line6, &pod_pcm_properties);
+ if (err < 0)
+diff --git a/sound/usb/line6/variax.c b/sound/usb/line6/variax.c
+index ed158f04de80f..c2245aa93b08f 100644
+--- a/sound/usb/line6/variax.c
++++ b/sound/usb/line6/variax.c
+@@ -159,7 +159,6 @@ static int variax_init(struct usb_line6 *line6,
+ const struct usb_device_id *id)
+ {
+ struct usb_line6_variax *variax = line6_to_variax(line6);
+- int err;
+
+ line6->process_message = line6_variax_process_message;
+ line6->disconnect = line6_variax_disconnect;
+@@ -172,11 +171,6 @@ static int variax_init(struct usb_line6 *line6,
+ if (variax->buffer_activate == NULL)
+ return -ENOMEM;
+
+- /* initialize MIDI subsystem: */
+- err = line6_init_midi(&variax->line6);
+- if (err < 0)
+- return err;
+-
+ /* initiate startup procedure: */
+ schedule_delayed_work(&line6->startup_work,
+ msecs_to_jiffies(VARIAX_STARTUP_DELAY1));
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index cd46ca7cd28de..fa91290ad89db 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1889,8 +1889,12 @@ static int snd_usbmidi_get_ms_info(struct snd_usb_midi *umidi,
+ ms_ep = find_usb_ms_endpoint_descriptor(hostep);
+ if (!ms_ep)
+ continue;
++ if (ms_ep->bLength <= sizeof(*ms_ep))
++ continue;
+ if (ms_ep->bNumEmbMIDIJack > 0x10)
+ continue;
++ if (ms_ep->bLength < sizeof(*ms_ep) + ms_ep->bNumEmbMIDIJack)
++ continue;
+ if (usb_endpoint_dir_out(ep)) {
+ if (endpoints[epidx].out_ep) {
+ if (++epidx >= MIDI_MAX_ENDPOINTS) {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 7c6e83eee71dc..8b8bee3c3dd63 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1511,6 +1511,10 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ case USB_ID(0x2b73, 0x0013): /* Pioneer DJM-450 */
+ pioneer_djm_set_format_quirk(subs, 0x0082);
+ break;
++ case USB_ID(0x08e4, 0x017f): /* Pioneer DJM-750 */
++ case USB_ID(0x08e4, 0x0163): /* Pioneer DJM-850 */
++ pioneer_djm_set_format_quirk(subs, 0x0086);
++ break;
+ }
+ }
+
+diff --git a/tools/testing/selftests/exec/Makefile b/tools/testing/selftests/exec/Makefile
+index cf69b2fcce59e..dd61118df66ed 100644
+--- a/tools/testing/selftests/exec/Makefile
++++ b/tools/testing/selftests/exec/Makefile
+@@ -28,8 +28,8 @@ $(OUTPUT)/execveat.denatured: $(OUTPUT)/execveat
+ cp $< $@
+ chmod -x $@
+ $(OUTPUT)/load_address_4096: load_address.c
+- $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000 -pie $< -o $@
++ $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000 -pie -static $< -o $@
+ $(OUTPUT)/load_address_2097152: load_address.c
+- $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x200000 -pie $< -o $@
++ $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x200000 -pie -static $< -o $@
+ $(OUTPUT)/load_address_16777216: load_address.c
+- $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000000 -pie $< -o $@
++ $(CC) $(CFLAGS) $(LDFLAGS) -Wl,-z,max-page-size=0x1000000 -pie -static $< -o $@
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 98c3b647f54dc..e3d5c77a86121 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -1753,16 +1753,25 @@ TEST_F(TRACE_poke, getpid_runs_normally)
+ # define SYSCALL_RET_SET(_regs, _val) \
+ do { \
+ typeof(_val) _result = (_val); \
+- /* \
+- * A syscall error is signaled by CR0 SO bit \
+- * and the code is stored as a positive value. \
+- */ \
+- if (_result < 0) { \
+- SYSCALL_RET(_regs) = -_result; \
+- (_regs).ccr |= 0x10000000; \
+- } else { \
++ if ((_regs.trap & 0xfff0) == 0x3000) { \
++ /* \
++ * scv 0 system call uses -ve result \
++ * for error, so no need to adjust. \
++ */ \
+ SYSCALL_RET(_regs) = _result; \
+- (_regs).ccr &= ~0x10000000; \
++ } else { \
++ /* \
++ * A syscall error is signaled by the \
++ * CR0 SO bit and the code is stored as \
++ * a positive value. \
++ */ \
++ if (_result < 0) { \
++ SYSCALL_RET(_regs) = -_result; \
++ (_regs).ccr |= 0x10000000; \
++ } else { \
++ SYSCALL_RET(_regs) = _result; \
++ (_regs).ccr &= ~0x10000000; \
++ } \
+ } \
+ } while (0)
+ # define SYSCALL_RET_SET_ON_PTRACE_EXIT