From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	by finch.gentoo.org (Postfix) with ESMTP id DB65E138D01
	for ; Tue, 30 Jun 2015 14:34:29 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id 1DAA8E084A;
	Tue, 30 Jun 2015 14:34:27 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id 9DAB6E084A
	for ; Tue, 30 Jun 2015 14:34:26 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id AD48E3406F3
	for ; Tue, 30 Jun 2015 14:34:25 +0000 (UTC)
Received: from localhost.localdomain (localhost [127.0.0.1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id D3686744
	for ; Tue, 30 Jun 2015 14:34:22 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1435674130.bc9b0397988a797ae691094eb7c83c3e210f29ca.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:3.14 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1045_linux-3.14.46.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: bc9b0397988a797ae691094eb7c83c3e210f29ca
X-VCS-Branch: 3.14
Date: Tue, 30 Jun 2015 14:34:22 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Archives-Salt: 28d065a4-cab2-4359-bfd1-6beba7854ebf
X-Archives-Hash: 96e04bfe367d05050a1dbb7896e71484

commit:     bc9b0397988a797ae691094eb7c83c3e210f29ca
Author:     Mike Pagano gentoo org>
AuthorDate: Tue Jun 30 14:22:10 2015 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Tue Jun 30 14:22:10 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bc9b0397

Linux patch 3.14.46

 0000_README              |   4 +
 1045_linux-3.14.46.patch | 829 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 833 insertions(+)

diff --git a/0000_README b/0000_README
index ac0012f..a04d242 100644
--- a/0000_README
+++ b/0000_README
@@ -222,6 +222,10 @@ Patch: 1044_linux-3.14.45.patch
 From: http://www.kernel.org
 Desc: Linux 3.14.45
 
+Patch: 1045_linux-3.14.46.patch
+From: http://www.kernel.org
+Desc: Linux 3.14.46
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1045_linux-3.14.46.patch b/1045_linux-3.14.46.patch
new file mode 100644
index 0000000..11a3c89
--- /dev/null
+++ b/1045_linux-3.14.46.patch
@@ -0,0 +1,829 @@
+diff --git a/Makefile b/Makefile
+index c92186c3efd7..def39fdd9df4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 3
+ PATCHLEVEL = 14
+-SUBLEVEL = 45
++SUBLEVEL = 46
+ EXTRAVERSION =
+ NAME = Remembering Coco
+
+diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
+index 09af14999c9b..530f56e19931 100644
+--- a/arch/arm/include/asm/kvm_host.h
++++ b/arch/arm/include/asm/kvm_host.h
+@@ -42,7 +42,7 @@
+
+ struct kvm_vcpu;
+ u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
+-int kvm_target_cpu(void);
++int __attribute_const__ kvm_target_cpu(void);
+ int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
+ void kvm_reset_coprocs(struct kvm_vcpu *vcpu);
+
+diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
+index 7b362bc9c09a..0cbdb8ed71cf 100644
+--- a/arch/arm/include/asm/kvm_mmu.h
++++ b/arch/arm/include/asm/kvm_mmu.h
+@@ -127,6 +127,18 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
+ (__boundary - 1 < (end) - 1)?
__boundary: (end); \ + }) + ++static inline bool kvm_page_empty(void *ptr) ++{ ++ struct page *ptr_page = virt_to_page(ptr); ++ return page_count(ptr_page) == 1; ++} ++ ++ ++#define kvm_pte_table_empty(ptep) kvm_page_empty(ptep) ++#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp) ++#define kvm_pud_table_empty(pudp) (0) ++ ++ + struct kvm; + + #define kvm_flush_dcache_to_poc(a,l) __cpuc_flush_dcache_area((a), (l)) +diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S +index 797b1a6a4906..7e666cfda634 100644 +--- a/arch/arm/kernel/hyp-stub.S ++++ b/arch/arm/kernel/hyp-stub.S +@@ -134,9 +134,7 @@ ENTRY(__hyp_stub_install_secondary) + mcr p15, 4, r7, c1, c1, 3 @ HSTR + + THUMB( orr r7, #(1 << 30) ) @ HSCTLR.TE +-#ifdef CONFIG_CPU_BIG_ENDIAN +- orr r7, #(1 << 9) @ HSCTLR.EE +-#endif ++ARM_BE8(orr r7, r7, #(1 << 25)) @ HSCTLR.EE + mcr p15, 4, r7, c1, c0, 0 @ HSCTLR + + mrc p15, 4, r7, c1, c1, 1 @ HDCR +diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c +index bd18bb8b2770..df6e75e47ae0 100644 +--- a/arch/arm/kvm/arm.c ++++ b/arch/arm/kvm/arm.c +@@ -82,7 +82,7 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void) + /** + * kvm_arm_get_running_vcpus - get the per-CPU array of currently running vcpus. 
+ */ +-struct kvm_vcpu __percpu **kvm_get_running_vcpus(void) ++struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void) + { + return &kvm_arm_running_vcpu; + } +@@ -155,16 +155,6 @@ int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf) + return VM_FAULT_SIGBUS; + } + +-void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free, +- struct kvm_memory_slot *dont) +-{ +-} +- +-int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot, +- unsigned long npages) +-{ +- return 0; +-} + + /** + * kvm_arch_destroy_vm - destroy the VM data structure +@@ -224,33 +214,6 @@ long kvm_arch_dev_ioctl(struct file *filp, + return -EINVAL; + } + +-void kvm_arch_memslots_updated(struct kvm *kvm) +-{ +-} +- +-int kvm_arch_prepare_memory_region(struct kvm *kvm, +- struct kvm_memory_slot *memslot, +- struct kvm_userspace_memory_region *mem, +- enum kvm_mr_change change) +-{ +- return 0; +-} +- +-void kvm_arch_commit_memory_region(struct kvm *kvm, +- struct kvm_userspace_memory_region *mem, +- const struct kvm_memory_slot *old, +- enum kvm_mr_change change) +-{ +-} +- +-void kvm_arch_flush_shadow_all(struct kvm *kvm) +-{ +-} +- +-void kvm_arch_flush_shadow_memslot(struct kvm *kvm, +- struct kvm_memory_slot *slot) +-{ +-} + + struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id) + { +diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c +index c58a35116f63..7c732908f1df 100644 +--- a/arch/arm/kvm/coproc.c ++++ b/arch/arm/kvm/coproc.c +@@ -742,7 +742,7 @@ static bool is_valid_cache(u32 val) + u32 level, ctype; + + if (val >= CSSELR_MAX) +- return -ENOENT; ++ return false; + + /* Bottom bit is Instruction or Data bit. Next 3 bits are level. 
*/ + level = (val >> 1); +diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c +index c93ef38f9cb0..70ed2c1f57b0 100644 +--- a/arch/arm/kvm/mmu.c ++++ b/arch/arm/kvm/mmu.c +@@ -90,103 +90,115 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc) + return p; + } + +-static bool page_empty(void *ptr) ++static void clear_pgd_entry(struct kvm *kvm, pgd_t *pgd, phys_addr_t addr) + { +- struct page *ptr_page = virt_to_page(ptr); +- return page_count(ptr_page) == 1; ++ pud_t *pud_table __maybe_unused = pud_offset(pgd, 0); ++ pgd_clear(pgd); ++ kvm_tlb_flush_vmid_ipa(kvm, addr); ++ pud_free(NULL, pud_table); ++ put_page(virt_to_page(pgd)); + } + + static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr) + { +- if (pud_huge(*pud)) { +- pud_clear(pud); +- kvm_tlb_flush_vmid_ipa(kvm, addr); +- } else { +- pmd_t *pmd_table = pmd_offset(pud, 0); +- pud_clear(pud); +- kvm_tlb_flush_vmid_ipa(kvm, addr); +- pmd_free(NULL, pmd_table); +- } ++ pmd_t *pmd_table = pmd_offset(pud, 0); ++ VM_BUG_ON(pud_huge(*pud)); ++ pud_clear(pud); ++ kvm_tlb_flush_vmid_ipa(kvm, addr); ++ pmd_free(NULL, pmd_table); + put_page(virt_to_page(pud)); + } + + static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr) + { +- if (kvm_pmd_huge(*pmd)) { +- pmd_clear(pmd); +- kvm_tlb_flush_vmid_ipa(kvm, addr); +- } else { +- pte_t *pte_table = pte_offset_kernel(pmd, 0); +- pmd_clear(pmd); +- kvm_tlb_flush_vmid_ipa(kvm, addr); +- pte_free_kernel(NULL, pte_table); +- } ++ pte_t *pte_table = pte_offset_kernel(pmd, 0); ++ VM_BUG_ON(kvm_pmd_huge(*pmd)); ++ pmd_clear(pmd); ++ kvm_tlb_flush_vmid_ipa(kvm, addr); ++ pte_free_kernel(NULL, pte_table); + put_page(virt_to_page(pmd)); + } + +-static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr) ++static void unmap_ptes(struct kvm *kvm, pmd_t *pmd, ++ phys_addr_t addr, phys_addr_t end) + { +- if (pte_present(*pte)) { +- kvm_set_pte(pte, __pte(0)); +- put_page(virt_to_page(pte)); +- 
kvm_tlb_flush_vmid_ipa(kvm, addr); ++ phys_addr_t start_addr = addr; ++ pte_t *pte, *start_pte; ++ ++ start_pte = pte = pte_offset_kernel(pmd, addr); ++ do { ++ if (!pte_none(*pte)) { ++ kvm_set_pte(pte, __pte(0)); ++ put_page(virt_to_page(pte)); ++ kvm_tlb_flush_vmid_ipa(kvm, addr); ++ } ++ } while (pte++, addr += PAGE_SIZE, addr != end); ++ ++ if (kvm_pte_table_empty(start_pte)) ++ clear_pmd_entry(kvm, pmd, start_addr); + } +-} + +-static void unmap_range(struct kvm *kvm, pgd_t *pgdp, +- unsigned long long start, u64 size) ++static void unmap_pmds(struct kvm *kvm, pud_t *pud, ++ phys_addr_t addr, phys_addr_t end) + { +- pgd_t *pgd; +- pud_t *pud; +- pmd_t *pmd; +- pte_t *pte; +- unsigned long long addr = start, end = start + size; +- u64 next; +- +- while (addr < end) { +- pgd = pgdp + pgd_index(addr); +- pud = pud_offset(pgd, addr); +- if (pud_none(*pud)) { +- addr = kvm_pud_addr_end(addr, end); +- continue; +- } ++ phys_addr_t next, start_addr = addr; ++ pmd_t *pmd, *start_pmd; + +- if (pud_huge(*pud)) { +- /* +- * If we are dealing with a huge pud, just clear it and +- * move on. 
+- */ +- clear_pud_entry(kvm, pud, addr); +- addr = kvm_pud_addr_end(addr, end); +- continue; ++ start_pmd = pmd = pmd_offset(pud, addr); ++ do { ++ next = kvm_pmd_addr_end(addr, end); ++ if (!pmd_none(*pmd)) { ++ if (kvm_pmd_huge(*pmd)) { ++ pmd_clear(pmd); ++ kvm_tlb_flush_vmid_ipa(kvm, addr); ++ put_page(virt_to_page(pmd)); ++ } else { ++ unmap_ptes(kvm, pmd, addr, next); ++ } + } ++ } while (pmd++, addr = next, addr != end); + +- pmd = pmd_offset(pud, addr); +- if (pmd_none(*pmd)) { +- addr = kvm_pmd_addr_end(addr, end); +- continue; +- } ++ if (kvm_pmd_table_empty(start_pmd)) ++ clear_pud_entry(kvm, pud, start_addr); ++} + +- if (!kvm_pmd_huge(*pmd)) { +- pte = pte_offset_kernel(pmd, addr); +- clear_pte_entry(kvm, pte, addr); +- next = addr + PAGE_SIZE; +- } ++static void unmap_puds(struct kvm *kvm, pgd_t *pgd, ++ phys_addr_t addr, phys_addr_t end) ++{ ++ phys_addr_t next, start_addr = addr; ++ pud_t *pud, *start_pud; + +- /* +- * If the pmd entry is to be cleared, walk back up the ladder +- */ +- if (kvm_pmd_huge(*pmd) || page_empty(pte)) { +- clear_pmd_entry(kvm, pmd, addr); +- next = kvm_pmd_addr_end(addr, end); +- if (page_empty(pmd) && !page_empty(pud)) { +- clear_pud_entry(kvm, pud, addr); +- next = kvm_pud_addr_end(addr, end); ++ start_pud = pud = pud_offset(pgd, addr); ++ do { ++ next = kvm_pud_addr_end(addr, end); ++ if (!pud_none(*pud)) { ++ if (pud_huge(*pud)) { ++ pud_clear(pud); ++ kvm_tlb_flush_vmid_ipa(kvm, addr); ++ put_page(virt_to_page(pud)); ++ } else { ++ unmap_pmds(kvm, pud, addr, next); + } + } ++ } while (pud++, addr = next, addr != end); + +- addr = next; +- } ++ if (kvm_pud_table_empty(start_pud)) ++ clear_pgd_entry(kvm, pgd, start_addr); ++} ++ ++ ++static void unmap_range(struct kvm *kvm, pgd_t *pgdp, ++ phys_addr_t start, u64 size) ++{ ++ pgd_t *pgd; ++ phys_addr_t addr = start, end = start + size; ++ phys_addr_t next; ++ ++ pgd = pgdp + pgd_index(addr); ++ do { ++ next = kvm_pgd_addr_end(addr, end); ++ unmap_puds(kvm, pgd, addr, 
next); ++ } while (pgd++, addr = next, addr != end); + } + + static void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd, +@@ -747,6 +759,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, + struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache; + struct vm_area_struct *vma; + pfn_t pfn; ++ pgprot_t mem_type = PAGE_S2; + + write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu)); + if (fault_status == FSC_PERM && !write_fault) { +@@ -797,6 +810,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, + if (is_error_pfn(pfn)) + return -EFAULT; + ++ if (kvm_is_mmio_pfn(pfn)) ++ mem_type = PAGE_S2_DEVICE; ++ + spin_lock(&kvm->mmu_lock); + if (mmu_notifier_retry(kvm, mmu_seq)) + goto out_unlock; +@@ -804,7 +820,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, + hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa); + + if (hugetlb) { +- pmd_t new_pmd = pfn_pmd(pfn, PAGE_S2); ++ pmd_t new_pmd = pfn_pmd(pfn, mem_type); + new_pmd = pmd_mkhuge(new_pmd); + if (writable) { + kvm_set_s2pmd_writable(&new_pmd); +@@ -813,13 +829,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, + coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE); + ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd); + } else { +- pte_t new_pte = pfn_pte(pfn, PAGE_S2); ++ pte_t new_pte = pfn_pte(pfn, mem_type); + if (writable) { + kvm_set_s2pte_writable(&new_pte); + kvm_set_pfn_dirty(pfn); + } + coherent_cache_guest_page(vcpu, hva, PAGE_SIZE); +- ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, false); ++ ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, ++ mem_type == PAGE_S2_DEVICE); + } + + +@@ -1099,3 +1116,49 @@ out: + free_hyp_pgds(); + return err; + } ++ ++void kvm_arch_commit_memory_region(struct kvm *kvm, ++ struct kvm_userspace_memory_region *mem, ++ const struct kvm_memory_slot *old, ++ enum kvm_mr_change change) ++{ ++ gpa_t gpa = old->base_gfn << 
PAGE_SHIFT; ++ phys_addr_t size = old->npages << PAGE_SHIFT; ++ if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) { ++ spin_lock(&kvm->mmu_lock); ++ unmap_stage2_range(kvm, gpa, size); ++ spin_unlock(&kvm->mmu_lock); ++ } ++} ++ ++int kvm_arch_prepare_memory_region(struct kvm *kvm, ++ struct kvm_memory_slot *memslot, ++ struct kvm_userspace_memory_region *mem, ++ enum kvm_mr_change change) ++{ ++ return 0; ++} ++ ++void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free, ++ struct kvm_memory_slot *dont) ++{ ++} ++ ++int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot, ++ unsigned long npages) ++{ ++ return 0; ++} ++ ++void kvm_arch_memslots_updated(struct kvm *kvm) ++{ ++} ++ ++void kvm_arch_flush_shadow_all(struct kvm *kvm) ++{ ++} ++ ++void kvm_arch_flush_shadow_memslot(struct kvm *kvm, ++ struct kvm_memory_slot *slot) ++{ ++} +diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h +index 0a1d69751562..3fb0946d963a 100644 +--- a/arch/arm64/include/asm/kvm_host.h ++++ b/arch/arm64/include/asm/kvm_host.h +@@ -42,7 +42,7 @@ + #define KVM_VCPU_MAX_FEATURES 2 + + struct kvm_vcpu; +-int kvm_target_cpu(void); ++int __attribute_const__ kvm_target_cpu(void); + int kvm_reset_vcpu(struct kvm_vcpu *vcpu); + int kvm_arch_dev_ioctl_check_extension(long ext); + +@@ -177,7 +177,7 @@ static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva) + } + + struct kvm_vcpu *kvm_arm_get_running_vcpu(void); +-struct kvm_vcpu __percpu **kvm_get_running_vcpus(void); ++struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void); + + u64 kvm_call_hyp(void *hypfn, ...); + +diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h +index 7d29847a893b..8e138c7c53ac 100644 +--- a/arch/arm64/include/asm/kvm_mmu.h ++++ b/arch/arm64/include/asm/kvm_mmu.h +@@ -125,6 +125,21 @@ static inline void kvm_set_s2pmd_writable(pmd_t *pmd) + #define kvm_pud_addr_end(addr, end) pud_addr_end(addr, end) + #define 
kvm_pmd_addr_end(addr, end) pmd_addr_end(addr, end) + ++static inline bool kvm_page_empty(void *ptr) ++{ ++ struct page *ptr_page = virt_to_page(ptr); ++ return page_count(ptr_page) == 1; ++} ++ ++#define kvm_pte_table_empty(ptep) kvm_page_empty(ptep) ++#ifndef CONFIG_ARM64_64K_PAGES ++#define kvm_pmd_table_empty(pmdp) kvm_page_empty(pmdp) ++#else ++#define kvm_pmd_table_empty(pmdp) (0) ++#endif ++#define kvm_pud_table_empty(pudp) (0) ++ ++ + struct kvm; + + #define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l)) +diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S +index b0d1512acf08..5dfc8331c385 100644 +--- a/arch/arm64/kvm/hyp.S ++++ b/arch/arm64/kvm/hyp.S +@@ -830,7 +830,7 @@ el1_trap: + mrs x2, far_el2 + + 2: mrs x0, tpidr_el2 +- str x1, [x0, #VCPU_ESR_EL2] ++ str w1, [x0, #VCPU_ESR_EL2] + str x2, [x0, #VCPU_FAR_EL2] + str x3, [x0, #VCPU_HPFAR_EL2] + +diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c +index 03244582bc55..7691b2563d27 100644 +--- a/arch/arm64/kvm/sys_regs.c ++++ b/arch/arm64/kvm/sys_regs.c +@@ -836,7 +836,7 @@ static bool is_valid_cache(u32 val) + u32 level, ctype; + + if (val >= CSSELR_MAX) +- return -ENOENT; ++ return false; + + /* Bottom bit is Instruction or Data bit. Next 3 bits are level. 
*/ + level = (val >> 1); +@@ -962,7 +962,7 @@ static unsigned int num_demux_regs(void) + + static int write_demux_regids(u64 __user *uindices) + { +- u64 val = KVM_REG_ARM | KVM_REG_SIZE_U32 | KVM_REG_ARM_DEMUX; ++ u64 val = KVM_REG_ARM64 | KVM_REG_SIZE_U32 | KVM_REG_ARM_DEMUX; + unsigned int i; + + val |= KVM_REG_ARM_DEMUX_ID_CCSIDR; +diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c +index 26b03e1254ef..8ff2b3ca7ee9 100644 +--- a/drivers/bluetooth/ath3k.c ++++ b/drivers/bluetooth/ath3k.c +@@ -79,6 +79,7 @@ static const struct usb_device_id ath3k_table[] = { + { USB_DEVICE(0x0489, 0xe057) }, + { USB_DEVICE(0x0489, 0xe056) }, + { USB_DEVICE(0x0489, 0xe05f) }, ++ { USB_DEVICE(0x0489, 0xe076) }, + { USB_DEVICE(0x0489, 0xe078) }, + { USB_DEVICE(0x04c5, 0x1330) }, + { USB_DEVICE(0x04CA, 0x3004) }, +@@ -109,6 +110,7 @@ static const struct usb_device_id ath3k_table[] = { + { USB_DEVICE(0x13d3, 0x3402) }, + { USB_DEVICE(0x13d3, 0x3408) }, + { USB_DEVICE(0x13d3, 0x3432) }, ++ { USB_DEVICE(0x13d3, 0x3474) }, + + /* Atheros AR5BBU12 with sflash firmware */ + { USB_DEVICE(0x0489, 0xE02C) }, +@@ -133,6 +135,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = { + { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0489, 0xe05f), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x0489, 0xe076), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0489, 0xe078), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 }, +@@ -163,6 +166,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = { + { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 }, + + /* Atheros AR5BBU22 
with sflash firmware */ + { USB_DEVICE(0x0489, 0xE036), .driver_info = BTUSB_ATH3012 }, +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c +index 9eb1669962ef..c0e7a9aa97a4 100644 +--- a/drivers/bluetooth/btusb.c ++++ b/drivers/bluetooth/btusb.c +@@ -157,6 +157,7 @@ static const struct usb_device_id blacklist_table[] = { + { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0489, 0xe05f), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x0489, 0xe076), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0489, 0xe078), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 }, +@@ -187,6 +188,7 @@ static const struct usb_device_id blacklist_table[] = { + { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 }, + + /* Atheros AR5BBU12 with sflash firmware */ + { USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE }, +diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c +index 28486b19fc36..ae6dae8ef7ab 100644 +--- a/drivers/crypto/caam/caamrng.c ++++ b/drivers/crypto/caam/caamrng.c +@@ -56,7 +56,7 @@ + + /* Buffer, its dma address and lock */ + struct buf_data { +- u8 buf[RN_BUF_SIZE]; ++ u8 buf[RN_BUF_SIZE] ____cacheline_aligned; + dma_addr_t addr; + struct completion filled; + u32 hw_desc[DESC_JOB_O_LEN]; +diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c +index 968374776db9..f2511a03e3e9 100644 +--- a/drivers/gpu/drm/mgag200/mgag200_mode.c ++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c +@@ -1529,6 +1529,11 @@ static int mga_vga_mode_valid(struct drm_connector *connector, + return MODE_BANDWIDTH; + } + 
++ if ((mode->hdisplay % 8) != 0 || (mode->hsync_start % 8) != 0 || ++ (mode->hsync_end % 8) != 0 || (mode->htotal % 8) != 0) { ++ return MODE_H_ILLEGAL; ++ } ++ + if (mode->crtc_hdisplay > 2048 || mode->crtc_hsync_start > 4096 || + mode->crtc_hsync_end > 4096 || mode->crtc_htotal > 4096 || + mode->crtc_vdisplay > 2048 || mode->crtc_vsync_start > 4096 || +diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c +index 8f580fda443f..ce211328bc1c 100644 +--- a/drivers/scsi/lpfc/lpfc_sli.c ++++ b/drivers/scsi/lpfc/lpfc_sli.c +@@ -265,6 +265,16 @@ lpfc_sli4_eq_get(struct lpfc_queue *q) + return NULL; + + q->hba_index = idx; ++ ++ /* ++ * insert barrier for instruction interlock : data from the hardware ++ * must have the valid bit checked before it can be copied and acted ++ * upon. Given what was seen in lpfc_sli4_cq_get() of speculative ++ * instructions allowing action on content before valid bit checked, ++ * add barrier here as well. May not be needed as "content" is a ++ * single 32-bit entity here (vs multi word structure for cq's). ++ */ ++ mb(); + return eqe; + } + +@@ -370,6 +380,17 @@ lpfc_sli4_cq_get(struct lpfc_queue *q) + + cqe = q->qe[q->hba_index].cqe; + q->hba_index = idx; ++ ++ /* ++ * insert barrier for instruction interlock : data from the hardware ++ * must have the valid bit checked before it can be copied and acted ++ * upon. Speculative instructions were allowing a bcopy at the start ++ * of lpfc_sli4_fp_handle_wcqe(), which is called immediately ++ * after our return, to copy data before the valid bit check above ++ * was done. As such, some of the copied data was stale. The barrier ++ * ensures the check is before any data is copied. 
++ */ ++ mb(); + return cqe; + } + +diff --git a/fs/pipe.c b/fs/pipe.c +index 78fd0d0788db..46f1ab264a4c 100644 +--- a/fs/pipe.c ++++ b/fs/pipe.c +@@ -117,25 +117,27 @@ void pipe_wait(struct pipe_inode_info *pipe) + } + + static int +-pipe_iov_copy_from_user(void *to, struct iovec *iov, unsigned long len, +- int atomic) ++pipe_iov_copy_from_user(void *addr, int *offset, struct iovec *iov, ++ size_t *remaining, int atomic) + { + unsigned long copy; + +- while (len > 0) { ++ while (*remaining > 0) { + while (!iov->iov_len) + iov++; +- copy = min_t(unsigned long, len, iov->iov_len); ++ copy = min_t(unsigned long, *remaining, iov->iov_len); + + if (atomic) { +- if (__copy_from_user_inatomic(to, iov->iov_base, copy)) ++ if (__copy_from_user_inatomic(addr + *offset, ++ iov->iov_base, copy)) + return -EFAULT; + } else { +- if (copy_from_user(to, iov->iov_base, copy)) ++ if (copy_from_user(addr + *offset, ++ iov->iov_base, copy)) + return -EFAULT; + } +- to += copy; +- len -= copy; ++ *offset += copy; ++ *remaining -= copy; + iov->iov_base += copy; + iov->iov_len -= copy; + } +@@ -143,25 +145,27 @@ pipe_iov_copy_from_user(void *to, struct iovec *iov, unsigned long len, + } + + static int +-pipe_iov_copy_to_user(struct iovec *iov, const void *from, unsigned long len, +- int atomic) ++pipe_iov_copy_to_user(struct iovec *iov, void *addr, int *offset, ++ size_t *remaining, int atomic) + { + unsigned long copy; + +- while (len > 0) { ++ while (*remaining > 0) { + while (!iov->iov_len) + iov++; +- copy = min_t(unsigned long, len, iov->iov_len); ++ copy = min_t(unsigned long, *remaining, iov->iov_len); + + if (atomic) { +- if (__copy_to_user_inatomic(iov->iov_base, from, copy)) ++ if (__copy_to_user_inatomic(iov->iov_base, ++ addr + *offset, copy)) + return -EFAULT; + } else { +- if (copy_to_user(iov->iov_base, from, copy)) ++ if (copy_to_user(iov->iov_base, ++ addr + *offset, copy)) + return -EFAULT; + } +- from += copy; +- len -= copy; ++ *offset += copy; ++ *remaining -= copy; 
+ iov->iov_base += copy; + iov->iov_len -= copy; + } +@@ -395,7 +399,7 @@ pipe_read(struct kiocb *iocb, const struct iovec *_iov, + struct pipe_buffer *buf = pipe->bufs + curbuf; + const struct pipe_buf_operations *ops = buf->ops; + void *addr; +- size_t chars = buf->len; ++ size_t chars = buf->len, remaining; + int error, atomic; + + if (chars > total_len) +@@ -409,9 +413,11 @@ pipe_read(struct kiocb *iocb, const struct iovec *_iov, + } + + atomic = !iov_fault_in_pages_write(iov, chars); ++ remaining = chars; + redo: + addr = ops->map(pipe, buf, atomic); +- error = pipe_iov_copy_to_user(iov, addr + buf->offset, chars, atomic); ++ error = pipe_iov_copy_to_user(iov, addr, &buf->offset, ++ &remaining, atomic); + ops->unmap(pipe, buf, addr); + if (unlikely(error)) { + /* +@@ -426,7 +432,6 @@ redo: + break; + } + ret += chars; +- buf->offset += chars; + buf->len -= chars; + + /* Was it a packet buffer? Clean up and exit */ +@@ -531,6 +536,7 @@ pipe_write(struct kiocb *iocb, const struct iovec *_iov, + if (ops->can_merge && offset + chars <= PAGE_SIZE) { + int error, atomic = 1; + void *addr; ++ size_t remaining = chars; + + error = ops->confirm(pipe, buf); + if (error) +@@ -539,8 +545,8 @@ pipe_write(struct kiocb *iocb, const struct iovec *_iov, + iov_fault_in_pages_read(iov, chars); + redo1: + addr = ops->map(pipe, buf, atomic); +- error = pipe_iov_copy_from_user(offset + addr, iov, +- chars, atomic); ++ error = pipe_iov_copy_from_user(addr, &offset, iov, ++ &remaining, atomic); + ops->unmap(pipe, buf, addr); + ret = error; + do_wakeup = 1; +@@ -575,6 +581,8 @@ redo1: + struct page *page = pipe->tmp_page; + char *src; + int error, atomic = 1; ++ int offset = 0; ++ size_t remaining; + + if (!page) { + page = alloc_page(GFP_HIGHUSER); +@@ -595,14 +603,15 @@ redo1: + chars = total_len; + + iov_fault_in_pages_read(iov, chars); ++ remaining = chars; + redo2: + if (atomic) + src = kmap_atomic(page); + else + src = kmap(page); + +- error = pipe_iov_copy_from_user(src, iov, 
chars, +- atomic); ++ error = pipe_iov_copy_from_user(src, &offset, iov, ++ &remaining, atomic); + if (atomic) + kunmap_atomic(src); + else +diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c +index 8a8631926a07..cb347e85f75e 100644 +--- a/kernel/trace/trace_events_filter.c ++++ b/kernel/trace/trace_events_filter.c +@@ -1399,19 +1399,24 @@ static int check_preds(struct filter_parse_state *ps) + { + int n_normal_preds = 0, n_logical_preds = 0; + struct postfix_elt *elt; ++ int cnt = 0; + + list_for_each_entry(elt, &ps->postfix, list) { +- if (elt->op == OP_NONE) ++ if (elt->op == OP_NONE) { ++ cnt++; + continue; ++ } + ++ cnt--; + if (elt->op == OP_AND || elt->op == OP_OR) { + n_logical_preds++; + continue; + } + n_normal_preds++; ++ WARN_ON_ONCE(cnt < 0); + } + +- if (!n_normal_preds || n_logical_preds >= n_normal_preds) { ++ if (cnt != 1 || !n_normal_preds || n_logical_preds >= n_normal_preds) { + parse_error(ps, FILT_ERR_INVALID_FILTER, 0); + return -EINVAL; + } +diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c +index 4eec2d436109..1316e558db64 100644 +--- a/virt/kvm/arm/vgic.c ++++ b/virt/kvm/arm/vgic.c +@@ -1654,7 +1654,7 @@ out: + return ret; + } + +-static bool vgic_ioaddr_overlap(struct kvm *kvm) ++static int vgic_ioaddr_overlap(struct kvm *kvm) + { + phys_addr_t dist = kvm->arch.vgic.vgic_dist_base; + phys_addr_t cpu = kvm->arch.vgic.vgic_cpu_base;