From: "Anthony G. Basile"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Anthony G.
Basile" Message-ID: <1406298913.64a02f06fb83ec19cb979fabac2117596143adf8.blueness@gentoo> Subject: [gentoo-commits] proj/hardened-patchset:master commit in: 3.14.13/, 3.15.6/, 3.15.5/, 3.14.12/, 3.2.61/ X-VCS-Repository: proj/hardened-patchset X-VCS-Files: 3.14.12/0000_README 3.14.12/4420_grsecurity-3.0-3.14.12-201407170638.patch 3.14.12/4425_grsec_remove_EI_PAX.patch 3.14.12/4427_force_XATTR_PAX_tmpfs.patch 3.14.12/4430_grsec-remove-localversion-grsec.patch 3.14.12/4435_grsec-mute-warnings.patch 3.14.12/4440_grsec-remove-protected-paths.patch 3.14.12/4450_grsec-kconfig-default-gids.patch 3.14.12/4465_selinux-avc_audit-log-curr_ip.patch 3.14.12/4470_disable-compat_vdso.patch 3.14.12/4475_emutramp_default_on.patch 3.14.13/0000_README 3.14.13/4420_grsecurity-3.0-3.14.13-201407232159.patch 3.14.13/4425_grsec_remove_EI_PAX.patch 3.14.13/4427_force_XATTR_PAX_tmpfs.patch 3.14.13/4430_grsec-remove-localversion-grsec.patch 3.14.13/4435_grsec-mute-warnings.patch 3.14.13/4440_grsec-remove-protected-paths.patch 3.14.13/4450_grsec-kconfig-default-gids.patch 3.14.13/4465_selinux-avc_audit-log-curr_ip.patch 3.14.13/4470_disable-compat_vdso.patch 3.14.13/4475_emutramp_default_on.patch 3.15.5/0000_README 3.15.5/4420_grsecurity-3.0-3.15.5- 201407170639.patch 3.15.5/4425_grsec_remove_EI_PAX.patch 3.15.5/4427_force_XATTR_PAX_tmpfs.patch 3.15.5/4430_grsec-remove-localversion-grsec.patch 3.15.5/4435_grsec-mute-warnings.patch 3.15.5/4440_grsec-remove-protected-paths.patch 3.15.5/4450_grsec-kconfig-default-gids.patch 3.15.5/4465_selinux-avc_audit-log-curr_ip.patch 3.15.5/4470_disable-compat_vdso.patch 3.15.5/4475_emutramp_default_on.patch 3.15.6/0000_README 3.15.6/4420_grsecurity-3.0-3.15.6-201407232200.patch 3.15.6/4425_grsec_remove_EI_PAX.patch 3.15.6/4427_force_XATTR_PAX_tmpfs.patch 3.15.6/4430_grsec-remove-localversion-grsec.patch 3.15.6/4435_grsec-mute-warnings.patch 3.15.6/4440_grsec-remove-protected-paths.patch 3.15.6/4450_grsec-kconfig-default-gids.patch 
3.15.6/4465_selinux-avc_audit-log-curr_ip.patch 3.15.6/4470_disable-compat_vdso.patch 3.15.6/4475_emutramp_default_on.patch 3.2.61/0000_README 3.2.61/4420_grsecurity-3.0-3.2.61-201407170636.patch 3.2.61/4420_grsecurity-3.0-3.2.61-201407232156.patch X-VCS-Directories: 3.14.13/ 3.15.6/ 3.15.5/ 3.14.12/ 3.2.61/ X-VCS-Committer: blueness X-VCS-Committer-Name: Anthony G. Basile X-VCS-Revision: 64a02f06fb83ec19cb979fabac2117596143adf8 X-VCS-Branch: master Date: Tue, 19 Aug 2014 14:07:29 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Archives-Salt: e17d9423-e50d-4219-af3d-80fe47007947 X-Archives-Hash: 104be0fc25beb91b8177467128fdc5db Message-ID: <20140819140729.rGROLJbMVv_pOQS2QKt641KsXpZoE_bRKRCCQ-CBAjI@z> commit: 64a02f06fb83ec19cb979fabac2117596143adf8 Author: Anthony G. Basile gentoo org> AuthorDate: Fri Jul 25 14:35:13 2014 +0000 Commit: Anthony G. Basile gentoo org> CommitDate: Fri Jul 25 14:35:13 2014 +0000 URL: http://sources.gentoo.org/gitweb/?p=proj/hardened-patchset.git;a=commit;h=64a02f06 Grsec/PaX: 3.0-{3.2.60,3.14.13,3.15.6}-201407232200 --- {3.14.12 => 3.14.13}/0000_README | 2 +- .../4420_grsecurity-3.0-3.14.13-201407232159.patch | 432 +++++++++---- .../4425_grsec_remove_EI_PAX.patch | 0 .../4427_force_XATTR_PAX_tmpfs.patch | 0 .../4430_grsec-remove-localversion-grsec.patch | 0 .../4435_grsec-mute-warnings.patch | 0 .../4440_grsec-remove-protected-paths.patch | 0 .../4450_grsec-kconfig-default-gids.patch | 0 .../4465_selinux-avc_audit-log-curr_ip.patch | 0 .../4470_disable-compat_vdso.patch | 0 .../4475_emutramp_default_on.patch | 0 {3.15.5 => 3.15.6}/0000_README | 2 +- .../4420_grsecurity-3.0-3.15.6-201407232200.patch | 699 ++++++++++++--------- {3.15.5 => 3.15.6}/4425_grsec_remove_EI_PAX.patch | 0 .../4427_force_XATTR_PAX_tmpfs.patch | 0 .../4430_grsec-remove-localversion-grsec.patch | 0 {3.15.5 => 3.15.6}/4435_grsec-mute-warnings.patch | 0 
.../4440_grsec-remove-protected-paths.patch | 0 .../4450_grsec-kconfig-default-gids.patch | 0 .../4465_selinux-avc_audit-log-curr_ip.patch | 0 {3.15.5 => 3.15.6}/4470_disable-compat_vdso.patch | 0 {3.15.5 => 3.15.6}/4475_emutramp_default_on.patch | 0 3.2.61/0000_README | 2 +- ... 4420_grsecurity-3.0-3.2.61-201407232156.patch} | 144 ++++- 24 files changed, 802 insertions(+), 479 deletions(-) diff --git a/3.14.12/0000_README b/3.14.13/0000_README similarity index 96% rename from 3.14.12/0000_README rename to 3.14.13/0000_README index 857c6a1..ed0d890 100644 --- a/3.14.12/0000_README +++ b/3.14.13/0000_README @@ -2,7 +2,7 @@ README ----------------------------------------------------------------------------- Individual Patch Descriptions: ----------------------------------------------------------------------------- -Patch: 4420_grsecurity-3.0-3.14.12-201407170638.patch +Patch: 4420_grsecurity-3.0-3.14.13-201407232159.patch From: http://www.grsecurity.net Desc: hardened-sources base patch from upstream grsecurity diff --git a/3.14.12/4420_grsecurity-3.0-3.14.12-201407170638.patch b/3.14.13/4420_grsecurity-3.0-3.14.13-201407232159.patch similarity index 99% rename from 3.14.12/4420_grsecurity-3.0-3.14.12-201407170638.patch rename to 3.14.13/4420_grsecurity-3.0-3.14.13-201407232159.patch index 02636ed..81dff0f 100644 --- a/3.14.12/4420_grsecurity-3.0-3.14.12-201407170638.patch +++ b/3.14.13/4420_grsecurity-3.0-3.14.13-201407232159.patch @@ -287,7 +287,7 @@ index 7116fda..d8ed6e8 100644 pcd. 
[PARIDE] diff --git a/Makefile b/Makefile -index 13d8f32..a7a7b9b 100644 +index 7a2981c..9fadd78 100644 --- a/Makefile +++ b/Makefile @@ -244,8 +244,9 @@ CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \ @@ -7700,7 +7700,7 @@ index 50dfafc..b9fc230 100644 DEBUGP("register_unwind_table(), sect = %d at 0x%p - 0x%p (gp=0x%lx)\n", me->arch.unwind_section, table, end, gp); diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c -index 31ffa9b..588a798 100644 +index e1ffea2..46ed66e 100644 --- a/arch/parisc/kernel/sys_parisc.c +++ b/arch/parisc/kernel/sys_parisc.c @@ -89,6 +89,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, @@ -7960,7 +7960,7 @@ index d72197f..c017c84 100644 /* * If for any reason at all we couldn't handle the fault, make diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig -index 957bf34..3430cc8 100644 +index 2156fa2..cc28613 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -393,6 +393,7 @@ config PPC64_SUPPORTS_MEMORY_FAILURE @@ -33352,19 +33352,21 @@ index 7b179b4..6bd17777 100644 return (void *)vaddr; diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c -index 799580c..72f9fe0 100644 +index 94bd247..7e48391 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c -@@ -97,7 +97,7 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr, - for (pfn = phys_addr >> PAGE_SHIFT; pfn <= last_pfn; pfn++) { - int is_ram = page_is_ram(pfn); +@@ -56,8 +56,8 @@ static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages, + unsigned long i; + + for (i = 0; i < nr_pages; ++i) +- if (pfn_valid(start_pfn + i) && +- !PageReserved(pfn_to_page(start_pfn + i))) ++ if (pfn_valid(start_pfn + i) && (start_pfn + i >= 0x100 || ++ !PageReserved(pfn_to_page(start_pfn + i)))) + return 1; -- if (is_ram && pfn_valid(pfn) && !PageReserved(pfn_to_page(pfn))) -+ if (is_ram && pfn_valid(pfn) && (pfn >= 0x100 || !PageReserved(pfn_to_page(pfn)))) - return 
NULL; - WARN_ON_ONCE(is_ram); - } -@@ -256,7 +256,7 @@ EXPORT_SYMBOL(ioremap_prot); + WARN_ONCE(1, "ioremap on RAM pfn 0x%lx\n", start_pfn); +@@ -268,7 +268,7 @@ EXPORT_SYMBOL(ioremap_prot); * * Caller must ensure there is only one unmapping for the same pointer. */ @@ -33373,7 +33375,7 @@ index 799580c..72f9fe0 100644 { struct vm_struct *p, *o; -@@ -310,6 +310,9 @@ void *xlate_dev_mem_ptr(unsigned long phys) +@@ -322,6 +322,9 @@ void *xlate_dev_mem_ptr(unsigned long phys) /* If page is RAM, we can use __va. Otherwise ioremap and unmap. */ if (page_is_ram(start >> PAGE_SHIFT)) @@ -33383,7 +33385,7 @@ index 799580c..72f9fe0 100644 return __va(phys); addr = (void __force *)ioremap_cache(start, PAGE_SIZE); -@@ -322,6 +325,9 @@ void *xlate_dev_mem_ptr(unsigned long phys) +@@ -334,6 +337,9 @@ void *xlate_dev_mem_ptr(unsigned long phys) void unxlate_dev_mem_ptr(unsigned long phys, void *addr) { if (page_is_ram(phys >> PAGE_SHIFT)) @@ -33393,7 +33395,7 @@ index 799580c..72f9fe0 100644 return; iounmap((void __iomem *)((unsigned long)addr & PAGE_MASK)); -@@ -339,7 +345,7 @@ static int __init early_ioremap_debug_setup(char *str) +@@ -351,7 +357,7 @@ static int __init early_ioremap_debug_setup(char *str) early_param("early_ioremap_debug", early_ioremap_debug_setup); static __initdata int after_paging_init; @@ -33402,7 +33404,7 @@ index 799580c..72f9fe0 100644 static inline pmd_t * __init early_ioremap_pmd(unsigned long addr) { -@@ -376,8 +382,7 @@ void __init early_ioremap_init(void) +@@ -388,8 +394,7 @@ void __init early_ioremap_init(void) slot_virt[i] = __fix_to_virt(FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*i); pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)); @@ -39664,7 +39666,7 @@ index 18d4091..434be15 100644 } EXPORT_SYMBOL_GPL(od_unregister_powersave_bias_handler); diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c -index 6d98c37..a592321 100644 +index ae52c77..3d8f69b 100644 --- a/drivers/cpufreq/intel_pstate.c +++ 
b/drivers/cpufreq/intel_pstate.c @@ -125,10 +125,10 @@ struct pstate_funcs { @@ -39680,7 +39682,7 @@ index 6d98c37..a592321 100644 struct perf_limits { int no_turbo; -@@ -526,7 +526,7 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate) +@@ -530,7 +530,7 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate) cpu->pstate.current_pstate = pstate; @@ -39689,7 +39691,7 @@ index 6d98c37..a592321 100644 } static inline void intel_pstate_pstate_increase(struct cpudata *cpu, int steps) -@@ -548,12 +548,12 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) +@@ -552,12 +552,12 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) { sprintf(cpu->name, "Intel 2nd generation core"); @@ -39707,7 +39709,7 @@ index 6d98c37..a592321 100644 intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate); } -@@ -835,9 +835,9 @@ static int intel_pstate_msrs_not_valid(void) +@@ -844,9 +844,9 @@ static int intel_pstate_msrs_not_valid(void) rdmsrl(MSR_IA32_APERF, aperf); rdmsrl(MSR_IA32_MPERF, mperf); @@ -39720,7 +39722,7 @@ index 6d98c37..a592321 100644 return -ENODEV; rdmsrl(MSR_IA32_APERF, tmp); -@@ -851,7 +851,7 @@ static int intel_pstate_msrs_not_valid(void) +@@ -860,7 +860,7 @@ static int intel_pstate_msrs_not_valid(void) return 0; } @@ -39729,7 +39731,7 @@ index 6d98c37..a592321 100644 { pid_params.sample_rate_ms = policy->sample_rate_ms; pid_params.p_gain_pct = policy->p_gain_pct; -@@ -863,11 +863,7 @@ static void copy_pid_params(struct pstate_adjust_policy *policy) +@@ -872,11 +872,7 @@ static void copy_pid_params(struct pstate_adjust_policy *policy) static void copy_cpu_funcs(struct pstate_funcs *funcs) { @@ -44543,10 +44545,10 @@ index b086a94..74cb67e 100644 pmd->bl_info.value_type.inc = data_block_inc; pmd->bl_info.value_type.dec = data_block_dec; diff --git a/drivers/md/dm.c b/drivers/md/dm.c -index 8c53b09..f1fb2b0 100644 +index 65ee3a0..1852af9 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c -@@ -185,9 +185,9 @@ struct 
mapped_device { +@@ -187,9 +187,9 @@ struct mapped_device { /* * Event handling. */ @@ -44558,7 +44560,7 @@ index 8c53b09..f1fb2b0 100644 struct list_head uevent_list; spinlock_t uevent_lock; /* Protect access to uevent_list */ -@@ -1888,8 +1888,8 @@ static struct mapped_device *alloc_dev(int minor) +@@ -1899,8 +1899,8 @@ static struct mapped_device *alloc_dev(int minor) spin_lock_init(&md->deferred_lock); atomic_set(&md->holders, 1); atomic_set(&md->open_count, 0); @@ -44569,7 +44571,7 @@ index 8c53b09..f1fb2b0 100644 INIT_LIST_HEAD(&md->uevent_list); spin_lock_init(&md->uevent_lock); -@@ -2043,7 +2043,7 @@ static void event_callback(void *context) +@@ -2054,7 +2054,7 @@ static void event_callback(void *context) dm_send_uevents(&uevents, &disk_to_dev(md->disk)->kobj); @@ -44578,7 +44580,7 @@ index 8c53b09..f1fb2b0 100644 wake_up(&md->eventq); } -@@ -2736,18 +2736,18 @@ int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action, +@@ -2747,18 +2747,18 @@ int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action, uint32_t dm_next_uevent_seq(struct mapped_device *md) { @@ -47503,6 +47505,24 @@ index 5920c99..ff2e4a5 100644 }; static void +diff --git a/drivers/net/wan/x25_asy.c b/drivers/net/wan/x25_asy.c +index 5895f19..fa9fdfa 100644 +--- a/drivers/net/wan/x25_asy.c ++++ b/drivers/net/wan/x25_asy.c +@@ -122,8 +122,12 @@ static int x25_asy_change_mtu(struct net_device *dev, int newmtu) + { + struct x25_asy *sl = netdev_priv(dev); + unsigned char *xbuff, *rbuff; +- int len = 2 * newmtu; ++ int len; + ++ if (newmtu > 65534) ++ return -EINVAL; ++ ++ len = 2 * newmtu; + xbuff = kmalloc(len + 4, GFP_ATOMIC); + rbuff = kmalloc(len + 4, GFP_ATOMIC); + diff --git a/drivers/net/wan/z85230.c b/drivers/net/wan/z85230.c index feacc3b..5bac0de 100644 --- a/drivers/net/wan/z85230.c @@ -51951,7 +51971,7 @@ index 9cd706d..6ff2de7 100644 if (cfg->uart_flags & UPF_CONS_FLOW) { diff --git a/drivers/tty/serial/serial_core.c 
b/drivers/tty/serial/serial_core.c -index ece2049..fba2524 100644 +index ece2049b..fba2524 100644 --- a/drivers/tty/serial/serial_core.c +++ b/drivers/tty/serial/serial_core.c @@ -1448,7 +1448,7 @@ static void uart_hangup(struct tty_struct *tty) @@ -60208,7 +60228,7 @@ index e6574d7..c30cbe2 100644 brelse(bh); bh = NULL; diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c -index 08ddfda..a48f3f6 100644 +index 502f0fd..bf3b3c1 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -1880,7 +1880,7 @@ void ext4_mb_simple_scan_group(struct ext4_allocation_context *ac, @@ -60338,7 +60358,7 @@ index 04434ad..6404663 100644 "MMP failure info: last update time: %llu, last update " "node: %s, last update device: %s\n", diff --git a/fs/ext4/super.c b/fs/ext4/super.c -index 710fed2..a82e4e8 100644 +index 25b327e..56f169d 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1270,7 +1270,7 @@ static ext4_fsblk_t get_sb_block(void **data) @@ -60350,7 +60370,7 @@ index 710fed2..a82e4e8 100644 "Contact linux-ext4@vger.kernel.org if you think we should keep it.\n"; #ifdef CONFIG_QUOTA -@@ -2450,7 +2450,7 @@ struct ext4_attr { +@@ -2448,7 +2448,7 @@ struct ext4_attr { int offset; int deprecated_val; } u; @@ -62357,7 +62377,7 @@ index b29e42f..5ea7fdf 100644 #define MNT_NS_INTERNAL ERR_PTR(-EINVAL) /* distinct from any mnt_namespace */ diff --git a/fs/namei.c b/fs/namei.c -index 8274c8d..922e189 100644 +index 8274c8d..e242796 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -330,17 +330,34 @@ int generic_permission(struct inode *inode, int mask) @@ -62493,7 +62513,19 @@ index 8274c8d..922e189 100644 return retval; } -@@ -2557,6 +2590,13 @@ static int may_open(struct path *path, int acc_mode, int flag) +@@ -2247,9 +2280,10 @@ done: + goto out; + } + path->dentry = dentry; +- path->mnt = mntget(nd->path.mnt); ++ path->mnt = nd->path.mnt; + if (should_follow_link(dentry, nd->flags & LOOKUP_FOLLOW)) + return 1; ++ mntget(path->mnt); + follow_mount(path); + error = 0; + out: +@@ -2557,6 
+2591,13 @@ static int may_open(struct path *path, int acc_mode, int flag) if (flag & O_NOATIME && !inode_owner_or_capable(inode)) return -EPERM; @@ -62507,7 +62539,7 @@ index 8274c8d..922e189 100644 return 0; } -@@ -2788,7 +2828,7 @@ looked_up: +@@ -2788,7 +2829,7 @@ looked_up: * cleared otherwise prior to returning. */ static int lookup_open(struct nameidata *nd, struct path *path, @@ -62516,7 +62548,7 @@ index 8274c8d..922e189 100644 const struct open_flags *op, bool got_write, int *opened) { -@@ -2823,6 +2863,17 @@ static int lookup_open(struct nameidata *nd, struct path *path, +@@ -2823,6 +2864,17 @@ static int lookup_open(struct nameidata *nd, struct path *path, /* Negative dentry, just create the file */ if (!dentry->d_inode && (op->open_flag & O_CREAT)) { umode_t mode = op->mode; @@ -62534,7 +62566,7 @@ index 8274c8d..922e189 100644 if (!IS_POSIXACL(dir->d_inode)) mode &= ~current_umask(); /* -@@ -2844,6 +2895,8 @@ static int lookup_open(struct nameidata *nd, struct path *path, +@@ -2844,6 +2896,8 @@ static int lookup_open(struct nameidata *nd, struct path *path, nd->flags & LOOKUP_EXCL); if (error) goto out_dput; @@ -62543,7 +62575,7 @@ index 8274c8d..922e189 100644 } out_no_open: path->dentry = dentry; -@@ -2858,7 +2911,7 @@ out_dput: +@@ -2858,7 +2912,7 @@ out_dput: /* * Handle the last step of open() */ @@ -62552,7 +62584,7 @@ index 8274c8d..922e189 100644 struct file *file, const struct open_flags *op, int *opened, struct filename *name) { -@@ -2908,6 +2961,15 @@ static int do_last(struct nameidata *nd, struct path *path, +@@ -2908,6 +2962,15 @@ static int do_last(struct nameidata *nd, struct path *path, if (error) return error; @@ -62568,7 +62600,7 @@ index 8274c8d..922e189 100644 audit_inode(name, dir, LOOKUP_PARENT); error = -EISDIR; /* trailing slashes? 
*/ -@@ -2927,7 +2989,7 @@ retry_lookup: +@@ -2927,7 +2990,7 @@ retry_lookup: */ } mutex_lock(&dir->d_inode->i_mutex); @@ -62577,7 +62609,7 @@ index 8274c8d..922e189 100644 mutex_unlock(&dir->d_inode->i_mutex); if (error <= 0) { -@@ -2951,11 +3013,28 @@ retry_lookup: +@@ -2951,11 +3014,28 @@ retry_lookup: goto finish_open_created; } @@ -62607,7 +62639,7 @@ index 8274c8d..922e189 100644 /* * If atomic_open() acquired write access it is dropped now due to -@@ -2996,6 +3075,11 @@ finish_lookup: +@@ -2996,6 +3076,11 @@ finish_lookup: } } BUG_ON(inode != path->dentry->d_inode); @@ -62619,7 +62651,7 @@ index 8274c8d..922e189 100644 return 1; } -@@ -3005,7 +3089,6 @@ finish_lookup: +@@ -3005,7 +3090,6 @@ finish_lookup: save_parent.dentry = nd->path.dentry; save_parent.mnt = mntget(path->mnt); nd->path.dentry = path->dentry; @@ -62627,7 +62659,7 @@ index 8274c8d..922e189 100644 } nd->inode = inode; /* Why this, you ask? _Now_ we might have grown LOOKUP_JUMPED... */ -@@ -3015,7 +3098,18 @@ finish_open: +@@ -3015,7 +3099,18 @@ finish_open: path_put(&save_parent); return error; } @@ -62646,7 +62678,7 @@ index 8274c8d..922e189 100644 error = -EISDIR; if ((open_flag & O_CREAT) && (d_is_directory(nd->path.dentry) || d_is_autodir(nd->path.dentry))) -@@ -3179,7 +3273,7 @@ static struct file *path_openat(int dfd, struct filename *pathname, +@@ -3179,7 +3274,7 @@ static struct file *path_openat(int dfd, struct filename *pathname, if (unlikely(error)) goto out; @@ -62655,7 +62687,7 @@ index 8274c8d..922e189 100644 while (unlikely(error > 0)) { /* trailing symlink */ struct path link = path; void *cookie; -@@ -3197,7 +3291,7 @@ static struct file *path_openat(int dfd, struct filename *pathname, +@@ -3197,7 +3292,7 @@ static struct file *path_openat(int dfd, struct filename *pathname, error = follow_link(&link, nd, &cookie); if (unlikely(error)) break; @@ -62664,7 +62696,7 @@ index 8274c8d..922e189 100644 put_link(nd, &link, cookie); } out: -@@ -3297,9 +3391,11 @@ struct dentry 
*kern_path_create(int dfd, const char *pathname, +@@ -3297,9 +3392,11 @@ struct dentry *kern_path_create(int dfd, const char *pathname, goto unlock; error = -EEXIST; @@ -62678,7 +62710,7 @@ index 8274c8d..922e189 100644 /* * Special case - lookup gave negative, but... we had foo/bar/ * From the vfs_mknod() POV we just have a negative dentry - -@@ -3351,6 +3447,20 @@ struct dentry *user_path_create(int dfd, const char __user *pathname, +@@ -3351,6 +3448,20 @@ struct dentry *user_path_create(int dfd, const char __user *pathname, } EXPORT_SYMBOL(user_path_create); @@ -62699,7 +62731,7 @@ index 8274c8d..922e189 100644 int vfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode, dev_t dev) { int error = may_create(dir, dentry); -@@ -3413,6 +3523,17 @@ retry: +@@ -3413,6 +3524,17 @@ retry: if (!IS_POSIXACL(path.dentry->d_inode)) mode &= ~current_umask(); @@ -62717,7 +62749,7 @@ index 8274c8d..922e189 100644 error = security_path_mknod(&path, dentry, mode, dev); if (error) goto out; -@@ -3429,6 +3550,8 @@ retry: +@@ -3429,6 +3551,8 @@ retry: break; } out: @@ -62726,7 +62758,7 @@ index 8274c8d..922e189 100644 done_path_create(&path, dentry); if (retry_estale(error, lookup_flags)) { lookup_flags |= LOOKUP_REVAL; -@@ -3481,9 +3604,16 @@ retry: +@@ -3481,9 +3605,16 @@ retry: if (!IS_POSIXACL(path.dentry->d_inode)) mode &= ~current_umask(); @@ -62743,7 +62775,7 @@ index 8274c8d..922e189 100644 done_path_create(&path, dentry); if (retry_estale(error, lookup_flags)) { lookup_flags |= LOOKUP_REVAL; -@@ -3564,6 +3694,8 @@ static long do_rmdir(int dfd, const char __user *pathname) +@@ -3564,6 +3695,8 @@ static long do_rmdir(int dfd, const char __user *pathname) struct filename *name; struct dentry *dentry; struct nameidata nd; @@ -62752,7 +62784,7 @@ index 8274c8d..922e189 100644 unsigned int lookup_flags = 0; retry: name = user_path_parent(dfd, pathname, &nd, lookup_flags); -@@ -3596,10 +3728,21 @@ retry: +@@ -3596,10 +3729,21 @@ retry: error = -ENOENT; goto exit3; } @@ 
-62774,7 +62806,7 @@ index 8274c8d..922e189 100644 exit3: dput(dentry); exit2: -@@ -3689,6 +3832,8 @@ static long do_unlinkat(int dfd, const char __user *pathname) +@@ -3689,6 +3833,8 @@ static long do_unlinkat(int dfd, const char __user *pathname) struct nameidata nd; struct inode *inode = NULL; struct inode *delegated_inode = NULL; @@ -62783,7 +62815,7 @@ index 8274c8d..922e189 100644 unsigned int lookup_flags = 0; retry: name = user_path_parent(dfd, pathname, &nd, lookup_flags); -@@ -3715,10 +3860,22 @@ retry_deleg: +@@ -3715,10 +3861,22 @@ retry_deleg: if (d_is_negative(dentry)) goto slashes; ihold(inode); @@ -62806,7 +62838,7 @@ index 8274c8d..922e189 100644 exit2: dput(dentry); } -@@ -3806,9 +3963,17 @@ retry: +@@ -3806,9 +3964,17 @@ retry: if (IS_ERR(dentry)) goto out_putname; @@ -62824,7 +62856,7 @@ index 8274c8d..922e189 100644 done_path_create(&path, dentry); if (retry_estale(error, lookup_flags)) { lookup_flags |= LOOKUP_REVAL; -@@ -3911,6 +4076,7 @@ SYSCALL_DEFINE5(linkat, int, olddfd, const char __user *, oldname, +@@ -3911,6 +4077,7 @@ SYSCALL_DEFINE5(linkat, int, olddfd, const char __user *, oldname, struct dentry *new_dentry; struct path old_path, new_path; struct inode *delegated_inode = NULL; @@ -62832,7 +62864,7 @@ index 8274c8d..922e189 100644 int how = 0; int error; -@@ -3934,7 +4100,7 @@ retry: +@@ -3934,7 +4101,7 @@ retry: if (error) return error; @@ -62841,7 +62873,7 @@ index 8274c8d..922e189 100644 (how & LOOKUP_REVAL)); error = PTR_ERR(new_dentry); if (IS_ERR(new_dentry)) -@@ -3946,11 +4112,28 @@ retry: +@@ -3946,11 +4113,28 @@ retry: error = may_linkat(&old_path); if (unlikely(error)) goto out_dput; @@ -62870,7 +62902,7 @@ index 8274c8d..922e189 100644 done_path_create(&new_path, new_dentry); if (delegated_inode) { error = break_deleg_wait(&delegated_inode); -@@ -4237,6 +4420,12 @@ retry_deleg: +@@ -4237,6 +4421,12 @@ retry_deleg: if (new_dentry == trap) goto exit5; @@ -62883,7 +62915,7 @@ index 8274c8d..922e189 100644 error = 
security_path_rename(&oldnd.path, old_dentry, &newnd.path, new_dentry); if (error) -@@ -4244,6 +4433,9 @@ retry_deleg: +@@ -4244,6 +4434,9 @@ retry_deleg: error = vfs_rename(old_dir->d_inode, old_dentry, new_dir->d_inode, new_dentry, &delegated_inode); @@ -62893,7 +62925,7 @@ index 8274c8d..922e189 100644 exit5: dput(new_dentry); exit4: -@@ -4280,6 +4472,8 @@ SYSCALL_DEFINE2(rename, const char __user *, oldname, const char __user *, newna +@@ -4280,6 +4473,8 @@ SYSCALL_DEFINE2(rename, const char __user *, oldname, const char __user *, newna int vfs_readlink(struct dentry *dentry, char __user *buffer, int buflen, const char *link) { @@ -62902,7 +62934,7 @@ index 8274c8d..922e189 100644 int len; len = PTR_ERR(link); -@@ -4289,7 +4483,14 @@ int vfs_readlink(struct dentry *dentry, char __user *buffer, int buflen, const c +@@ -4289,7 +4484,14 @@ int vfs_readlink(struct dentry *dentry, char __user *buffer, int buflen, const c len = strlen(link); if (len > (unsigned) buflen) len = buflen; @@ -91687,7 +91719,7 @@ index 868633e..921dc41 100644 ftrace_graph_active++; diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c -index fc4da2d..f3e800b 100644 +index 04202d9..e3e4242 100644 --- a/kernel/trace/ring_buffer.c +++ b/kernel/trace/ring_buffer.c @@ -352,9 +352,9 @@ struct buffer_data_page { @@ -91713,7 +91745,7 @@ index fc4da2d..f3e800b 100644 local_t dropped_events; local_t committing; local_t commits; -@@ -992,8 +992,8 @@ static int rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer, +@@ -995,8 +995,8 @@ static int rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer, * * We add a counter to the write field to denote this. 
*/ @@ -91724,7 +91756,7 @@ index fc4da2d..f3e800b 100644 /* * Just make sure we have seen our old_write and synchronize -@@ -1021,8 +1021,8 @@ static int rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer, +@@ -1024,8 +1024,8 @@ static int rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer, * cmpxchg to only update if an interrupt did not already * do it for us. If the cmpxchg fails, we don't care. */ @@ -91735,7 +91767,7 @@ index fc4da2d..f3e800b 100644 /* * No need to worry about races with clearing out the commit. -@@ -1386,12 +1386,12 @@ static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer); +@@ -1389,12 +1389,12 @@ static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer); static inline unsigned long rb_page_entries(struct buffer_page *bpage) { @@ -91750,7 +91782,7 @@ index fc4da2d..f3e800b 100644 } static int -@@ -1486,7 +1486,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned int nr_pages) +@@ -1489,7 +1489,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned int nr_pages) * bytes consumed in ring buffer from here. * Increment overrun to account for the lost events. */ @@ -91759,7 +91791,7 @@ index fc4da2d..f3e800b 100644 local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes); } -@@ -2064,7 +2064,7 @@ rb_handle_head_page(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2067,7 +2067,7 @@ rb_handle_head_page(struct ring_buffer_per_cpu *cpu_buffer, * it is our responsibility to update * the counters. 
*/ @@ -91768,7 +91800,7 @@ index fc4da2d..f3e800b 100644 local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes); /* -@@ -2214,7 +2214,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2217,7 +2217,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, if (tail == BUF_PAGE_SIZE) tail_page->real_end = 0; @@ -91777,7 +91809,7 @@ index fc4da2d..f3e800b 100644 return; } -@@ -2249,7 +2249,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2252,7 +2252,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, rb_event_set_padding(event); /* Set the write back to the previous setting */ @@ -91786,7 +91818,7 @@ index fc4da2d..f3e800b 100644 return; } -@@ -2261,7 +2261,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2264,7 +2264,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, /* Set write to end of buffer */ length = (tail + length) - BUF_PAGE_SIZE; @@ -91795,7 +91827,7 @@ index fc4da2d..f3e800b 100644 } /* -@@ -2287,7 +2287,7 @@ rb_move_tail(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2290,7 +2290,7 @@ rb_move_tail(struct ring_buffer_per_cpu *cpu_buffer, * about it. 
*/ if (unlikely(next_page == commit_page)) { @@ -91804,7 +91836,7 @@ index fc4da2d..f3e800b 100644 goto out_reset; } -@@ -2343,7 +2343,7 @@ rb_move_tail(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2346,7 +2346,7 @@ rb_move_tail(struct ring_buffer_per_cpu *cpu_buffer, cpu_buffer->tail_page) && (cpu_buffer->commit_page == cpu_buffer->reader_page))) { @@ -91813,7 +91845,7 @@ index fc4da2d..f3e800b 100644 goto out_reset; } } -@@ -2391,7 +2391,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2394,7 +2394,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer, length += RB_LEN_TIME_EXTEND; tail_page = cpu_buffer->tail_page; @@ -91822,7 +91854,7 @@ index fc4da2d..f3e800b 100644 /* set write to only the index of the write */ write &= RB_WRITE_MASK; -@@ -2415,7 +2415,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2418,7 +2418,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer, kmemcheck_annotate_bitfield(event, bitfield); rb_update_event(cpu_buffer, event, length, add_timestamp, delta); @@ -91831,7 +91863,7 @@ index fc4da2d..f3e800b 100644 /* * If this is the first commit on the page, then update -@@ -2448,7 +2448,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2451,7 +2451,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer, if (bpage->page == (void *)addr && rb_page_write(bpage) == old_index) { unsigned long write_mask = @@ -91840,7 +91872,7 @@ index fc4da2d..f3e800b 100644 unsigned long event_length = rb_event_length(event); /* * This is on the tail page. 
It is possible that -@@ -2458,7 +2458,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2461,7 +2461,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer, */ old_index += write_mask; new_index += write_mask; @@ -91849,7 +91881,7 @@ index fc4da2d..f3e800b 100644 if (index == old_index) { /* update counters */ local_sub(event_length, &cpu_buffer->entries_bytes); -@@ -2850,7 +2850,7 @@ rb_decrement_entry(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2853,7 +2853,7 @@ rb_decrement_entry(struct ring_buffer_per_cpu *cpu_buffer, /* Do the likely case first */ if (likely(bpage->page == (void *)addr)) { @@ -91858,7 +91890,7 @@ index fc4da2d..f3e800b 100644 return; } -@@ -2862,7 +2862,7 @@ rb_decrement_entry(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2865,7 +2865,7 @@ rb_decrement_entry(struct ring_buffer_per_cpu *cpu_buffer, start = bpage; do { if (bpage->page == (void *)addr) { @@ -91867,7 +91899,7 @@ index fc4da2d..f3e800b 100644 return; } rb_inc_page(cpu_buffer, &bpage); -@@ -3146,7 +3146,7 @@ static inline unsigned long +@@ -3149,7 +3149,7 @@ static inline unsigned long rb_num_of_entries(struct ring_buffer_per_cpu *cpu_buffer) { return local_read(&cpu_buffer->entries) - @@ -91876,7 +91908,7 @@ index fc4da2d..f3e800b 100644 } /** -@@ -3235,7 +3235,7 @@ unsigned long ring_buffer_overrun_cpu(struct ring_buffer *buffer, int cpu) +@@ -3238,7 +3238,7 @@ unsigned long ring_buffer_overrun_cpu(struct ring_buffer *buffer, int cpu) return 0; cpu_buffer = buffer->buffers[cpu]; @@ -91885,7 +91917,7 @@ index fc4da2d..f3e800b 100644 return ret; } -@@ -3258,7 +3258,7 @@ ring_buffer_commit_overrun_cpu(struct ring_buffer *buffer, int cpu) +@@ -3261,7 +3261,7 @@ ring_buffer_commit_overrun_cpu(struct ring_buffer *buffer, int cpu) return 0; cpu_buffer = buffer->buffers[cpu]; @@ -91894,7 +91926,7 @@ index fc4da2d..f3e800b 100644 return ret; } -@@ -3343,7 +3343,7 @@ unsigned long ring_buffer_overruns(struct ring_buffer *buffer) +@@ -3346,7 +3346,7 @@ unsigned long 
ring_buffer_overruns(struct ring_buffer *buffer) /* if you care about this being correct, lock the buffer */ for_each_buffer_cpu(buffer, cpu) { cpu_buffer = buffer->buffers[cpu]; @@ -91903,7 +91935,7 @@ index fc4da2d..f3e800b 100644 } return overruns; -@@ -3519,8 +3519,8 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) +@@ -3522,8 +3522,8 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) /* * Reset the reader page to size zero. */ @@ -91914,7 +91946,7 @@ index fc4da2d..f3e800b 100644 local_set(&cpu_buffer->reader_page->page->commit, 0); cpu_buffer->reader_page->real_end = 0; -@@ -3554,7 +3554,7 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) +@@ -3557,7 +3557,7 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) * want to compare with the last_overrun. */ smp_mb(); @@ -91923,7 +91955,7 @@ index fc4da2d..f3e800b 100644 /* * Here's the tricky part. -@@ -4124,8 +4124,8 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) +@@ -4127,8 +4127,8 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) cpu_buffer->head_page = list_entry(cpu_buffer->pages, struct buffer_page, list); @@ -91934,7 +91966,7 @@ index fc4da2d..f3e800b 100644 local_set(&cpu_buffer->head_page->page->commit, 0); cpu_buffer->head_page->read = 0; -@@ -4135,14 +4135,14 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) +@@ -4138,14 +4138,14 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) INIT_LIST_HEAD(&cpu_buffer->reader_page->list); INIT_LIST_HEAD(&cpu_buffer->new_pages); @@ -91953,7 +91985,7 @@ index fc4da2d..f3e800b 100644 local_set(&cpu_buffer->dropped_events, 0); local_set(&cpu_buffer->entries, 0); local_set(&cpu_buffer->committing, 0); -@@ -4547,8 +4547,8 @@ int ring_buffer_read_page(struct ring_buffer *buffer, +@@ -4550,8 +4550,8 @@ int ring_buffer_read_page(struct ring_buffer *buffer, rb_init_page(bpage); bpage = reader->page; reader->page = *data_page; @@ -91965,7 +91997,7 @@ index fc4da2d..f3e800b 100644 *data_page = bpage; diff 
--git a/kernel/trace/trace.c b/kernel/trace/trace.c -index fd21e60..eb47c25 100644 +index 922657f..3d229d9 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -3398,7 +3398,7 @@ int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set) @@ -91978,7 +92010,7 @@ index fd21e60..eb47c25 100644 /* do nothing if flag is already set */ if (!!(trace_flags & mask) == !!enabled) diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h -index 02b592f..f971546 100644 +index c8bd809..33d7539 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -1233,7 +1233,7 @@ extern const char *__stop___tracepoint_str[]; @@ -92171,10 +92203,10 @@ index c9b6f01..37781d9 100644 .thread_should_run = watchdog_should_run, .thread_fn = watchdog, diff --git a/kernel/workqueue.c b/kernel/workqueue.c -index b6a3941..b68f191 100644 +index b4defde..f092808 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c -@@ -4702,7 +4702,7 @@ static void rebind_workers(struct worker_pool *pool) +@@ -4703,7 +4703,7 @@ static void rebind_workers(struct worker_pool *pool) WARN_ON_ONCE(!(worker_flags & WORKER_UNBOUND)); worker_flags |= WORKER_REBOUND; worker_flags &= ~WORKER_UNBOUND; @@ -92950,7 +92982,7 @@ index 0000000..7cd6065 @@ -0,0 +1 @@ +-grsec diff --git a/mm/Kconfig b/mm/Kconfig -index 9b63c15..2ab509e 100644 +index 0862816..2e3a043 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -329,10 +329,11 @@ config KSM @@ -94220,7 +94252,7 @@ index 2121d8b8..fa1095a 100644 mm = get_task_mm(tsk); if (!mm) diff --git a/mm/mempolicy.c b/mm/mempolicy.c -index 9c6288a..b0ea97e 100644 +index 15a8ea0..cb50389 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -747,6 +747,10 @@ static int mbind_range(struct mm_struct *mm, unsigned long start, @@ -96343,7 +96375,7 @@ index cdbd312..2e1e0b9 100644 /* diff --git a/mm/shmem.c b/mm/shmem.c -index 1f18c9d..b550bab 100644 +index 1f18c9d..6aa94ab 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -33,7 +33,7 @@ @@ -96371,19 +96403,73 @@ index 1f18c9d..b550bab 
100644 + * a time): we would prefer not to enlarge the shmem inode just for that. */ struct shmem_falloc { -+ int mode; /* FALLOC_FL mode currently operating */ ++ wait_queue_head_t *waitq; /* faults into hole wait for punch to end */ pgoff_t start; /* start of range currently being fallocated */ pgoff_t next; /* the next page offset to be fallocated */ pgoff_t nr_falloced; /* how many new pages have been fallocated */ -@@ -824,6 +825,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc) +@@ -533,22 +534,19 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + return; + + index = start; +- for ( ; ; ) { ++ while (index < end) { + cond_resched(); + pvec.nr = shmem_find_get_pages_and_swap(mapping, index, + min(end - index, (pgoff_t)PAGEVEC_SIZE), + pvec.pages, indices); + if (!pvec.nr) { +- if (index == start || unfalloc) ++ /* If all gone or hole-punch or unfalloc, we're done */ ++ if (index == start || end != -1) + break; ++ /* But if truncating, restart to make sure all gone */ + index = start; + continue; + } +- if ((index == start || unfalloc) && indices[0] >= end) { +- shmem_deswap_pagevec(&pvec); +- pagevec_release(&pvec); +- break; +- } + mem_cgroup_uncharge_start(); + for (i = 0; i < pagevec_count(&pvec); i++) { + struct page *page = pvec.pages[i]; +@@ -560,8 +558,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + if (radix_tree_exceptional_entry(page)) { + if (unfalloc) + continue; +- nr_swaps_freed += !shmem_free_swap(mapping, +- index, page); ++ if (shmem_free_swap(mapping, index, page)) { ++ /* Swap was replaced by page: retry */ ++ index--; ++ break; ++ } ++ nr_swaps_freed++; + continue; + } + +@@ -570,6 +572,11 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + if (page->mapping == mapping) { + VM_BUG_ON_PAGE(PageWriteback(page), page); + truncate_inode_page(mapping, page); ++ } else { ++ /* Page was replaced by swap: retry */ ++ 
unlock_page(page); ++ index--; ++ break; + } + } + unlock_page(page); +@@ -824,6 +831,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc) spin_lock(&inode->i_lock); shmem_falloc = inode->i_private; if (shmem_falloc && -+ !shmem_falloc->mode && ++ !shmem_falloc->waitq && index >= shmem_falloc->start && index < shmem_falloc->next) shmem_falloc->nr_unswapped++; -@@ -1298,6 +1300,43 @@ static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) +@@ -1298,6 +1306,64 @@ static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) int error; int ret = VM_FAULT_LOCKED; @@ -96391,71 +96477,98 @@ index 1f18c9d..b550bab 100644 + * Trinity finds that probing a hole which tmpfs is punching can + * prevent the hole-punch from ever completing: which in turn + * locks writers out with its hold on i_mutex. So refrain from -+ * faulting pages into the hole while it's being punched, and -+ * wait on i_mutex to be released if vmf->flags permits, ++ * faulting pages into the hole while it's being punched. Although ++ * shmem_undo_range() does remove the additions, it may be unable to ++ * keep up, as each new page needs its own unmap_mapping_range() call, ++ * and the i_mmap tree grows ever slower to scan if new vmas are added. ++ * ++ * It does not matter if we sometimes reach this check just before the ++ * hole-punch begins, so that one fault then races with the punch: ++ * we just need to make racing faults a rare case. ++ * ++ * The implementation below would be much simpler if we just used a ++ * standard mutex or completion: but we cannot take i_mutex in fault, ++ * and bloating every shmem inode for this unlikely case would be sad. 
+ */ + if (unlikely(inode->i_private)) { + struct shmem_falloc *shmem_falloc; ++ + spin_lock(&inode->i_lock); + shmem_falloc = inode->i_private; -+ if (!shmem_falloc || -+ shmem_falloc->mode != FALLOC_FL_PUNCH_HOLE || -+ vmf->pgoff < shmem_falloc->start || -+ vmf->pgoff >= shmem_falloc->next) -+ shmem_falloc = NULL; -+ spin_unlock(&inode->i_lock); -+ /* -+ * i_lock has protected us from taking shmem_falloc seriously -+ * once return from shmem_fallocate() went back up that stack. -+ * i_lock does not serialize with i_mutex at all, but it does -+ * not matter if sometimes we wait unnecessarily, or sometimes -+ * miss out on waiting: we just need to make those cases rare. -+ */ -+ if (shmem_falloc) { ++ if (shmem_falloc && ++ shmem_falloc->waitq && ++ vmf->pgoff >= shmem_falloc->start && ++ vmf->pgoff < shmem_falloc->next) { ++ wait_queue_head_t *shmem_falloc_waitq; ++ DEFINE_WAIT(shmem_fault_wait); ++ ++ ret = VM_FAULT_NOPAGE; + if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) && + !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) { ++ /* It's polite to up mmap_sem if we can */ + up_read(&vma->vm_mm->mmap_sem); -+ mutex_lock(&inode->i_mutex); -+ mutex_unlock(&inode->i_mutex); -+ return VM_FAULT_RETRY; ++ ret = VM_FAULT_RETRY; + } -+ /* cond_resched? Leave that to GUP or return to user */ -+ return VM_FAULT_NOPAGE; ++ ++ shmem_falloc_waitq = shmem_falloc->waitq; ++ prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait, ++ TASK_UNINTERRUPTIBLE); ++ spin_unlock(&inode->i_lock); ++ schedule(); ++ ++ /* ++ * shmem_falloc_waitq points into the shmem_fallocate() ++ * stack of the hole-punching task: shmem_falloc_waitq ++ * is usually invalid by the time we reach here, but ++ * finish_wait() does not dereference it in that case; ++ * though i_lock needed lest racing with wake_up_all(). 
++ */ ++ spin_lock(&inode->i_lock); ++ finish_wait(shmem_falloc_waitq, &shmem_fault_wait); ++ spin_unlock(&inode->i_lock); ++ return ret; + } ++ spin_unlock(&inode->i_lock); + } + error = shmem_getpage(inode, vmf->pgoff, &vmf->page, SGP_CACHE, &ret); if (error) return ((error == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS); -@@ -1813,18 +1852,26 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, - - mutex_lock(&inode->i_mutex); - -+ shmem_falloc.mode = mode & ~FALLOC_FL_KEEP_SIZE; -+ - if (mode & FALLOC_FL_PUNCH_HOLE) { +@@ -1817,12 +1883,25 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, struct address_space *mapping = file->f_mapping; loff_t unmap_start = round_up(offset, PAGE_SIZE); loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1; - ++ DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq); ++ ++ shmem_falloc.waitq = &shmem_falloc_waitq; + shmem_falloc.start = unmap_start >> PAGE_SHIFT; + shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT; + spin_lock(&inode->i_lock); + inode->i_private = &shmem_falloc; + spin_unlock(&inode->i_lock); -+ + if ((u64)unmap_end > (u64)unmap_start) unmap_mapping_range(mapping, unmap_start, 1 + unmap_end - unmap_start, 0); shmem_truncate_range(inode, offset, offset + len - 1); /* No need to unmap again: hole-punching leaves COWed pages */ ++ ++ spin_lock(&inode->i_lock); ++ inode->i_private = NULL; ++ wake_up_all(&shmem_falloc_waitq); ++ spin_unlock(&inode->i_lock); error = 0; -- goto out; -+ goto undone; + goto out; + } +@@ -1840,6 +1919,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, + goto out; } - /* We need to check rlimit even when FALLOC_FL_KEEP_SIZE */ -@@ -2218,6 +2265,11 @@ static const struct xattr_handler *shmem_xattr_handlers[] = { ++ shmem_falloc.waitq = NULL; + shmem_falloc.start = start; + shmem_falloc.next = start; + shmem_falloc.nr_falloced = 0; +@@ -2218,6 +2298,11 @@ static const struct xattr_handler *shmem_xattr_handlers[] = { 
static int shmem_xattr_validate(const char *name) { struct { const char *prefix; size_t len; } arr[] = { @@ -96467,7 +96580,7 @@ index 1f18c9d..b550bab 100644 { XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN }, { XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN } }; -@@ -2273,6 +2325,15 @@ static int shmem_setxattr(struct dentry *dentry, const char *name, +@@ -2273,6 +2358,15 @@ static int shmem_setxattr(struct dentry *dentry, const char *name, if (err) return err; @@ -96483,7 +96596,7 @@ index 1f18c9d..b550bab 100644 return simple_xattr_set(&info->xattrs, name, value, size, flags); } -@@ -2585,8 +2646,7 @@ int shmem_fill_super(struct super_block *sb, void *data, int silent) +@@ -2585,8 +2679,7 @@ int shmem_fill_super(struct super_block *sb, void *data, int silent) int err = -ENOMEM; /* Round up to L1_CACHE_BYTES to resist false sharing */ @@ -99666,6 +99779,21 @@ index 5325b54..a0d4d69 100644 return -EFAULT; *lenp = len; +diff --git a/net/dns_resolver/dns_query.c b/net/dns_resolver/dns_query.c +index e7b6d53..f005cc7 100644 +--- a/net/dns_resolver/dns_query.c ++++ b/net/dns_resolver/dns_query.c +@@ -149,7 +149,9 @@ int dns_query(const char *type, const char *name, size_t namelen, + if (!*_result) + goto put; + +- memcpy(*_result, upayload->data, len + 1); ++ memcpy(*_result, upayload->data, len); ++ (*_result)[len] = '\0'; ++ + if (_expiry) + *_expiry = rkey->expiry; + diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index 19ab78a..bf575c9 100644 --- a/net/ipv4/af_inet.c @@ -103158,6 +103286,18 @@ index f226709..0e735a8 100644 _proto("Tx RESPONSE %%%u", ntohl(hdr->serial)); ret = kernel_sendmsg(conn->trans->local->socket, &msg, iov, 3, len); +diff --git a/net/sctp/associola.c b/net/sctp/associola.c +index a4d5701..5d97d8f 100644 +--- a/net/sctp/associola.c ++++ b/net/sctp/associola.c +@@ -1151,6 +1151,7 @@ void sctp_assoc_update(struct sctp_association *asoc, + asoc->c = new->c; + asoc->peer.rwnd = new->peer.rwnd; + asoc->peer.sack_needed = 
new->peer.sack_needed; ++ asoc->peer.auth_capable = new->peer.auth_capable; + asoc->peer.i = new->peer.i; + sctp_tsnmap_init(&asoc->peer.tsn_map, SCTP_TSN_MAP_INITIAL, + asoc->peer.i.initial_tsn, GFP_ATOMIC); diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c index 2b1738e..a9d0fc9 100644 --- a/net/sctp/ipv6.c @@ -103388,6 +103528,26 @@ index c82fdc1..4ca1f95 100644 return 0; } +diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c +index 85c6465..879f3cd 100644 +--- a/net/sctp/ulpevent.c ++++ b/net/sctp/ulpevent.c +@@ -411,6 +411,7 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error( + * sre_type: + * It should be SCTP_REMOTE_ERROR. + */ ++ memset(sre, 0, sizeof(*sre)); + sre->sre_type = SCTP_REMOTE_ERROR; + + /* +@@ -916,6 +917,7 @@ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, + * For recvmsg() the SCTP stack places the message's stream number in + * this value. + */ ++ memset(&sinfo, 0, sizeof(sinfo)); + sinfo.sinfo_stream = event->stream; + /* sinfo_ssn: 16 bits (unsigned integer) + * diff --git a/net/socket.c b/net/socket.c index a19ae19..89554dc 100644 --- a/net/socket.c diff --git a/3.14.12/4425_grsec_remove_EI_PAX.patch b/3.14.13/4425_grsec_remove_EI_PAX.patch similarity index 100% rename from 3.14.12/4425_grsec_remove_EI_PAX.patch rename to 3.14.13/4425_grsec_remove_EI_PAX.patch diff --git a/3.14.12/4427_force_XATTR_PAX_tmpfs.patch b/3.14.13/4427_force_XATTR_PAX_tmpfs.patch similarity index 100% rename from 3.14.12/4427_force_XATTR_PAX_tmpfs.patch rename to 3.14.13/4427_force_XATTR_PAX_tmpfs.patch diff --git a/3.14.12/4430_grsec-remove-localversion-grsec.patch b/3.14.13/4430_grsec-remove-localversion-grsec.patch similarity index 100% rename from 3.14.12/4430_grsec-remove-localversion-grsec.patch rename to 3.14.13/4430_grsec-remove-localversion-grsec.patch diff --git a/3.14.12/4435_grsec-mute-warnings.patch b/3.14.13/4435_grsec-mute-warnings.patch similarity index 100% rename from 3.14.12/4435_grsec-mute-warnings.patch rename to 
3.14.13/4435_grsec-mute-warnings.patch diff --git a/3.14.12/4440_grsec-remove-protected-paths.patch b/3.14.13/4440_grsec-remove-protected-paths.patch similarity index 100% rename from 3.14.12/4440_grsec-remove-protected-paths.patch rename to 3.14.13/4440_grsec-remove-protected-paths.patch diff --git a/3.14.12/4450_grsec-kconfig-default-gids.patch b/3.14.13/4450_grsec-kconfig-default-gids.patch similarity index 100% rename from 3.14.12/4450_grsec-kconfig-default-gids.patch rename to 3.14.13/4450_grsec-kconfig-default-gids.patch diff --git a/3.14.12/4465_selinux-avc_audit-log-curr_ip.patch b/3.14.13/4465_selinux-avc_audit-log-curr_ip.patch similarity index 100% rename from 3.14.12/4465_selinux-avc_audit-log-curr_ip.patch rename to 3.14.13/4465_selinux-avc_audit-log-curr_ip.patch diff --git a/3.14.12/4470_disable-compat_vdso.patch b/3.14.13/4470_disable-compat_vdso.patch similarity index 100% rename from 3.14.12/4470_disable-compat_vdso.patch rename to 3.14.13/4470_disable-compat_vdso.patch diff --git a/3.14.12/4475_emutramp_default_on.patch b/3.14.13/4475_emutramp_default_on.patch similarity index 100% rename from 3.14.12/4475_emutramp_default_on.patch rename to 3.14.13/4475_emutramp_default_on.patch diff --git a/3.15.5/0000_README b/3.15.6/0000_README similarity index 96% rename from 3.15.5/0000_README rename to 3.15.6/0000_README index 6000532..3a519cd 100644 --- a/3.15.5/0000_README +++ b/3.15.6/0000_README @@ -2,7 +2,7 @@ README ----------------------------------------------------------------------------- Individual Patch Descriptions: ----------------------------------------------------------------------------- -Patch: 4420_grsecurity-3.0-3.15.5-201407170639.patch +Patch: 4420_grsecurity-3.0-3.15.6-201407232200.patch From: http://www.grsecurity.net Desc: hardened-sources base patch from upstream grsecurity diff --git a/3.15.5/4420_grsecurity-3.0-3.15.5-201407170639.patch b/3.15.6/4420_grsecurity-3.0-3.15.6-201407232200.patch similarity index 99% rename from 
3.15.5/4420_grsecurity-3.0-3.15.5-201407170639.patch rename to 3.15.6/4420_grsecurity-3.0-3.15.6-201407232200.patch index 7a5e81c..f992e88 100644 --- a/3.15.5/4420_grsecurity-3.0-3.15.5-201407170639.patch +++ b/3.15.6/4420_grsecurity-3.0-3.15.6-201407232200.patch @@ -287,13 +287,14 @@ index 30a8ad0d..2ed9efd 100644 pcd. [PARIDE] diff --git a/Makefile b/Makefile -index e6b01ed..74dbc85 100644 +index fefa023..06f4bb4 100644 --- a/Makefile +++ b/Makefile -@@ -246,7 +246,9 @@ CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \ +@@ -245,8 +245,9 @@ CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \ + HOSTCC = gcc HOSTCXX = g++ - HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer +-HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -HOSTCXXFLAGS = -O2 +HOSTCFLAGS = -Wall -W -Wmissing-prototypes -Wstrict-prototypes -Wno-unused-parameter -Wno-missing-field-initializers -O2 -fomit-frame-pointer -fno-delete-null-pointer-checks +HOSTCFLAGS += $(call cc-option, -Wno-empty-body) @@ -301,7 +302,7 @@ index e6b01ed..74dbc85 100644 ifeq ($(shell $(HOSTCC) -v 2>&1 | grep -c "clang version"), 1) HOSTCFLAGS += -Wno-unused-value -Wno-unused-parameter \ -@@ -438,8 +440,8 @@ export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \ +@@ -438,8 +439,8 @@ export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \ # Rules shared between *config targets and build targets # Basic helpers built in scripts/ @@ -312,7 +313,7 @@ index e6b01ed..74dbc85 100644 $(Q)$(MAKE) $(build)=scripts/basic $(Q)rm -f .tmp_quiet_recordmcount -@@ -600,6 +602,72 @@ else +@@ -600,6 +601,72 @@ else KBUILD_CFLAGS += -O2 endif @@ -385,7 +386,7 @@ index e6b01ed..74dbc85 100644 include $(srctree)/arch/$(SRCARCH)/Makefile ifdef CONFIG_READABLE_ASM -@@ -816,7 +884,7 @@ export mod_sign_cmd +@@ -816,7 +883,7 @@ export mod_sign_cmd ifeq ($(KBUILD_EXTMOD),) @@ -394,7 +395,7 @@ index e6b01ed..74dbc85 
100644 vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \ $(core-y) $(core-m) $(drivers-y) $(drivers-m) \ -@@ -865,6 +933,8 @@ endif +@@ -865,6 +932,8 @@ endif # The actual objects are generated when descending, # make sure no implicit rule kicks in @@ -403,7 +404,7 @@ index e6b01ed..74dbc85 100644 $(sort $(vmlinux-deps)): $(vmlinux-dirs) ; # Handle descending into subdirectories listed in $(vmlinux-dirs) -@@ -874,7 +944,7 @@ $(sort $(vmlinux-deps)): $(vmlinux-dirs) ; +@@ -874,7 +943,7 @@ $(sort $(vmlinux-deps)): $(vmlinux-dirs) ; # Error messages still appears in the original language PHONY += $(vmlinux-dirs) @@ -412,7 +413,7 @@ index e6b01ed..74dbc85 100644 $(Q)$(MAKE) $(build)=$@ define filechk_kernel.release -@@ -917,10 +987,13 @@ prepare1: prepare2 $(version_h) include/generated/utsrelease.h \ +@@ -917,10 +986,13 @@ prepare1: prepare2 $(version_h) include/generated/utsrelease.h \ archprepare: archheaders archscripts prepare1 scripts_basic @@ -426,7 +427,7 @@ index e6b01ed..74dbc85 100644 prepare: prepare0 # Generate some files -@@ -1028,6 +1101,8 @@ all: modules +@@ -1028,6 +1100,8 @@ all: modules # using awk while concatenating to the final file. 
PHONY += modules @@ -435,7 +436,7 @@ index e6b01ed..74dbc85 100644 modules: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),vmlinux) modules.builtin $(Q)$(AWK) '!x[$$0]++' $(vmlinux-dirs:%=$(objtree)/%/modules.order) > $(objtree)/modules.order @$(kecho) ' Building modules, stage 2.'; -@@ -1043,7 +1118,7 @@ modules.builtin: $(vmlinux-dirs:%=%/modules.builtin) +@@ -1043,7 +1117,7 @@ modules.builtin: $(vmlinux-dirs:%=%/modules.builtin) # Target to prepare building external modules PHONY += modules_prepare @@ -444,7 +445,7 @@ index e6b01ed..74dbc85 100644 # Target to install modules PHONY += modules_install -@@ -1109,7 +1184,10 @@ MRPROPER_FILES += .config .config.old .version .old_version $(version_h) \ +@@ -1109,7 +1183,10 @@ MRPROPER_FILES += .config .config.old .version .old_version $(version_h) \ Module.symvers tags TAGS cscope* GPATH GTAGS GRTAGS GSYMS \ signing_key.priv signing_key.x509 x509.genkey \ extra_certificates signing_key.x509.keyid \ @@ -456,7 +457,7 @@ index e6b01ed..74dbc85 100644 # clean - Delete most, but leave enough to build external modules # -@@ -1148,7 +1226,7 @@ distclean: mrproper +@@ -1148,7 +1225,7 @@ distclean: mrproper @find $(srctree) $(RCS_FIND_IGNORE) \ \( -name '*.orig' -o -name '*.rej' -o -name '*~' \ -o -name '*.bak' -o -name '#*#' -o -name '.*.orig' \ @@ -465,7 +466,7 @@ index e6b01ed..74dbc85 100644 -type f -print | xargs rm -f -@@ -1309,6 +1387,8 @@ PHONY += $(module-dirs) modules +@@ -1309,6 +1386,8 @@ PHONY += $(module-dirs) modules $(module-dirs): crmodverdir $(objtree)/Module.symvers $(Q)$(MAKE) $(build)=$(patsubst _module_%,%,$@) @@ -474,7 +475,7 @@ index e6b01ed..74dbc85 100644 modules: $(module-dirs) @$(kecho) ' Building modules, stage 2.'; $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost -@@ -1448,17 +1528,21 @@ else +@@ -1448,17 +1527,21 @@ else target-dir = $(if $(KBUILD_EXTMOD),$(dir $<),$(dir $@)) endif @@ -500,7 +501,7 @@ index e6b01ed..74dbc85 100644 $(Q)$(MAKE) $(build)=$(build-dir) $(target-dir)$(notdir $@) %.symtypes: 
%.c prepare scripts FORCE $(Q)$(MAKE) $(build)=$(build-dir) $(target-dir)$(notdir $@) -@@ -1468,11 +1552,15 @@ endif +@@ -1468,11 +1551,15 @@ endif $(cmd_crmodverdir) $(Q)$(MAKE) KBUILD_MODULES=$(if $(CONFIG_MODULES),1) \ $(build)=$(build-dir) @@ -2429,7 +2430,7 @@ index f7b450f..f5364c5 100644 EXPORT_SYMBOL(__get_user_1); EXPORT_SYMBOL(__get_user_2); diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S -index 1879e8d..b2207fc 100644 +index 1879e8d..5602dd4 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -47,6 +47,87 @@ @@ -2448,7 +2449,7 @@ index 1879e8d..b2207fc 100644 + bic r2, r2, #(0x1fc0) + bic r2, r2, #(0x3f) + ldr r1, [r2, #TI_CPU_DOMAIN] -+ @ store old DACR on stack ++ @ store old DACR on stack + str r1, [sp, #8] +#ifdef CONFIG_PAX_KERNEXEC + @ set type of DOMAIN_KERNEL to DOMAIN_KERNELCLIENT @@ -7990,7 +7991,7 @@ index 3ca9c11..d163ef7 100644 /* * If for any reason at all we couldn't handle the fault, make diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig -index e099899..457d6a8 100644 +index c95c4b8..d831f81 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -397,6 +397,7 @@ config PPC64_SUPPORTS_MEMORY_FAILURE @@ -14413,7 +14414,7 @@ index 2206757..85cbcfa 100644 err |= copy_siginfo_to_user32(&frame->info, &ksig->info); diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S -index 4299eb0..c0687a7 100644 +index 4299eb0..fefe70e 100644 --- a/arch/x86/ia32/ia32entry.S +++ b/arch/x86/ia32/ia32entry.S @@ -15,8 +15,10 @@ @@ -14564,7 +14565,7 @@ index 4299eb0..c0687a7 100644 /* clear IF, that popfq doesn't enable interrupts early */ - andl $~0x200,EFLAGS-R11(%rsp) - movl RIP-R11(%rsp),%edx /* User %eip */ -+ andl $~X86_EFLAGS_IF,EFLAGS(%rsp) ++ andl $~X86_EFLAGS_IF,EFLAGS(%rsp) + movl RIP(%rsp),%edx /* User %eip */ CFI_REGISTER rip,rdx RESTORE_ARGS 0,24,0,0,0,0 @@ -18365,7 +18366,7 @@ index a4ea023..33aa874 100644 void df_debug(struct pt_regs *regs, long error_code); #endif /* 
_ASM_X86_PROCESSOR_H */ diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h -index 6205f0c..b31a4a4 100644 +index 6205f0c..688a3a9 100644 --- a/arch/x86/include/asm/ptrace.h +++ b/arch/x86/include/asm/ptrace.h @@ -84,28 +84,29 @@ static inline unsigned long regs_return_value(struct pt_regs *regs) @@ -18432,7 +18433,7 @@ index 6205f0c..b31a4a4 100644 - return kernel_stack_pointer(regs); + if (offset == offsetof(struct pt_regs, sp)) { + unsigned long cs = regs->cs & 0xffff; -+ if (cs == __KERNEL_CS || cs == __KERNEXEC_KERNEL_CS) ++ if (cs == __KERNEL_CS || cs == __KERNEXEC_KERNEL_CS) + return kernel_stack_pointer(regs); + } #endif @@ -32880,19 +32881,21 @@ index 7b179b4..6bd17777 100644 return (void *)vaddr; diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c -index 597ac15..49841be 100644 +index bc7527e..5e2c495 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c -@@ -97,7 +97,7 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr, - for (pfn = phys_addr >> PAGE_SHIFT; pfn <= last_pfn; pfn++) { - int is_ram = page_is_ram(pfn); +@@ -56,8 +56,8 @@ static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages, + unsigned long i; + + for (i = 0; i < nr_pages; ++i) +- if (pfn_valid(start_pfn + i) && +- !PageReserved(pfn_to_page(start_pfn + i))) ++ if (pfn_valid(start_pfn + i) && (start_pfn + i >= 0x100 || ++ !PageReserved(pfn_to_page(start_pfn + i)))) + return 1; -- if (is_ram && pfn_valid(pfn) && !PageReserved(pfn_to_page(pfn))) -+ if (is_ram && pfn_valid(pfn) && (pfn >= 0x100 || !PageReserved(pfn_to_page(pfn)))) - return NULL; - WARN_ON_ONCE(is_ram); - } -@@ -256,7 +256,7 @@ EXPORT_SYMBOL(ioremap_prot); + WARN_ONCE(1, "ioremap on RAM pfn 0x%lx\n", start_pfn); +@@ -268,7 +268,7 @@ EXPORT_SYMBOL(ioremap_prot); * * Caller must ensure there is only one unmapping for the same pointer. 
*/ @@ -32901,7 +32904,7 @@ index 597ac15..49841be 100644 { struct vm_struct *p, *o; -@@ -310,6 +310,9 @@ void *xlate_dev_mem_ptr(unsigned long phys) +@@ -322,6 +322,9 @@ void *xlate_dev_mem_ptr(unsigned long phys) /* If page is RAM, we can use __va. Otherwise ioremap and unmap. */ if (page_is_ram(start >> PAGE_SHIFT)) @@ -32911,7 +32914,7 @@ index 597ac15..49841be 100644 return __va(phys); addr = (void __force *)ioremap_cache(start, PAGE_SIZE); -@@ -322,13 +325,16 @@ void *xlate_dev_mem_ptr(unsigned long phys) +@@ -334,13 +337,16 @@ void *xlate_dev_mem_ptr(unsigned long phys) void unxlate_dev_mem_ptr(unsigned long phys, void *addr) { if (page_is_ram(phys >> PAGE_SHIFT)) @@ -32929,7 +32932,7 @@ index 597ac15..49841be 100644 static inline pmd_t * __init early_ioremap_pmd(unsigned long addr) { -@@ -358,8 +364,7 @@ void __init early_ioremap_init(void) +@@ -370,8 +376,7 @@ void __init early_ioremap_init(void) early_ioremap_setup(); pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)); @@ -38688,7 +38691,7 @@ index 8320abd..ec48108 100644 if (cmd != SIOCWANDEV) diff --git a/drivers/char/random.c b/drivers/char/random.c -index 2b6e4cd..43d7ae1 100644 +index 2b6e4cd..32033f3 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -270,10 +270,17 @@ @@ -38772,7 +38775,44 @@ index 2b6e4cd..43d7ae1 100644 unsigned int add = ((pool_size - entropy_count)*anfrac*3) >> s; -@@ -1166,7 +1177,7 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf, +@@ -641,7 +652,7 @@ retry: + } while (unlikely(entropy_count < pool_size-2 && pnfrac)); + } + +- if (entropy_count < 0) { ++ if (unlikely(entropy_count < 0)) { + pr_warn("random: negative entropy/overflow: pool %s count %d\n", + r->name, entropy_count); + WARN_ON(1); +@@ -980,7 +991,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min, + int reserved) + { + int entropy_count, orig; +- size_t ibytes; ++ size_t ibytes, nfrac; + + BUG_ON(r->entropy_count > 
r->poolinfo->poolfracbits); + +@@ -998,7 +1009,17 @@ retry: + } + if (ibytes < min) + ibytes = 0; +- if ((entropy_count -= ibytes << (ENTROPY_SHIFT + 3)) < 0) ++ ++ if (unlikely(entropy_count < 0)) { ++ pr_warn("random: negative entropy count: pool %s count %d\n", ++ r->name, entropy_count); ++ WARN_ON(1); ++ entropy_count = 0; ++ } ++ nfrac = ibytes << (ENTROPY_SHIFT + 3); ++ if ((size_t) entropy_count > nfrac) ++ entropy_count -= nfrac; ++ else + entropy_count = 0; + + if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) +@@ -1166,7 +1187,7 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf, extract_buf(r, tmp); i = min_t(int, nbytes, EXTRACT_SIZE); @@ -38781,7 +38821,15 @@ index 2b6e4cd..43d7ae1 100644 ret = -EFAULT; break; } -@@ -1555,7 +1566,7 @@ static char sysctl_bootid[16]; +@@ -1375,6 +1396,7 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) + "with %d bits of entropy available\n", + current->comm, nonblocking_pool.entropy_total); + ++ nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3)); + ret = extract_entropy_user(&nonblocking_pool, buf, nbytes); + + trace_urandom_read(8 * nbytes, ENTROPY_BITS(&nonblocking_pool), +@@ -1555,7 +1577,7 @@ static char sysctl_bootid[16]; static int proc_do_uuid(struct ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { @@ -38790,7 +38838,7 @@ index 2b6e4cd..43d7ae1 100644 unsigned char buf[64], tmp_uuid[16], *uuid; uuid = table->data; -@@ -1585,7 +1596,7 @@ static int proc_do_uuid(struct ctl_table *table, int write, +@@ -1585,7 +1607,7 @@ static int proc_do_uuid(struct ctl_table *table, int write, static int proc_do_entropy(ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { @@ -39194,7 +39242,7 @@ index 18d4091..434be15 100644 } EXPORT_SYMBOL_GPL(od_unregister_powersave_bias_handler); diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c -index fcd0c92..7b736c2 100644 
+index 870eecc..787bbca 100644 --- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c @@ -125,10 +125,10 @@ struct pstate_funcs { @@ -39210,7 +39258,7 @@ index fcd0c92..7b736c2 100644 struct perf_limits { int no_turbo; -@@ -526,7 +526,7 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate) +@@ -530,7 +530,7 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate) cpu->pstate.current_pstate = pstate; @@ -39219,7 +39267,7 @@ index fcd0c92..7b736c2 100644 } static inline void intel_pstate_pstate_increase(struct cpudata *cpu, int steps) -@@ -548,12 +548,12 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) +@@ -552,12 +552,12 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) { sprintf(cpu->name, "Intel 2nd generation core"); @@ -39237,7 +39285,7 @@ index fcd0c92..7b736c2 100644 intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate); } -@@ -838,9 +838,9 @@ static int intel_pstate_msrs_not_valid(void) +@@ -847,9 +847,9 @@ static int intel_pstate_msrs_not_valid(void) rdmsrl(MSR_IA32_APERF, aperf); rdmsrl(MSR_IA32_MPERF, mperf); @@ -39250,7 +39298,7 @@ index fcd0c92..7b736c2 100644 return -ENODEV; rdmsrl(MSR_IA32_APERF, tmp); -@@ -854,7 +854,7 @@ static int intel_pstate_msrs_not_valid(void) +@@ -863,7 +863,7 @@ static int intel_pstate_msrs_not_valid(void) return 0; } @@ -39259,7 +39307,7 @@ index fcd0c92..7b736c2 100644 { pid_params.sample_rate_ms = policy->sample_rate_ms; pid_params.p_gain_pct = policy->p_gain_pct; -@@ -866,11 +866,7 @@ static void copy_pid_params(struct pstate_adjust_policy *policy) +@@ -875,11 +875,7 @@ static void copy_pid_params(struct pstate_adjust_policy *policy) static void copy_cpu_funcs(struct pstate_funcs *funcs) { @@ -40320,10 +40368,10 @@ index 3c59584..500f2e9 100644 return ret; diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c -index 5b60e25..eac1625 100644 +index b91dfbe..b7fb16d 100644 --- 
a/drivers/gpu/drm/i915/intel_display.c +++ b/drivers/gpu/drm/i915/intel_display.c -@@ -11171,13 +11171,13 @@ struct intel_quirk { +@@ -11179,13 +11179,13 @@ struct intel_quirk { int subsystem_vendor; int subsystem_device; void (*hook)(struct drm_device *dev); @@ -40339,7 +40387,7 @@ index 5b60e25..eac1625 100644 static int intel_dmi_reverse_brightness(const struct dmi_system_id *id) { -@@ -11185,18 +11185,20 @@ static int intel_dmi_reverse_brightness(const struct dmi_system_id *id) +@@ -11193,18 +11193,20 @@ static int intel_dmi_reverse_brightness(const struct dmi_system_id *id) return 1; } @@ -41056,28 +41104,6 @@ index c8a8a51..219dacc 100644 } vma->vm_ops = &radeon_ttm_vm_ops; return 0; -diff --git a/drivers/gpu/drm/radeon/radeon_vm.c b/drivers/gpu/drm/radeon/radeon_vm.c -index c11b71d..c8c48aa 100644 ---- a/drivers/gpu/drm/radeon/radeon_vm.c -+++ b/drivers/gpu/drm/radeon/radeon_vm.c -@@ -493,7 +493,7 @@ int radeon_vm_bo_set_addr(struct radeon_device *rdev, - mutex_unlock(&vm->mutex); - - r = radeon_bo_create(rdev, RADEON_VM_PTE_COUNT * 8, -- RADEON_GPU_PAGE_SIZE, false, -+ RADEON_GPU_PAGE_SIZE, true, - RADEON_GEM_DOMAIN_VRAM, NULL, &pt); - if (r) - return r; -@@ -913,7 +913,7 @@ int radeon_vm_init(struct radeon_device *rdev, struct radeon_vm *vm) - return -ENOMEM; - } - -- r = radeon_bo_create(rdev, pd_size, RADEON_VM_PTB_ALIGN_SIZE, false, -+ r = radeon_bo_create(rdev, pd_size, RADEON_VM_PTB_ALIGN_SIZE, true, - RADEON_GEM_DOMAIN_VRAM, NULL, - &vm->page_directory); - if (r) diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c index edb871d..a275c6ed 100644 --- a/drivers/gpu/drm/tegra/dc.c @@ -43868,10 +43894,10 @@ index b086a94..74cb67e 100644 pmd->bl_info.value_type.inc = data_block_inc; pmd->bl_info.value_type.dec = data_block_dec; diff --git a/drivers/md/dm.c b/drivers/md/dm.c -index 455e649..1f214be 100644 +index 490ac23..b9790cd 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c -@@ -178,9 +178,9 @@ struct mapped_device { +@@ -180,9 +180,9 
@@ struct mapped_device { /* * Event handling. */ @@ -43883,7 +43909,7 @@ index 455e649..1f214be 100644 struct list_head uevent_list; spinlock_t uevent_lock; /* Protect access to uevent_list */ -@@ -1884,8 +1884,8 @@ static struct mapped_device *alloc_dev(int minor) +@@ -1895,8 +1895,8 @@ static struct mapped_device *alloc_dev(int minor) spin_lock_init(&md->deferred_lock); atomic_set(&md->holders, 1); atomic_set(&md->open_count, 0); @@ -43894,7 +43920,7 @@ index 455e649..1f214be 100644 INIT_LIST_HEAD(&md->uevent_list); spin_lock_init(&md->uevent_lock); -@@ -2039,7 +2039,7 @@ static void event_callback(void *context) +@@ -2050,7 +2050,7 @@ static void event_callback(void *context) dm_send_uevents(&uevents, &disk_to_dev(md->disk)->kobj); @@ -43903,7 +43929,7 @@ index 455e649..1f214be 100644 wake_up(&md->eventq); } -@@ -2732,18 +2732,18 @@ int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action, +@@ -2743,18 +2743,18 @@ int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action, uint32_t dm_next_uevent_seq(struct mapped_device *md) { @@ -46857,6 +46883,24 @@ index 5920c99..ff2e4a5 100644 }; static void +diff --git a/drivers/net/wan/x25_asy.c b/drivers/net/wan/x25_asy.c +index 5895f19..fa9fdfa 100644 +--- a/drivers/net/wan/x25_asy.c ++++ b/drivers/net/wan/x25_asy.c +@@ -122,8 +122,12 @@ static int x25_asy_change_mtu(struct net_device *dev, int newmtu) + { + struct x25_asy *sl = netdev_priv(dev); + unsigned char *xbuff, *rbuff; +- int len = 2 * newmtu; ++ int len; + ++ if (newmtu > 65534) ++ return -EINVAL; ++ ++ len = 2 * newmtu; + xbuff = kmalloc(len + 4, GFP_ATOMIC); + rbuff = kmalloc(len + 4, GFP_ATOMIC); + diff --git a/drivers/net/wan/z85230.c b/drivers/net/wan/z85230.c index feacc3b..5bac0de 100644 --- a/drivers/net/wan/z85230.c @@ -59617,7 +59661,7 @@ index e6574d7..c30cbe2 100644 brelse(bh); bh = NULL; diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c -index fe4e668..f983538 100644 +index 2735a72..d083044 100644 --- 
a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -1889,7 +1889,7 @@ void ext4_mb_simple_scan_group(struct ext4_allocation_context *ac, @@ -59747,7 +59791,7 @@ index 04434ad..6404663 100644 "MMP failure info: last update time: %llu, last update " "node: %s, last update device: %s\n", diff --git a/fs/ext4/super.c b/fs/ext4/super.c -index 6f9e6fa..d0ebdb7 100644 +index 29a403c..f58dbdb 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1275,7 +1275,7 @@ static ext4_fsblk_t get_sb_block(void **data) @@ -59759,7 +59803,7 @@ index 6f9e6fa..d0ebdb7 100644 "Contact linux-ext4@vger.kernel.org if you think we should keep it.\n"; #ifdef CONFIG_QUOTA -@@ -2455,7 +2455,7 @@ struct ext4_attr { +@@ -2453,7 +2453,7 @@ struct ext4_attr { int offset; int deprecated_val; } u; @@ -59768,114 +59812,6 @@ index 6f9e6fa..d0ebdb7 100644 static int parse_strtoull(const char *buf, unsigned long long max, unsigned long long *value) -@@ -3869,38 +3869,19 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) - goto failed_mount2; - } - } -- -- /* -- * set up enough so that it can read an inode, -- * and create new inode for buddy allocator -- */ -- sbi->s_gdb_count = db_count; -- if (!test_opt(sb, NOLOAD) && -- EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) -- sb->s_op = &ext4_sops; -- else -- sb->s_op = &ext4_nojournal_sops; -- -- ext4_ext_init(sb); -- err = ext4_mb_init(sb); -- if (err) { -- ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", -- err); -- goto failed_mount2; -- } -- - if (!ext4_check_descriptors(sb, &first_not_zeroed)) { - ext4_msg(sb, KERN_ERR, "group descriptors corrupted!"); -- goto failed_mount2a; -+ goto failed_mount2; - } - if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG)) - if (!ext4_fill_flex_info(sb)) { - ext4_msg(sb, KERN_ERR, - "unable to initialize " - "flex_bg meta info!"); -- goto failed_mount2a; -+ goto failed_mount2; - } - -+ sbi->s_gdb_count = db_count; - get_random_bytes(&sbi->s_next_generation, 
sizeof(u32)); - spin_lock_init(&sbi->s_next_gen_lock); - -@@ -3935,6 +3916,14 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) - sbi->s_stripe = ext4_get_stripe_size(sbi); - sbi->s_extent_max_zeroout_kb = 32; - -+ /* -+ * set up enough so that it can read an inode -+ */ -+ if (!test_opt(sb, NOLOAD) && -+ EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL)) -+ sb->s_op = &ext4_sops; -+ else -+ sb->s_op = &ext4_nojournal_sops; - sb->s_export_op = &ext4_export_ops; - sb->s_xattr = ext4_xattr_handlers; - #ifdef CONFIG_QUOTA -@@ -4124,13 +4113,21 @@ no_journal: - if (err) { - ext4_msg(sb, KERN_ERR, "failed to reserve %llu clusters for " - "reserved pool", ext4_calculate_resv_clusters(sb)); -- goto failed_mount5; -+ goto failed_mount4a; - } - - err = ext4_setup_system_zone(sb); - if (err) { - ext4_msg(sb, KERN_ERR, "failed to initialize system " - "zone (%d)", err); -+ goto failed_mount4a; -+ } -+ -+ ext4_ext_init(sb); -+ err = ext4_mb_init(sb); -+ if (err) { -+ ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)", -+ err); - goto failed_mount5; - } - -@@ -4207,8 +4204,11 @@ failed_mount8: - failed_mount7: - ext4_unregister_li_request(sb); - failed_mount6: -- ext4_release_system_zone(sb); -+ ext4_mb_release(sb); - failed_mount5: -+ ext4_ext_release(sb); -+ ext4_release_system_zone(sb); -+failed_mount4a: - dput(sb->s_root); - sb->s_root = NULL; - failed_mount4: -@@ -4232,14 +4232,11 @@ failed_mount3: - percpu_counter_destroy(&sbi->s_extent_cache_cnt); - if (sbi->s_mmp_tsk) - kthread_stop(sbi->s_mmp_tsk); --failed_mount2a: -- ext4_mb_release(sb); - failed_mount2: - for (i = 0; i < db_count; i++) - brelse(sbi->s_group_desc[i]); - ext4_kvfree(sbi->s_group_desc); - failed_mount: -- ext4_ext_release(sb); - if (sbi->s_chksum_driver) - crypto_free_shash(sbi->s_chksum_driver); - if (sbi->s_proc) { diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c index 4eec399..1d9444c 100644 --- a/fs/ext4/xattr.c @@ -61681,7 +61617,7 @@ index 
97f7fda..09bd33d 100644 if (jfs_inode_cachep == NULL) return -ENOMEM; diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c -index ac127cd..d8079db 100644 +index a693f5b..82276a1 100644 --- a/fs/kernfs/dir.c +++ b/fs/kernfs/dir.c @@ -182,7 +182,7 @@ struct kernfs_node *kernfs_get_parent(struct kernfs_node *kn) @@ -61874,7 +61810,7 @@ index d55297f..f5b28c5 100644 #define MNT_NS_INTERNAL ERR_PTR(-EINVAL) /* distinct from any mnt_namespace */ diff --git a/fs/namei.c b/fs/namei.c -index 985c6f3..f67a0f8 100644 +index 985c6f3..5f520b67 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -330,17 +330,32 @@ int generic_permission(struct inode *inode, int mask) @@ -62009,7 +61945,19 @@ index 985c6f3..f67a0f8 100644 return retval; } -@@ -2569,6 +2600,13 @@ static int may_open(struct path *path, int acc_mode, int flag) +@@ -2256,9 +2287,10 @@ done: + goto out; + } + path->dentry = dentry; +- path->mnt = mntget(nd->path.mnt); ++ path->mnt = nd->path.mnt; + if (should_follow_link(dentry, nd->flags & LOOKUP_FOLLOW)) + return 1; ++ mntget(path->mnt); + follow_mount(path); + error = 0; + out: +@@ -2569,6 +2601,13 @@ static int may_open(struct path *path, int acc_mode, int flag) if (flag & O_NOATIME && !inode_owner_or_capable(inode)) return -EPERM; @@ -62023,7 +61971,7 @@ index 985c6f3..f67a0f8 100644 return 0; } -@@ -2800,7 +2838,7 @@ looked_up: +@@ -2800,7 +2839,7 @@ looked_up: * cleared otherwise prior to returning. 
*/ static int lookup_open(struct nameidata *nd, struct path *path, @@ -62032,7 +61980,7 @@ index 985c6f3..f67a0f8 100644 const struct open_flags *op, bool got_write, int *opened) { -@@ -2835,6 +2873,17 @@ static int lookup_open(struct nameidata *nd, struct path *path, +@@ -2835,6 +2874,17 @@ static int lookup_open(struct nameidata *nd, struct path *path, /* Negative dentry, just create the file */ if (!dentry->d_inode && (op->open_flag & O_CREAT)) { umode_t mode = op->mode; @@ -62050,7 +61998,7 @@ index 985c6f3..f67a0f8 100644 if (!IS_POSIXACL(dir->d_inode)) mode &= ~current_umask(); /* -@@ -2856,6 +2905,8 @@ static int lookup_open(struct nameidata *nd, struct path *path, +@@ -2856,6 +2906,8 @@ static int lookup_open(struct nameidata *nd, struct path *path, nd->flags & LOOKUP_EXCL); if (error) goto out_dput; @@ -62059,7 +62007,7 @@ index 985c6f3..f67a0f8 100644 } out_no_open: path->dentry = dentry; -@@ -2870,7 +2921,7 @@ out_dput: +@@ -2870,7 +2922,7 @@ out_dput: /* * Handle the last step of open() */ @@ -62068,7 +62016,7 @@ index 985c6f3..f67a0f8 100644 struct file *file, const struct open_flags *op, int *opened, struct filename *name) { -@@ -2920,6 +2971,15 @@ static int do_last(struct nameidata *nd, struct path *path, +@@ -2920,6 +2972,15 @@ static int do_last(struct nameidata *nd, struct path *path, if (error) return error; @@ -62084,7 +62032,7 @@ index 985c6f3..f67a0f8 100644 audit_inode(name, dir, LOOKUP_PARENT); error = -EISDIR; /* trailing slashes? 
*/ -@@ -2939,7 +2999,7 @@ retry_lookup: +@@ -2939,7 +3000,7 @@ retry_lookup: */ } mutex_lock(&dir->d_inode->i_mutex); @@ -62093,7 +62041,7 @@ index 985c6f3..f67a0f8 100644 mutex_unlock(&dir->d_inode->i_mutex); if (error <= 0) { -@@ -2963,11 +3023,28 @@ retry_lookup: +@@ -2963,11 +3024,28 @@ retry_lookup: goto finish_open_created; } @@ -62123,7 +62071,7 @@ index 985c6f3..f67a0f8 100644 /* * If atomic_open() acquired write access it is dropped now due to -@@ -3008,6 +3085,11 @@ finish_lookup: +@@ -3008,6 +3086,11 @@ finish_lookup: } } BUG_ON(inode != path->dentry->d_inode); @@ -62135,7 +62083,7 @@ index 985c6f3..f67a0f8 100644 return 1; } -@@ -3017,7 +3099,6 @@ finish_lookup: +@@ -3017,7 +3100,6 @@ finish_lookup: save_parent.dentry = nd->path.dentry; save_parent.mnt = mntget(path->mnt); nd->path.dentry = path->dentry; @@ -62143,7 +62091,7 @@ index 985c6f3..f67a0f8 100644 } nd->inode = inode; /* Why this, you ask? _Now_ we might have grown LOOKUP_JUMPED... */ -@@ -3027,7 +3108,18 @@ finish_open: +@@ -3027,7 +3109,18 @@ finish_open: path_put(&save_parent); return error; } @@ -62162,7 +62110,7 @@ index 985c6f3..f67a0f8 100644 error = -EISDIR; if ((open_flag & O_CREAT) && d_is_dir(nd->path.dentry)) goto out; -@@ -3190,7 +3282,7 @@ static struct file *path_openat(int dfd, struct filename *pathname, +@@ -3190,7 +3283,7 @@ static struct file *path_openat(int dfd, struct filename *pathname, if (unlikely(error)) goto out; @@ -62171,7 +62119,7 @@ index 985c6f3..f67a0f8 100644 while (unlikely(error > 0)) { /* trailing symlink */ struct path link = path; void *cookie; -@@ -3208,7 +3300,7 @@ static struct file *path_openat(int dfd, struct filename *pathname, +@@ -3208,7 +3301,7 @@ static struct file *path_openat(int dfd, struct filename *pathname, error = follow_link(&link, nd, &cookie); if (unlikely(error)) break; @@ -62180,7 +62128,7 @@ index 985c6f3..f67a0f8 100644 put_link(nd, &link, cookie); } out: -@@ -3308,9 +3400,11 @@ struct dentry *kern_path_create(int dfd, const char 
*pathname, +@@ -3308,9 +3401,11 @@ struct dentry *kern_path_create(int dfd, const char *pathname, goto unlock; error = -EEXIST; @@ -62194,7 +62142,7 @@ index 985c6f3..f67a0f8 100644 /* * Special case - lookup gave negative, but... we had foo/bar/ * From the vfs_mknod() POV we just have a negative dentry - -@@ -3362,6 +3456,20 @@ struct dentry *user_path_create(int dfd, const char __user *pathname, +@@ -3362,6 +3457,20 @@ struct dentry *user_path_create(int dfd, const char __user *pathname, } EXPORT_SYMBOL(user_path_create); @@ -62215,7 +62163,7 @@ index 985c6f3..f67a0f8 100644 int vfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode, dev_t dev) { int error = may_create(dir, dentry); -@@ -3425,6 +3533,17 @@ retry: +@@ -3425,6 +3534,17 @@ retry: if (!IS_POSIXACL(path.dentry->d_inode)) mode &= ~current_umask(); @@ -62233,7 +62181,7 @@ index 985c6f3..f67a0f8 100644 error = security_path_mknod(&path, dentry, mode, dev); if (error) goto out; -@@ -3441,6 +3560,8 @@ retry: +@@ -3441,6 +3561,8 @@ retry: break; } out: @@ -62242,7 +62190,7 @@ index 985c6f3..f67a0f8 100644 done_path_create(&path, dentry); if (retry_estale(error, lookup_flags)) { lookup_flags |= LOOKUP_REVAL; -@@ -3494,9 +3615,16 @@ retry: +@@ -3494,9 +3616,16 @@ retry: if (!IS_POSIXACL(path.dentry->d_inode)) mode &= ~current_umask(); @@ -62259,7 +62207,7 @@ index 985c6f3..f67a0f8 100644 done_path_create(&path, dentry); if (retry_estale(error, lookup_flags)) { lookup_flags |= LOOKUP_REVAL; -@@ -3579,6 +3707,8 @@ static long do_rmdir(int dfd, const char __user *pathname) +@@ -3579,6 +3708,8 @@ static long do_rmdir(int dfd, const char __user *pathname) struct filename *name; struct dentry *dentry; struct nameidata nd; @@ -62268,7 +62216,7 @@ index 985c6f3..f67a0f8 100644 unsigned int lookup_flags = 0; retry: name = user_path_parent(dfd, pathname, &nd, lookup_flags); -@@ -3611,10 +3741,21 @@ retry: +@@ -3611,10 +3742,21 @@ retry: error = -ENOENT; goto exit3; } @@ -62290,7 +62238,7 @@ index 
985c6f3..f67a0f8 100644 exit3: dput(dentry); exit2: -@@ -3705,6 +3846,8 @@ static long do_unlinkat(int dfd, const char __user *pathname) +@@ -3705,6 +3847,8 @@ static long do_unlinkat(int dfd, const char __user *pathname) struct nameidata nd; struct inode *inode = NULL; struct inode *delegated_inode = NULL; @@ -62299,7 +62247,7 @@ index 985c6f3..f67a0f8 100644 unsigned int lookup_flags = 0; retry: name = user_path_parent(dfd, pathname, &nd, lookup_flags); -@@ -3731,10 +3874,22 @@ retry_deleg: +@@ -3731,10 +3875,22 @@ retry_deleg: if (d_is_negative(dentry)) goto slashes; ihold(inode); @@ -62322,7 +62270,7 @@ index 985c6f3..f67a0f8 100644 exit2: dput(dentry); } -@@ -3823,9 +3978,17 @@ retry: +@@ -3823,9 +3979,17 @@ retry: if (IS_ERR(dentry)) goto out_putname; @@ -62340,7 +62288,7 @@ index 985c6f3..f67a0f8 100644 done_path_create(&path, dentry); if (retry_estale(error, lookup_flags)) { lookup_flags |= LOOKUP_REVAL; -@@ -3929,6 +4092,7 @@ SYSCALL_DEFINE5(linkat, int, olddfd, const char __user *, oldname, +@@ -3929,6 +4093,7 @@ SYSCALL_DEFINE5(linkat, int, olddfd, const char __user *, oldname, struct dentry *new_dentry; struct path old_path, new_path; struct inode *delegated_inode = NULL; @@ -62348,7 +62296,7 @@ index 985c6f3..f67a0f8 100644 int how = 0; int error; -@@ -3952,7 +4116,7 @@ retry: +@@ -3952,7 +4117,7 @@ retry: if (error) return error; @@ -62357,7 +62305,7 @@ index 985c6f3..f67a0f8 100644 (how & LOOKUP_REVAL)); error = PTR_ERR(new_dentry); if (IS_ERR(new_dentry)) -@@ -3964,11 +4128,28 @@ retry: +@@ -3964,11 +4129,28 @@ retry: error = may_linkat(&old_path); if (unlikely(error)) goto out_dput; @@ -62386,7 +62334,7 @@ index 985c6f3..f67a0f8 100644 done_path_create(&new_path, new_dentry); if (delegated_inode) { error = break_deleg_wait(&delegated_inode); -@@ -4278,6 +4459,12 @@ retry_deleg: +@@ -4278,6 +4460,12 @@ retry_deleg: if (new_dentry == trap) goto exit5; @@ -62399,7 +62347,7 @@ index 985c6f3..f67a0f8 100644 error = security_path_rename(&oldnd.path, 
old_dentry, &newnd.path, new_dentry, flags); if (error) -@@ -4285,6 +4472,9 @@ retry_deleg: +@@ -4285,6 +4473,9 @@ retry_deleg: error = vfs_rename(old_dir->d_inode, old_dentry, new_dir->d_inode, new_dentry, &delegated_inode, flags); @@ -62409,7 +62357,7 @@ index 985c6f3..f67a0f8 100644 exit5: dput(new_dentry); exit4: -@@ -4327,14 +4517,24 @@ SYSCALL_DEFINE2(rename, const char __user *, oldname, const char __user *, newna +@@ -4327,14 +4518,24 @@ SYSCALL_DEFINE2(rename, const char __user *, oldname, const char __user *, newna int readlink_copy(char __user *buffer, int buflen, const char *link) { @@ -85909,10 +85857,10 @@ index 24663b3..b926ae1 100644 +} +EXPORT_SYMBOL(capable_wrt_inode_uidgid_nolog); diff --git a/kernel/cgroup.c b/kernel/cgroup.c -index ceee0c5..d6f81dd 100644 +index 073226b..969c746 100644 --- a/kernel/cgroup.c +++ b/kernel/cgroup.c -@@ -4757,7 +4757,7 @@ static int cgroup_css_links_read(struct seq_file *seq, void *v) +@@ -4808,7 +4808,7 @@ static int cgroup_css_links_read(struct seq_file *seq, void *v) struct task_struct *task; int count = 0; @@ -91335,7 +91283,7 @@ index 4a54a25..7ca9c89 100644 ftrace_graph_active++; diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c -index c634868..00d0d19 100644 +index 7c56c3d..9980576 100644 --- a/kernel/trace/ring_buffer.c +++ b/kernel/trace/ring_buffer.c @@ -352,9 +352,9 @@ struct buffer_data_page { @@ -91361,7 +91309,7 @@ index c634868..00d0d19 100644 local_t dropped_events; local_t committing; local_t commits; -@@ -992,8 +992,8 @@ static int rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer, +@@ -995,8 +995,8 @@ static int rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer, * * We add a counter to the write field to denote this. 
*/ @@ -91372,7 +91320,7 @@ index c634868..00d0d19 100644 /* * Just make sure we have seen our old_write and synchronize -@@ -1021,8 +1021,8 @@ static int rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer, +@@ -1024,8 +1024,8 @@ static int rb_tail_page_update(struct ring_buffer_per_cpu *cpu_buffer, * cmpxchg to only update if an interrupt did not already * do it for us. If the cmpxchg fails, we don't care. */ @@ -91383,7 +91331,7 @@ index c634868..00d0d19 100644 /* * No need to worry about races with clearing out the commit. -@@ -1389,12 +1389,12 @@ static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer); +@@ -1392,12 +1392,12 @@ static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer); static inline unsigned long rb_page_entries(struct buffer_page *bpage) { @@ -91398,7 +91346,7 @@ index c634868..00d0d19 100644 } static int -@@ -1489,7 +1489,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned int nr_pages) +@@ -1492,7 +1492,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned int nr_pages) * bytes consumed in ring buffer from here. * Increment overrun to account for the lost events. */ @@ -91407,7 +91355,7 @@ index c634868..00d0d19 100644 local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes); } -@@ -2067,7 +2067,7 @@ rb_handle_head_page(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2070,7 +2070,7 @@ rb_handle_head_page(struct ring_buffer_per_cpu *cpu_buffer, * it is our responsibility to update * the counters. 
*/ @@ -91416,7 +91364,7 @@ index c634868..00d0d19 100644 local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes); /* -@@ -2217,7 +2217,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2220,7 +2220,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, if (tail == BUF_PAGE_SIZE) tail_page->real_end = 0; @@ -91425,7 +91373,7 @@ index c634868..00d0d19 100644 return; } -@@ -2252,7 +2252,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2255,7 +2255,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, rb_event_set_padding(event); /* Set the write back to the previous setting */ @@ -91434,7 +91382,7 @@ index c634868..00d0d19 100644 return; } -@@ -2264,7 +2264,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2267,7 +2267,7 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer, /* Set write to end of buffer */ length = (tail + length) - BUF_PAGE_SIZE; @@ -91443,7 +91391,7 @@ index c634868..00d0d19 100644 } /* -@@ -2290,7 +2290,7 @@ rb_move_tail(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2293,7 +2293,7 @@ rb_move_tail(struct ring_buffer_per_cpu *cpu_buffer, * about it. 
*/ if (unlikely(next_page == commit_page)) { @@ -91452,7 +91400,7 @@ index c634868..00d0d19 100644 goto out_reset; } -@@ -2346,7 +2346,7 @@ rb_move_tail(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2349,7 +2349,7 @@ rb_move_tail(struct ring_buffer_per_cpu *cpu_buffer, cpu_buffer->tail_page) && (cpu_buffer->commit_page == cpu_buffer->reader_page))) { @@ -91461,7 +91409,7 @@ index c634868..00d0d19 100644 goto out_reset; } } -@@ -2394,7 +2394,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2397,7 +2397,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer, length += RB_LEN_TIME_EXTEND; tail_page = cpu_buffer->tail_page; @@ -91470,7 +91418,7 @@ index c634868..00d0d19 100644 /* set write to only the index of the write */ write &= RB_WRITE_MASK; -@@ -2418,7 +2418,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2421,7 +2421,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer, kmemcheck_annotate_bitfield(event, bitfield); rb_update_event(cpu_buffer, event, length, add_timestamp, delta); @@ -91479,7 +91427,7 @@ index c634868..00d0d19 100644 /* * If this is the first commit on the page, then update -@@ -2451,7 +2451,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2454,7 +2454,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer, if (bpage->page == (void *)addr && rb_page_write(bpage) == old_index) { unsigned long write_mask = @@ -91488,7 +91436,7 @@ index c634868..00d0d19 100644 unsigned long event_length = rb_event_length(event); /* * This is on the tail page. 
It is possible that -@@ -2461,7 +2461,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2464,7 +2464,7 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer, */ old_index += write_mask; new_index += write_mask; @@ -91497,7 +91445,7 @@ index c634868..00d0d19 100644 if (index == old_index) { /* update counters */ local_sub(event_length, &cpu_buffer->entries_bytes); -@@ -2853,7 +2853,7 @@ rb_decrement_entry(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2856,7 +2856,7 @@ rb_decrement_entry(struct ring_buffer_per_cpu *cpu_buffer, /* Do the likely case first */ if (likely(bpage->page == (void *)addr)) { @@ -91506,7 +91454,7 @@ index c634868..00d0d19 100644 return; } -@@ -2865,7 +2865,7 @@ rb_decrement_entry(struct ring_buffer_per_cpu *cpu_buffer, +@@ -2868,7 +2868,7 @@ rb_decrement_entry(struct ring_buffer_per_cpu *cpu_buffer, start = bpage; do { if (bpage->page == (void *)addr) { @@ -91515,7 +91463,7 @@ index c634868..00d0d19 100644 return; } rb_inc_page(cpu_buffer, &bpage); -@@ -3149,7 +3149,7 @@ static inline unsigned long +@@ -3152,7 +3152,7 @@ static inline unsigned long rb_num_of_entries(struct ring_buffer_per_cpu *cpu_buffer) { return local_read(&cpu_buffer->entries) - @@ -91524,7 +91472,7 @@ index c634868..00d0d19 100644 } /** -@@ -3238,7 +3238,7 @@ unsigned long ring_buffer_overrun_cpu(struct ring_buffer *buffer, int cpu) +@@ -3241,7 +3241,7 @@ unsigned long ring_buffer_overrun_cpu(struct ring_buffer *buffer, int cpu) return 0; cpu_buffer = buffer->buffers[cpu]; @@ -91533,7 +91481,7 @@ index c634868..00d0d19 100644 return ret; } -@@ -3261,7 +3261,7 @@ ring_buffer_commit_overrun_cpu(struct ring_buffer *buffer, int cpu) +@@ -3264,7 +3264,7 @@ ring_buffer_commit_overrun_cpu(struct ring_buffer *buffer, int cpu) return 0; cpu_buffer = buffer->buffers[cpu]; @@ -91542,7 +91490,7 @@ index c634868..00d0d19 100644 return ret; } -@@ -3346,7 +3346,7 @@ unsigned long ring_buffer_overruns(struct ring_buffer *buffer) +@@ -3349,7 +3349,7 @@ unsigned long 
ring_buffer_overruns(struct ring_buffer *buffer) /* if you care about this being correct, lock the buffer */ for_each_buffer_cpu(buffer, cpu) { cpu_buffer = buffer->buffers[cpu]; @@ -91551,7 +91499,7 @@ index c634868..00d0d19 100644 } return overruns; -@@ -3522,8 +3522,8 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) +@@ -3525,8 +3525,8 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) /* * Reset the reader page to size zero. */ @@ -91562,7 +91510,7 @@ index c634868..00d0d19 100644 local_set(&cpu_buffer->reader_page->page->commit, 0); cpu_buffer->reader_page->real_end = 0; -@@ -3557,7 +3557,7 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) +@@ -3560,7 +3560,7 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer) * want to compare with the last_overrun. */ smp_mb(); @@ -91571,7 +91519,7 @@ index c634868..00d0d19 100644 /* * Here's the tricky part. -@@ -4127,8 +4127,8 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) +@@ -4130,8 +4130,8 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) cpu_buffer->head_page = list_entry(cpu_buffer->pages, struct buffer_page, list); @@ -91582,7 +91530,7 @@ index c634868..00d0d19 100644 local_set(&cpu_buffer->head_page->page->commit, 0); cpu_buffer->head_page->read = 0; -@@ -4138,14 +4138,14 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) +@@ -4141,14 +4141,14 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer) INIT_LIST_HEAD(&cpu_buffer->reader_page->list); INIT_LIST_HEAD(&cpu_buffer->new_pages); @@ -91601,7 +91549,7 @@ index c634868..00d0d19 100644 local_set(&cpu_buffer->dropped_events, 0); local_set(&cpu_buffer->entries, 0); local_set(&cpu_buffer->committing, 0); -@@ -4550,8 +4550,8 @@ int ring_buffer_read_page(struct ring_buffer *buffer, +@@ -4553,8 +4553,8 @@ int ring_buffer_read_page(struct ring_buffer *buffer, rb_init_page(bpage); bpage = reader->page; reader->page = *data_page; @@ -91613,7 +91561,7 @@ index c634868..00d0d19 100644 *data_page = bpage; diff 
--git a/kernel/trace/trace.c b/kernel/trace/trace.c -index 1848dc6..5fc244c 100644 +index 39a1226..2dc2b43 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -3447,7 +3447,7 @@ int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set) @@ -91626,7 +91574,7 @@ index 1848dc6..5fc244c 100644 /* do nothing if flag is already set */ if (!!(trace_flags & mask) == !!enabled) diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h -index 2e29d7b..61367d7 100644 +index 99676cd..670b9e8 100644 --- a/kernel/trace/trace.h +++ b/kernel/trace/trace.h @@ -1264,7 +1264,7 @@ extern const char *__stop___tracepoint_str[]; @@ -91819,10 +91767,10 @@ index 30e4822..dd2b854 100644 .thread_should_run = watchdog_should_run, .thread_fn = watchdog, diff --git a/kernel/workqueue.c b/kernel/workqueue.c -index 8edc8718..b6a70b9 100644 +index 7ba5897..c8ed1f2 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c -@@ -4709,7 +4709,7 @@ static void rebind_workers(struct worker_pool *pool) +@@ -4710,7 +4710,7 @@ static void rebind_workers(struct worker_pool *pool) WARN_ON_ONCE(!(worker_flags & WORKER_UNBOUND)); worker_flags |= WORKER_REBOUND; worker_flags &= ~WORKER_UNBOUND; @@ -93143,7 +93091,7 @@ index eb8fb72..ae36cf3 100644 } unset_migratetype_isolate(page, MIGRATE_MOVABLE); diff --git a/mm/memory.c b/mm/memory.c -index e302ae1..c0ef712 100644 +index e302ae1..779c7ce 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -413,6 +413,7 @@ static inline void free_pmd_range(struct mmu_gather *tlb, pud_t *pud, @@ -93633,7 +93581,17 @@ index e302ae1..c0ef712 100644 unlock: pte_unmap_unlock(page_table, ptl); return 0; -@@ -3535,6 +3724,11 @@ static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma, +@@ -3515,7 +3704,8 @@ static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma, + * if page by the offset is not ready to be mapped (cold cache or + * something). 
+ */ +- if (vma->vm_ops->map_pages) { ++ if (vma->vm_ops->map_pages && !(flags & FAULT_FLAG_NONLINEAR) && ++ fault_around_pages() > 1) { + pte = pte_offset_map_lock(mm, pmd, address, &ptl); + do_fault_around(vma, address, pte, pgoff, flags); + if (!pte_same(*pte, orig_pte)) +@@ -3535,6 +3725,11 @@ static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma, return ret; } do_set_pte(vma, address, fault_page, pte, false, false); @@ -93645,7 +93603,7 @@ index e302ae1..c0ef712 100644 unlock_page(fault_page); unlock_out: pte_unmap_unlock(pte, ptl); -@@ -3576,7 +3770,18 @@ static int do_cow_fault(struct mm_struct *mm, struct vm_area_struct *vma, +@@ -3576,7 +3771,18 @@ static int do_cow_fault(struct mm_struct *mm, struct vm_area_struct *vma, page_cache_release(fault_page); goto uncharge_out; } @@ -93664,7 +93622,7 @@ index e302ae1..c0ef712 100644 pte_unmap_unlock(pte, ptl); unlock_page(fault_page); page_cache_release(fault_page); -@@ -3624,6 +3829,11 @@ static int do_shared_fault(struct mm_struct *mm, struct vm_area_struct *vma, +@@ -3624,6 +3830,11 @@ static int do_shared_fault(struct mm_struct *mm, struct vm_area_struct *vma, return ret; } do_set_pte(vma, address, fault_page, pte, true, false); @@ -93676,7 +93634,7 @@ index e302ae1..c0ef712 100644 pte_unmap_unlock(pte, ptl); if (set_page_dirty(fault_page)) -@@ -3854,6 +4064,12 @@ static int handle_pte_fault(struct mm_struct *mm, +@@ -3854,6 +4065,12 @@ static int handle_pte_fault(struct mm_struct *mm, if (flags & FAULT_FLAG_WRITE) flush_tlb_fix_spurious_fault(vma, address); } @@ -93689,7 +93647,7 @@ index e302ae1..c0ef712 100644 unlock: pte_unmap_unlock(pte, ptl); return 0; -@@ -3870,9 +4086,41 @@ static int __handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma, +@@ -3870,9 +4087,41 @@ static int __handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma, pmd_t *pmd; pte_t *pte; @@ -93731,7 +93689,7 @@ index e302ae1..c0ef712 100644 pgd = pgd_offset(mm, address); pud = pud_alloc(mm, 
pgd, address); if (!pud) -@@ -4000,6 +4248,23 @@ int __pud_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address) +@@ -4000,6 +4249,23 @@ int __pud_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address) spin_unlock(&mm->page_table_lock); return 0; } @@ -93755,7 +93713,7 @@ index e302ae1..c0ef712 100644 #endif /* __PAGETABLE_PUD_FOLDED */ #ifndef __PAGETABLE_PMD_FOLDED -@@ -4030,6 +4295,30 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address) +@@ -4030,6 +4296,30 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address) spin_unlock(&mm->page_table_lock); return 0; } @@ -93786,7 +93744,7 @@ index e302ae1..c0ef712 100644 #endif /* __PAGETABLE_PMD_FOLDED */ #if !defined(__HAVE_ARCH_GATE_AREA) -@@ -4043,7 +4332,7 @@ static int __init gate_vma_init(void) +@@ -4043,7 +4333,7 @@ static int __init gate_vma_init(void) gate_vma.vm_start = FIXADDR_USER_START; gate_vma.vm_end = FIXADDR_USER_END; gate_vma.vm_flags = VM_READ | VM_MAYREAD | VM_EXEC | VM_MAYEXEC; @@ -93795,7 +93753,7 @@ index e302ae1..c0ef712 100644 return 0; } -@@ -4177,8 +4466,8 @@ out: +@@ -4177,8 +4467,8 @@ out: return ret; } @@ -93806,7 +93764,7 @@ index e302ae1..c0ef712 100644 { resource_size_t phys_addr; unsigned long prot = 0; -@@ -4204,8 +4493,8 @@ EXPORT_SYMBOL_GPL(generic_access_phys); +@@ -4204,8 +4494,8 @@ EXPORT_SYMBOL_GPL(generic_access_phys); * Access another process' address space as given in mm. If non-NULL, use the * given task for page fault accounting. 
*/ @@ -93817,7 +93775,7 @@ index e302ae1..c0ef712 100644 { struct vm_area_struct *vma; void *old_buf = buf; -@@ -4213,7 +4502,7 @@ static int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, +@@ -4213,7 +4503,7 @@ static int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, down_read(&mm->mmap_sem); /* ignore errors, just check how much was successfully transferred */ while (len) { @@ -93826,7 +93784,7 @@ index e302ae1..c0ef712 100644 void *maddr; struct page *page = NULL; -@@ -4272,8 +4561,8 @@ static int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, +@@ -4272,8 +4562,8 @@ static int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm, * * The caller must hold a reference on @mm. */ @@ -93837,7 +93795,7 @@ index e302ae1..c0ef712 100644 { return __access_remote_vm(NULL, mm, addr, buf, len, write); } -@@ -4283,11 +4572,11 @@ int access_remote_vm(struct mm_struct *mm, unsigned long addr, +@@ -4283,11 +4573,11 @@ int access_remote_vm(struct mm_struct *mm, unsigned long addr, * Source/target buffer must be kernel space, * Do not walk the page table directly, use get_user_pages */ @@ -93853,7 +93811,7 @@ index e302ae1..c0ef712 100644 mm = get_task_mm(tsk); if (!mm) diff --git a/mm/mempolicy.c b/mm/mempolicy.c -index 35f9f91..bed4575 100644 +index 6b65d10..e6f415a 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -747,6 +747,10 @@ static int mbind_range(struct mm_struct *mm, unsigned long start, @@ -95970,7 +95928,7 @@ index 14d1e28..3777962 100644 /* diff --git a/mm/shmem.c b/mm/shmem.c -index a2801ba..b8651e6 100644 +index a2801ba..1e82984 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -33,7 +33,7 @@ @@ -95998,19 +95956,74 @@ index a2801ba..b8651e6 100644 + * a time): we would prefer not to enlarge the shmem inode just for that. 
*/ struct shmem_falloc { -+ int mode; /* FALLOC_FL mode currently operating */ ++ wait_queue_head_t *waitq; /* faults into hole wait for punch to end */ pgoff_t start; /* start of range currently being fallocated */ pgoff_t next; /* the next page offset to be fallocated */ pgoff_t nr_falloced; /* how many new pages have been fallocated */ -@@ -759,6 +760,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc) +@@ -467,23 +468,20 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + return; + + index = start; +- for ( ; ; ) { ++ while (index < end) { + cond_resched(); + + pvec.nr = find_get_entries(mapping, index, + min(end - index, (pgoff_t)PAGEVEC_SIZE), + pvec.pages, indices); + if (!pvec.nr) { +- if (index == start || unfalloc) ++ /* If all gone or hole-punch or unfalloc, we're done */ ++ if (index == start || end != -1) + break; ++ /* But if truncating, restart to make sure all gone */ + index = start; + continue; + } +- if ((index == start || unfalloc) && indices[0] >= end) { +- pagevec_remove_exceptionals(&pvec); +- pagevec_release(&pvec); +- break; +- } + mem_cgroup_uncharge_start(); + for (i = 0; i < pagevec_count(&pvec); i++) { + struct page *page = pvec.pages[i]; +@@ -495,8 +493,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + if (radix_tree_exceptional_entry(page)) { + if (unfalloc) + continue; +- nr_swaps_freed += !shmem_free_swap(mapping, +- index, page); ++ if (shmem_free_swap(mapping, index, page)) { ++ /* Swap was replaced by page: retry */ ++ index--; ++ break; ++ } ++ nr_swaps_freed++; + continue; + } + +@@ -505,6 +507,11 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + if (page->mapping == mapping) { + VM_BUG_ON_PAGE(PageWriteback(page), page); + truncate_inode_page(mapping, page); ++ } else { ++ /* Page was replaced by swap: retry */ ++ unlock_page(page); ++ index--; ++ break; + } + } + unlock_page(page); +@@ -759,6 
+766,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc) spin_lock(&inode->i_lock); shmem_falloc = inode->i_private; if (shmem_falloc && -+ !shmem_falloc->mode && ++ !shmem_falloc->waitq && index >= shmem_falloc->start && index < shmem_falloc->next) shmem_falloc->nr_unswapped++; -@@ -1233,6 +1235,43 @@ static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) +@@ -1233,6 +1241,64 @@ static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) int error; int ret = VM_FAULT_LOCKED; @@ -96018,71 +96031,98 @@ index a2801ba..b8651e6 100644 + * Trinity finds that probing a hole which tmpfs is punching can + * prevent the hole-punch from ever completing: which in turn + * locks writers out with its hold on i_mutex. So refrain from -+ * faulting pages into the hole while it's being punched, and -+ * wait on i_mutex to be released if vmf->flags permits, ++ * faulting pages into the hole while it's being punched. Although ++ * shmem_undo_range() does remove the additions, it may be unable to ++ * keep up, as each new page needs its own unmap_mapping_range() call, ++ * and the i_mmap tree grows ever slower to scan if new vmas are added. ++ * ++ * It does not matter if we sometimes reach this check just before the ++ * hole-punch begins, so that one fault then races with the punch: ++ * we just need to make racing faults a rare case. ++ * ++ * The implementation below would be much simpler if we just used a ++ * standard mutex or completion: but we cannot take i_mutex in fault, ++ * and bloating every shmem inode for this unlikely case would be sad. 
+ */ + if (unlikely(inode->i_private)) { + struct shmem_falloc *shmem_falloc; ++ + spin_lock(&inode->i_lock); + shmem_falloc = inode->i_private; -+ if (!shmem_falloc || -+ shmem_falloc->mode != FALLOC_FL_PUNCH_HOLE || -+ vmf->pgoff < shmem_falloc->start || -+ vmf->pgoff >= shmem_falloc->next) -+ shmem_falloc = NULL; -+ spin_unlock(&inode->i_lock); -+ /* -+ * i_lock has protected us from taking shmem_falloc seriously -+ * once return from shmem_fallocate() went back up that stack. -+ * i_lock does not serialize with i_mutex at all, but it does -+ * not matter if sometimes we wait unnecessarily, or sometimes -+ * miss out on waiting: we just need to make those cases rare. -+ */ -+ if (shmem_falloc) { ++ if (shmem_falloc && ++ shmem_falloc->waitq && ++ vmf->pgoff >= shmem_falloc->start && ++ vmf->pgoff < shmem_falloc->next) { ++ wait_queue_head_t *shmem_falloc_waitq; ++ DEFINE_WAIT(shmem_fault_wait); ++ ++ ret = VM_FAULT_NOPAGE; + if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) && + !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) { ++ /* It's polite to up mmap_sem if we can */ + up_read(&vma->vm_mm->mmap_sem); -+ mutex_lock(&inode->i_mutex); -+ mutex_unlock(&inode->i_mutex); -+ return VM_FAULT_RETRY; ++ ret = VM_FAULT_RETRY; + } -+ /* cond_resched? Leave that to GUP or return to user */ -+ return VM_FAULT_NOPAGE; ++ ++ shmem_falloc_waitq = shmem_falloc->waitq; ++ prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait, ++ TASK_UNINTERRUPTIBLE); ++ spin_unlock(&inode->i_lock); ++ schedule(); ++ ++ /* ++ * shmem_falloc_waitq points into the shmem_fallocate() ++ * stack of the hole-punching task: shmem_falloc_waitq ++ * is usually invalid by the time we reach here, but ++ * finish_wait() does not dereference it in that case; ++ * though i_lock needed lest racing with wake_up_all(). 
++ */ ++ spin_lock(&inode->i_lock); ++ finish_wait(shmem_falloc_waitq, &shmem_fault_wait); ++ spin_unlock(&inode->i_lock); ++ return ret; + } ++ spin_unlock(&inode->i_lock); + } + error = shmem_getpage(inode, vmf->pgoff, &vmf->page, SGP_CACHE, &ret); if (error) return ((error == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS); -@@ -1733,18 +1772,26 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, - - mutex_lock(&inode->i_mutex); - -+ shmem_falloc.mode = mode & ~FALLOC_FL_KEEP_SIZE; -+ - if (mode & FALLOC_FL_PUNCH_HOLE) { +@@ -1737,12 +1803,25 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, struct address_space *mapping = file->f_mapping; loff_t unmap_start = round_up(offset, PAGE_SIZE); loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1; - ++ DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq); ++ ++ shmem_falloc.waitq = &shmem_falloc_waitq; + shmem_falloc.start = unmap_start >> PAGE_SHIFT; + shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT; + spin_lock(&inode->i_lock); + inode->i_private = &shmem_falloc; + spin_unlock(&inode->i_lock); -+ + if ((u64)unmap_end > (u64)unmap_start) unmap_mapping_range(mapping, unmap_start, 1 + unmap_end - unmap_start, 0); shmem_truncate_range(inode, offset, offset + len - 1); /* No need to unmap again: hole-punching leaves COWed pages */ ++ ++ spin_lock(&inode->i_lock); ++ inode->i_private = NULL; ++ wake_up_all(&shmem_falloc_waitq); ++ spin_unlock(&inode->i_lock); error = 0; -- goto out; -+ goto undone; + goto out; + } +@@ -1760,6 +1839,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, + goto out; } - /* We need to check rlimit even when FALLOC_FL_KEEP_SIZE */ -@@ -2138,6 +2185,11 @@ static const struct xattr_handler *shmem_xattr_handlers[] = { ++ shmem_falloc.waitq = NULL; + shmem_falloc.start = start; + shmem_falloc.next = start; + shmem_falloc.nr_falloced = 0; +@@ -2138,6 +2218,11 @@ static const struct xattr_handler *shmem_xattr_handlers[] = { 
static int shmem_xattr_validate(const char *name) { struct { const char *prefix; size_t len; } arr[] = { @@ -96094,7 +96134,7 @@ index a2801ba..b8651e6 100644 { XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN }, { XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN } }; -@@ -2193,6 +2245,15 @@ static int shmem_setxattr(struct dentry *dentry, const char *name, +@@ -2193,6 +2278,15 @@ static int shmem_setxattr(struct dentry *dentry, const char *name, if (err) return err; @@ -96110,7 +96150,7 @@ index a2801ba..b8651e6 100644 return simple_xattr_set(&info->xattrs, name, value, size, flags); } -@@ -2505,8 +2566,7 @@ int shmem_fill_super(struct super_block *sb, void *data, int silent) +@@ -2505,8 +2599,7 @@ int shmem_fill_super(struct super_block *sb, void *data, int silent) int err = -ENOMEM; /* Round up to L1_CACHE_BYTES to resist false sharing */ @@ -99302,6 +99342,21 @@ index 5325b54..a0d4d69 100644 return -EFAULT; *lenp = len; +diff --git a/net/dns_resolver/dns_query.c b/net/dns_resolver/dns_query.c +index e7b6d53..f005cc7 100644 +--- a/net/dns_resolver/dns_query.c ++++ b/net/dns_resolver/dns_query.c +@@ -149,7 +149,9 @@ int dns_query(const char *type, const char *name, size_t namelen, + if (!*_result) + goto put; + +- memcpy(*_result, upayload->data, len + 1); ++ memcpy(*_result, upayload->data, len); ++ (*_result)[len] = '\0'; ++ + if (_expiry) + *_expiry = rkey->expiry; + diff --git a/net/ieee802154/reassembly.c b/net/ieee802154/reassembly.c index ef2d543..5b9b73f 100644 --- a/net/ieee802154/reassembly.c @@ -103055,6 +103110,18 @@ index e1543b0..7ce8bd0 100644 linkwatch_fire_event(dev); } } +diff --git a/net/sctp/associola.c b/net/sctp/associola.c +index 0b99998..a6953b0 100644 +--- a/net/sctp/associola.c ++++ b/net/sctp/associola.c +@@ -1151,6 +1151,7 @@ void sctp_assoc_update(struct sctp_association *asoc, + asoc->c = new->c; + asoc->peer.rwnd = new->peer.rwnd; + asoc->peer.sack_needed = new->peer.sack_needed; ++ asoc->peer.auth_capable = new->peer.auth_capable; + 
asoc->peer.i = new->peer.i; + sctp_tsnmap_init(&asoc->peer.tsn_map, SCTP_TSN_MAP_INITIAL, + asoc->peer.i.initial_tsn, GFP_ATOMIC); diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c index 2b1738e..a9d0fc9 100644 --- a/net/sctp/ipv6.c @@ -103285,6 +103352,26 @@ index c82fdc1..4ca1f95 100644 return 0; } +diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c +index 85c6465..879f3cd 100644 +--- a/net/sctp/ulpevent.c ++++ b/net/sctp/ulpevent.c +@@ -411,6 +411,7 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error( + * sre_type: + * It should be SCTP_REMOTE_ERROR. + */ ++ memset(sre, 0, sizeof(*sre)); + sre->sre_type = SCTP_REMOTE_ERROR; + + /* +@@ -916,6 +917,7 @@ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, + * For recvmsg() the SCTP stack places the message's stream number in + * this value. + */ ++ memset(&sinfo, 0, sizeof(sinfo)); + sinfo.sinfo_stream = event->stream; + /* sinfo_ssn: 16 bits (unsigned integer) + * diff --git a/net/socket.c b/net/socket.c index abf56b2..b8998bc 100644 --- a/net/socket.c diff --git a/3.15.5/4425_grsec_remove_EI_PAX.patch b/3.15.6/4425_grsec_remove_EI_PAX.patch similarity index 100% rename from 3.15.5/4425_grsec_remove_EI_PAX.patch rename to 3.15.6/4425_grsec_remove_EI_PAX.patch diff --git a/3.15.5/4427_force_XATTR_PAX_tmpfs.patch b/3.15.6/4427_force_XATTR_PAX_tmpfs.patch similarity index 100% rename from 3.15.5/4427_force_XATTR_PAX_tmpfs.patch rename to 3.15.6/4427_force_XATTR_PAX_tmpfs.patch diff --git a/3.15.5/4430_grsec-remove-localversion-grsec.patch b/3.15.6/4430_grsec-remove-localversion-grsec.patch similarity index 100% rename from 3.15.5/4430_grsec-remove-localversion-grsec.patch rename to 3.15.6/4430_grsec-remove-localversion-grsec.patch diff --git a/3.15.5/4435_grsec-mute-warnings.patch b/3.15.6/4435_grsec-mute-warnings.patch similarity index 100% rename from 3.15.5/4435_grsec-mute-warnings.patch rename to 3.15.6/4435_grsec-mute-warnings.patch diff --git 
a/3.15.5/4440_grsec-remove-protected-paths.patch b/3.15.6/4440_grsec-remove-protected-paths.patch similarity index 100% rename from 3.15.5/4440_grsec-remove-protected-paths.patch rename to 3.15.6/4440_grsec-remove-protected-paths.patch diff --git a/3.15.5/4450_grsec-kconfig-default-gids.patch b/3.15.6/4450_grsec-kconfig-default-gids.patch similarity index 100% rename from 3.15.5/4450_grsec-kconfig-default-gids.patch rename to 3.15.6/4450_grsec-kconfig-default-gids.patch diff --git a/3.15.5/4465_selinux-avc_audit-log-curr_ip.patch b/3.15.6/4465_selinux-avc_audit-log-curr_ip.patch similarity index 100% rename from 3.15.5/4465_selinux-avc_audit-log-curr_ip.patch rename to 3.15.6/4465_selinux-avc_audit-log-curr_ip.patch diff --git a/3.15.5/4470_disable-compat_vdso.patch b/3.15.6/4470_disable-compat_vdso.patch similarity index 100% rename from 3.15.5/4470_disable-compat_vdso.patch rename to 3.15.6/4470_disable-compat_vdso.patch diff --git a/3.15.5/4475_emutramp_default_on.patch b/3.15.6/4475_emutramp_default_on.patch similarity index 100% rename from 3.15.5/4475_emutramp_default_on.patch rename to 3.15.6/4475_emutramp_default_on.patch diff --git a/3.2.61/0000_README b/3.2.61/0000_README index c0718d5..be52f3a 100644 --- a/3.2.61/0000_README +++ b/3.2.61/0000_README @@ -162,7 +162,7 @@ Patch: 1060_linux-3.2.61.patch From: http://www.kernel.org Desc: Linux 3.2.61 -Patch: 4420_grsecurity-3.0-3.2.61-201407170636.patch +Patch: 4420_grsecurity-3.0-3.2.61-201407232156.patch From: http://www.grsecurity.net Desc: hardened-sources base patch from upstream grsecurity diff --git a/3.2.61/4420_grsecurity-3.0-3.2.61-201407170636.patch b/3.2.61/4420_grsecurity-3.0-3.2.61-201407232156.patch similarity index 99% rename from 3.2.61/4420_grsecurity-3.0-3.2.61-201407170636.patch rename to 3.2.61/4420_grsecurity-3.0-3.2.61-201407232156.patch index d53a91b..c484237 100644 --- a/3.2.61/4420_grsecurity-3.0-3.2.61-201407170636.patch +++ b/3.2.61/4420_grsecurity-3.0-3.2.61-201407232156.patch @@ 
-11106,7 +11106,7 @@ index 7bcf3fc..560ff4c 100644 + pax_force_retaddr ret diff --git a/arch/x86/ia32/ia32_aout.c b/arch/x86/ia32/ia32_aout.c -index fd84387..887aa7e 100644 +index fd84387..887aa7ef 100644 --- a/arch/x86/ia32/ia32_aout.c +++ b/arch/x86/ia32/ia32_aout.c @@ -162,6 +162,8 @@ static int aout_core_dump(long signr, struct pt_regs *regs, struct file *file, @@ -28843,7 +28843,7 @@ index a4cca06..9e00106 100644 (unsigned long)(&__init_begin), (unsigned long)(&__init_end)); diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c -index 29f7c6d..5122941 100644 +index 29f7c6d9..5122941 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -74,36 +74,6 @@ static __init void *alloc_low_page(void) @@ -34913,7 +34913,7 @@ index da3cfee..a5a6606 100644 *ppos = i; diff --git a/drivers/char/random.c b/drivers/char/random.c -index c244f0e..8b3452f 100644 +index c244f0e..59b5e6c 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -255,10 +255,8 @@ @@ -35363,6 +35363,8 @@ index c244f0e..8b3452f 100644 retry: entropy_count = orig = ACCESS_ONCE(r->entropy_count); - entropy_count += nbits; +- if (entropy_count < 0) { +- DEBUG_ENT("negative entropy/overflow\n"); + if (nfrac < 0) { + /* Debit */ + entropy_count += nfrac; @@ -35402,8 +35404,7 @@ index c244f0e..8b3452f 100644 + } while (unlikely(entropy_count < pool_size-2 && pnfrac)); + } + - if (entropy_count < 0) { -- DEBUG_ENT("negative entropy/overflow\n"); ++ if (unlikely(entropy_count < 0)) { + pr_warn("random: negative entropy/overflow: pool %s count %d\n", + r->name, entropy_count); + WARN_ON(1); @@ -35651,7 +35652,7 @@ index c244f0e..8b3452f 100644 } #endif -@@ -835,104 +915,131 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf, +@@ -835,104 +915,141 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf, * from the primary pool to the secondary extraction pool. We make * sure we pull enough for a 'catastrophic reseed'. 
*/ @@ -35746,7 +35747,7 @@ index c244f0e..8b3452f 100644 { - unsigned long flags; + int entropy_count, orig; -+ size_t ibytes; ++ size_t ibytes, nfrac; - /* Hold lock while accounting */ - spin_lock_irqsave(&r->lock, flags); @@ -35781,18 +35782,27 @@ index c244f0e..8b3452f 100644 - if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) - goto retry; - } -- -- if (entropy_count < random_write_wakeup_thresh) { -- wake_up_interruptible(&random_write_wait); -- kill_fasync(&fasync, SIGIO, POLL_OUT); -- } + if ((have_bytes -= reserved) < 0) + have_bytes = 0; + ibytes = min_t(size_t, ibytes, have_bytes); - } ++ } + if (ibytes < min) + ibytes = 0; -+ if ((entropy_count -= ibytes << (ENTROPY_SHIFT + 3)) < 0) + +- if (entropy_count < random_write_wakeup_thresh) { +- wake_up_interruptible(&random_write_wait); +- kill_fasync(&fasync, SIGIO, POLL_OUT); +- } ++ if (unlikely(entropy_count < 0)) { ++ pr_warn("random: negative entropy count: pool %s count %d\n", ++ r->name, entropy_count); ++ WARN_ON(1); ++ entropy_count = 0; + } ++ nfrac = ibytes << (ENTROPY_SHIFT + 3); ++ if ((size_t) entropy_count > nfrac) ++ entropy_count -= nfrac; ++ else + entropy_count = 0; - DEBUG_ENT("debiting %d entropy credits from %s%s\n", @@ -35847,7 +35857,7 @@ index c244f0e..8b3452f 100644 spin_lock_irqsave(&r->lock, flags); for (i = 0; i < r->poolinfo->poolwords; i += 16) sha_transform(hash.w, (__u8 *)(r->pool + i), workspace); -@@ -966,27 +1073,43 @@ static void extract_buf(struct entropy_store *r, __u8 *out) +@@ -966,27 +1083,43 @@ static void extract_buf(struct entropy_store *r, __u8 *out) hash.w[1] ^= hash.w[4]; hash.w[2] ^= rol32(hash.w[2], 16); @@ -35902,7 +35912,7 @@ index c244f0e..8b3452f 100644 xfer_secondary_pool(r, nbytes); nbytes = account(r, nbytes, min, reserved); -@@ -994,8 +1117,6 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf, +@@ -994,8 +1127,6 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf, extract_buf(r, tmp); if (fips_enabled) { 
@@ -35911,7 +35921,7 @@ index c244f0e..8b3452f 100644 spin_lock_irqsave(&r->lock, flags); if (!memcmp(tmp, r->last_data, EXTRACT_SIZE)) panic("Hardware RNG duplicated output!\n"); -@@ -1015,12 +1136,17 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf, +@@ -1015,12 +1146,17 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf, return ret; } @@ -35929,7 +35939,7 @@ index c244f0e..8b3452f 100644 xfer_secondary_pool(r, nbytes); nbytes = account(r, nbytes, 0, 0); -@@ -1036,7 +1162,7 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf, +@@ -1036,7 +1172,7 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf, extract_buf(r, tmp); i = min_t(int, nbytes, EXTRACT_SIZE); @@ -35938,7 +35948,7 @@ index c244f0e..8b3452f 100644 ret = -EFAULT; break; } -@@ -1055,11 +1181,20 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf, +@@ -1055,11 +1191,20 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf, /* * This function is the exported kernel interface. 
It returns some * number of good random numbers, suitable for key generation, seeding @@ -35961,7 +35971,7 @@ index c244f0e..8b3452f 100644 extract_entropy(&nonblocking_pool, buf, nbytes, 0, 0); } EXPORT_SYMBOL(get_random_bytes); -@@ -1078,6 +1213,7 @@ void get_random_bytes_arch(void *buf, int nbytes) +@@ -1078,6 +1223,7 @@ void get_random_bytes_arch(void *buf, int nbytes) { char *p = buf; @@ -35969,7 +35979,7 @@ index c244f0e..8b3452f 100644 while (nbytes) { unsigned long v; int chunk = min(nbytes, (int)sizeof(unsigned long)); -@@ -1111,12 +1247,11 @@ static void init_std_data(struct entropy_store *r) +@@ -1111,12 +1257,11 @@ static void init_std_data(struct entropy_store *r) ktime_t now = ktime_get_real(); unsigned long rv; @@ -35985,7 +35995,7 @@ index c244f0e..8b3452f 100644 mix_pool_bytes(r, &rv, sizeof(rv), NULL); } mix_pool_bytes(r, utsname(), sizeof(*(utsname())), NULL); -@@ -1139,25 +1274,7 @@ static int rand_initialize(void) +@@ -1139,25 +1284,7 @@ static int rand_initialize(void) init_std_data(&nonblocking_pool); return 0; } @@ -36012,7 +36022,7 @@ index c244f0e..8b3452f 100644 #ifdef CONFIG_BLOCK void rand_initialize_disk(struct gendisk *disk) -@@ -1169,71 +1286,59 @@ void rand_initialize_disk(struct gendisk *disk) +@@ -1169,71 +1296,60 @@ void rand_initialize_disk(struct gendisk *disk) * source. 
*/ state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL); @@ -36112,6 +36122,7 @@ index c244f0e..8b3452f 100644 + "with %d bits of entropy available\n", + current->comm, nonblocking_pool.entropy_total); + ++ nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3)); + ret = extract_entropy_user(&nonblocking_pool, buf, nbytes); + + trace_urandom_read(8 * nbytes, ENTROPY_BITS(&nonblocking_pool), @@ -36120,7 +36131,7 @@ index c244f0e..8b3452f 100644 } static unsigned int -@@ -1244,9 +1349,9 @@ random_poll(struct file *file, poll_table * wait) +@@ -1244,9 +1360,9 @@ random_poll(struct file *file, poll_table * wait) poll_wait(file, &random_read_wait, wait); poll_wait(file, &random_write_wait, wait); mask = 0; @@ -36132,7 +36143,7 @@ index c244f0e..8b3452f 100644 mask |= POLLOUT | POLLWRNORM; return mask; } -@@ -1297,7 +1402,8 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) +@@ -1297,7 +1413,8 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) switch (cmd) { case RNDGETENTCNT: /* inherently racy, no point locking */ @@ -36142,7 +36153,7 @@ index c244f0e..8b3452f 100644 return -EFAULT; return 0; case RNDADDTOENTCNT: -@@ -1305,7 +1411,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) +@@ -1305,7 +1422,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) return -EPERM; if (get_user(ent_count, p)) return -EFAULT; @@ -36151,7 +36162,7 @@ index c244f0e..8b3452f 100644 return 0; case RNDADDENTROPY: if (!capable(CAP_SYS_ADMIN)) -@@ -1320,14 +1426,19 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) +@@ -1320,14 +1437,19 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) size); if (retval < 0) return retval; @@ -36174,7 +36185,7 @@ index c244f0e..8b3452f 100644 return 0; default: return -EINVAL; -@@ -1387,23 +1498,23 @@ EXPORT_SYMBOL(generate_random_uuid); +@@ -1387,23 +1509,23 @@ 
EXPORT_SYMBOL(generate_random_uuid); #include static int min_read_thresh = 8, min_write_thresh; @@ -36205,7 +36216,7 @@ index c244f0e..8b3452f 100644 unsigned char buf[64], tmp_uuid[16], *uuid; uuid = table->data; -@@ -1427,8 +1538,26 @@ static int proc_do_uuid(ctl_table *table, int write, +@@ -1427,8 +1549,26 @@ static int proc_do_uuid(ctl_table *table, int write, return proc_dostring(&fake_table, write, buffer, lenp, ppos); } @@ -36233,7 +36244,7 @@ index c244f0e..8b3452f 100644 { .procname = "poolsize", .data = &sysctl_poolsize, -@@ -1440,12 +1569,12 @@ ctl_table random_table[] = { +@@ -1440,12 +1580,12 @@ ctl_table random_table[] = { .procname = "entropy_avail", .maxlen = sizeof(int), .mode = 0444, @@ -36248,7 +36259,7 @@ index c244f0e..8b3452f 100644 .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, -@@ -1454,7 +1583,7 @@ ctl_table random_table[] = { +@@ -1454,7 +1594,7 @@ ctl_table random_table[] = { }, { .procname = "write_wakeup_threshold", @@ -36257,7 +36268,7 @@ index c244f0e..8b3452f 100644 .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, -@@ -1462,6 +1591,13 @@ ctl_table random_table[] = { +@@ -1462,6 +1602,13 @@ ctl_table random_table[] = { .extra2 = &max_write_thresh, }, { @@ -36271,7 +36282,7 @@ index c244f0e..8b3452f 100644 .procname = "boot_id", .data = &sysctl_bootid, .maxlen = 16, -@@ -1492,7 +1628,7 @@ int random_int_secret_init(void) +@@ -1492,7 +1639,7 @@ int random_int_secret_init(void) * value is not cryptographically secure but for several uses the cost of * depleting entropy is too high */ @@ -36280,7 +36291,7 @@ index c244f0e..8b3452f 100644 unsigned int get_random_int(void) { __u32 *hash; -@@ -1510,6 +1646,7 @@ unsigned int get_random_int(void) +@@ -1510,6 +1657,7 @@ unsigned int get_random_int(void) return ret; } @@ -44899,6 +44910,24 @@ index 5920c99..ff2e4a5 100644 }; static void +diff --git a/drivers/net/wan/x25_asy.c b/drivers/net/wan/x25_asy.c +index 8a10bb7..7560422 100644 +--- 
a/drivers/net/wan/x25_asy.c ++++ b/drivers/net/wan/x25_asy.c +@@ -123,8 +123,12 @@ static int x25_asy_change_mtu(struct net_device *dev, int newmtu) + { + struct x25_asy *sl = netdev_priv(dev); + unsigned char *xbuff, *rbuff; +- int len = 2 * newmtu; ++ int len; + ++ if (newmtu > 65534) ++ return -EINVAL; ++ ++ len = 2 * newmtu; + xbuff = kmalloc(len + 4, GFP_ATOMIC); + rbuff = kmalloc(len + 4, GFP_ATOMIC); + diff --git a/drivers/net/wan/z85230.c b/drivers/net/wan/z85230.c index 0e57690..ad698bb 100644 --- a/drivers/net/wan/z85230.c @@ -100930,6 +100959,21 @@ index d50a13c..1f612ff 100644 return -EFAULT; *lenp = len; +diff --git a/net/dns_resolver/dns_query.c b/net/dns_resolver/dns_query.c +index c32be29..2022b46 100644 +--- a/net/dns_resolver/dns_query.c ++++ b/net/dns_resolver/dns_query.c +@@ -150,7 +150,9 @@ int dns_query(const char *type, const char *name, size_t namelen, + if (!*_result) + goto put; + +- memcpy(*_result, upayload->data, len + 1); ++ memcpy(*_result, upayload->data, len); ++ (*_result)[len] = '\0'; ++ + if (_expiry) + *_expiry = rkey->expiry; + diff --git a/net/econet/Kconfig b/net/econet/Kconfig index 39a2d29..f39c0fe 100644 --- a/net/econet/Kconfig @@ -105043,6 +105087,18 @@ index 7635107..4670276 100644 _proto("Tx RESPONSE %%%u", ntohl(hdr->serial)); ret = kernel_sendmsg(conn->trans->local->socket, &msg, iov, 3, len); +diff --git a/net/sctp/associola.c b/net/sctp/associola.c +index 25b207b..da54d29 100644 +--- a/net/sctp/associola.c ++++ b/net/sctp/associola.c +@@ -1188,6 +1188,7 @@ void sctp_assoc_update(struct sctp_association *asoc, + asoc->c = new->c; + asoc->peer.rwnd = new->peer.rwnd; + asoc->peer.sack_needed = new->peer.sack_needed; ++ asoc->peer.auth_capable = new->peer.auth_capable; + asoc->peer.i = new->peer.i; + sctp_tsnmap_init(&asoc->peer.tsn_map, SCTP_TSN_MAP_INITIAL, + asoc->peer.i.initial_tsn, GFP_ATOMIC); diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c index 0b6a391..febcef2 100644 --- a/net/sctp/ipv6.c @@ -105301,6 
+105357,26 @@ index 8da4481..d02565e 100644 tp->srtt = tp->srtt - (tp->srtt >> sctp_rto_alpha) + (rtt >> sctp_rto_alpha); } else { +diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c +index 8a84017..d4faa70 100644 +--- a/net/sctp/ulpevent.c ++++ b/net/sctp/ulpevent.c +@@ -418,6 +418,7 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error( + * sre_type: + * It should be SCTP_REMOTE_ERROR. + */ ++ memset(sre, 0, sizeof(*sre)); + sre->sre_type = SCTP_REMOTE_ERROR; + + /* +@@ -921,6 +922,7 @@ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, + * For recvmsg() the SCTP stack places the message's stream number in + * this value. + */ ++ memset(&sinfo, 0, sizeof(sinfo)); + sinfo.sinfo_stream = event->stream; + /* sinfo_ssn: 16 bits (unsigned integer) + * diff --git a/net/socket.c b/net/socket.c index 3faa358..3d43f20 100644 --- a/net/socket.c