From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) by finch.gentoo.org (Postfix) with ESMTP id 0712413877A for ; Mon, 28 Jul 2014 19:17:21 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id E6887E0DB9; Mon, 28 Jul 2014 19:17:19 +0000 (UTC) Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 6A38EE0DB9 for ; Mon, 28 Jul 2014 19:17:19 +0000 (UTC) Received: from spoonbill.gentoo.org (spoonbill.gentoo.org [81.93.255.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 08A943400FF for ; Mon, 28 Jul 2014 19:17:18 +0000 (UTC) Received: from localhost.localdomain (localhost [127.0.0.1]) by spoonbill.gentoo.org (Postfix) with ESMTP id E56B618BF0 for ; Mon, 28 Jul 2014 19:17:15 +0000 (UTC) From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1406575025.0a8452bd52ddbe1bdbbbb9180eec4d67f595d3d2.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:3.14 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1013_linux-3.14.14.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: 0a8452bd52ddbe1bdbbbb9180eec4d67f595d3d2 X-VCS-Branch: 3.14 Date: Mon, 28 Jul 2014 19:17:15 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Archives-Salt: 960e732e-5124-4566-b8fa-61ed1a876cdb X-Archives-Hash: 51cd083d1dbb94fded4742c943ebc029 commit: 0a8452bd52ddbe1bdbbbb9180eec4d67f595d3d2 Author: Mike Pagano gentoo 
org> AuthorDate: Mon Jul 28 19:17:05 2014 +0000 Commit: Mike Pagano gentoo org> CommitDate: Mon Jul 28 19:17:05 2014 +0000 URL: http://git.overlays.gentoo.org/gitweb/?p=proj/linux-patches.git;a=commit;h=0a8452bd Linux patch 3.14.14 --- 0000_README | 4 + 1013_linux-3.14.14.patch | 2814 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 2818 insertions(+) diff --git a/0000_README b/0000_README index 4e6bfc3..d44b3d0 100644 --- a/0000_README +++ b/0000_README @@ -94,6 +94,10 @@ Patch: 1012_linux-3.14.13.patch From: http://www.kernel.org Desc: Linux 3.14.13 +Patch: 1013_linux-3.14.14.patch +From: http://www.kernel.org +Desc: Linux 3.14.14 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1013_linux-3.14.14.patch b/1013_linux-3.14.14.patch new file mode 100644 index 0000000..35b4ef6 --- /dev/null +++ b/1013_linux-3.14.14.patch @@ -0,0 +1,2814 @@ +diff --git a/Makefile b/Makefile +index 7a2981c972ae..230c7f694ab7 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 3 + PATCHLEVEL = 14 +-SUBLEVEL = 13 ++SUBLEVEL = 14 + EXTRAVERSION = + NAME = Remembering Coco + +diff --git a/arch/arc/include/uapi/asm/ptrace.h b/arch/arc/include/uapi/asm/ptrace.h +index 2618cc13ba75..76a7739aab1c 100644 +--- a/arch/arc/include/uapi/asm/ptrace.h ++++ b/arch/arc/include/uapi/asm/ptrace.h +@@ -11,6 +11,7 @@ + #ifndef _UAPI__ASM_ARC_PTRACE_H + #define _UAPI__ASM_ARC_PTRACE_H + ++#define PTRACE_GET_THREAD_AREA 25 + + #ifndef __ASSEMBLY__ + /* +diff --git a/arch/arc/kernel/ptrace.c b/arch/arc/kernel/ptrace.c +index 5d76706139dd..13b3ffb27a38 100644 +--- a/arch/arc/kernel/ptrace.c ++++ b/arch/arc/kernel/ptrace.c +@@ -146,6 +146,10 @@ long arch_ptrace(struct task_struct *child, long request, + pr_debug("REQ=%ld: ADDR =0x%lx, DATA=0x%lx)\n", request, addr, data); + + switch (request) { ++ case PTRACE_GET_THREAD_AREA: ++ ret = put_user(task_thread_info(child)->thr_ptr, 
++ (unsigned long __user *)data); ++ break; + default: + ret = ptrace_request(child, request, addr, data); + break; +diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig +index 44298add8a48..4733d327cfb1 100644 +--- a/arch/arm/Kconfig ++++ b/arch/arm/Kconfig +@@ -6,6 +6,7 @@ config ARM + select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST + select ARCH_HAVE_CUSTOM_GPIO_H + select ARCH_MIGHT_HAVE_PC_PARPORT ++ select ARCH_SUPPORTS_ATOMIC_RMW + select ARCH_USE_BUILTIN_BSWAP + select ARCH_USE_CMPXCHG_LOCKREF + select ARCH_WANT_IPC_PARSE_VERSION +diff --git a/arch/arm/boot/dts/imx25.dtsi b/arch/arm/boot/dts/imx25.dtsi +index 737ed5da8f71..de1611966d8b 100644 +--- a/arch/arm/boot/dts/imx25.dtsi ++++ b/arch/arm/boot/dts/imx25.dtsi +@@ -30,6 +30,7 @@ + spi2 = &spi3; + usb0 = &usbotg; + usb1 = &usbhost1; ++ ethernet0 = &fec; + }; + + cpus { +diff --git a/arch/arm/boot/dts/imx27.dtsi b/arch/arm/boot/dts/imx27.dtsi +index 826231eb4446..da2eb7f6a5b2 100644 +--- a/arch/arm/boot/dts/imx27.dtsi ++++ b/arch/arm/boot/dts/imx27.dtsi +@@ -30,6 +30,7 @@ + spi0 = &cspi1; + spi1 = &cspi2; + spi2 = &cspi3; ++ ethernet0 = &fec; + }; + + aitc: aitc-interrupt-controller@e0000000 { +diff --git a/arch/arm/boot/dts/imx51.dtsi b/arch/arm/boot/dts/imx51.dtsi +index 4bcdd3ad15e5..e1b601595a09 100644 +--- a/arch/arm/boot/dts/imx51.dtsi ++++ b/arch/arm/boot/dts/imx51.dtsi +@@ -27,6 +27,7 @@ + spi0 = &ecspi1; + spi1 = &ecspi2; + spi2 = &cspi; ++ ethernet0 = &fec; + }; + + tzic: tz-interrupt-controller@e0000000 { +diff --git a/arch/arm/boot/dts/imx53.dtsi b/arch/arm/boot/dts/imx53.dtsi +index dc72353de0b3..50eda500f39a 100644 +--- a/arch/arm/boot/dts/imx53.dtsi ++++ b/arch/arm/boot/dts/imx53.dtsi +@@ -33,6 +33,7 @@ + spi0 = &ecspi1; + spi1 = &ecspi2; + spi2 = &cspi; ++ ethernet0 = &fec; + }; + + cpus { +diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig +index 27bbcfc7202a..65b788410bd9 100644 +--- a/arch/arm64/Kconfig ++++ b/arch/arm64/Kconfig +@@ -2,6 +2,7 @@ config ARM64 + def_bool y + 
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE + select ARCH_USE_CMPXCHG_LOCKREF ++ select ARCH_SUPPORTS_ATOMIC_RMW + select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST + select ARCH_WANT_OPTIONAL_GPIOLIB + select ARCH_WANT_COMPAT_IPC_PARSE_VERSION +diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig +index 2156fa2d25fe..ee3c6608126a 100644 +--- a/arch/powerpc/Kconfig ++++ b/arch/powerpc/Kconfig +@@ -141,6 +141,7 @@ config PPC + select HAVE_DEBUG_STACKOVERFLOW + select HAVE_IRQ_EXIT_ON_IRQ_STACK + select ARCH_USE_CMPXCHG_LOCKREF if PPC64 ++ select ARCH_SUPPORTS_ATOMIC_RMW + + config GENERIC_CSUM + def_bool CPU_LITTLE_ENDIAN +diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig +index 7d8b7e94b93b..b398c68b2713 100644 +--- a/arch/sparc/Kconfig ++++ b/arch/sparc/Kconfig +@@ -77,6 +77,7 @@ config SPARC64 + select ARCH_HAVE_NMI_SAFE_CMPXCHG + select HAVE_C_RECORDMCOUNT + select NO_BOOTMEM ++ select ARCH_SUPPORTS_ATOMIC_RMW + + config ARCH_DEFCONFIG + string +diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig +index 1981dd9b8a11..7324107acb40 100644 +--- a/arch/x86/Kconfig ++++ b/arch/x86/Kconfig +@@ -127,6 +127,7 @@ config X86 + select HAVE_DEBUG_STACKOVERFLOW + select HAVE_IRQ_EXIT_ON_IRQ_STACK if X86_64 + select HAVE_CC_STACKPROTECTOR ++ select ARCH_SUPPORTS_ATOMIC_RMW + + config INSTRUCTION_DECODER + def_bool y +diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c +index aa333d966886..1340ebfcb467 100644 +--- a/arch/x86/kernel/cpu/perf_event_intel.c ++++ b/arch/x86/kernel/cpu/perf_event_intel.c +@@ -1383,6 +1383,15 @@ again: + intel_pmu_lbr_read(); + + /* ++ * CondChgd bit 63 doesn't mean any overflow status. Ignore ++ * and clear the bit. 
++ */ ++ if (__test_and_clear_bit(63, (unsigned long *)&status)) { ++ if (!status) ++ goto done; ++ } ++ ++ /* + * PEBS overflow sets bit 62 in the global status register + */ + if (__test_and_clear_bit(62, (unsigned long *)&status)) { +diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c +index cfbe99f88830..e0d1d7a8354e 100644 +--- a/arch/x86/kernel/tsc.c ++++ b/arch/x86/kernel/tsc.c +@@ -921,9 +921,9 @@ static int time_cpufreq_notifier(struct notifier_block *nb, unsigned long val, + tsc_khz = cpufreq_scale(tsc_khz_ref, ref_freq, freq->new); + if (!(freq->flags & CPUFREQ_CONST_LOOPS)) + mark_tsc_unstable("cpufreq changes"); +- } + +- set_cyc2ns_scale(tsc_khz, freq->cpu); ++ set_cyc2ns_scale(tsc_khz, freq->cpu); ++ } + + return 0; + } +diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c +index f6f497450560..e36a0245f2c1 100644 +--- a/drivers/bluetooth/hci_h5.c ++++ b/drivers/bluetooth/hci_h5.c +@@ -406,6 +406,7 @@ static int h5_rx_3wire_hdr(struct hci_uart *hu, unsigned char c) + H5_HDR_PKT_TYPE(hdr) != HCI_3WIRE_LINK_PKT) { + BT_ERR("Non-link packet received in non-active state"); + h5_reset_rx(h5); ++ return 0; + } + + h5->rx_func = h5_rx_payload; +diff --git a/drivers/gpu/drm/qxl/qxl_irq.c b/drivers/gpu/drm/qxl/qxl_irq.c +index 28f84b4fce32..3485bdccf8b8 100644 +--- a/drivers/gpu/drm/qxl/qxl_irq.c ++++ b/drivers/gpu/drm/qxl/qxl_irq.c +@@ -33,6 +33,9 @@ irqreturn_t qxl_irq_handler(int irq, void *arg) + + pending = xchg(&qdev->ram_header->int_pending, 0); + ++ if (!pending) ++ return IRQ_NONE; ++ + atomic_inc(&qdev->irq_received); + + if (pending & QXL_INTERRUPT_DISPLAY) { +diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c +index ccca8b224d18..e7bfd5502410 100644 +--- a/drivers/gpu/drm/radeon/atombios_encoders.c ++++ b/drivers/gpu/drm/radeon/atombios_encoders.c +@@ -183,7 +183,6 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder, + struct backlight_properties props; + 
struct radeon_backlight_privdata *pdata; + struct radeon_encoder_atom_dig *dig; +- u8 backlight_level; + char bl_name[16]; + + /* Mac laptops with multiple GPUs use the gmux driver for backlight +@@ -222,12 +221,17 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder, + + pdata->encoder = radeon_encoder; + +- backlight_level = radeon_atom_get_backlight_level_from_reg(rdev); +- + dig = radeon_encoder->enc_priv; + dig->bl_dev = bd; + + bd->props.brightness = radeon_atom_backlight_get_brightness(bd); ++ /* Set a reasonable default here if the level is 0 otherwise ++ * fbdev will attempt to turn the backlight on after console ++ * unblanking and it will try and restore 0 which turns the backlight ++ * off again. ++ */ ++ if (bd->props.brightness == 0) ++ bd->props.brightness = RADEON_MAX_BL_LEVEL; + bd->props.power = FB_BLANK_UNBLANK; + backlight_update_status(bd); + +diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c +index df6d0079d0af..11d06c7b5afa 100644 +--- a/drivers/gpu/drm/radeon/radeon_display.c ++++ b/drivers/gpu/drm/radeon/radeon_display.c +@@ -755,6 +755,10 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) + struct radeon_device *rdev = dev->dev_private; + int ret = 0; + ++ /* don't leak the edid if we already fetched it in detect() */ ++ if (radeon_connector->edid) ++ goto got_edid; ++ + /* on hw with routers, select right port */ + if (radeon_connector->router.ddc_valid) + radeon_router_select_ddc_port(radeon_connector); +@@ -794,6 +798,7 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector) + radeon_connector->edid = radeon_bios_get_hardcoded_edid(rdev); + } + if (radeon_connector->edid) { ++got_edid: + drm_mode_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid); + ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid); + drm_edid_to_eld(&radeon_connector->base, radeon_connector->edid); +diff --git 
a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c +index 09988b289622..816782a65488 100644 +--- a/drivers/hv/hv_kvp.c ++++ b/drivers/hv/hv_kvp.c +@@ -127,6 +127,15 @@ kvp_work_func(struct work_struct *dummy) + kvp_respond_to_host(NULL, HV_E_FAIL); + } + ++static void poll_channel(struct vmbus_channel *channel) ++{ ++ unsigned long flags; ++ ++ spin_lock_irqsave(&channel->inbound_lock, flags); ++ hv_kvp_onchannelcallback(channel); ++ spin_unlock_irqrestore(&channel->inbound_lock, flags); ++} ++ + static int kvp_handle_handshake(struct hv_kvp_msg *msg) + { + int ret = 1; +@@ -155,7 +164,7 @@ static int kvp_handle_handshake(struct hv_kvp_msg *msg) + kvp_register(dm_reg_value); + kvp_transaction.active = false; + if (kvp_transaction.kvp_context) +- hv_kvp_onchannelcallback(kvp_transaction.kvp_context); ++ poll_channel(kvp_transaction.kvp_context); + } + return ret; + } +@@ -568,6 +577,7 @@ response_done: + + vmbus_sendpacket(channel, recv_buffer, buf_len, req_id, + VM_PKT_DATA_INBAND, 0); ++ poll_channel(channel); + + } + +@@ -603,7 +613,7 @@ void hv_kvp_onchannelcallback(void *context) + return; + } + +- vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 2, &recvlen, ++ vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 4, &recvlen, + &requestid); + + if (recvlen > 0) { +diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c +index 62dfd246b948..d016be36cc03 100644 +--- a/drivers/hv/hv_util.c ++++ b/drivers/hv/hv_util.c +@@ -312,7 +312,7 @@ static int util_probe(struct hv_device *dev, + (struct hv_util_service *)dev_id->driver_data; + int ret; + +- srv->recv_buffer = kmalloc(PAGE_SIZE * 2, GFP_KERNEL); ++ srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL); + if (!srv->recv_buffer) + return -ENOMEM; + if (srv->util_init) { +diff --git a/drivers/hwmon/adt7470.c b/drivers/hwmon/adt7470.c +index 0f4dea5ccf17..9ee3913850d6 100644 +--- a/drivers/hwmon/adt7470.c ++++ b/drivers/hwmon/adt7470.c +@@ -515,7 +515,7 @@ static ssize_t set_temp_min(struct device *dev, + return 
-EINVAL; + + temp = DIV_ROUND_CLOSEST(temp, 1000); +- temp = clamp_val(temp, 0, 255); ++ temp = clamp_val(temp, -128, 127); + + mutex_lock(&data->lock); + data->temp_min[attr->index] = temp; +@@ -549,7 +549,7 @@ static ssize_t set_temp_max(struct device *dev, + return -EINVAL; + + temp = DIV_ROUND_CLOSEST(temp, 1000); +- temp = clamp_val(temp, 0, 255); ++ temp = clamp_val(temp, -128, 127); + + mutex_lock(&data->lock); + data->temp_max[attr->index] = temp; +@@ -826,7 +826,7 @@ static ssize_t set_pwm_tmin(struct device *dev, + return -EINVAL; + + temp = DIV_ROUND_CLOSEST(temp, 1000); +- temp = clamp_val(temp, 0, 255); ++ temp = clamp_val(temp, -128, 127); + + mutex_lock(&data->lock); + data->pwm_tmin[attr->index] = temp; +diff --git a/drivers/hwmon/da9052-hwmon.c b/drivers/hwmon/da9052-hwmon.c +index afd31042b452..d14ab3c45daa 100644 +--- a/drivers/hwmon/da9052-hwmon.c ++++ b/drivers/hwmon/da9052-hwmon.c +@@ -194,7 +194,7 @@ static ssize_t da9052_hwmon_show_name(struct device *dev, + struct device_attribute *devattr, + char *buf) + { +- return sprintf(buf, "da9052-hwmon\n"); ++ return sprintf(buf, "da9052\n"); + } + + static ssize_t show_label(struct device *dev, +diff --git a/drivers/hwmon/da9055-hwmon.c b/drivers/hwmon/da9055-hwmon.c +index 73b3865f1207..35eb7738d711 100644 +--- a/drivers/hwmon/da9055-hwmon.c ++++ b/drivers/hwmon/da9055-hwmon.c +@@ -204,7 +204,7 @@ static ssize_t da9055_hwmon_show_name(struct device *dev, + struct device_attribute *devattr, + char *buf) + { +- return sprintf(buf, "da9055-hwmon\n"); ++ return sprintf(buf, "da9055\n"); + } + + static ssize_t show_label(struct device *dev, +diff --git a/drivers/iio/industrialio-event.c b/drivers/iio/industrialio-event.c +index c9c1419fe6e0..f9360f497ed4 100644 +--- a/drivers/iio/industrialio-event.c ++++ b/drivers/iio/industrialio-event.c +@@ -343,6 +343,9 @@ static int iio_device_add_event(struct iio_dev *indio_dev, + &indio_dev->event_interface->dev_attr_list); + kfree(postfix); + ++ if ((ret == 
-EBUSY) && (shared_by != IIO_SEPARATE)) ++ continue; ++ + if (ret) + return ret; + +diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c +index ac2d41bd71a0..12698ee9e06b 100644 +--- a/drivers/irqchip/irq-gic.c ++++ b/drivers/irqchip/irq-gic.c +@@ -42,6 +42,7 @@ + #include + #include + ++#include + #include + #include + #include +@@ -903,7 +904,9 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start, + } + + for_each_possible_cpu(cpu) { +- unsigned long offset = percpu_offset * cpu_logical_map(cpu); ++ u32 mpidr = cpu_logical_map(cpu); ++ u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0); ++ unsigned long offset = percpu_offset * core_id; + *per_cpu_ptr(gic->dist_base.percpu_base, cpu) = dist_base + offset; + *per_cpu_ptr(gic->cpu_base.percpu_base, cpu) = cpu_base + offset; + } +@@ -1008,8 +1011,10 @@ int __init gic_of_init(struct device_node *node, struct device_node *parent) + gic_cnt++; + return 0; + } ++IRQCHIP_DECLARE(gic_400, "arm,gic-400", gic_of_init); + IRQCHIP_DECLARE(cortex_a15_gic, "arm,cortex-a15-gic", gic_of_init); + IRQCHIP_DECLARE(cortex_a9_gic, "arm,cortex-a9-gic", gic_of_init); ++IRQCHIP_DECLARE(cortex_a7_gic, "arm,cortex-a7-gic", gic_of_init); + IRQCHIP_DECLARE(msm_8660_qgic, "qcom,msm-8660-qgic", gic_of_init); + IRQCHIP_DECLARE(msm_qgic2, "qcom,msm-qgic2", gic_of_init); + +diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c +index 5320332390b7..a87d3fab0271 100644 +--- a/drivers/md/dm-cache-metadata.c ++++ b/drivers/md/dm-cache-metadata.c +@@ -425,6 +425,15 @@ static int __open_metadata(struct dm_cache_metadata *cmd) + + disk_super = dm_block_data(sblock); + ++ /* Verify the data block size hasn't changed */ ++ if (le32_to_cpu(disk_super->data_block_size) != cmd->data_block_size) { ++ DMERR("changing the data block size (from %u to %llu) is not supported", ++ le32_to_cpu(disk_super->data_block_size), ++ (unsigned long long)cmd->data_block_size); ++ r = -EINVAL; ++ goto bad; ++ } ++ + r = 
__check_incompat_features(disk_super, cmd); + if (r < 0) + goto bad; +diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c +index b086a945edcb..e9d33ad59df5 100644 +--- a/drivers/md/dm-thin-metadata.c ++++ b/drivers/md/dm-thin-metadata.c +@@ -613,6 +613,15 @@ static int __open_metadata(struct dm_pool_metadata *pmd) + + disk_super = dm_block_data(sblock); + ++ /* Verify the data block size hasn't changed */ ++ if (le32_to_cpu(disk_super->data_block_size) != pmd->data_block_size) { ++ DMERR("changing the data block size (from %u to %llu) is not supported", ++ le32_to_cpu(disk_super->data_block_size), ++ (unsigned long long)pmd->data_block_size); ++ r = -EINVAL; ++ goto bad_unlock_sblock; ++ } ++ + r = __check_incompat_features(disk_super, pmd); + if (r < 0) + goto bad_unlock_sblock; +diff --git a/drivers/media/usb/gspca/pac7302.c b/drivers/media/usb/gspca/pac7302.c +index 2fd1c5e31a0f..339adce7c7a5 100644 +--- a/drivers/media/usb/gspca/pac7302.c ++++ b/drivers/media/usb/gspca/pac7302.c +@@ -928,6 +928,7 @@ static const struct usb_device_id device_table[] = { + {USB_DEVICE(0x093a, 0x2620)}, + {USB_DEVICE(0x093a, 0x2621)}, + {USB_DEVICE(0x093a, 0x2622), .driver_info = FL_VFLIP}, ++ {USB_DEVICE(0x093a, 0x2623), .driver_info = FL_VFLIP}, + {USB_DEVICE(0x093a, 0x2624), .driver_info = FL_VFLIP}, + {USB_DEVICE(0x093a, 0x2625)}, + {USB_DEVICE(0x093a, 0x2626)}, +diff --git a/drivers/mtd/devices/elm.c b/drivers/mtd/devices/elm.c +index d1dd6a33a050..3059a7a53bff 100644 +--- a/drivers/mtd/devices/elm.c ++++ b/drivers/mtd/devices/elm.c +@@ -428,6 +428,7 @@ static int elm_context_save(struct elm_info *info) + ELM_SYNDROME_FRAGMENT_1 + offset); + regs->elm_syndrome_fragment_0[i] = elm_read_reg(info, + ELM_SYNDROME_FRAGMENT_0 + offset); ++ break; + default: + return -EINVAL; + } +@@ -466,6 +467,7 @@ static int elm_context_restore(struct elm_info *info) + regs->elm_syndrome_fragment_1[i]); + elm_write_reg(info, ELM_SYNDROME_FRAGMENT_0 + offset, + 
regs->elm_syndrome_fragment_0[i]); ++ break; + default: + return -EINVAL; + } +diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c +index 91ec8cd12478..a95b322f0924 100644 +--- a/drivers/net/bonding/bond_main.c ++++ b/drivers/net/bonding/bond_main.c +@@ -4068,7 +4068,7 @@ static int bond_check_params(struct bond_params *params) + } + + if (ad_select) { +- bond_opt_initstr(&newval, lacp_rate); ++ bond_opt_initstr(&newval, ad_select); + valptr = bond_opt_parse(bond_opt_get(BOND_OPT_AD_SELECT), + &newval); + if (!valptr) { +diff --git a/drivers/net/can/slcan.c b/drivers/net/can/slcan.c +index 3fcdae266377..1d0dab854b90 100644 +--- a/drivers/net/can/slcan.c ++++ b/drivers/net/can/slcan.c +@@ -52,6 +52,7 @@ + #include + #include + #include ++#include + #include + #include + +@@ -85,6 +86,7 @@ struct slcan { + struct tty_struct *tty; /* ptr to TTY structure */ + struct net_device *dev; /* easy for intr handling */ + spinlock_t lock; ++ struct work_struct tx_work; /* Flushes transmit buffer */ + + /* These are pointers to the malloc()ed frame buffers. */ + unsigned char rbuff[SLC_MTU]; /* receiver buffer */ +@@ -309,34 +311,44 @@ static void slc_encaps(struct slcan *sl, struct can_frame *cf) + sl->dev->stats.tx_bytes += cf->can_dlc; + } + +-/* +- * Called by the driver when there's room for more data. If we have +- * more packets to send, we send them here. +- */ +-static void slcan_write_wakeup(struct tty_struct *tty) ++/* Write out any remaining transmit buffer. Scheduled when tty is writable */ ++static void slcan_transmit(struct work_struct *work) + { ++ struct slcan *sl = container_of(work, struct slcan, tx_work); + int actual; +- struct slcan *sl = (struct slcan *) tty->disc_data; + ++ spin_lock_bh(&sl->lock); + /* First make sure we're connected. 
*/ +- if (!sl || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev)) ++ if (!sl->tty || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev)) { ++ spin_unlock_bh(&sl->lock); + return; ++ } + +- spin_lock(&sl->lock); + if (sl->xleft <= 0) { + /* Now serial buffer is almost free & we can start + * transmission of another packet */ + sl->dev->stats.tx_packets++; +- clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); +- spin_unlock(&sl->lock); ++ clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags); ++ spin_unlock_bh(&sl->lock); + netif_wake_queue(sl->dev); + return; + } + +- actual = tty->ops->write(tty, sl->xhead, sl->xleft); ++ actual = sl->tty->ops->write(sl->tty, sl->xhead, sl->xleft); + sl->xleft -= actual; + sl->xhead += actual; +- spin_unlock(&sl->lock); ++ spin_unlock_bh(&sl->lock); ++} ++ ++/* ++ * Called by the driver when there's room for more data. ++ * Schedule the transmit. ++ */ ++static void slcan_write_wakeup(struct tty_struct *tty) ++{ ++ struct slcan *sl = tty->disc_data; ++ ++ schedule_work(&sl->tx_work); + } + + /* Send a can_frame to a TTY queue. 
*/ +@@ -522,6 +534,7 @@ static struct slcan *slc_alloc(dev_t line) + sl->magic = SLCAN_MAGIC; + sl->dev = dev; + spin_lock_init(&sl->lock); ++ INIT_WORK(&sl->tx_work, slcan_transmit); + slcan_devs[i] = dev; + + return sl; +@@ -620,8 +633,12 @@ static void slcan_close(struct tty_struct *tty) + if (!sl || sl->magic != SLCAN_MAGIC || sl->tty != tty) + return; + ++ spin_lock_bh(&sl->lock); + tty->disc_data = NULL; + sl->tty = NULL; ++ spin_unlock_bh(&sl->lock); ++ ++ flush_work(&sl->tx_work); + + /* Flush network side */ + unregister_netdev(sl->dev); +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +index dbcff509dc3f..5ed512473b12 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +@@ -793,7 +793,8 @@ static void bnx2x_tpa_stop(struct bnx2x *bp, struct bnx2x_fastpath *fp, + + return; + } +- bnx2x_frag_free(fp, new_data); ++ if (new_data) ++ bnx2x_frag_free(fp, new_data); + drop: + /* drop the packet and keep the buffer in the bin */ + DP(NETIF_MSG_RX_STATUS, +diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c +index 36c80612e21a..80bfa0391913 100644 +--- a/drivers/net/ethernet/emulex/benet/be_main.c ++++ b/drivers/net/ethernet/emulex/benet/be_main.c +@@ -2797,7 +2797,7 @@ static int be_open(struct net_device *netdev) + for_all_evt_queues(adapter, eqo, i) { + napi_enable(&eqo->napi); + be_enable_busy_poll(eqo); +- be_eq_notify(adapter, eqo->q.id, true, false, 0); ++ be_eq_notify(adapter, eqo->q.id, true, true, 0); + } + adapter->flags |= BE_FLAGS_NAPI_ENABLED; + +diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c +index 42f0f6717511..70e16f71f574 100644 +--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c ++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c +@@ -1374,7 +1374,7 @@ static void e1000_rar_set_pch2lan(struct e1000_hw *hw, u8 *addr, 
u32 index) + /* RAR[1-6] are owned by manageability. Skip those and program the + * next address into the SHRA register array. + */ +- if (index < (u32)(hw->mac.rar_entry_count - 6)) { ++ if (index < (u32)(hw->mac.rar_entry_count)) { + s32 ret_val; + + ret_val = e1000_acquire_swflag_ich8lan(hw); +diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h +index 217090df33e7..59865695b282 100644 +--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h ++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h +@@ -98,7 +98,7 @@ + #define PCIE_ICH8_SNOOP_ALL PCIE_NO_SNOOP_ALL + + #define E1000_ICH_RAR_ENTRIES 7 +-#define E1000_PCH2_RAR_ENTRIES 11 /* RAR[0-6], SHRA[0-3] */ ++#define E1000_PCH2_RAR_ENTRIES 5 /* RAR[0], SHRA[0-3] */ + #define E1000_PCH_LPT_RAR_ENTRIES 12 /* RAR[0], SHRA[0-10] */ + + #define PHY_PAGE_SHIFT 5 +diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.c b/drivers/net/ethernet/intel/igb/e1000_82575.c +index 06df6928f44c..4fa5c2a77d49 100644 +--- a/drivers/net/ethernet/intel/igb/e1000_82575.c ++++ b/drivers/net/ethernet/intel/igb/e1000_82575.c +@@ -1492,6 +1492,13 @@ static s32 igb_init_hw_82575(struct e1000_hw *hw) + s32 ret_val; + u16 i, rar_count = mac->rar_entry_count; + ++ if ((hw->mac.type >= e1000_i210) && ++ !(igb_get_flash_presence_i210(hw))) { ++ ret_val = igb_pll_workaround_i210(hw); ++ if (ret_val) ++ return ret_val; ++ } ++ + /* Initialize identification LED */ + ret_val = igb_id_led_init(hw); + if (ret_val) { +diff --git a/drivers/net/ethernet/intel/igb/e1000_defines.h b/drivers/net/ethernet/intel/igb/e1000_defines.h +index 0571b973be80..20b37668284a 100644 +--- a/drivers/net/ethernet/intel/igb/e1000_defines.h ++++ b/drivers/net/ethernet/intel/igb/e1000_defines.h +@@ -46,14 +46,15 @@ + /* Extended Device Control */ + #define E1000_CTRL_EXT_SDP3_DATA 0x00000080 /* Value of SW Defineable Pin 3 */ + /* Physical Func Reset Done Indication */ +-#define E1000_CTRL_EXT_PFRSTD 0x00004000 +-#define 
E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 +-#define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 +-#define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX 0x00400000 +-#define E1000_CTRL_EXT_LINK_MODE_SGMII 0x00800000 +-#define E1000_CTRL_EXT_LINK_MODE_GMII 0x00000000 +-#define E1000_CTRL_EXT_EIAME 0x01000000 +-#define E1000_CTRL_EXT_IRCA 0x00000001 ++#define E1000_CTRL_EXT_PFRSTD 0x00004000 ++#define E1000_CTRL_EXT_SDLPE 0X00040000 /* SerDes Low Power Enable */ ++#define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 ++#define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 ++#define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX 0x00400000 ++#define E1000_CTRL_EXT_LINK_MODE_SGMII 0x00800000 ++#define E1000_CTRL_EXT_LINK_MODE_GMII 0x00000000 ++#define E1000_CTRL_EXT_EIAME 0x01000000 ++#define E1000_CTRL_EXT_IRCA 0x00000001 + /* Interrupt delay cancellation */ + /* Driver loaded bit for FW */ + #define E1000_CTRL_EXT_DRV_LOAD 0x10000000 +@@ -62,6 +63,7 @@ + /* packet buffer parity error detection enabled */ + /* descriptor FIFO parity error detection enable */ + #define E1000_CTRL_EXT_PBA_CLR 0x80000000 /* PBA Clear */ ++#define E1000_CTRL_EXT_PHYPDEN 0x00100000 + #define E1000_I2CCMD_REG_ADDR_SHIFT 16 + #define E1000_I2CCMD_PHY_ADDR_SHIFT 24 + #define E1000_I2CCMD_OPCODE_READ 0x08000000 +diff --git a/drivers/net/ethernet/intel/igb/e1000_hw.h b/drivers/net/ethernet/intel/igb/e1000_hw.h +index ab99e2b582a8..b79980ad225b 100644 +--- a/drivers/net/ethernet/intel/igb/e1000_hw.h ++++ b/drivers/net/ethernet/intel/igb/e1000_hw.h +@@ -572,4 +572,7 @@ struct net_device *igb_get_hw_dev(struct e1000_hw *hw); + /* These functions must be implemented by drivers */ + s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value); + s32 igb_write_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value); ++ ++void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); ++void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); + #endif /* _E1000_HW_H_ */ +diff --git 
a/drivers/net/ethernet/intel/igb/e1000_i210.c b/drivers/net/ethernet/intel/igb/e1000_i210.c +index 0c0393316a3a..0217d4e229a0 100644 +--- a/drivers/net/ethernet/intel/igb/e1000_i210.c ++++ b/drivers/net/ethernet/intel/igb/e1000_i210.c +@@ -835,3 +835,69 @@ s32 igb_init_nvm_params_i210(struct e1000_hw *hw) + } + return ret_val; + } ++ ++/** ++ * igb_pll_workaround_i210 ++ * @hw: pointer to the HW structure ++ * ++ * Works around an errata in the PLL circuit where it occasionally ++ * provides the wrong clock frequency after power up. ++ **/ ++s32 igb_pll_workaround_i210(struct e1000_hw *hw) ++{ ++ s32 ret_val; ++ u32 wuc, mdicnfg, ctrl, ctrl_ext, reg_val; ++ u16 nvm_word, phy_word, pci_word, tmp_nvm; ++ int i; ++ ++ /* Get and set needed register values */ ++ wuc = rd32(E1000_WUC); ++ mdicnfg = rd32(E1000_MDICNFG); ++ reg_val = mdicnfg & ~E1000_MDICNFG_EXT_MDIO; ++ wr32(E1000_MDICNFG, reg_val); ++ ++ /* Get data from NVM, or set default */ ++ ret_val = igb_read_invm_word_i210(hw, E1000_INVM_AUTOLOAD, ++ &nvm_word); ++ if (ret_val) ++ nvm_word = E1000_INVM_DEFAULT_AL; ++ tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL; ++ for (i = 0; i < E1000_MAX_PLL_TRIES; i++) { ++ /* check current state directly from internal PHY */ ++ igb_read_phy_reg_gs40g(hw, (E1000_PHY_PLL_FREQ_PAGE | ++ E1000_PHY_PLL_FREQ_REG), &phy_word); ++ if ((phy_word & E1000_PHY_PLL_UNCONF) ++ != E1000_PHY_PLL_UNCONF) { ++ ret_val = 0; ++ break; ++ } else { ++ ret_val = -E1000_ERR_PHY; ++ } ++ /* directly reset the internal PHY */ ++ ctrl = rd32(E1000_CTRL); ++ wr32(E1000_CTRL, ctrl|E1000_CTRL_PHY_RST); ++ ++ ctrl_ext = rd32(E1000_CTRL_EXT); ++ ctrl_ext |= (E1000_CTRL_EXT_PHYPDEN | E1000_CTRL_EXT_SDLPE); ++ wr32(E1000_CTRL_EXT, ctrl_ext); ++ ++ wr32(E1000_WUC, 0); ++ reg_val = (E1000_INVM_AUTOLOAD << 4) | (tmp_nvm << 16); ++ wr32(E1000_EEARBC_I210, reg_val); ++ ++ igb_read_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); ++ pci_word |= E1000_PCI_PMCSR_D3; ++ igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); ++ 
usleep_range(1000, 2000); ++ pci_word &= ~E1000_PCI_PMCSR_D3; ++ igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); ++ reg_val = (E1000_INVM_AUTOLOAD << 4) | (nvm_word << 16); ++ wr32(E1000_EEARBC_I210, reg_val); ++ ++ /* restore WUC register */ ++ wr32(E1000_WUC, wuc); ++ } ++ /* restore MDICNFG setting */ ++ wr32(E1000_MDICNFG, mdicnfg); ++ return ret_val; ++} +diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.h b/drivers/net/ethernet/intel/igb/e1000_i210.h +index 2d913716573a..710f8e9f10fb 100644 +--- a/drivers/net/ethernet/intel/igb/e1000_i210.h ++++ b/drivers/net/ethernet/intel/igb/e1000_i210.h +@@ -46,6 +46,7 @@ s32 igb_read_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 *data); + s32 igb_write_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 data); + s32 igb_init_nvm_params_i210(struct e1000_hw *hw); + bool igb_get_flash_presence_i210(struct e1000_hw *hw); ++s32 igb_pll_workaround_i210(struct e1000_hw *hw); + + #define E1000_STM_OPCODE 0xDB00 + #define E1000_EEPROM_FLASH_SIZE_WORD 0x11 +@@ -91,4 +92,15 @@ enum E1000_INVM_STRUCTURE_TYPE { + #define NVM_LED_1_CFG_DEFAULT_I211 0x0184 + #define NVM_LED_0_2_CFG_DEFAULT_I211 0x200C + ++/* PLL Defines */ ++#define E1000_PCI_PMCSR 0x44 ++#define E1000_PCI_PMCSR_D3 0x03 ++#define E1000_MAX_PLL_TRIES 5 ++#define E1000_PHY_PLL_UNCONF 0xFF ++#define E1000_PHY_PLL_FREQ_PAGE 0xFC0000 ++#define E1000_PHY_PLL_FREQ_REG 0x000E ++#define E1000_INVM_DEFAULT_AL 0x202F ++#define E1000_INVM_AUTOLOAD 0x0A ++#define E1000_INVM_PLL_WO_VAL 0x0010 ++ + #endif +diff --git a/drivers/net/ethernet/intel/igb/e1000_regs.h b/drivers/net/ethernet/intel/igb/e1000_regs.h +index 82632c6c53af..7156981ec813 100644 +--- a/drivers/net/ethernet/intel/igb/e1000_regs.h ++++ b/drivers/net/ethernet/intel/igb/e1000_regs.h +@@ -69,6 +69,7 @@ + #define E1000_PBA 0x01000 /* Packet Buffer Allocation - RW */ + #define E1000_PBS 0x01008 /* Packet Buffer Size */ + #define E1000_EEMNGCTL 0x01010 /* MNG EEprom Control */ ++#define 
E1000_EEARBC_I210 0x12024 /* EEPROM Auto Read Bus Control */ + #define E1000_EEWR 0x0102C /* EEPROM Write Register - RW */ + #define E1000_I2CCMD 0x01028 /* SFPI2C Command Register - RW */ + #define E1000_FRTIMER 0x01048 /* Free Running Timer - RW */ +diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c +index d9c7eb279141..5ca8c479666e 100644 +--- a/drivers/net/ethernet/intel/igb/igb_main.c ++++ b/drivers/net/ethernet/intel/igb/igb_main.c +@@ -7128,6 +7128,20 @@ static int igb_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) + } + } + ++void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value) ++{ ++ struct igb_adapter *adapter = hw->back; ++ ++ pci_read_config_word(adapter->pdev, reg, value); ++} ++ ++void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value) ++{ ++ struct igb_adapter *adapter = hw->back; ++ ++ pci_write_config_word(adapter->pdev, reg, *value); ++} ++ + s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value) + { + struct igb_adapter *adapter = hw->back; +@@ -7491,6 +7505,8 @@ static int igb_sriov_reinit(struct pci_dev *dev) + + if (netif_running(netdev)) + igb_close(netdev); ++ else ++ igb_reset(adapter); + + igb_clear_interrupt_scheme(adapter); + +diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c +index ca2dfbe01598..c4c00d9f2c04 100644 +--- a/drivers/net/ethernet/marvell/mvneta.c ++++ b/drivers/net/ethernet/marvell/mvneta.c +@@ -1217,7 +1217,7 @@ static u32 mvneta_txq_desc_csum(int l3_offs, int l3_proto, + command = l3_offs << MVNETA_TX_L3_OFF_SHIFT; + command |= ip_hdr_len << MVNETA_TX_IP_HLEN_SHIFT; + +- if (l3_proto == swab16(ETH_P_IP)) ++ if (l3_proto == htons(ETH_P_IP)) + command |= MVNETA_TXD_IP_CSUM; + else + command |= MVNETA_TX_L3_IP6; +@@ -2393,7 +2393,7 @@ static void mvneta_adjust_link(struct net_device *ndev) + + if (phydev->speed == SPEED_1000) + val |= MVNETA_GMAC_CONFIG_GMII_SPEED; +- else ++ else if 
(phydev->speed == SPEED_100) + val |= MVNETA_GMAC_CONFIG_MII_SPEED; + + mvreg_write(pp, MVNETA_GMAC_AUTONEG_CONFIG, val); +diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c +index 1c24a8f368bd..fd411d6e19a2 100644 +--- a/drivers/net/ethernet/sun/sunvnet.c ++++ b/drivers/net/ethernet/sun/sunvnet.c +@@ -1083,6 +1083,24 @@ static struct vnet *vnet_find_or_create(const u64 *local_mac) + return vp; + } + ++static void vnet_cleanup(void) ++{ ++ struct vnet *vp; ++ struct net_device *dev; ++ ++ mutex_lock(&vnet_list_mutex); ++ while (!list_empty(&vnet_list)) { ++ vp = list_first_entry(&vnet_list, struct vnet, list); ++ list_del(&vp->list); ++ dev = vp->dev; ++ /* vio_unregister_driver() should have cleaned up port_list */ ++ BUG_ON(!list_empty(&vp->port_list)); ++ unregister_netdev(dev); ++ free_netdev(dev); ++ } ++ mutex_unlock(&vnet_list_mutex); ++} ++ + static const char *local_mac_prop = "local-mac-address"; + + static struct vnet *vnet_find_parent(struct mdesc_handle *hp, +@@ -1240,7 +1258,6 @@ static int vnet_port_remove(struct vio_dev *vdev) + + kfree(port); + +- unregister_netdev(vp->dev); + } + return 0; + } +@@ -1268,6 +1285,7 @@ static int __init vnet_init(void) + static void __exit vnet_exit(void) + { + vio_unregister_driver(&vnet_port_driver); ++ vnet_cleanup(); + } + + module_init(vnet_init); +diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c +index 2ea7efd11857..6c9c16d76935 100644 +--- a/drivers/net/ppp/pppoe.c ++++ b/drivers/net/ppp/pppoe.c +@@ -675,7 +675,7 @@ static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr, + po->chan.hdrlen = (sizeof(struct pppoe_hdr) + + dev->hard_header_len); + +- po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr); ++ po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr) - 2; + po->chan.private = sk; + po->chan.ops = &pppoe_chan_ops; + +diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c +index ad4a94e9ff57..87526443841f 100644 +--- 
a/drivers/net/slip/slip.c ++++ b/drivers/net/slip/slip.c +@@ -83,6 +83,7 @@ + #include + #include + #include ++#include + #include "slip.h" + #ifdef CONFIG_INET + #include +@@ -416,36 +417,46 @@ static void sl_encaps(struct slip *sl, unsigned char *icp, int len) + #endif + } + +-/* +- * Called by the driver when there's room for more data. If we have +- * more packets to send, we send them here. +- */ +-static void slip_write_wakeup(struct tty_struct *tty) ++/* Write out any remaining transmit buffer. Scheduled when tty is writable */ ++static void slip_transmit(struct work_struct *work) + { ++ struct slip *sl = container_of(work, struct slip, tx_work); + int actual; +- struct slip *sl = tty->disc_data; + ++ spin_lock_bh(&sl->lock); + /* First make sure we're connected. */ +- if (!sl || sl->magic != SLIP_MAGIC || !netif_running(sl->dev)) ++ if (!sl->tty || sl->magic != SLIP_MAGIC || !netif_running(sl->dev)) { ++ spin_unlock_bh(&sl->lock); + return; ++ } + +- spin_lock_bh(&sl->lock); + if (sl->xleft <= 0) { + /* Now serial buffer is almost free & we can start + * transmission of another packet */ + sl->dev->stats.tx_packets++; +- clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags); ++ clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags); + spin_unlock_bh(&sl->lock); + sl_unlock(sl); + return; + } + +- actual = tty->ops->write(tty, sl->xhead, sl->xleft); ++ actual = sl->tty->ops->write(sl->tty, sl->xhead, sl->xleft); + sl->xleft -= actual; + sl->xhead += actual; + spin_unlock_bh(&sl->lock); + } + ++/* ++ * Called by the driver when there's room for more data. ++ * Schedule the transmit. 
++ */ ++static void slip_write_wakeup(struct tty_struct *tty) ++{ ++ struct slip *sl = tty->disc_data; ++ ++ schedule_work(&sl->tx_work); ++} ++ + static void sl_tx_timeout(struct net_device *dev) + { + struct slip *sl = netdev_priv(dev); +@@ -749,6 +760,7 @@ static struct slip *sl_alloc(dev_t line) + sl->magic = SLIP_MAGIC; + sl->dev = dev; + spin_lock_init(&sl->lock); ++ INIT_WORK(&sl->tx_work, slip_transmit); + sl->mode = SL_MODE_DEFAULT; + #ifdef CONFIG_SLIP_SMART + /* initialize timer_list struct */ +@@ -872,8 +884,12 @@ static void slip_close(struct tty_struct *tty) + if (!sl || sl->magic != SLIP_MAGIC || sl->tty != tty) + return; + ++ spin_lock_bh(&sl->lock); + tty->disc_data = NULL; + sl->tty = NULL; ++ spin_unlock_bh(&sl->lock); ++ ++ flush_work(&sl->tx_work); + + /* VSV = very important to remove timers */ + #ifdef CONFIG_SLIP_SMART +diff --git a/drivers/net/slip/slip.h b/drivers/net/slip/slip.h +index 67673cf1266b..cf32aadf508f 100644 +--- a/drivers/net/slip/slip.h ++++ b/drivers/net/slip/slip.h +@@ -53,6 +53,7 @@ struct slip { + struct tty_struct *tty; /* ptr to TTY structure */ + struct net_device *dev; /* easy for intr handling */ + spinlock_t lock; ++ struct work_struct tx_work; /* Flushes transmit buffer */ + + #ifdef SL_INCLUDE_CSLIP + struct slcompress *slcomp; /* for header compression */ +diff --git a/drivers/net/usb/huawei_cdc_ncm.c b/drivers/net/usb/huawei_cdc_ncm.c +index 312178d7b698..a01462523bc7 100644 +--- a/drivers/net/usb/huawei_cdc_ncm.c ++++ b/drivers/net/usb/huawei_cdc_ncm.c +@@ -84,12 +84,13 @@ static int huawei_cdc_ncm_bind(struct usbnet *usbnet_dev, + ctx = drvstate->ctx; + + if (usbnet_dev->status) +- /* CDC-WMC r1.1 requires wMaxCommand to be "at least 256 +- * decimal (0x100)" ++ /* The wMaxCommand buffer must be big enough to hold ++ * any message from the modem. 
Experience has shown ++ * that some replies are more than 256 bytes long + */ + subdriver = usb_cdc_wdm_register(ctx->control, + &usbnet_dev->status->desc, +- 256, /* wMaxCommand */ ++ 1024, /* wMaxCommand */ + huawei_cdc_ncm_wdm_manage_power); + if (IS_ERR(subdriver)) { + ret = PTR_ERR(subdriver); +@@ -206,6 +207,9 @@ static const struct usb_device_id huawei_cdc_ncm_devs[] = { + { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x76), + .driver_info = (unsigned long)&huawei_cdc_ncm_info, + }, ++ { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x03, 0x16), ++ .driver_info = (unsigned long)&huawei_cdc_ncm_info, ++ }, + + /* Terminating entry */ + { +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index b71120842c4f..d510f1d41bae 100644 +--- a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -660,6 +660,7 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x05c6, 0x9084, 4)}, + {QMI_FIXED_INTF(0x05c6, 0x920d, 0)}, + {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, ++ {QMI_FIXED_INTF(0x0846, 0x68a2, 8)}, + {QMI_FIXED_INTF(0x12d1, 0x140c, 1)}, /* Huawei E173 */ + {QMI_FIXED_INTF(0x12d1, 0x14ac, 1)}, /* Huawei E1820 */ + {QMI_FIXED_INTF(0x16d8, 0x6003, 0)}, /* CMOTech 6003 */ +@@ -734,6 +735,7 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x19d2, 0x1424, 2)}, + {QMI_FIXED_INTF(0x19d2, 0x1425, 2)}, + {QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */ ++ {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ + {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ + {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ + {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ +@@ -746,6 +748,7 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x1199, 0x901f, 8)}, /* Sierra Wireless EM7355 */ + {QMI_FIXED_INTF(0x1199, 0x9041, 8)}, /* Sierra Wireless MC7305/MC7355 */ + {QMI_FIXED_INTF(0x1199, 0x9051, 8)}, /* Netgear AirCard 340U */ ++ 
{QMI_FIXED_INTF(0x1199, 0x9057, 8)}, + {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */ + {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */ + {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ +diff --git a/drivers/net/wireless/iwlwifi/dvm/rxon.c b/drivers/net/wireless/iwlwifi/dvm/rxon.c +index 503a81e58185..c1e311341b74 100644 +--- a/drivers/net/wireless/iwlwifi/dvm/rxon.c ++++ b/drivers/net/wireless/iwlwifi/dvm/rxon.c +@@ -1068,13 +1068,6 @@ int iwlagn_commit_rxon(struct iwl_priv *priv, struct iwl_rxon_context *ctx) + /* recalculate basic rates */ + iwl_calc_basic_rates(priv, ctx); + +- /* +- * force CTS-to-self frames protection if RTS-CTS is not preferred +- * one aggregation protection method +- */ +- if (!priv->hw_params.use_rts_for_aggregation) +- ctx->staging.flags |= RXON_FLG_SELF_CTS_EN; +- + if ((ctx->vif && ctx->vif->bss_conf.use_short_slot) || + !(ctx->staging.flags & RXON_FLG_BAND_24G_MSK)) + ctx->staging.flags |= RXON_FLG_SHORT_SLOT_MSK; +@@ -1480,11 +1473,6 @@ void iwlagn_bss_info_changed(struct ieee80211_hw *hw, + else + ctx->staging.flags &= ~RXON_FLG_TGG_PROTECT_MSK; + +- if (bss_conf->use_cts_prot) +- ctx->staging.flags |= RXON_FLG_SELF_CTS_EN; +- else +- ctx->staging.flags &= ~RXON_FLG_SELF_CTS_EN; +- + memcpy(ctx->staging.bssid_addr, bss_conf->bssid, ETH_ALEN); + + if (vif->type == NL80211_IFTYPE_AP || +diff --git a/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c +index ba723d50939a..820797af7abf 100644 +--- a/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c ++++ b/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c +@@ -651,13 +651,9 @@ static void iwl_mvm_mac_ctxt_cmd_common(struct iwl_mvm *mvm, + if (vif->bss_conf.qos) + cmd->qos_flags |= cpu_to_le32(MAC_QOS_FLG_UPDATE_EDCA); + +- /* Don't use cts to self as the fw doesn't support it currently. 
*/ +- if (vif->bss_conf.use_cts_prot) { ++ if (vif->bss_conf.use_cts_prot) + cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_TGG_PROTECT); +- if (IWL_UCODE_API(mvm->fw->ucode_ver) >= 8) +- cmd->protection_flags |= +- cpu_to_le32(MAC_PROT_FLG_SELF_CTS_EN); +- } ++ + IWL_DEBUG_RATE(mvm, "use_cts_prot %d, ht_operation_mode %d\n", + vif->bss_conf.use_cts_prot, + vif->bss_conf.ht_operation_mode); +diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c +index 43e27a174430..df1f5e732ab5 100644 +--- a/drivers/net/wireless/iwlwifi/pcie/drv.c ++++ b/drivers/net/wireless/iwlwifi/pcie/drv.c +@@ -366,6 +366,7 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { + {IWL_PCI_DEVICE(0x095A, 0x5012, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5412, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)}, ++ {IWL_PCI_DEVICE(0x095A, 0x5510, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5400, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5000, iwl7265_2n_cfg)}, +@@ -379,7 +380,7 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { + {IWL_PCI_DEVICE(0x095A, 0x9110, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x9112, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)}, +- {IWL_PCI_DEVICE(0x095A, 0x9200, iwl7265_2ac_cfg)}, ++ {IWL_PCI_DEVICE(0x095B, 0x9200, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x9510, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x9310, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x9410, iwl7265_2ac_cfg)}, +diff --git a/drivers/net/wireless/mwifiex/main.c b/drivers/net/wireless/mwifiex/main.c +index 9d3d2758ec35..952a47f6554e 100644 +--- a/drivers/net/wireless/mwifiex/main.c ++++ b/drivers/net/wireless/mwifiex/main.c +@@ -646,6 +646,7 @@ mwifiex_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) + } + + tx_info = MWIFIEX_SKB_TXCB(skb); ++ memset(tx_info, 0, sizeof(*tx_info)); + tx_info->bss_num = priv->bss_num; + 
tx_info->bss_type = priv->bss_type; + tx_info->pkt_len = skb->len; +diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c +index 3314516018c6..86b1fd673749 100644 +--- a/drivers/usb/chipidea/udc.c ++++ b/drivers/usb/chipidea/udc.c +@@ -1179,8 +1179,8 @@ static int ep_enable(struct usb_ep *ep, + + if (hwep->type == USB_ENDPOINT_XFER_CONTROL) + cap |= QH_IOS; +- if (hwep->num) +- cap |= QH_ZLT; ++ ++ cap |= QH_ZLT; + cap |= (hwep->ep.maxpacket << __ffs(QH_MAX_PKT)) & QH_MAX_PKT; + /* + * For ISO-TX, we set mult at QH as the largest value, and use +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index 3baa51bf8a6a..36b1e856bd00 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -884,6 +884,25 @@ static int hub_usb3_port_disable(struct usb_hub *hub, int port1) + if (!hub_is_superspeed(hub->hdev)) + return -EINVAL; + ++ ret = hub_port_status(hub, port1, &portstatus, &portchange); ++ if (ret < 0) ++ return ret; ++ ++ /* ++ * USB controller Advanced Micro Devices, Inc. [AMD] FCH USB XHCI ++ * Controller [1022:7814] will have spurious result making the following ++ * usb 3.0 device hotplugging route to the 2.0 root hub and recognized ++ * as high-speed device if we set the usb 3.0 port link state to ++ * Disabled. Since it's already in USB_SS_PORT_LS_RX_DETECT state, we ++ * check the state here to avoid the bug. ++ */ ++ if ((portstatus & USB_PORT_STAT_LINK_STATE) == ++ USB_SS_PORT_LS_RX_DETECT) { ++ dev_dbg(&hub->ports[port1 - 1]->dev, ++ "Not disabling port; link state is RxDetect\n"); ++ return ret; ++ } ++ + ret = hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_SS_DISABLED); + if (ret) + return ret; +diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c +index 61a6ac8fa8fc..08834b565aab 100644 +--- a/drivers/xen/balloon.c ++++ b/drivers/xen/balloon.c +@@ -426,20 +426,18 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp) + * p2m are consistent. 
+ */ + if (!xen_feature(XENFEAT_auto_translated_physmap)) { +- unsigned long p; +- struct page *scratch_page = get_balloon_scratch_page(); +- + if (!PageHighMem(page)) { ++ struct page *scratch_page = get_balloon_scratch_page(); ++ + ret = HYPERVISOR_update_va_mapping( + (unsigned long)__va(pfn << PAGE_SHIFT), + pfn_pte(page_to_pfn(scratch_page), + PAGE_KERNEL_RO), 0); + BUG_ON(ret); +- } +- p = page_to_pfn(scratch_page); +- __set_phys_to_machine(pfn, pfn_to_mfn(p)); + +- put_balloon_scratch_page(); ++ put_balloon_scratch_page(); ++ } ++ __set_phys_to_machine(pfn, INVALID_P2M_ENTRY); + } + #endif + +diff --git a/fs/aio.c b/fs/aio.c +index e609e15f36b9..6d68e01dc7ca 100644 +--- a/fs/aio.c ++++ b/fs/aio.c +@@ -830,16 +830,20 @@ void exit_aio(struct mm_struct *mm) + static void put_reqs_available(struct kioctx *ctx, unsigned nr) + { + struct kioctx_cpu *kcpu; ++ unsigned long flags; + + preempt_disable(); + kcpu = this_cpu_ptr(ctx->cpu); + ++ local_irq_save(flags); + kcpu->reqs_available += nr; ++ + while (kcpu->reqs_available >= ctx->req_batch * 2) { + kcpu->reqs_available -= ctx->req_batch; + atomic_add(ctx->req_batch, &ctx->reqs_available); + } + ++ local_irq_restore(flags); + preempt_enable(); + } + +@@ -847,10 +851,12 @@ static bool get_reqs_available(struct kioctx *ctx) + { + struct kioctx_cpu *kcpu; + bool ret = false; ++ unsigned long flags; + + preempt_disable(); + kcpu = this_cpu_ptr(ctx->cpu); + ++ local_irq_save(flags); + if (!kcpu->reqs_available) { + int old, avail = atomic_read(&ctx->reqs_available); + +@@ -869,6 +875,7 @@ static bool get_reqs_available(struct kioctx *ctx) + ret = true; + kcpu->reqs_available--; + out: ++ local_irq_restore(flags); + preempt_enable(); + return ret; + } +diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c +index 1d1292c581c3..342f0239fcbf 100644 +--- a/fs/fuse/dir.c ++++ b/fs/fuse/dir.c +@@ -198,7 +198,8 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags) + inode = ACCESS_ONCE(entry->d_inode); + if 
(inode && is_bad_inode(inode)) + goto invalid; +- else if (fuse_dentry_time(entry) < get_jiffies_64()) { ++ else if (time_before64(fuse_dentry_time(entry), get_jiffies_64()) || ++ (flags & LOOKUP_REVAL)) { + int err; + struct fuse_entry_out outarg; + struct fuse_req *req; +@@ -925,7 +926,7 @@ int fuse_update_attributes(struct inode *inode, struct kstat *stat, + int err; + bool r; + +- if (fi->i_time < get_jiffies_64()) { ++ if (time_before64(fi->i_time, get_jiffies_64())) { + r = true; + err = fuse_do_getattr(inode, stat, file); + } else { +@@ -1111,7 +1112,7 @@ static int fuse_permission(struct inode *inode, int mask) + ((mask & MAY_EXEC) && S_ISREG(inode->i_mode))) { + struct fuse_inode *fi = get_fuse_inode(inode); + +- if (fi->i_time < get_jiffies_64()) { ++ if (time_before64(fi->i_time, get_jiffies_64())) { + refreshed = true; + + err = fuse_perm_getattr(inode, mask); +diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c +index d468643a68b2..73f6bcb44ea8 100644 +--- a/fs/fuse/inode.c ++++ b/fs/fuse/inode.c +@@ -461,6 +461,17 @@ static const match_table_t tokens = { + {OPT_ERR, NULL} + }; + ++static int fuse_match_uint(substring_t *s, unsigned int *res) ++{ ++ int err = -ENOMEM; ++ char *buf = match_strdup(s); ++ if (buf) { ++ err = kstrtouint(buf, 10, res); ++ kfree(buf); ++ } ++ return err; ++} ++ + static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) + { + char *p; +@@ -471,6 +482,7 @@ static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) + while ((p = strsep(&opt, ",")) != NULL) { + int token; + int value; ++ unsigned uv; + substring_t args[MAX_OPT_ARGS]; + if (!*p) + continue; +@@ -494,18 +506,18 @@ static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) + break; + + case OPT_USER_ID: +- if (match_int(&args[0], &value)) ++ if (fuse_match_uint(&args[0], &uv)) + return 0; +- d->user_id = make_kuid(current_user_ns(), value); ++ d->user_id = make_kuid(current_user_ns(), uv); + if 
(!uid_valid(d->user_id)) + return 0; + d->user_id_present = 1; + break; + + case OPT_GROUP_ID: +- if (match_int(&args[0], &value)) ++ if (fuse_match_uint(&args[0], &uv)) + return 0; +- d->group_id = make_kgid(current_user_ns(), value); ++ d->group_id = make_kgid(current_user_ns(), uv); + if (!gid_valid(d->group_id)) + return 0; + d->group_id_present = 1; +diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c +index cfc8dcc16043..ce87c9007b0f 100644 +--- a/fs/quota/dquot.c ++++ b/fs/quota/dquot.c +@@ -702,6 +702,7 @@ dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) + struct dquot *dquot; + unsigned long freed = 0; + ++ spin_lock(&dq_list_lock); + head = free_dquots.prev; + while (head != &free_dquots && sc->nr_to_scan) { + dquot = list_entry(head, struct dquot, dq_free); +@@ -713,6 +714,7 @@ dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) + freed++; + head = free_dquots.prev; + } ++ spin_unlock(&dq_list_lock); + return freed; + } + +diff --git a/include/net/sock.h b/include/net/sock.h +index 57c31dd15e64..2f7bc435c93d 100644 +--- a/include/net/sock.h ++++ b/include/net/sock.h +@@ -1755,8 +1755,8 @@ sk_dst_get(struct sock *sk) + + rcu_read_lock(); + dst = rcu_dereference(sk->sk_dst_cache); +- if (dst) +- dst_hold(dst); ++ if (dst && !atomic_inc_not_zero(&dst->__refcnt)) ++ dst = NULL; + rcu_read_unlock(); + return dst; + } +@@ -1793,9 +1793,11 @@ __sk_dst_set(struct sock *sk, struct dst_entry *dst) + static inline void + sk_dst_set(struct sock *sk, struct dst_entry *dst) + { +- spin_lock(&sk->sk_dst_lock); +- __sk_dst_set(sk, dst); +- spin_unlock(&sk->sk_dst_lock); ++ struct dst_entry *old_dst; ++ ++ sk_tx_queue_clear(sk); ++ old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst); ++ dst_release(old_dst); + } + + static inline void +@@ -1807,9 +1809,7 @@ __sk_dst_reset(struct sock *sk) + static inline void + sk_dst_reset(struct sock *sk) + { +- spin_lock(&sk->sk_dst_lock); +- __sk_dst_reset(sk); +- 
spin_unlock(&sk->sk_dst_lock); ++ sk_dst_set(sk, NULL); + } + + struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie); +diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks +index d2b32ac27a39..ecee67a00f5f 100644 +--- a/kernel/Kconfig.locks ++++ b/kernel/Kconfig.locks +@@ -220,6 +220,9 @@ config INLINE_WRITE_UNLOCK_IRQRESTORE + + endif + ++config ARCH_SUPPORTS_ATOMIC_RMW ++ bool ++ + config MUTEX_SPIN_ON_OWNER + def_bool y +- depends on SMP && !DEBUG_MUTEXES ++ depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW +diff --git a/kernel/events/core.c b/kernel/events/core.c +index 0e7fea78f565..f774e9365a03 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -2311,7 +2311,7 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn, + next_parent = rcu_dereference(next_ctx->parent_ctx); + + /* If neither context have a parent context; they cannot be clones. */ +- if (!parent && !next_parent) ++ if (!parent || !next_parent) + goto unlock; + + if (next_parent == ctx || next_ctx == parent || next_parent == parent) { +diff --git a/kernel/power/process.c b/kernel/power/process.c +index 06ec8869dbf1..14f9a8d4725d 100644 +--- a/kernel/power/process.c ++++ b/kernel/power/process.c +@@ -184,6 +184,7 @@ void thaw_processes(void) + + printk("Restarting tasks ... 
"); + ++ __usermodehelper_set_disable_depth(UMH_FREEZING); + thaw_workqueues(); + + read_lock(&tasklist_lock); +diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c +index dd52e7ffb10e..183e8e5c38ba 100644 +--- a/kernel/sched/debug.c ++++ b/kernel/sched/debug.c +@@ -608,7 +608,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m) + + avg_atom = p->se.sum_exec_runtime; + if (nr_switches) +- do_div(avg_atom, nr_switches); ++ avg_atom = div64_ul(avg_atom, nr_switches); + else + avg_atom = -1LL; + +diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c +index 88c9c65a430d..fe75444ae7ec 100644 +--- a/kernel/time/alarmtimer.c ++++ b/kernel/time/alarmtimer.c +@@ -585,9 +585,14 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, + struct itimerspec *new_setting, + struct itimerspec *old_setting) + { ++ ktime_t exp; ++ + if (!rtcdev) + return -ENOTSUPP; + ++ if (flags & ~TIMER_ABSTIME) ++ return -EINVAL; ++ + if (old_setting) + alarm_timer_get(timr, old_setting); + +@@ -597,8 +602,16 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, + + /* start the timer */ + timr->it.alarm.interval = timespec_to_ktime(new_setting->it_interval); +- alarm_start(&timr->it.alarm.alarmtimer, +- timespec_to_ktime(new_setting->it_value)); ++ exp = timespec_to_ktime(new_setting->it_value); ++ /* Convert (if necessary) to absolute time */ ++ if (flags != TIMER_ABSTIME) { ++ ktime_t now; ++ ++ now = alarm_bases[timr->it.alarm.alarmtimer.type].gettime(); ++ exp = ktime_add(now, exp); ++ } ++ ++ alarm_start(&timr->it.alarm.alarmtimer, exp); + return 0; + } + +@@ -730,6 +743,9 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags, + if (!alarmtimer_get_rtcdev()) + return -ENOTSUPP; + ++ if (flags & ~TIMER_ABSTIME) ++ return -EINVAL; ++ + if (!capable(CAP_WAKE_ALARM)) + return -EPERM; + +diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c +index 868633e61b43..e3be87edde33 100644 +--- a/kernel/trace/ftrace.c ++++ 
b/kernel/trace/ftrace.c +@@ -331,12 +331,12 @@ static void update_ftrace_function(void) + func = ftrace_ops_list_func; + } + ++ update_function_graph_func(); ++ + /* If there's no change, then do nothing more here */ + if (ftrace_trace_function == func) + return; + +- update_function_graph_func(); +- + /* + * If we are using the list function, it doesn't care + * about the function_trace_ops. +diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c +index 04202d9aa514..0954450df7dc 100644 +--- a/kernel/trace/ring_buffer.c ++++ b/kernel/trace/ring_buffer.c +@@ -616,10 +616,6 @@ int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu, + struct ring_buffer_per_cpu *cpu_buffer; + struct rb_irq_work *work; + +- if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) || +- (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu))) +- return POLLIN | POLLRDNORM; +- + if (cpu == RING_BUFFER_ALL_CPUS) + work = &buffer->irq_work; + else { +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index 922657f30723..7e259b2bdf44 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -454,6 +454,12 @@ int __trace_puts(unsigned long ip, const char *str, int size) + struct print_entry *entry; + unsigned long irq_flags; + int alloc; ++ int pc; ++ ++ if (!(trace_flags & TRACE_ITER_PRINTK)) ++ return 0; ++ ++ pc = preempt_count(); + + if (unlikely(tracing_selftest_running || tracing_disabled)) + return 0; +@@ -463,7 +469,7 @@ int __trace_puts(unsigned long ip, const char *str, int size) + local_save_flags(irq_flags); + buffer = global_trace.trace_buffer.buffer; + event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, alloc, +- irq_flags, preempt_count()); ++ irq_flags, pc); + if (!event) + return 0; + +@@ -480,6 +486,7 @@ int __trace_puts(unsigned long ip, const char *str, int size) + entry->buf[size] = '\0'; + + __buffer_unlock_commit(buffer, event); ++ ftrace_trace_stack(buffer, irq_flags, 4, pc); + + return size; + } +@@ -497,6 
+504,12 @@ int __trace_bputs(unsigned long ip, const char *str) + struct bputs_entry *entry; + unsigned long irq_flags; + int size = sizeof(struct bputs_entry); ++ int pc; ++ ++ if (!(trace_flags & TRACE_ITER_PRINTK)) ++ return 0; ++ ++ pc = preempt_count(); + + if (unlikely(tracing_selftest_running || tracing_disabled)) + return 0; +@@ -504,7 +517,7 @@ int __trace_bputs(unsigned long ip, const char *str) + local_save_flags(irq_flags); + buffer = global_trace.trace_buffer.buffer; + event = trace_buffer_lock_reserve(buffer, TRACE_BPUTS, size, +- irq_flags, preempt_count()); ++ irq_flags, pc); + if (!event) + return 0; + +@@ -513,6 +526,7 @@ int __trace_bputs(unsigned long ip, const char *str) + entry->str = str; + + __buffer_unlock_commit(buffer, event); ++ ftrace_trace_stack(buffer, irq_flags, 4, pc); + + return 1; + } +diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c +index 7b16d40bd64d..e4c4efc4ba0d 100644 +--- a/kernel/trace/trace_events.c ++++ b/kernel/trace/trace_events.c +@@ -439,6 +439,7 @@ static void remove_event_file_dir(struct ftrace_event_file *file) + + list_del(&file->list); + remove_subsystem(file->system); ++ free_event_filter(file->filter); + kmem_cache_free(file_cachep, file); + } + +diff --git a/mm/shmem.c b/mm/shmem.c +index 1f18c9d0d93e..ff85863587ee 100644 +--- a/mm/shmem.c ++++ b/mm/shmem.c +@@ -80,11 +80,12 @@ static struct vfsmount *shm_mnt; + #define SHORT_SYMLINK_LEN 128 + + /* +- * shmem_fallocate and shmem_writepage communicate via inode->i_private +- * (with i_mutex making sure that it has only one user at a time): +- * we would prefer not to enlarge the shmem inode just for that. ++ * shmem_fallocate communicates with shmem_fault or shmem_writepage via ++ * inode->i_private (with i_mutex making sure that it has only one user at ++ * a time): we would prefer not to enlarge the shmem inode just for that. 
+ */ + struct shmem_falloc { ++ wait_queue_head_t *waitq; /* faults into hole wait for punch to end */ + pgoff_t start; /* start of range currently being fallocated */ + pgoff_t next; /* the next page offset to be fallocated */ + pgoff_t nr_falloced; /* how many new pages have been fallocated */ +@@ -533,22 +534,19 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + return; + + index = start; +- for ( ; ; ) { ++ while (index < end) { + cond_resched(); + pvec.nr = shmem_find_get_pages_and_swap(mapping, index, + min(end - index, (pgoff_t)PAGEVEC_SIZE), + pvec.pages, indices); + if (!pvec.nr) { +- if (index == start || unfalloc) ++ /* If all gone or hole-punch or unfalloc, we're done */ ++ if (index == start || end != -1) + break; ++ /* But if truncating, restart to make sure all gone */ + index = start; + continue; + } +- if ((index == start || unfalloc) && indices[0] >= end) { +- shmem_deswap_pagevec(&pvec); +- pagevec_release(&pvec); +- break; +- } + mem_cgroup_uncharge_start(); + for (i = 0; i < pagevec_count(&pvec); i++) { + struct page *page = pvec.pages[i]; +@@ -560,8 +558,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + if (radix_tree_exceptional_entry(page)) { + if (unfalloc) + continue; +- nr_swaps_freed += !shmem_free_swap(mapping, +- index, page); ++ if (shmem_free_swap(mapping, index, page)) { ++ /* Swap was replaced by page: retry */ ++ index--; ++ break; ++ } ++ nr_swaps_freed++; + continue; + } + +@@ -570,6 +572,11 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + if (page->mapping == mapping) { + VM_BUG_ON_PAGE(PageWriteback(page), page); + truncate_inode_page(mapping, page); ++ } else { ++ /* Page was replaced by swap: retry */ ++ unlock_page(page); ++ index--; ++ break; + } + } + unlock_page(page); +@@ -824,6 +831,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc) + spin_lock(&inode->i_lock); + shmem_falloc = 
inode->i_private; + if (shmem_falloc && ++ !shmem_falloc->waitq && + index >= shmem_falloc->start && + index < shmem_falloc->next) + shmem_falloc->nr_unswapped++; +@@ -1298,6 +1306,64 @@ static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) + int error; + int ret = VM_FAULT_LOCKED; + ++ /* ++ * Trinity finds that probing a hole which tmpfs is punching can ++ * prevent the hole-punch from ever completing: which in turn ++ * locks writers out with its hold on i_mutex. So refrain from ++ * faulting pages into the hole while it's being punched. Although ++ * shmem_undo_range() does remove the additions, it may be unable to ++ * keep up, as each new page needs its own unmap_mapping_range() call, ++ * and the i_mmap tree grows ever slower to scan if new vmas are added. ++ * ++ * It does not matter if we sometimes reach this check just before the ++ * hole-punch begins, so that one fault then races with the punch: ++ * we just need to make racing faults a rare case. ++ * ++ * The implementation below would be much simpler if we just used a ++ * standard mutex or completion: but we cannot take i_mutex in fault, ++ * and bloating every shmem inode for this unlikely case would be sad. 
++ */ ++ if (unlikely(inode->i_private)) { ++ struct shmem_falloc *shmem_falloc; ++ ++ spin_lock(&inode->i_lock); ++ shmem_falloc = inode->i_private; ++ if (shmem_falloc && ++ shmem_falloc->waitq && ++ vmf->pgoff >= shmem_falloc->start && ++ vmf->pgoff < shmem_falloc->next) { ++ wait_queue_head_t *shmem_falloc_waitq; ++ DEFINE_WAIT(shmem_fault_wait); ++ ++ ret = VM_FAULT_NOPAGE; ++ if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) && ++ !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) { ++ /* It's polite to up mmap_sem if we can */ ++ up_read(&vma->vm_mm->mmap_sem); ++ ret = VM_FAULT_RETRY; ++ } ++ ++ shmem_falloc_waitq = shmem_falloc->waitq; ++ prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait, ++ TASK_UNINTERRUPTIBLE); ++ spin_unlock(&inode->i_lock); ++ schedule(); ++ ++ /* ++ * shmem_falloc_waitq points into the shmem_fallocate() ++ * stack of the hole-punching task: shmem_falloc_waitq ++ * is usually invalid by the time we reach here, but ++ * finish_wait() does not dereference it in that case; ++ * though i_lock needed lest racing with wake_up_all(). ++ */ ++ spin_lock(&inode->i_lock); ++ finish_wait(shmem_falloc_waitq, &shmem_fault_wait); ++ spin_unlock(&inode->i_lock); ++ return ret; ++ } ++ spin_unlock(&inode->i_lock); ++ } ++ + error = shmem_getpage(inode, vmf->pgoff, &vmf->page, SGP_CACHE, &ret); + if (error) + return ((error == -ENOMEM) ? 
VM_FAULT_OOM : VM_FAULT_SIGBUS); +@@ -1817,12 +1883,25 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, + struct address_space *mapping = file->f_mapping; + loff_t unmap_start = round_up(offset, PAGE_SIZE); + loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1; ++ DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq); ++ ++ shmem_falloc.waitq = &shmem_falloc_waitq; ++ shmem_falloc.start = unmap_start >> PAGE_SHIFT; ++ shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT; ++ spin_lock(&inode->i_lock); ++ inode->i_private = &shmem_falloc; ++ spin_unlock(&inode->i_lock); + + if ((u64)unmap_end > (u64)unmap_start) + unmap_mapping_range(mapping, unmap_start, + 1 + unmap_end - unmap_start, 0); + shmem_truncate_range(inode, offset, offset + len - 1); + /* No need to unmap again: hole-punching leaves COWed pages */ ++ ++ spin_lock(&inode->i_lock); ++ inode->i_private = NULL; ++ wake_up_all(&shmem_falloc_waitq); ++ spin_unlock(&inode->i_lock); + error = 0; + goto out; + } +@@ -1840,6 +1919,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, + goto out; + } + ++ shmem_falloc.waitq = NULL; + shmem_falloc.start = start; + shmem_falloc.next = start; + shmem_falloc.nr_falloced = 0; +diff --git a/mm/vmscan.c b/mm/vmscan.c +index 6ef876cae8f1..6ef484f0777f 100644 +--- a/mm/vmscan.c ++++ b/mm/vmscan.c +@@ -1540,19 +1540,18 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec, + * If dirty pages are scanned that are not queued for IO, it + * implies that flushers are not keeping up. In this case, flag + * the zone ZONE_TAIL_LRU_DIRTY and kswapd will start writing +- * pages from reclaim context. It will forcibly stall in the +- * next check. ++ * pages from reclaim context. 
+ */ + if (nr_unqueued_dirty == nr_taken) + zone_set_flag(zone, ZONE_TAIL_LRU_DIRTY); + + /* +- * In addition, if kswapd scans pages marked marked for +- * immediate reclaim and under writeback (nr_immediate), it +- * implies that pages are cycling through the LRU faster than ++ * If kswapd scans pages marked marked for immediate ++ * reclaim and under writeback (nr_immediate), it implies ++ * that pages are cycling through the LRU faster than + * they are written so also forcibly stall. + */ +- if (nr_unqueued_dirty == nr_taken || nr_immediate) ++ if (nr_immediate) + congestion_wait(BLK_RW_ASYNC, HZ/10); + } + +diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c +index 6ee48aac776f..7e57135c7cc4 100644 +--- a/net/8021q/vlan_core.c ++++ b/net/8021q/vlan_core.c +@@ -108,8 +108,11 @@ EXPORT_SYMBOL(vlan_dev_vlan_id); + + static struct sk_buff *vlan_reorder_header(struct sk_buff *skb) + { +- if (skb_cow(skb, skb_headroom(skb)) < 0) ++ if (skb_cow(skb, skb_headroom(skb)) < 0) { ++ kfree_skb(skb); + return NULL; ++ } ++ + memmove(skb->data - ETH_HLEN, skb->data - VLAN_ETH_HLEN, 2 * ETH_ALEN); + skb->mac_header += VLAN_HLEN; + return skb; +diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c +index cc0d21895420..1f26a1b8c576 100644 +--- a/net/8021q/vlan_dev.c ++++ b/net/8021q/vlan_dev.c +@@ -635,8 +635,6 @@ static void vlan_dev_uninit(struct net_device *dev) + struct vlan_dev_priv *vlan = vlan_dev_priv(dev); + int i; + +- free_percpu(vlan->vlan_pcpu_stats); +- vlan->vlan_pcpu_stats = NULL; + for (i = 0; i < ARRAY_SIZE(vlan->egress_priority_map); i++) { + while ((pm = vlan->egress_priority_map[i]) != NULL) { + vlan->egress_priority_map[i] = pm->next; +@@ -796,6 +794,15 @@ static const struct net_device_ops vlan_netdev_ops = { + .ndo_get_lock_subclass = vlan_dev_get_lock_subclass, + }; + ++static void vlan_dev_free(struct net_device *dev) ++{ ++ struct vlan_dev_priv *vlan = vlan_dev_priv(dev); ++ ++ free_percpu(vlan->vlan_pcpu_stats); ++ vlan->vlan_pcpu_stats = 
NULL; ++ free_netdev(dev); ++} ++ + void vlan_setup(struct net_device *dev) + { + ether_setup(dev); +@@ -805,7 +812,7 @@ void vlan_setup(struct net_device *dev) + dev->tx_queue_len = 0; + + dev->netdev_ops = &vlan_netdev_ops; +- dev->destructor = free_netdev; ++ dev->destructor = vlan_dev_free; + dev->ethtool_ops = &vlan_ethtool_ops; + + memset(dev->broadcast, 0, ETH_ALEN); +diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c +index 02806c6b2ff3..0c769cc65f25 100644 +--- a/net/appletalk/ddp.c ++++ b/net/appletalk/ddp.c +@@ -1489,8 +1489,6 @@ static int atalk_rcv(struct sk_buff *skb, struct net_device *dev, + goto drop; + + /* Queue packet (standard) */ +- skb->sk = sock; +- + if (sock_queue_rcv_skb(sock, skb) < 0) + goto drop; + +@@ -1644,7 +1642,6 @@ static int atalk_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr + if (!skb) + goto out; + +- skb->sk = sk; + skb_reserve(skb, ddp_dl->header_length); + skb_reserve(skb, dev->hard_header_len); + skb->dev = dev; +diff --git a/net/core/dev.c b/net/core/dev.c +index 4c1b483f7c07..37bddf729e77 100644 +--- a/net/core/dev.c ++++ b/net/core/dev.c +@@ -148,6 +148,9 @@ struct list_head ptype_all __read_mostly; /* Taps */ + static struct list_head offload_base __read_mostly; + + static int netif_rx_internal(struct sk_buff *skb); ++static int call_netdevice_notifiers_info(unsigned long val, ++ struct net_device *dev, ++ struct netdev_notifier_info *info); + + /* + * The @dev_base_head list is protected by @dev_base_lock and the rtnl +@@ -1207,7 +1210,11 @@ EXPORT_SYMBOL(netdev_features_change); + void netdev_state_change(struct net_device *dev) + { + if (dev->flags & IFF_UP) { +- call_netdevice_notifiers(NETDEV_CHANGE, dev); ++ struct netdev_notifier_change_info change_info; ++ ++ change_info.flags_changed = 0; ++ call_netdevice_notifiers_info(NETDEV_CHANGE, dev, ++ &change_info.info); + rtmsg_ifinfo(RTM_NEWLINK, dev, 0, GFP_KERNEL); + } + } +@@ -4051,6 +4058,8 @@ static void napi_reuse_skb(struct napi_struct 
*napi, struct sk_buff *skb) + skb->vlan_tci = 0; + skb->dev = napi->dev; + skb->skb_iif = 0; ++ skb->encapsulation = 0; ++ skb_shinfo(skb)->gso_type = 0; + skb->truesize = SKB_TRUESIZE(skb_end_offset(skb)); + + napi->skb = skb; +diff --git a/net/core/dst.c b/net/core/dst.c +index ca4231ec7347..15b6792e6ebb 100644 +--- a/net/core/dst.c ++++ b/net/core/dst.c +@@ -267,6 +267,15 @@ again: + } + EXPORT_SYMBOL(dst_destroy); + ++static void dst_destroy_rcu(struct rcu_head *head) ++{ ++ struct dst_entry *dst = container_of(head, struct dst_entry, rcu_head); ++ ++ dst = dst_destroy(dst); ++ if (dst) ++ __dst_free(dst); ++} ++ + void dst_release(struct dst_entry *dst) + { + if (dst) { +@@ -274,11 +283,8 @@ void dst_release(struct dst_entry *dst) + + newrefcnt = atomic_dec_return(&dst->__refcnt); + WARN_ON(newrefcnt < 0); +- if (unlikely(dst->flags & DST_NOCACHE) && !newrefcnt) { +- dst = dst_destroy(dst); +- if (dst) +- __dst_free(dst); +- } ++ if (unlikely(dst->flags & DST_NOCACHE) && !newrefcnt) ++ call_rcu(&dst->rcu_head, dst_destroy_rcu); + } + } + EXPORT_SYMBOL(dst_release); +diff --git a/net/core/skbuff.c b/net/core/skbuff.c +index e5ae776ee9b4..7f2e1fce706e 100644 +--- a/net/core/skbuff.c ++++ b/net/core/skbuff.c +@@ -2881,12 +2881,13 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb, + int pos; + int dummy; + ++ __skb_push(head_skb, doffset); + proto = skb_network_protocol(head_skb, &dummy); + if (unlikely(!proto)) + return ERR_PTR(-EINVAL); + + csum = !!can_checksum_protocol(features, proto); +- __skb_push(head_skb, doffset); ++ + headroom = skb_headroom(head_skb); + pos = skb_headlen(head_skb); + +diff --git a/net/dns_resolver/dns_query.c b/net/dns_resolver/dns_query.c +index e7b6d53eef88..f005cc760535 100644 +--- a/net/dns_resolver/dns_query.c ++++ b/net/dns_resolver/dns_query.c +@@ -149,7 +149,9 @@ int dns_query(const char *type, const char *name, size_t namelen, + if (!*_result) + goto put; + +- memcpy(*_result, upayload->data, len + 1); ++ 
memcpy(*_result, upayload->data, len); ++ (*_result)[len] = '\0'; ++ + if (_expiry) + *_expiry = rkey->expiry; + +diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c +index 19ab78aca547..07bd8edef417 100644 +--- a/net/ipv4/af_inet.c ++++ b/net/ipv4/af_inet.c +@@ -1434,6 +1434,9 @@ static int inet_gro_complete(struct sk_buff *skb, int nhoff) + int proto = iph->protocol; + int err = -ENOSYS; + ++ if (skb->encapsulation) ++ skb_set_inner_network_header(skb, nhoff); ++ + csum_replace2(&iph->check, iph->tot_len, newlen); + iph->tot_len = newlen; + +diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c +index f1d32280cb54..2d24f293f977 100644 +--- a/net/ipv4/gre_offload.c ++++ b/net/ipv4/gre_offload.c +@@ -255,6 +255,9 @@ static int gre_gro_complete(struct sk_buff *skb, int nhoff) + int err = -ENOENT; + __be16 type; + ++ skb->encapsulation = 1; ++ skb_shinfo(skb)->gso_type = SKB_GSO_GRE; ++ + type = greh->protocol; + if (greh->flags & GRE_KEY) + grehlen += GRE_HEADER_SECTION; +diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c +index 0134663fdbce..1e4aa8354f93 100644 +--- a/net/ipv4/icmp.c ++++ b/net/ipv4/icmp.c +@@ -732,8 +732,6 @@ static void icmp_unreach(struct sk_buff *skb) + /* fall through */ + case 0: + info = ntohs(icmph->un.frag.mtu); +- if (!info) +- goto out; + } + break; + case ICMP_SR_FAILED: +diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c +index 97e4d1655d26..9db3b877fcaf 100644 +--- a/net/ipv4/igmp.c ++++ b/net/ipv4/igmp.c +@@ -1952,6 +1952,10 @@ int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr) + + rtnl_lock(); + in_dev = ip_mc_find_dev(net, imr); ++ if (!in_dev) { ++ ret = -ENODEV; ++ goto out; ++ } + ifindex = imr->imr_ifindex; + for (imlp = &inet->mc_list; + (iml = rtnl_dereference(*imlp)) != NULL; +@@ -1969,16 +1973,14 @@ int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr) + + *imlp = iml->next_rcu; + +- if (in_dev) +- ip_mc_dec_group(in_dev, group); ++ ip_mc_dec_group(in_dev, group); + rtnl_unlock(); + /* decrease mem now to 
avoid the memleak warning */ + atomic_sub(sizeof(*iml), &sk->sk_omem_alloc); + kfree_rcu(iml, rcu); + return 0; + } +- if (!in_dev) +- ret = -ENODEV; ++out: + rtnl_unlock(); + return ret; + } +diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c +index f4ab72e19af9..96f90b89df32 100644 +--- a/net/ipv4/ip_options.c ++++ b/net/ipv4/ip_options.c +@@ -288,6 +288,10 @@ int ip_options_compile(struct net *net, + optptr++; + continue; + } ++ if (unlikely(l < 2)) { ++ pp_ptr = optptr; ++ goto error; ++ } + optlen = optptr[1]; + if (optlen < 2 || optlen > l) { + pp_ptr = optptr; +diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c +index 0c3a5d17b4a9..62cd9e0ae35b 100644 +--- a/net/ipv4/ip_tunnel.c ++++ b/net/ipv4/ip_tunnel.c +@@ -73,12 +73,7 @@ static void __tunnel_dst_set(struct ip_tunnel_dst *idst, + { + struct dst_entry *old_dst; + +- if (dst) { +- if (dst->flags & DST_NOCACHE) +- dst = NULL; +- else +- dst_clone(dst); +- } ++ dst_clone(dst); + old_dst = xchg((__force struct dst_entry **)&idst->dst, dst); + dst_release(old_dst); + } +@@ -108,13 +103,14 @@ static struct rtable *tunnel_rtable_get(struct ip_tunnel *t, u32 cookie) + + rcu_read_lock(); + dst = rcu_dereference(this_cpu_ptr(t->dst_cache)->dst); ++ if (dst && !atomic_inc_not_zero(&dst->__refcnt)) ++ dst = NULL; + if (dst) { + if (dst->obsolete && dst->ops->check(dst, cookie) == NULL) { +- rcu_read_unlock(); + tunnel_dst_reset(t); +- return NULL; ++ dst_release(dst); ++ dst = NULL; + } +- dst_hold(dst); + } + rcu_read_unlock(); + return (struct rtable *)dst; +@@ -173,6 +169,7 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, + + hlist_for_each_entry_rcu(t, head, hash_node) { + if (remote != t->parms.iph.daddr || ++ t->parms.iph.saddr != 0 || + !(t->dev->flags & IFF_UP)) + continue; + +@@ -189,10 +186,11 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, + head = &itn->tunnels[hash]; + + hlist_for_each_entry_rcu(t, head, hash_node) { +- if ((local != t->parms.iph.saddr && +- 
(local != t->parms.iph.daddr || +- !ipv4_is_multicast(local))) || +- !(t->dev->flags & IFF_UP)) ++ if ((local != t->parms.iph.saddr || t->parms.iph.daddr != 0) && ++ (local != t->parms.iph.daddr || !ipv4_is_multicast(local))) ++ continue; ++ ++ if (!(t->dev->flags & IFF_UP)) + continue; + + if (!ip_tunnel_key_match(&t->parms, flags, key)) +@@ -209,6 +207,8 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, + + hlist_for_each_entry_rcu(t, head, hash_node) { + if (t->parms.i_key != key || ++ t->parms.iph.saddr != 0 || ++ t->parms.iph.daddr != 0 || + !(t->dev->flags & IFF_UP)) + continue; + +diff --git a/net/ipv4/route.c b/net/ipv4/route.c +index 134437309b1e..031553f8a306 100644 +--- a/net/ipv4/route.c ++++ b/net/ipv4/route.c +@@ -1029,7 +1029,7 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) + const struct iphdr *iph = (const struct iphdr *) skb->data; + struct flowi4 fl4; + struct rtable *rt; +- struct dst_entry *dst; ++ struct dst_entry *odst = NULL; + bool new = false; + + bh_lock_sock(sk); +@@ -1037,16 +1037,17 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) + if (!ip_sk_accept_pmtu(sk)) + goto out; + +- rt = (struct rtable *) __sk_dst_get(sk); ++ odst = sk_dst_get(sk); + +- if (sock_owned_by_user(sk) || !rt) { ++ if (sock_owned_by_user(sk) || !odst) { + __ipv4_sk_update_pmtu(skb, sk, mtu); + goto out; + } + + __build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0); + +- if (!__sk_dst_check(sk, 0)) { ++ rt = (struct rtable *)odst; ++ if (odst->obsolete && odst->ops->check(odst, 0) == NULL) { + rt = ip_route_output_flow(sock_net(sk), &fl4, sk); + if (IS_ERR(rt)) + goto out; +@@ -1056,8 +1057,7 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) + + __ip_rt_update_pmtu((struct rtable *) rt->dst.path, &fl4, mtu); + +- dst = dst_check(&rt->dst, 0); +- if (!dst) { ++ if (!dst_check(&rt->dst, 0)) { + if (new) + dst_release(&rt->dst); + +@@ -1069,10 +1069,11 @@ void 
ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu) + } + + if (new) +- __sk_dst_set(sk, &rt->dst); ++ sk_dst_set(sk, &rt->dst); + + out: + bh_unlock_sock(sk); ++ dst_release(odst); + } + EXPORT_SYMBOL_GPL(ipv4_sk_update_pmtu); + +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c +index 97c8f5620c43..b48fba0aaa92 100644 +--- a/net/ipv4/tcp.c ++++ b/net/ipv4/tcp.c +@@ -1108,7 +1108,7 @@ int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg, + if (unlikely(tp->repair)) { + if (tp->repair_queue == TCP_RECV_QUEUE) { + copied = tcp_send_rcvq(sk, msg, size); +- goto out; ++ goto out_nopush; + } + + err = -EINVAL; +@@ -1282,6 +1282,7 @@ wait_for_memory: + out: + if (copied) + tcp_push(sk, flags, mss_now, tp->nonagle, size_goal); ++out_nopush: + release_sock(sk); + return copied + copied_syn; + +diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c +index e3647465138b..3898694d0300 100644 +--- a/net/ipv4/tcp_input.c ++++ b/net/ipv4/tcp_input.c +@@ -1113,7 +1113,7 @@ static bool tcp_check_dsack(struct sock *sk, const struct sk_buff *ack_skb, + } + + /* D-SACK for already forgotten data... Do dumb counting. */ +- if (dup_sack && tp->undo_marker && tp->undo_retrans && ++ if (dup_sack && tp->undo_marker && tp->undo_retrans > 0 && + !after(end_seq_0, prior_snd_una) && + after(end_seq_0, tp->undo_marker)) + tp->undo_retrans--; +@@ -1169,7 +1169,7 @@ static int tcp_match_skb_to_sack(struct sock *sk, struct sk_buff *skb, + unsigned int new_len = (pkt_len / mss) * mss; + if (!in_sack && new_len < pkt_len) { + new_len += mss; +- if (new_len > skb->len) ++ if (new_len >= skb->len) + return 0; + } + pkt_len = new_len; +@@ -1193,7 +1193,7 @@ static u8 tcp_sacktag_one(struct sock *sk, + + /* Account D-SACK for retransmitted packet. 
*/ + if (dup_sack && (sacked & TCPCB_RETRANS)) { +- if (tp->undo_marker && tp->undo_retrans && ++ if (tp->undo_marker && tp->undo_retrans > 0 && + after(end_seq, tp->undo_marker)) + tp->undo_retrans--; + if (sacked & TCPCB_SACKED_ACKED) +@@ -1894,7 +1894,7 @@ static void tcp_clear_retrans_partial(struct tcp_sock *tp) + tp->lost_out = 0; + + tp->undo_marker = 0; +- tp->undo_retrans = 0; ++ tp->undo_retrans = -1; + } + + void tcp_clear_retrans(struct tcp_sock *tp) +@@ -2663,7 +2663,7 @@ static void tcp_enter_recovery(struct sock *sk, bool ece_ack) + + tp->prior_ssthresh = 0; + tp->undo_marker = tp->snd_una; +- tp->undo_retrans = tp->retrans_out; ++ tp->undo_retrans = tp->retrans_out ? : -1; + + if (inet_csk(sk)->icsk_ca_state < TCP_CA_CWR) { + if (!ece_ack) +diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c +index b92b81718ca4..c25953a386d0 100644 +--- a/net/ipv4/tcp_offload.c ++++ b/net/ipv4/tcp_offload.c +@@ -310,7 +310,7 @@ static int tcp4_gro_complete(struct sk_buff *skb, int thoff) + + th->check = ~tcp_v4_check(skb->len - thoff, iph->saddr, + iph->daddr, 0); +- skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4; ++ skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV4; + + return tcp_gro_complete(skb); + } +diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c +index 17a11e65e57f..b3d1addd816b 100644 +--- a/net/ipv4/tcp_output.c ++++ b/net/ipv4/tcp_output.c +@@ -2448,8 +2448,6 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb) + if (!tp->retrans_stamp) + tp->retrans_stamp = TCP_SKB_CB(skb)->when; + +- tp->undo_retrans += tcp_skb_pcount(skb); +- + /* snd_nxt is stored to detect loss of retransmitted segment, + * see tcp_input.c tcp_sacktag_write_queue(). 
+ */ +@@ -2457,6 +2455,10 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb) + } else { + NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL); + } ++ ++ if (tp->undo_retrans < 0) ++ tp->undo_retrans = 0; ++ tp->undo_retrans += tcp_skb_pcount(skb); + return err; + } + +diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c +index 8517d3cd1aed..01b0ff9a0c2c 100644 +--- a/net/ipv6/tcpv6_offload.c ++++ b/net/ipv6/tcpv6_offload.c +@@ -73,7 +73,7 @@ static int tcp6_gro_complete(struct sk_buff *skb, int thoff) + + th->check = ~tcp_v6_check(skb->len - thoff, &iph->saddr, + &iph->daddr, 0); +- skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; ++ skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV6; + + return tcp_gro_complete(skb); + } +diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c +index 7f40fd25acae..0dfe894afd48 100644 +--- a/net/netlink/af_netlink.c ++++ b/net/netlink/af_netlink.c +@@ -636,7 +636,7 @@ static unsigned int netlink_poll(struct file *file, struct socket *sock, + while (nlk->cb_running && netlink_dump_space(nlk)) { + err = netlink_dump(sk); + if (err < 0) { +- sk->sk_err = err; ++ sk->sk_err = -err; + sk->sk_error_report(sk); + break; + } +@@ -2448,7 +2448,7 @@ static int netlink_recvmsg(struct kiocb *kiocb, struct socket *sock, + atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) { + ret = netlink_dump(sk); + if (ret) { +- sk->sk_err = ret; ++ sk->sk_err = -ret; + sk->sk_error_report(sk); + } + } +diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c +index c82fdc1eab7c..dfa532f00d88 100644 +--- a/net/sctp/sysctl.c ++++ b/net/sctp/sysctl.c +@@ -307,41 +307,40 @@ static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, int write, + loff_t *ppos) + { + struct net *net = current->nsproxy->net_ns; +- char tmp[8]; + struct ctl_table tbl; +- int ret; +- int changed = 0; ++ bool changed = false; + char *none = "none"; ++ char tmp[8]; ++ int ret; + + memset(&tbl, 0, sizeof(struct ctl_table)); + + if (write) { + tbl.data = tmp; +- 
tbl.maxlen = 8; ++ tbl.maxlen = sizeof(tmp); + } else { + tbl.data = net->sctp.sctp_hmac_alg ? : none; + tbl.maxlen = strlen(tbl.data); + } +- ret = proc_dostring(&tbl, write, buffer, lenp, ppos); + +- if (write) { ++ ret = proc_dostring(&tbl, write, buffer, lenp, ppos); ++ if (write && ret == 0) { + #ifdef CONFIG_CRYPTO_MD5 + if (!strncmp(tmp, "md5", 3)) { + net->sctp.sctp_hmac_alg = "md5"; +- changed = 1; ++ changed = true; + } + #endif + #ifdef CONFIG_CRYPTO_SHA1 + if (!strncmp(tmp, "sha1", 4)) { + net->sctp.sctp_hmac_alg = "sha1"; +- changed = 1; ++ changed = true; + } + #endif + if (!strncmp(tmp, "none", 4)) { + net->sctp.sctp_hmac_alg = NULL; +- changed = 1; ++ changed = true; + } +- + if (!changed) + ret = -EINVAL; + } +@@ -354,11 +353,10 @@ static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write, + loff_t *ppos) + { + struct net *net = current->nsproxy->net_ns; +- int new_value; +- struct ctl_table tbl; + unsigned int min = *(unsigned int *) ctl->extra1; + unsigned int max = *(unsigned int *) ctl->extra2; +- int ret; ++ struct ctl_table tbl; ++ int ret, new_value; + + memset(&tbl, 0, sizeof(struct ctl_table)); + tbl.maxlen = sizeof(unsigned int); +@@ -367,12 +365,15 @@ static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write, + tbl.data = &new_value; + else + tbl.data = &net->sctp.rto_min; ++ + ret = proc_dointvec(&tbl, write, buffer, lenp, ppos); +- if (write) { +- if (ret || new_value > max || new_value < min) ++ if (write && ret == 0) { ++ if (new_value > max || new_value < min) + return -EINVAL; ++ + net->sctp.rto_min = new_value; + } ++ + return ret; + } + +@@ -381,11 +382,10 @@ static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write, + loff_t *ppos) + { + struct net *net = current->nsproxy->net_ns; +- int new_value; +- struct ctl_table tbl; + unsigned int min = *(unsigned int *) ctl->extra1; + unsigned int max = *(unsigned int *) ctl->extra2; +- int ret; ++ struct ctl_table tbl; ++ int ret, new_value; + + memset(&tbl, 0, 
sizeof(struct ctl_table)); + tbl.maxlen = sizeof(unsigned int); +@@ -394,12 +394,15 @@ static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write, + tbl.data = &new_value; + else + tbl.data = &net->sctp.rto_max; ++ + ret = proc_dointvec(&tbl, write, buffer, lenp, ppos); +- if (write) { +- if (ret || new_value > max || new_value < min) ++ if (write && ret == 0) { ++ if (new_value > max || new_value < min) + return -EINVAL; ++ + net->sctp.rto_max = new_value; + } ++ + return ret; + } + +@@ -420,8 +423,7 @@ static int proc_sctp_do_auth(struct ctl_table *ctl, int write, + tbl.data = &net->sctp.auth_enable; + + ret = proc_dointvec(&tbl, write, buffer, lenp, ppos); +- +- if (write) { ++ if (write && ret == 0) { + struct sock *sk = net->sctp.ctl_sock; + + net->sctp.auth_enable = new_value; +diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c +index 85c64658bd0b..b6842fdb53d4 100644 +--- a/net/sctp/ulpevent.c ++++ b/net/sctp/ulpevent.c +@@ -366,9 +366,10 @@ fail: + * specification [SCTP] and any extensions for a list of possible + * error formats. + */ +-struct sctp_ulpevent *sctp_ulpevent_make_remote_error( +- const struct sctp_association *asoc, struct sctp_chunk *chunk, +- __u16 flags, gfp_t gfp) ++struct sctp_ulpevent * ++sctp_ulpevent_make_remote_error(const struct sctp_association *asoc, ++ struct sctp_chunk *chunk, __u16 flags, ++ gfp_t gfp) + { + struct sctp_ulpevent *event; + struct sctp_remote_error *sre; +@@ -387,8 +388,7 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error( + /* Copy the skb to a new skb with room for us to prepend + * notification with. + */ +- skb = skb_copy_expand(chunk->skb, sizeof(struct sctp_remote_error), +- 0, gfp); ++ skb = skb_copy_expand(chunk->skb, sizeof(*sre), 0, gfp); + + /* Pull off the rest of the cause TLV from the chunk. 
*/ + skb_pull(chunk->skb, elen); +@@ -399,62 +399,21 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error( + event = sctp_skb2event(skb); + sctp_ulpevent_init(event, MSG_NOTIFICATION, skb->truesize); + +- sre = (struct sctp_remote_error *) +- skb_push(skb, sizeof(struct sctp_remote_error)); ++ sre = (struct sctp_remote_error *) skb_push(skb, sizeof(*sre)); + + /* Trim the buffer to the right length. */ +- skb_trim(skb, sizeof(struct sctp_remote_error) + elen); ++ skb_trim(skb, sizeof(*sre) + elen); + +- /* Socket Extensions for SCTP +- * 5.3.1.3 SCTP_REMOTE_ERROR +- * +- * sre_type: +- * It should be SCTP_REMOTE_ERROR. +- */ ++ /* RFC6458, Section 6.1.3. SCTP_REMOTE_ERROR */ ++ memset(sre, 0, sizeof(*sre)); + sre->sre_type = SCTP_REMOTE_ERROR; +- +- /* +- * Socket Extensions for SCTP +- * 5.3.1.3 SCTP_REMOTE_ERROR +- * +- * sre_flags: 16 bits (unsigned integer) +- * Currently unused. +- */ + sre->sre_flags = 0; +- +- /* Socket Extensions for SCTP +- * 5.3.1.3 SCTP_REMOTE_ERROR +- * +- * sre_length: sizeof (__u32) +- * +- * This field is the total length of the notification data, +- * including the notification header. +- */ + sre->sre_length = skb->len; +- +- /* Socket Extensions for SCTP +- * 5.3.1.3 SCTP_REMOTE_ERROR +- * +- * sre_error: 16 bits (unsigned integer) +- * This value represents one of the Operational Error causes defined in +- * the SCTP specification, in network byte order. +- */ + sre->sre_error = cause; +- +- /* Socket Extensions for SCTP +- * 5.3.1.3 SCTP_REMOTE_ERROR +- * +- * sre_assoc_id: sizeof (sctp_assoc_t) +- * +- * The association id field, holds the identifier for the association. +- * All notifications for a given association have the same association +- * identifier. For TCP style socket, this field is ignored. 
+- */ + sctp_ulpevent_set_owner(event, asoc); + sre->sre_assoc_id = sctp_assoc2id(asoc); + + return event; +- + fail: + return NULL; + } +@@ -899,7 +858,9 @@ __u16 sctp_ulpevent_get_notification_type(const struct sctp_ulpevent *event) + return notification->sn_header.sn_type; + } + +-/* Copy out the sndrcvinfo into a msghdr. */ ++/* RFC6458, Section 5.3.2. SCTP Header Information Structure ++ * (SCTP_SNDRCV, DEPRECATED) ++ */ + void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, + struct msghdr *msghdr) + { +@@ -908,74 +869,21 @@ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event, + if (sctp_ulpevent_is_notification(event)) + return; + +- /* Sockets API Extensions for SCTP +- * Section 5.2.2 SCTP Header Information Structure (SCTP_SNDRCV) +- * +- * sinfo_stream: 16 bits (unsigned integer) +- * +- * For recvmsg() the SCTP stack places the message's stream number in +- * this value. +- */ ++ memset(&sinfo, 0, sizeof(sinfo)); + sinfo.sinfo_stream = event->stream; +- /* sinfo_ssn: 16 bits (unsigned integer) +- * +- * For recvmsg() this value contains the stream sequence number that +- * the remote endpoint placed in the DATA chunk. For fragmented +- * messages this is the same number for all deliveries of the message +- * (if more than one recvmsg() is needed to read the message). +- */ + sinfo.sinfo_ssn = event->ssn; +- /* sinfo_ppid: 32 bits (unsigned integer) +- * +- * In recvmsg() this value is +- * the same information that was passed by the upper layer in the peer +- * application. Please note that byte order issues are NOT accounted +- * for and this information is passed opaquely by the SCTP stack from +- * one end to the other. +- */ + sinfo.sinfo_ppid = event->ppid; +- /* sinfo_flags: 16 bits (unsigned integer) +- * +- * This field may contain any of the following flags and is composed of +- * a bitwise OR of these values. 
+- * +- * recvmsg() flags: +- * +- * SCTP_UNORDERED - This flag is present when the message was sent +- * non-ordered. +- */ + sinfo.sinfo_flags = event->flags; +- /* sinfo_tsn: 32 bit (unsigned integer) +- * +- * For the receiving side, this field holds a TSN that was +- * assigned to one of the SCTP Data Chunks. +- */ + sinfo.sinfo_tsn = event->tsn; +- /* sinfo_cumtsn: 32 bit (unsigned integer) +- * +- * This field will hold the current cumulative TSN as +- * known by the underlying SCTP layer. Note this field is +- * ignored when sending and only valid for a receive +- * operation when sinfo_flags are set to SCTP_UNORDERED. +- */ + sinfo.sinfo_cumtsn = event->cumtsn; +- /* sinfo_assoc_id: sizeof (sctp_assoc_t) +- * +- * The association handle field, sinfo_assoc_id, holds the identifier +- * for the association announced in the COMMUNICATION_UP notification. +- * All notifications for a given association have the same identifier. +- * Ignored for one-to-one style sockets. +- */ + sinfo.sinfo_assoc_id = sctp_assoc2id(event->asoc); +- +- /* context value that is set via SCTP_CONTEXT socket option. */ ++ /* Context value that is set via SCTP_CONTEXT socket option. */ + sinfo.sinfo_context = event->asoc->default_rcv_context; +- + /* These fields are not used while receiving. 
*/ + sinfo.sinfo_timetolive = 0; + + put_cmsg(msghdr, IPPROTO_SCTP, SCTP_SNDRCV, +- sizeof(struct sctp_sndrcvinfo), (void *)&sinfo); ++ sizeof(sinfo), &sinfo); + } + + /* Do accounting for bytes received and hold a reference to the association +diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c +index bf860d9e75af..3ca45bf5029f 100644 +--- a/net/tipc/bcast.c ++++ b/net/tipc/bcast.c +@@ -537,6 +537,7 @@ receive: + + buf = node->bclink.deferred_head; + node->bclink.deferred_head = buf->next; ++ buf->next = NULL; + node->bclink.deferred_size--; + goto receive; + } +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index 22f7883fcb9a..7ec91424ba22 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -2930,7 +2930,7 @@ static int azx_suspend(struct device *dev) + struct azx *chip = card->private_data; + struct azx_pcm *p; + +- if (chip->disabled) ++ if (chip->disabled || chip->init_failed) + return 0; + + snd_power_change_state(card, SNDRV_CTL_POWER_D3hot); +@@ -2961,7 +2961,7 @@ static int azx_resume(struct device *dev) + struct snd_card *card = dev_get_drvdata(dev); + struct azx *chip = card->private_data; + +- if (chip->disabled) ++ if (chip->disabled || chip->init_failed) + return 0; + + if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) +@@ -2996,7 +2996,7 @@ static int azx_runtime_suspend(struct device *dev) + struct snd_card *card = dev_get_drvdata(dev); + struct azx *chip = card->private_data; + +- if (chip->disabled) ++ if (chip->disabled || chip->init_failed) + return 0; + + if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME)) +@@ -3022,7 +3022,7 @@ static int azx_runtime_resume(struct device *dev) + struct hda_codec *codec; + int status; + +- if (chip->disabled) ++ if (chip->disabled || chip->init_failed) + return 0; + + if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME)) +@@ -3057,7 +3057,7 @@ static int azx_runtime_idle(struct device *dev) + struct snd_card *card = dev_get_drvdata(dev); + struct azx *chip = 
card->private_data; + +- if (chip->disabled) ++ if (chip->disabled || chip->init_failed) + return 0; + + if (!power_save_controller || +diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c +index 3abfe2a642ec..d135c906caff 100644 +--- a/sound/pci/hda/patch_hdmi.c ++++ b/sound/pci/hda/patch_hdmi.c +@@ -1062,6 +1062,7 @@ static void hdmi_pin_setup_infoframe(struct hda_codec *codec, + { + union audio_infoframe ai; + ++ memset(&ai, 0, sizeof(ai)); + if (conn_type == 0) { /* HDMI */ + struct hdmi_audio_infoframe *hdmi_ai = &ai.hdmi; +
commit: 0a8452bd52ddbe1bdbbbb9180eec4d67f595d3d2 Author: Mike Pagano gentoo org> AuthorDate: Mon Jul 28 19:17:05 2014 +0000 Commit: Mike Pagano gentoo org> CommitDate: Mon Jul 28 19:17:05 2014 +0000 URL: http://sources.gentoo.org/gitweb/?p=proj/linux-patches.git;a=commit;h=0a8452bd Linux patch 3.14.14 --- 0000_README | 4 + 1013_linux-3.14.14.patch | 2814 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 2818 insertions(+) diff --git a/0000_README b/0000_README index 4e6bfc3..d44b3d0 100644 --- a/0000_README +++ b/0000_README @@ -94,6 +94,10 @@ Patch: 1012_linux-3.14.13.patch From: http://www.kernel.org Desc: Linux 3.14.13 +Patch: 1013_linux-3.14.14.patch +From: http://www.kernel.org +Desc: Linux 3.14.14 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1013_linux-3.14.14.patch b/1013_linux-3.14.14.patch
new file mode 100644
index 0000000..35b4ef6
--- /dev/null
+++ b/1013_linux-3.14.14.patch
@@ -0,0 +1,2814 @@
+diff --git a/Makefile b/Makefile
+index 7a2981c972ae..230c7f694ab7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 3
+ PATCHLEVEL = 14
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Remembering Coco
+ 
+diff --git a/arch/arc/include/uapi/asm/ptrace.h b/arch/arc/include/uapi/asm/ptrace.h
+index 2618cc13ba75..76a7739aab1c 100644
+--- a/arch/arc/include/uapi/asm/ptrace.h
++++ b/arch/arc/include/uapi/asm/ptrace.h
+@@ -11,6 +11,7 @@
+ #ifndef _UAPI__ASM_ARC_PTRACE_H
+ #define _UAPI__ASM_ARC_PTRACE_H
+ 
++#define PTRACE_GET_THREAD_AREA	25
+ 
+ #ifndef __ASSEMBLY__
+ /*
+diff --git a/arch/arc/kernel/ptrace.c b/arch/arc/kernel/ptrace.c
+index 5d76706139dd..13b3ffb27a38 100644
+--- a/arch/arc/kernel/ptrace.c
++++ b/arch/arc/kernel/ptrace.c
+@@ -146,6 +146,10 @@ long arch_ptrace(struct task_struct *child, long request,
+ 	pr_debug("REQ=%ld: ADDR =0x%lx, DATA=0x%lx)\n", request, addr, data);
+ 
+ 	switch (request) {
++	case PTRACE_GET_THREAD_AREA:
++		ret = put_user(task_thread_info(child)->thr_ptr,
++			       (unsigned long __user *)data);
++		break;
+ 	default:
+ 		ret = ptrace_request(child, request, addr, data);
+ 		break;
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 44298add8a48..4733d327cfb1 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -6,6 +6,7 @@ config ARM
+ 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ 	select ARCH_HAVE_CUSTOM_GPIO_H
+ 	select ARCH_MIGHT_HAVE_PC_PARPORT
++	select ARCH_SUPPORTS_ATOMIC_RMW
+ 	select ARCH_USE_BUILTIN_BSWAP
+ 	select ARCH_USE_CMPXCHG_LOCKREF
+ 	select ARCH_WANT_IPC_PARSE_VERSION
+diff --git a/arch/arm/boot/dts/imx25.dtsi b/arch/arm/boot/dts/imx25.dtsi
+index 737ed5da8f71..de1611966d8b 100644
+--- a/arch/arm/boot/dts/imx25.dtsi
++++ b/arch/arm/boot/dts/imx25.dtsi
+@@ -30,6 +30,7 @@
+ 		spi2 = &spi3;
+ 		usb0 = &usbotg;
+ 		usb1 = &usbhost1;
++		ethernet0 = &fec;
+ 	};
+ 
+ 	cpus {
+diff --git a/arch/arm/boot/dts/imx27.dtsi b/arch/arm/boot/dts/imx27.dtsi
+index 826231eb4446..da2eb7f6a5b2 100644
+--- a/arch/arm/boot/dts/imx27.dtsi
++++ b/arch/arm/boot/dts/imx27.dtsi
+@@ -30,6 +30,7 @@
+ 		spi0 = &cspi1;
+ 		spi1 = &cspi2;
+ 		spi2 = &cspi3;
++		ethernet0 = &fec;
+ 	};
+ 
+ 	aitc: aitc-interrupt-controller@e0000000 {
+diff --git a/arch/arm/boot/dts/imx51.dtsi b/arch/arm/boot/dts/imx51.dtsi
+index 4bcdd3ad15e5..e1b601595a09 100644
+--- a/arch/arm/boot/dts/imx51.dtsi
++++ b/arch/arm/boot/dts/imx51.dtsi
+@@ -27,6 +27,7 @@
+ 		spi0 = &ecspi1;
+ 		spi1 = &ecspi2;
+ 		spi2 = &cspi;
++		ethernet0 = &fec;
+ 	};
+ 
+ 	tzic: tz-interrupt-controller@e0000000 {
+diff --git a/arch/arm/boot/dts/imx53.dtsi b/arch/arm/boot/dts/imx53.dtsi
+index dc72353de0b3..50eda500f39a 100644
+--- a/arch/arm/boot/dts/imx53.dtsi
++++ b/arch/arm/boot/dts/imx53.dtsi
+@@ -33,6 +33,7 @@
+ 		spi0 = &ecspi1;
+ 		spi1 = &ecspi2;
+ 		spi2 = &cspi;
++		ethernet0 = &fec;
+ 	};
+ 
+ 	cpus {
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 27bbcfc7202a..65b788410bd9 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -2,6 +2,7 @@ config ARM64
+ 	def_bool y
+ 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
+ 	select ARCH_USE_CMPXCHG_LOCKREF
++	select ARCH_SUPPORTS_ATOMIC_RMW
+ 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ 	select ARCH_WANT_OPTIONAL_GPIOLIB
+ 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 2156fa2d25fe..ee3c6608126a 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -141,6 +141,7 @@ config PPC
+ 	select HAVE_DEBUG_STACKOVERFLOW
+ 	select HAVE_IRQ_EXIT_ON_IRQ_STACK
+ 	select ARCH_USE_CMPXCHG_LOCKREF if PPC64
++	select ARCH_SUPPORTS_ATOMIC_RMW
+ 
+ config GENERIC_CSUM
+ 	def_bool CPU_LITTLE_ENDIAN
+diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
+index 7d8b7e94b93b..b398c68b2713 100644
+--- a/arch/sparc/Kconfig
++++ b/arch/sparc/Kconfig
+@@ -77,6 +77,7 @@ config SPARC64
+ 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
+ 	select HAVE_C_RECORDMCOUNT
+ 	select NO_BOOTMEM
++	select ARCH_SUPPORTS_ATOMIC_RMW
+ 
+ config ARCH_DEFCONFIG
+ 	string
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 1981dd9b8a11..7324107acb40 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -127,6 +127,7 @@ config X86
+ 	select HAVE_DEBUG_STACKOVERFLOW
+ 	select HAVE_IRQ_EXIT_ON_IRQ_STACK if X86_64
+ 	select HAVE_CC_STACKPROTECTOR
++	select ARCH_SUPPORTS_ATOMIC_RMW
+ 
+ config INSTRUCTION_DECODER
+ 	def_bool y
+diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
+index aa333d966886..1340ebfcb467 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel.c
++++ b/arch/x86/kernel/cpu/perf_event_intel.c
+@@ -1383,6 +1383,15 @@ again:
+ 	intel_pmu_lbr_read();
+ 
+ 	/*
++	 * CondChgd bit 63 doesn't mean any overflow status. Ignore
++	 * and clear the bit.
++	 */
++	if (__test_and_clear_bit(63, (unsigned long *)&status)) {
++		if (!status)
++			goto done;
++	}
++
++	/*
+ 	 * PEBS overflow sets bit 62 in the global status register
+ 	 */
+ 	if (__test_and_clear_bit(62, (unsigned long *)&status)) {
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index cfbe99f88830..e0d1d7a8354e 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -921,9 +921,9 @@ static int time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
+ 		tsc_khz = cpufreq_scale(tsc_khz_ref, ref_freq, freq->new);
+ 		if (!(freq->flags & CPUFREQ_CONST_LOOPS))
+ 			mark_tsc_unstable("cpufreq changes");
+-	}
+ 
+-	set_cyc2ns_scale(tsc_khz, freq->cpu);
++		set_cyc2ns_scale(tsc_khz, freq->cpu);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index f6f497450560..e36a0245f2c1 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -406,6 +406,7 @@ static int h5_rx_3wire_hdr(struct hci_uart *hu, unsigned char c)
+ 	    H5_HDR_PKT_TYPE(hdr) != HCI_3WIRE_LINK_PKT) {
+ 		BT_ERR("Non-link packet received in non-active state");
+ 		h5_reset_rx(h5);
++		return 0;
+ 	}
+ 
+ 	h5->rx_func = h5_rx_payload;
+diff --git a/drivers/gpu/drm/qxl/qxl_irq.c b/drivers/gpu/drm/qxl/qxl_irq.c
+index 28f84b4fce32..3485bdccf8b8 100644
+--- a/drivers/gpu/drm/qxl/qxl_irq.c
++++ b/drivers/gpu/drm/qxl/qxl_irq.c
+@@ -33,6 +33,9 @@ irqreturn_t qxl_irq_handler(int irq, void *arg)
+ 
+ 	pending = xchg(&qdev->ram_header->int_pending, 0);
+ 
++	if (!pending)
++		return IRQ_NONE;
++
+ 	atomic_inc(&qdev->irq_received);
+ 
+ 	if (pending & QXL_INTERRUPT_DISPLAY) {
+diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
+index ccca8b224d18..e7bfd5502410 100644
+--- a/drivers/gpu/drm/radeon/atombios_encoders.c
++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
+@@ -183,7 +183,6 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder,
+ 	struct backlight_properties props;
+ 	struct radeon_backlight_privdata *pdata;
+ 	struct radeon_encoder_atom_dig *dig;
+-	u8 backlight_level;
+ 	char bl_name[16];
+ 
+ 	/* Mac laptops with multiple GPUs use the gmux driver for backlight
+@@ -222,12 +221,17 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder,
+ 
+ 	pdata->encoder = radeon_encoder;
+ 
+-	backlight_level = radeon_atom_get_backlight_level_from_reg(rdev);
+-
+ 	dig = radeon_encoder->enc_priv;
+ 	dig->bl_dev = bd;
+ 
+ 	bd->props.brightness = radeon_atom_backlight_get_brightness(bd);
++	/* Set a reasonable default here if the level is 0 otherwise
++	 * fbdev will attempt to turn the backlight on after console
++	 * unblanking and it will try and restore 0 which turns the backlight
++	 * off again.
++	 */
++	if (bd->props.brightness == 0)
++		bd->props.brightness = RADEON_MAX_BL_LEVEL;
+ 	bd->props.power = FB_BLANK_UNBLANK;
+ 	backlight_update_status(bd);
+ 
+diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
+index df6d0079d0af..11d06c7b5afa 100644
+--- a/drivers/gpu/drm/radeon/radeon_display.c
++++ b/drivers/gpu/drm/radeon/radeon_display.c
+@@ -755,6 +755,10 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector)
+ 	struct radeon_device *rdev = dev->dev_private;
+ 	int ret = 0;
+ 
++	/* don't leak the edid if we already fetched it in detect() */
++	if (radeon_connector->edid)
++		goto got_edid;
++
+ 	/* on hw with routers, select right port */
+ 	if (radeon_connector->router.ddc_valid)
+ 		radeon_router_select_ddc_port(radeon_connector);
+@@ -794,6 +798,7 @@ int radeon_ddc_get_modes(struct radeon_connector *radeon_connector)
+ 		radeon_connector->edid = radeon_bios_get_hardcoded_edid(rdev);
+ 	}
+ 	if (radeon_connector->edid) {
++got_edid:
+ 		drm_mode_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid);
+ 		ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid);
+ 		drm_edid_to_eld(&radeon_connector->base, radeon_connector->edid);
+diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c
+index 09988b289622..816782a65488 100644
+--- a/drivers/hv/hv_kvp.c
++++ b/drivers/hv/hv_kvp.c
+@@ -127,6 +127,15 @@ kvp_work_func(struct work_struct *dummy)
+ 	kvp_respond_to_host(NULL, HV_E_FAIL);
+ }
+ 
++static void poll_channel(struct vmbus_channel *channel)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&channel->inbound_lock, flags);
++	hv_kvp_onchannelcallback(channel);
++	spin_unlock_irqrestore(&channel->inbound_lock, flags);
++}
++
+ static int kvp_handle_handshake(struct hv_kvp_msg *msg)
+ {
+ 	int ret = 1;
+@@ -155,7 +164,7 @@ static int kvp_handle_handshake(struct hv_kvp_msg *msg)
+ 		kvp_register(dm_reg_value);
+ 		kvp_transaction.active = false;
+ 		if (kvp_transaction.kvp_context)
+-			hv_kvp_onchannelcallback(kvp_transaction.kvp_context);
++			poll_channel(kvp_transaction.kvp_context);
+ 	}
+ 	return ret;
+ }
+@@ -568,6 +577,7 @@ response_done:
+ 
+ 	vmbus_sendpacket(channel, recv_buffer, buf_len, req_id,
+ 				VM_PKT_DATA_INBAND, 0);
++	poll_channel(channel);
+ 
+ }
+ 
+@@ -603,7 +613,7 @@ void hv_kvp_onchannelcallback(void *context)
+ 		return;
+ 	}
+ 
+-	vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 2, &recvlen,
++	vmbus_recvpacket(channel, recv_buffer, PAGE_SIZE * 4, &recvlen,
+ 			 &requestid);
+ 
+ 	if (recvlen > 0) {
+diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
+index 62dfd246b948..d016be36cc03 100644
+--- a/drivers/hv/hv_util.c
++++ b/drivers/hv/hv_util.c
+@@ -312,7 +312,7 @@ static int util_probe(struct hv_device *dev,
+ 		(struct hv_util_service *)dev_id->driver_data;
+ 	int ret;
+ 
+-	srv->recv_buffer = kmalloc(PAGE_SIZE * 2, GFP_KERNEL);
++	srv->recv_buffer = kmalloc(PAGE_SIZE * 4, GFP_KERNEL);
+ 	if (!srv->recv_buffer)
+ 		return -ENOMEM;
+ 	if (srv->util_init) {
+diff --git a/drivers/hwmon/adt7470.c b/drivers/hwmon/adt7470.c
+index 0f4dea5ccf17..9ee3913850d6 100644
+--- a/drivers/hwmon/adt7470.c
++++ b/drivers/hwmon/adt7470.c
+@@ -515,7 +515,7 @@ static ssize_t set_temp_min(struct device *dev,
+ 		return -EINVAL;
+ 
+ 	temp = DIV_ROUND_CLOSEST(temp, 1000);
+-	temp = clamp_val(temp, 0, 255);
++	temp = clamp_val(temp, -128, 127);
+ 
+ 	mutex_lock(&data->lock);
+ 	data->temp_min[attr->index] = temp;
+@@ -549,7 +549,7 @@ static ssize_t set_temp_max(struct device *dev,
+ 		return -EINVAL;
+ 
+ 	temp = DIV_ROUND_CLOSEST(temp, 1000);
+-	temp = clamp_val(temp, 0, 255);
++	temp = clamp_val(temp, -128, 127);
+ 
+ 	mutex_lock(&data->lock);
+ 	data->temp_max[attr->index] = temp;
+@@ -826,7 +826,7 @@ static ssize_t set_pwm_tmin(struct device *dev,
+ 		return -EINVAL;
+ 
+ 	temp = DIV_ROUND_CLOSEST(temp, 1000);
+-	temp = clamp_val(temp, 0, 255);
++	temp = clamp_val(temp, -128, 127);
+ 
+ 	mutex_lock(&data->lock);
+ 	data->pwm_tmin[attr->index] = temp;
+diff --git a/drivers/hwmon/da9052-hwmon.c b/drivers/hwmon/da9052-hwmon.c
+index afd31042b452..d14ab3c45daa 100644
+--- a/drivers/hwmon/da9052-hwmon.c
++++ b/drivers/hwmon/da9052-hwmon.c
+@@ -194,7 +194,7 @@ static ssize_t da9052_hwmon_show_name(struct device *dev,
+ 				      struct device_attribute *devattr,
+ 				      char *buf)
+ {
+-	return sprintf(buf, "da9052-hwmon\n");
++	return sprintf(buf, "da9052\n");
+ }
+ 
+ static ssize_t show_label(struct device *dev,
+diff --git a/drivers/hwmon/da9055-hwmon.c b/drivers/hwmon/da9055-hwmon.c
+index 73b3865f1207..35eb7738d711 100644
+--- a/drivers/hwmon/da9055-hwmon.c
++++ b/drivers/hwmon/da9055-hwmon.c
+@@ -204,7 +204,7 @@ static ssize_t da9055_hwmon_show_name(struct device *dev,
+ 				      struct device_attribute *devattr,
+ 				      char *buf)
+ {
+-	return sprintf(buf, "da9055-hwmon\n");
++	return sprintf(buf, "da9055\n");
+ }
+ 
+ static ssize_t show_label(struct device *dev,
+diff --git a/drivers/iio/industrialio-event.c b/drivers/iio/industrialio-event.c
+index c9c1419fe6e0..f9360f497ed4 100644
+--- a/drivers/iio/industrialio-event.c
++++ b/drivers/iio/industrialio-event.c
+@@ -343,6 +343,9 @@ static int iio_device_add_event(struct iio_dev *indio_dev,
+ 			&indio_dev->event_interface->dev_attr_list);
+ 		kfree(postfix);
+ 
++		if ((ret == -EBUSY) && (shared_by != IIO_SEPARATE))
++			continue;
++
+ 		if (ret)
+ 			return ret;
+ 
+diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
+index ac2d41bd71a0..12698ee9e06b 100644
+--- a/drivers/irqchip/irq-gic.c
++++ b/drivers/irqchip/irq-gic.c
+@@ -42,6 +42,7 @@
+ #include
+ #include
+ 
++#include
+ #include
+ #include
+ #include
+@@ -903,7 +904,9 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
+ 	}
+ 
+ 	for_each_possible_cpu(cpu) {
+-		unsigned long offset = percpu_offset * cpu_logical_map(cpu);
++		u32 mpidr = cpu_logical_map(cpu);
++		u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
++		unsigned long offset = percpu_offset * core_id;
+ 		*per_cpu_ptr(gic->dist_base.percpu_base, cpu) = dist_base + offset;
+ 		*per_cpu_ptr(gic->cpu_base.percpu_base, cpu) = cpu_base + offset;
+ 	}
+@@ -1008,8 +1011,10 @@ int __init gic_of_init(struct device_node *node, struct device_node *parent)
+ 	gic_cnt++;
+ 	return 0;
+ }
++IRQCHIP_DECLARE(gic_400, "arm,gic-400", gic_of_init);
+ IRQCHIP_DECLARE(cortex_a15_gic, "arm,cortex-a15-gic", gic_of_init);
+ IRQCHIP_DECLARE(cortex_a9_gic, "arm,cortex-a9-gic", gic_of_init);
++IRQCHIP_DECLARE(cortex_a7_gic, "arm,cortex-a7-gic", gic_of_init);
+ IRQCHIP_DECLARE(msm_8660_qgic, "qcom,msm-8660-qgic", gic_of_init);
+ IRQCHIP_DECLARE(msm_qgic2, "qcom,msm-qgic2", gic_of_init);
+ 
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index 5320332390b7..a87d3fab0271 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -425,6 +425,15 @@ static int __open_metadata(struct dm_cache_metadata *cmd)
+ 
+ 	disk_super = dm_block_data(sblock);
+ 
++	/* Verify the data block size hasn't changed */
++	if (le32_to_cpu(disk_super->data_block_size) != cmd->data_block_size) {
++		DMERR("changing the data block size (from %u to %llu) is not supported",
++		      le32_to_cpu(disk_super->data_block_size),
++		      (unsigned long long)cmd->data_block_size);
++		r = -EINVAL;
++		goto bad;
++	}
++
+ 	r = __check_incompat_features(disk_super, cmd);
+ 	if (r < 0)
+ 		goto bad;
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index b086a945edcb..e9d33ad59df5 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -613,6 +613,15 @@ static int __open_metadata(struct dm_pool_metadata *pmd)
+ 
+ 	disk_super = dm_block_data(sblock);
+ 
++	/* Verify the data block size hasn't changed */
++	if (le32_to_cpu(disk_super->data_block_size) != pmd->data_block_size) {
++		DMERR("changing the data block size (from %u to %llu) is not supported",
++		      le32_to_cpu(disk_super->data_block_size),
++		      (unsigned long long)pmd->data_block_size);
++		r = -EINVAL;
++		goto bad_unlock_sblock;
++	}
++
+ 	r = __check_incompat_features(disk_super, pmd);
+ 	if (r < 0)
+ 		goto bad_unlock_sblock;
+diff --git a/drivers/media/usb/gspca/pac7302.c b/drivers/media/usb/gspca/pac7302.c
+index 2fd1c5e31a0f..339adce7c7a5 100644
+--- a/drivers/media/usb/gspca/pac7302.c
++++ b/drivers/media/usb/gspca/pac7302.c
+@@ -928,6 +928,7 @@ static const struct usb_device_id device_table[] = {
+ 	{USB_DEVICE(0x093a, 0x2620)},
+ 	{USB_DEVICE(0x093a, 0x2621)},
+ 	{USB_DEVICE(0x093a, 0x2622), .driver_info = FL_VFLIP},
++	{USB_DEVICE(0x093a, 0x2623), .driver_info = FL_VFLIP},
+ 	{USB_DEVICE(0x093a, 0x2624), .driver_info = FL_VFLIP},
+ 	{USB_DEVICE(0x093a, 0x2625)},
+ 	{USB_DEVICE(0x093a, 0x2626)},
+diff --git a/drivers/mtd/devices/elm.c b/drivers/mtd/devices/elm.c
+index d1dd6a33a050..3059a7a53bff 100644
+--- a/drivers/mtd/devices/elm.c
++++ b/drivers/mtd/devices/elm.c
+@@ -428,6 +428,7 @@ static int elm_context_save(struct elm_info *info)
+ 					ELM_SYNDROME_FRAGMENT_1 + offset);
+ 			regs->elm_syndrome_fragment_0[i] = elm_read_reg(info,
+ 					ELM_SYNDROME_FRAGMENT_0 + offset);
++			break;
+ 		default:
+ 			return -EINVAL;
+ 		}
+@@ -466,6 +467,7 @@ static int elm_context_restore(struct elm_info *info)
+ 				      regs->elm_syndrome_fragment_1[i]);
+ 			elm_write_reg(info, ELM_SYNDROME_FRAGMENT_0 + offset,
+ 				      regs->elm_syndrome_fragment_0[i]);
++			break;
+ 		default:
+ 			return -EINVAL;
+ 		}
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 91ec8cd12478..a95b322f0924 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -4068,7 +4068,7 @@ static int bond_check_params(struct bond_params *params)
+ 	}
+ 
+ 	if (ad_select) {
+-		bond_opt_initstr(&newval, lacp_rate);
++		bond_opt_initstr(&newval, ad_select);
+ 		valptr = bond_opt_parse(bond_opt_get(BOND_OPT_AD_SELECT),
+ 					&newval);
+ 		if (!valptr) {
+diff --git a/drivers/net/can/slcan.c b/drivers/net/can/slcan.c
+index 3fcdae266377..1d0dab854b90 100644
+--- a/drivers/net/can/slcan.c
++++ b/drivers/net/can/slcan.c
+@@ -52,6 +52,7 @@
+ #include
+ #include
+ #include
++#include
+ #include
+ #include
+ 
+@@ -85,6 +86,7 @@ struct slcan {
+ 	struct tty_struct	*tty;		/* ptr to TTY structure	     */
+ 	struct net_device	*dev;		/* easy for intr handling    */
+ 	spinlock_t		lock;
++	struct work_struct	tx_work;	/* Flushes transmit buffer   */
+ 
+ 	/* These are pointers to the malloc()ed frame buffers. */
+ 	unsigned char		rbuff[SLC_MTU];	/* receiver buffer	     */
+@@ -309,34 +311,44 @@ static void slc_encaps(struct slcan *sl, struct can_frame *cf)
+ 	sl->dev->stats.tx_bytes += cf->can_dlc;
+ }
+ 
+-/*
+- * Called by the driver when there's room for more data.  If we have
+- * more packets to send, we send them here.
+- */
+-static void slcan_write_wakeup(struct tty_struct *tty)
++/* Write out any remaining transmit buffer. Scheduled when tty is writable */
++static void slcan_transmit(struct work_struct *work)
+ {
++	struct slcan *sl = container_of(work, struct slcan, tx_work);
+ 	int actual;
+-	struct slcan *sl = (struct slcan *) tty->disc_data;
+ 
++	spin_lock_bh(&sl->lock);
+ 	/* First make sure we're connected. */
+-	if (!sl || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev))
++	if (!sl->tty || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev)) {
++		spin_unlock_bh(&sl->lock);
+ 		return;
++	}
+ 
+-	spin_lock(&sl->lock);
+ 	if (sl->xleft <= 0)  {
+ 		/* Now serial buffer is almost free & we can start
+ 		 * transmission of another packet */
+ 		sl->dev->stats.tx_packets++;
+-		clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+-		spin_unlock(&sl->lock);
++		clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
++		spin_unlock_bh(&sl->lock);
+ 		netif_wake_queue(sl->dev);
+ 		return;
+ 	}
+ 
+-	actual = tty->ops->write(tty, sl->xhead, sl->xleft);
++	actual = sl->tty->ops->write(sl->tty, sl->xhead, sl->xleft);
+ 	sl->xleft -= actual;
+ 	sl->xhead += actual;
+-	spin_unlock(&sl->lock);
++	spin_unlock_bh(&sl->lock);
++}
++
++/*
++ * Called by the driver when there's room for more data.
++ * Schedule the transmit.
++ */
++static void slcan_write_wakeup(struct tty_struct *tty)
++{
++	struct slcan *sl = tty->disc_data;
++
++	schedule_work(&sl->tx_work);
+ }
+ 
+ /* Send a can_frame to a TTY queue. */
+@@ -522,6 +534,7 @@ static struct slcan *slc_alloc(dev_t line)
+ 	sl->magic = SLCAN_MAGIC;
+ 	sl->dev	= dev;
+ 	spin_lock_init(&sl->lock);
++	INIT_WORK(&sl->tx_work, slcan_transmit);
+ 	slcan_devs[i] = dev;
+ 
+ 	return sl;
+@@ -620,8 +633,12 @@ static void slcan_close(struct tty_struct *tty)
+ 	if (!sl || sl->magic != SLCAN_MAGIC || sl->tty != tty)
+ 		return;
+ 
++	spin_lock_bh(&sl->lock);
+ 	tty->disc_data = NULL;
+ 	sl->tty = NULL;
++	spin_unlock_bh(&sl->lock);
++
++	flush_work(&sl->tx_work);
+ 
+ 	/* Flush network side */
+ 	unregister_netdev(sl->dev);
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index dbcff509dc3f..5ed512473b12 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -793,7 +793,8 @@ static void bnx2x_tpa_stop(struct bnx2x *bp, struct bnx2x_fastpath *fp,
+ 
+ 		return;
+ 	}
+-	bnx2x_frag_free(fp, new_data);
++	if (new_data)
++		bnx2x_frag_free(fp, new_data);
+ drop:
+ 	/* drop the packet and keep the buffer in the bin */
+ 	DP(NETIF_MSG_RX_STATUS,
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 36c80612e21a..80bfa0391913 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -2797,7 +2797,7 @@ static int be_open(struct net_device *netdev)
+ 	for_all_evt_queues(adapter, eqo, i) {
+ 		napi_enable(&eqo->napi);
+ 		be_enable_busy_poll(eqo);
+-		be_eq_notify(adapter, eqo->q.id, true, false, 0);
++		be_eq_notify(adapter, eqo->q.id, true, true, 0);
+ 	}
+ 	adapter->flags |= BE_FLAGS_NAPI_ENABLED;
+ 
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 42f0f6717511..70e16f71f574 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -1374,7 +1374,7 @@ static void e1000_rar_set_pch2lan(struct e1000_hw *hw, u8 *addr, u32 index)
+ 	/* RAR[1-6] are owned by manageability.  Skip those and program the
+ 	 * next address into the SHRA register array.
+ 	 */
+-	if (index < (u32)(hw->mac.rar_entry_count - 6)) {
++	if (index < (u32)(hw->mac.rar_entry_count)) {
+ 		s32 ret_val;
+ 
+ 		ret_val = e1000_acquire_swflag_ich8lan(hw);
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+index 217090df33e7..59865695b282 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+@@ -98,7 +98,7 @@
+ #define PCIE_ICH8_SNOOP_ALL	PCIE_NO_SNOOP_ALL
+ 
+ #define E1000_ICH_RAR_ENTRIES	7
+-#define E1000_PCH2_RAR_ENTRIES	11	/* RAR[0-6], SHRA[0-3] */
++#define E1000_PCH2_RAR_ENTRIES	5	/* RAR[0], SHRA[0-3] */
+ #define E1000_PCH_LPT_RAR_ENTRIES	12	/* RAR[0], SHRA[0-10] */
+ 
+ #define PHY_PAGE_SHIFT	5
+diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.c b/drivers/net/ethernet/intel/igb/e1000_82575.c
+index 06df6928f44c..4fa5c2a77d49 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_82575.c
++++ b/drivers/net/ethernet/intel/igb/e1000_82575.c
+@@ -1492,6 +1492,13 @@ static s32 igb_init_hw_82575(struct e1000_hw *hw)
+ 	s32 ret_val;
+ 	u16 i, rar_count = mac->rar_entry_count;
+ 
++	if ((hw->mac.type >= e1000_i210) &&
++	    !(igb_get_flash_presence_i210(hw))) {
++		ret_val = igb_pll_workaround_i210(hw);
++		if (ret_val)
++			return ret_val;
++	}
++
+ 	/* Initialize identification LED */
+ 	ret_val = igb_id_led_init(hw);
+ 	if (ret_val) {
+diff --git a/drivers/net/ethernet/intel/igb/e1000_defines.h b/drivers/net/ethernet/intel/igb/e1000_defines.h
+index 0571b973be80..20b37668284a 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_defines.h
++++ b/drivers/net/ethernet/intel/igb/e1000_defines.h
+@@ -46,14 +46,15 @@
+ /* Extended Device Control */
+ #define E1000_CTRL_EXT_SDP3_DATA 0x00000080 /* Value of SW Defineable Pin 3 */
+ /* Physical Func Reset Done Indication */
+-#define E1000_CTRL_EXT_PFRSTD	0x00004000
+-#define E1000_CTRL_EXT_LINK_MODE_MASK	0x00C00000
+-#define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES	0x00C00000
+-#define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX	0x00400000
+-#define E1000_CTRL_EXT_LINK_MODE_SGMII	0x00800000
+-#define E1000_CTRL_EXT_LINK_MODE_GMII	0x00000000
+-#define E1000_CTRL_EXT_EIAME	0x01000000
+-#define E1000_CTRL_EXT_IRCA	0x00000001
++#define E1000_CTRL_EXT_PFRSTD	0x00004000
++#define E1000_CTRL_EXT_SDLPE	0X00040000  /* SerDes Low Power Enable */
++#define E1000_CTRL_EXT_LINK_MODE_MASK	0x00C00000
++#define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES	0x00C00000
++#define E1000_CTRL_EXT_LINK_MODE_1000BASE_KX	0x00400000
++#define E1000_CTRL_EXT_LINK_MODE_SGMII	0x00800000
++#define E1000_CTRL_EXT_LINK_MODE_GMII	0x00000000
++#define E1000_CTRL_EXT_EIAME	0x01000000
++#define E1000_CTRL_EXT_IRCA	0x00000001
+ /* Interrupt delay cancellation */
+ /* Driver loaded bit for FW */
+ #define E1000_CTRL_EXT_DRV_LOAD	0x10000000
+@@ -62,6 +63,7 @@
+ /* packet buffer parity error detection enabled */
+ /* descriptor FIFO parity error detection enable */
+ #define E1000_CTRL_EXT_PBA_CLR		0x80000000 /* PBA Clear */
++#define E1000_CTRL_EXT_PHYPDEN		0x00100000
+ #define E1000_I2CCMD_REG_ADDR_SHIFT	16
+ #define E1000_I2CCMD_PHY_ADDR_SHIFT	24
+ #define E1000_I2CCMD_OPCODE_READ	0x08000000
+diff --git a/drivers/net/ethernet/intel/igb/e1000_hw.h b/drivers/net/ethernet/intel/igb/e1000_hw.h
+index ab99e2b582a8..b79980ad225b 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_hw.h
++++ b/drivers/net/ethernet/intel/igb/e1000_hw.h
+@@ -572,4 +572,7 @@ struct net_device *igb_get_hw_dev(struct e1000_hw *hw);
+ /* These functions must be implemented by drivers */
+ s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value);
+ s32 igb_write_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value);
++
++void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value);
++void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value);
+ #endif /* _E1000_HW_H_ */
+diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.c b/drivers/net/ethernet/intel/igb/e1000_i210.c
+index 0c0393316a3a..0217d4e229a0 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_i210.c
++++ b/drivers/net/ethernet/intel/igb/e1000_i210.c
+@@ -835,3 +835,69 @@ s32 igb_init_nvm_params_i210(struct e1000_hw *hw)
+ 	}
+ 	return ret_val;
+ }
++
++/**
++ *  igb_pll_workaround_i210
++ *  @hw: pointer to the HW structure
++ *
++ *  Works around an errata in the PLL circuit where it occasionally
++ *  provides the wrong clock frequency after power up.
++ **/
++s32 igb_pll_workaround_i210(struct e1000_hw *hw)
++{
++	s32 ret_val;
++	u32 wuc, mdicnfg, ctrl, ctrl_ext, reg_val;
++	u16 nvm_word, phy_word, pci_word, tmp_nvm;
++	int i;
++
++	/* Get and set needed register values */
++	wuc = rd32(E1000_WUC);
++	mdicnfg = rd32(E1000_MDICNFG);
++	reg_val = mdicnfg & ~E1000_MDICNFG_EXT_MDIO;
++	wr32(E1000_MDICNFG, reg_val);
++
++	/* Get data from NVM, or set default */
++	ret_val = igb_read_invm_word_i210(hw, E1000_INVM_AUTOLOAD,
++					  &nvm_word);
++	if (ret_val)
++		nvm_word = E1000_INVM_DEFAULT_AL;
++	tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL;
++	for (i = 0; i < E1000_MAX_PLL_TRIES; i++) {
++		/* check current state directly from internal PHY */
++		igb_read_phy_reg_gs40g(hw, (E1000_PHY_PLL_FREQ_PAGE |
++					 E1000_PHY_PLL_FREQ_REG), &phy_word);
++		if ((phy_word & E1000_PHY_PLL_UNCONF)
++		    != E1000_PHY_PLL_UNCONF) {
++			ret_val = 0;
++			break;
++		} else {
++			ret_val = -E1000_ERR_PHY;
++		}
++		/* directly reset the internal PHY */
++		ctrl = rd32(E1000_CTRL);
++		wr32(E1000_CTRL, ctrl|E1000_CTRL_PHY_RST);
++
++		ctrl_ext = rd32(E1000_CTRL_EXT);
++		ctrl_ext |= (E1000_CTRL_EXT_PHYPDEN | E1000_CTRL_EXT_SDLPE);
++		wr32(E1000_CTRL_EXT, ctrl_ext);
++
++		wr32(E1000_WUC, 0);
++		reg_val = (E1000_INVM_AUTOLOAD << 4) | (tmp_nvm << 16);
++		wr32(E1000_EEARBC_I210, reg_val);
++
++		igb_read_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word);
++		pci_word |= E1000_PCI_PMCSR_D3;
++		igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word);
++		usleep_range(1000, 2000);
++		pci_word &= ~E1000_PCI_PMCSR_D3;
++		igb_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word);
++		reg_val = (E1000_INVM_AUTOLOAD << 4) | (nvm_word << 16);
++		wr32(E1000_EEARBC_I210, reg_val);
++
++		/* restore WUC register */
++		wr32(E1000_WUC, wuc);
++	}
++	/* restore MDICNFG setting */
++	wr32(E1000_MDICNFG, mdicnfg);
++	return ret_val;
++}
+diff --git a/drivers/net/ethernet/intel/igb/e1000_i210.h b/drivers/net/ethernet/intel/igb/e1000_i210.h
+index 2d913716573a..710f8e9f10fb 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_i210.h
++++ b/drivers/net/ethernet/intel/igb/e1000_i210.h
+@@ -46,6 +46,7 @@ s32 igb_read_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 *data);
+ s32 igb_write_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 data);
+ s32 igb_init_nvm_params_i210(struct e1000_hw *hw);
+ bool igb_get_flash_presence_i210(struct e1000_hw *hw);
++s32 igb_pll_workaround_i210(struct e1000_hw *hw);
+ 
+ #define E1000_STM_OPCODE	0xDB00
+ #define E1000_EEPROM_FLASH_SIZE_WORD	0x11
+@@ -91,4 +92,15 @@ enum E1000_INVM_STRUCTURE_TYPE {
+ #define NVM_LED_1_CFG_DEFAULT_I211	0x0184
+ #define NVM_LED_0_2_CFG_DEFAULT_I211	0x200C
+ 
++/* PLL Defines */
++#define E1000_PCI_PMCSR			0x44
++#define E1000_PCI_PMCSR_D3		0x03
++#define E1000_MAX_PLL_TRIES		5
++#define E1000_PHY_PLL_UNCONF		0xFF
++#define E1000_PHY_PLL_FREQ_PAGE		0xFC0000
++#define E1000_PHY_PLL_FREQ_REG		0x000E
++#define E1000_INVM_DEFAULT_AL		0x202F
++#define E1000_INVM_AUTOLOAD		0x0A
++#define E1000_INVM_PLL_WO_VAL		0x0010
++
+ #endif
+diff --git a/drivers/net/ethernet/intel/igb/e1000_regs.h b/drivers/net/ethernet/intel/igb/e1000_regs.h
+index 82632c6c53af..7156981ec813 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_regs.h
++++ b/drivers/net/ethernet/intel/igb/e1000_regs.h
+@@ -69,6 +69,7 @@
+ #define E1000_PBA	0x01000  /* Packet Buffer Allocation - RW */
+ #define E1000_PBS	0x01008  /* Packet Buffer Size */
+ #define E1000_EEMNGCTL	0x01010  /* MNG EEprom Control */
++#define E1000_EEARBC_I210	0x12024  /* EEPROM Auto Read Bus Control */
+ #define E1000_EEWR	0x0102C  /* EEPROM Write Register - RW */
+ #define E1000_I2CCMD	0x01028  /* SFPI2C Command Register - RW */
+ #define E1000_FRTIMER	0x01048  /* Free Running Timer - RW */
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index d9c7eb279141..5ca8c479666e 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -7128,6 +7128,20 @@ static int igb_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+ 	}
+ }
+ 
++void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value)
++{
++	struct igb_adapter *adapter = hw->back;
++
++	pci_read_config_word(adapter->pdev, reg, value);
++}
++
++void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value)
++{
++	struct igb_adapter *adapter = hw->back;
++
++	pci_write_config_word(adapter->pdev, reg, *value);
++}
++
+ s32 igb_read_pcie_cap_reg(struct e1000_hw *hw, u32 reg, u16 *value)
+ {
+ 	struct igb_adapter *adapter = hw->back;
+@@ -7491,6 +7505,8 @@ static int igb_sriov_reinit(struct pci_dev *dev)
+ 
+ 	if (netif_running(netdev))
+ 		igb_close(netdev);
++	else
++		igb_reset(adapter);
+ 
+ 	igb_clear_interrupt_scheme(adapter);
+ 
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index ca2dfbe01598..c4c00d9f2c04 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -1217,7 +1217,7 @@ static u32 mvneta_txq_desc_csum(int l3_offs, int l3_proto,
+ 	command = l3_offs << MVNETA_TX_L3_OFF_SHIFT;
+ 	command |= ip_hdr_len << MVNETA_TX_IP_HLEN_SHIFT;
+ 
+-	if (l3_proto == swab16(ETH_P_IP))
++	if (l3_proto == htons(ETH_P_IP))
+ 		command |= MVNETA_TXD_IP_CSUM;
+ 	else
+ 		command |= MVNETA_TX_L3_IP6;
+@@ -2393,7 +2393,7 @@ static void mvneta_adjust_link(struct net_device *ndev)
+ 
+ 			if (phydev->speed == SPEED_1000)
+ 				val |= MVNETA_GMAC_CONFIG_GMII_SPEED;
+-			else
++			else if (phydev->speed == SPEED_100)
+ 				val |= MVNETA_GMAC_CONFIG_MII_SPEED;
+ 
+ 			mvreg_write(pp, MVNETA_GMAC_AUTONEG_CONFIG, val);
+diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c
+index 1c24a8f368bd..fd411d6e19a2 100644
+--- a/drivers/net/ethernet/sun/sunvnet.c
++++ b/drivers/net/ethernet/sun/sunvnet.c
+@@ -1083,6 +1083,24 @@ static struct vnet *vnet_find_or_create(const u64 *local_mac)
+ 	return vp;
+ }
+ 
++static void vnet_cleanup(void)
++{
++	struct vnet *vp;
++	struct net_device *dev;
++
++	mutex_lock(&vnet_list_mutex);
++	while (!list_empty(&vnet_list)) {
++		vp = list_first_entry(&vnet_list, struct vnet, list);
++		list_del(&vp->list);
++		dev = vp->dev;
++		/* vio_unregister_driver() should have cleaned up port_list */
++		BUG_ON(!list_empty(&vp->port_list));
++		unregister_netdev(dev);
++		free_netdev(dev);
++	}
++	mutex_unlock(&vnet_list_mutex);
++}
++
+ static const char *local_mac_prop = "local-mac-address";
+ 
+ static struct vnet *vnet_find_parent(struct mdesc_handle *hp,
+@@ -1240,7 +1258,6 @@ static int vnet_port_remove(struct vio_dev *vdev)
+ 
+ 		kfree(port);
+ 
+-		unregister_netdev(vp->dev);
+ 	}
+ 	return 0;
+ }
+@@ -1268,6 +1285,7 @@ static int __init vnet_init(void)
+ static void __exit vnet_exit(void)
+ {
+ 	vio_unregister_driver(&vnet_port_driver);
++	vnet_cleanup();
+ }
+ 
+ module_init(vnet_init);
+diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c
+index 2ea7efd11857..6c9c16d76935 100644
+--- a/drivers/net/ppp/pppoe.c
++++ b/drivers/net/ppp/pppoe.c
+@@ -675,7 +675,7 @@ static int pppoe_connect(struct socket *sock, struct sockaddr *uservaddr,
+ 		po->chan.hdrlen = (sizeof(struct pppoe_hdr) +
+ 				   dev->hard_header_len);
+ 
+-		po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr);
++		po->chan.mtu = dev->mtu - sizeof(struct pppoe_hdr) - 2;
+ 		po->chan.private = sk;
+ 		po->chan.ops = &pppoe_chan_ops;
+ 
+diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c
+index ad4a94e9ff57..87526443841f 100644
+--- a/drivers/net/slip/slip.c
++++ b/drivers/net/slip/slip.c
+@@ -83,6 +83,7 @@
+ #include
+ #include
+ #include
++#include
+ #include "slip.h"
+ #ifdef CONFIG_INET
+ #include
+@@ -416,36 +417,46 @@ static void sl_encaps(struct slip *sl, unsigned char *icp, int len)
+ #endif
+ }
+ 
+-/*
+- * Called by the driver when there's room for more data.  If we have
+- * more packets to send, we send them here.
+- */
+-static void slip_write_wakeup(struct tty_struct *tty)
++/* Write out any remaining transmit buffer. Scheduled when tty is writable */
++static void slip_transmit(struct work_struct *work)
+ {
++	struct slip *sl = container_of(work, struct slip, tx_work);
+ 	int actual;
+-	struct slip *sl = tty->disc_data;
+ 
++	spin_lock_bh(&sl->lock);
+ 	/* First make sure we're connected. */
+-	if (!sl || sl->magic != SLIP_MAGIC || !netif_running(sl->dev))
++	if (!sl->tty || sl->magic != SLIP_MAGIC || !netif_running(sl->dev)) {
++		spin_unlock_bh(&sl->lock);
+ 		return;
++	}
+ 
+-	spin_lock_bh(&sl->lock);
+ 	if (sl->xleft <= 0) {
+ 		/* Now serial buffer is almost free & we can start
+ 		 * transmission of another packet */
+ 		sl->dev->stats.tx_packets++;
+-		clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
++		clear_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
+ 		spin_unlock_bh(&sl->lock);
+ 		sl_unlock(sl);
+ 		return;
+ 	}
+ 
+-	actual = tty->ops->write(tty, sl->xhead, sl->xleft);
++	actual = sl->tty->ops->write(sl->tty, sl->xhead, sl->xleft);
+ 	sl->xleft -= actual;
+ 	sl->xhead += actual;
+ 	spin_unlock_bh(&sl->lock);
+ }
+ 
++/*
++ * Called by the driver when there's room for more data.
++ * Schedule the transmit.
++ */ ++static void slip_write_wakeup(struct tty_struct *tty) ++{ ++ struct slip *sl = tty->disc_data; ++ ++ schedule_work(&sl->tx_work); ++} ++ + static void sl_tx_timeout(struct net_device *dev) + { + struct slip *sl = netdev_priv(dev); +@@ -749,6 +760,7 @@ static struct slip *sl_alloc(dev_t line) + sl->magic = SLIP_MAGIC; + sl->dev = dev; + spin_lock_init(&sl->lock); ++ INIT_WORK(&sl->tx_work, slip_transmit); + sl->mode = SL_MODE_DEFAULT; + #ifdef CONFIG_SLIP_SMART + /* initialize timer_list struct */ +@@ -872,8 +884,12 @@ static void slip_close(struct tty_struct *tty) + if (!sl || sl->magic != SLIP_MAGIC || sl->tty != tty) + return; + ++ spin_lock_bh(&sl->lock); + tty->disc_data = NULL; + sl->tty = NULL; ++ spin_unlock_bh(&sl->lock); ++ ++ flush_work(&sl->tx_work); + + /* VSV = very important to remove timers */ + #ifdef CONFIG_SLIP_SMART +diff --git a/drivers/net/slip/slip.h b/drivers/net/slip/slip.h +index 67673cf1266b..cf32aadf508f 100644 +--- a/drivers/net/slip/slip.h ++++ b/drivers/net/slip/slip.h +@@ -53,6 +53,7 @@ struct slip { + struct tty_struct *tty; /* ptr to TTY structure */ + struct net_device *dev; /* easy for intr handling */ + spinlock_t lock; ++ struct work_struct tx_work; /* Flushes transmit buffer */ + + #ifdef SL_INCLUDE_CSLIP + struct slcompress *slcomp; /* for header compression */ +diff --git a/drivers/net/usb/huawei_cdc_ncm.c b/drivers/net/usb/huawei_cdc_ncm.c +index 312178d7b698..a01462523bc7 100644 +--- a/drivers/net/usb/huawei_cdc_ncm.c ++++ b/drivers/net/usb/huawei_cdc_ncm.c +@@ -84,12 +84,13 @@ static int huawei_cdc_ncm_bind(struct usbnet *usbnet_dev, + ctx = drvstate->ctx; + + if (usbnet_dev->status) +- /* CDC-WMC r1.1 requires wMaxCommand to be "at least 256 +- * decimal (0x100)" ++ /* The wMaxCommand buffer must be big enough to hold ++ * any message from the modem. 
Experience has shown ++ * that some replies are more than 256 bytes long + */ + subdriver = usb_cdc_wdm_register(ctx->control, + &usbnet_dev->status->desc, +- 256, /* wMaxCommand */ ++ 1024, /* wMaxCommand */ + huawei_cdc_ncm_wdm_manage_power); + if (IS_ERR(subdriver)) { + ret = PTR_ERR(subdriver); +@@ -206,6 +207,9 @@ static const struct usb_device_id huawei_cdc_ncm_devs[] = { + { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x76), + .driver_info = (unsigned long)&huawei_cdc_ncm_info, + }, ++ { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x03, 0x16), ++ .driver_info = (unsigned long)&huawei_cdc_ncm_info, ++ }, + + /* Terminating entry */ + { +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index b71120842c4f..d510f1d41bae 100644 +--- a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -660,6 +660,7 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x05c6, 0x9084, 4)}, + {QMI_FIXED_INTF(0x05c6, 0x920d, 0)}, + {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, ++ {QMI_FIXED_INTF(0x0846, 0x68a2, 8)}, + {QMI_FIXED_INTF(0x12d1, 0x140c, 1)}, /* Huawei E173 */ + {QMI_FIXED_INTF(0x12d1, 0x14ac, 1)}, /* Huawei E1820 */ + {QMI_FIXED_INTF(0x16d8, 0x6003, 0)}, /* CMOTech 6003 */ +@@ -734,6 +735,7 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x19d2, 0x1424, 2)}, + {QMI_FIXED_INTF(0x19d2, 0x1425, 2)}, + {QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */ ++ {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ + {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ + {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ + {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ +@@ -746,6 +748,7 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x1199, 0x901f, 8)}, /* Sierra Wireless EM7355 */ + {QMI_FIXED_INTF(0x1199, 0x9041, 8)}, /* Sierra Wireless MC7305/MC7355 */ + {QMI_FIXED_INTF(0x1199, 0x9051, 8)}, /* Netgear AirCard 340U */ ++ 
{QMI_FIXED_INTF(0x1199, 0x9057, 8)}, + {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */ + {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */ + {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ +diff --git a/drivers/net/wireless/iwlwifi/dvm/rxon.c b/drivers/net/wireless/iwlwifi/dvm/rxon.c +index 503a81e58185..c1e311341b74 100644 +--- a/drivers/net/wireless/iwlwifi/dvm/rxon.c ++++ b/drivers/net/wireless/iwlwifi/dvm/rxon.c +@@ -1068,13 +1068,6 @@ int iwlagn_commit_rxon(struct iwl_priv *priv, struct iwl_rxon_context *ctx) + /* recalculate basic rates */ + iwl_calc_basic_rates(priv, ctx); + +- /* +- * force CTS-to-self frames protection if RTS-CTS is not preferred +- * one aggregation protection method +- */ +- if (!priv->hw_params.use_rts_for_aggregation) +- ctx->staging.flags |= RXON_FLG_SELF_CTS_EN; +- + if ((ctx->vif && ctx->vif->bss_conf.use_short_slot) || + !(ctx->staging.flags & RXON_FLG_BAND_24G_MSK)) + ctx->staging.flags |= RXON_FLG_SHORT_SLOT_MSK; +@@ -1480,11 +1473,6 @@ void iwlagn_bss_info_changed(struct ieee80211_hw *hw, + else + ctx->staging.flags &= ~RXON_FLG_TGG_PROTECT_MSK; + +- if (bss_conf->use_cts_prot) +- ctx->staging.flags |= RXON_FLG_SELF_CTS_EN; +- else +- ctx->staging.flags &= ~RXON_FLG_SELF_CTS_EN; +- + memcpy(ctx->staging.bssid_addr, bss_conf->bssid, ETH_ALEN); + + if (vif->type == NL80211_IFTYPE_AP || +diff --git a/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c +index ba723d50939a..820797af7abf 100644 +--- a/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c ++++ b/drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c +@@ -651,13 +651,9 @@ static void iwl_mvm_mac_ctxt_cmd_common(struct iwl_mvm *mvm, + if (vif->bss_conf.qos) + cmd->qos_flags |= cpu_to_le32(MAC_QOS_FLG_UPDATE_EDCA); + +- /* Don't use cts to self as the fw doesn't support it currently. 
*/ +- if (vif->bss_conf.use_cts_prot) { ++ if (vif->bss_conf.use_cts_prot) + cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_TGG_PROTECT); +- if (IWL_UCODE_API(mvm->fw->ucode_ver) >= 8) +- cmd->protection_flags |= +- cpu_to_le32(MAC_PROT_FLG_SELF_CTS_EN); +- } ++ + IWL_DEBUG_RATE(mvm, "use_cts_prot %d, ht_operation_mode %d\n", + vif->bss_conf.use_cts_prot, + vif->bss_conf.ht_operation_mode); +diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c +index 43e27a174430..df1f5e732ab5 100644 +--- a/drivers/net/wireless/iwlwifi/pcie/drv.c ++++ b/drivers/net/wireless/iwlwifi/pcie/drv.c +@@ -366,6 +366,7 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { + {IWL_PCI_DEVICE(0x095A, 0x5012, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5412, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)}, ++ {IWL_PCI_DEVICE(0x095A, 0x5510, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5400, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x5000, iwl7265_2n_cfg)}, +@@ -379,7 +380,7 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = { + {IWL_PCI_DEVICE(0x095A, 0x9110, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x9112, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)}, +- {IWL_PCI_DEVICE(0x095A, 0x9200, iwl7265_2ac_cfg)}, ++ {IWL_PCI_DEVICE(0x095B, 0x9200, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x9510, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x9310, iwl7265_2ac_cfg)}, + {IWL_PCI_DEVICE(0x095A, 0x9410, iwl7265_2ac_cfg)}, +diff --git a/drivers/net/wireless/mwifiex/main.c b/drivers/net/wireless/mwifiex/main.c +index 9d3d2758ec35..952a47f6554e 100644 +--- a/drivers/net/wireless/mwifiex/main.c ++++ b/drivers/net/wireless/mwifiex/main.c +@@ -646,6 +646,7 @@ mwifiex_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) + } + + tx_info = MWIFIEX_SKB_TXCB(skb); ++ memset(tx_info, 0, sizeof(*tx_info)); + tx_info->bss_num = priv->bss_num; + 
tx_info->bss_type = priv->bss_type; + tx_info->pkt_len = skb->len; +diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c +index 3314516018c6..86b1fd673749 100644 +--- a/drivers/usb/chipidea/udc.c ++++ b/drivers/usb/chipidea/udc.c +@@ -1179,8 +1179,8 @@ static int ep_enable(struct usb_ep *ep, + + if (hwep->type == USB_ENDPOINT_XFER_CONTROL) + cap |= QH_IOS; +- if (hwep->num) +- cap |= QH_ZLT; ++ ++ cap |= QH_ZLT; + cap |= (hwep->ep.maxpacket << __ffs(QH_MAX_PKT)) & QH_MAX_PKT; + /* + * For ISO-TX, we set mult at QH as the largest value, and use +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index 3baa51bf8a6a..36b1e856bd00 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -884,6 +884,25 @@ static int hub_usb3_port_disable(struct usb_hub *hub, int port1) + if (!hub_is_superspeed(hub->hdev)) + return -EINVAL; + ++ ret = hub_port_status(hub, port1, &portstatus, &portchange); ++ if (ret < 0) ++ return ret; ++ ++ /* ++ * USB controller Advanced Micro Devices, Inc. [AMD] FCH USB XHCI ++ * Controller [1022:7814] will have spurious result making the following ++ * usb 3.0 device hotplugging route to the 2.0 root hub and recognized ++ * as high-speed device if we set the usb 3.0 port link state to ++ * Disabled. Since it's already in USB_SS_PORT_LS_RX_DETECT state, we ++ * check the state here to avoid the bug. ++ */ ++ if ((portstatus & USB_PORT_STAT_LINK_STATE) == ++ USB_SS_PORT_LS_RX_DETECT) { ++ dev_dbg(&hub->ports[port1 - 1]->dev, ++ "Not disabling port; link state is RxDetect\n"); ++ return ret; ++ } ++ + ret = hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_SS_DISABLED); + if (ret) + return ret; +diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c +index 61a6ac8fa8fc..08834b565aab 100644 +--- a/drivers/xen/balloon.c ++++ b/drivers/xen/balloon.c +@@ -426,20 +426,18 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp) + * p2m are consistent. 
+ */ + if (!xen_feature(XENFEAT_auto_translated_physmap)) { +- unsigned long p; +- struct page *scratch_page = get_balloon_scratch_page(); +- + if (!PageHighMem(page)) { ++ struct page *scratch_page = get_balloon_scratch_page(); ++ + ret = HYPERVISOR_update_va_mapping( + (unsigned long)__va(pfn << PAGE_SHIFT), + pfn_pte(page_to_pfn(scratch_page), + PAGE_KERNEL_RO), 0); + BUG_ON(ret); +- } +- p = page_to_pfn(scratch_page); +- __set_phys_to_machine(pfn, pfn_to_mfn(p)); + +- put_balloon_scratch_page(); ++ put_balloon_scratch_page(); ++ } ++ __set_phys_to_machine(pfn, INVALID_P2M_ENTRY); + } + #endif + +diff --git a/fs/aio.c b/fs/aio.c +index e609e15f36b9..6d68e01dc7ca 100644 +--- a/fs/aio.c ++++ b/fs/aio.c +@@ -830,16 +830,20 @@ void exit_aio(struct mm_struct *mm) + static void put_reqs_available(struct kioctx *ctx, unsigned nr) + { + struct kioctx_cpu *kcpu; ++ unsigned long flags; + + preempt_disable(); + kcpu = this_cpu_ptr(ctx->cpu); + ++ local_irq_save(flags); + kcpu->reqs_available += nr; ++ + while (kcpu->reqs_available >= ctx->req_batch * 2) { + kcpu->reqs_available -= ctx->req_batch; + atomic_add(ctx->req_batch, &ctx->reqs_available); + } + ++ local_irq_restore(flags); + preempt_enable(); + } + +@@ -847,10 +851,12 @@ static bool get_reqs_available(struct kioctx *ctx) + { + struct kioctx_cpu *kcpu; + bool ret = false; ++ unsigned long flags; + + preempt_disable(); + kcpu = this_cpu_ptr(ctx->cpu); + ++ local_irq_save(flags); + if (!kcpu->reqs_available) { + int old, avail = atomic_read(&ctx->reqs_available); + +@@ -869,6 +875,7 @@ static bool get_reqs_available(struct kioctx *ctx) + ret = true; + kcpu->reqs_available--; + out: ++ local_irq_restore(flags); + preempt_enable(); + return ret; + } +diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c +index 1d1292c581c3..342f0239fcbf 100644 +--- a/fs/fuse/dir.c ++++ b/fs/fuse/dir.c +@@ -198,7 +198,8 @@ static int fuse_dentry_revalidate(struct dentry *entry, unsigned int flags) + inode = ACCESS_ONCE(entry->d_inode); + if 
(inode && is_bad_inode(inode)) + goto invalid; +- else if (fuse_dentry_time(entry) < get_jiffies_64()) { ++ else if (time_before64(fuse_dentry_time(entry), get_jiffies_64()) || ++ (flags & LOOKUP_REVAL)) { + int err; + struct fuse_entry_out outarg; + struct fuse_req *req; +@@ -925,7 +926,7 @@ int fuse_update_attributes(struct inode *inode, struct kstat *stat, + int err; + bool r; + +- if (fi->i_time < get_jiffies_64()) { ++ if (time_before64(fi->i_time, get_jiffies_64())) { + r = true; + err = fuse_do_getattr(inode, stat, file); + } else { +@@ -1111,7 +1112,7 @@ static int fuse_permission(struct inode *inode, int mask) + ((mask & MAY_EXEC) && S_ISREG(inode->i_mode))) { + struct fuse_inode *fi = get_fuse_inode(inode); + +- if (fi->i_time < get_jiffies_64()) { ++ if (time_before64(fi->i_time, get_jiffies_64())) { + refreshed = true; + + err = fuse_perm_getattr(inode, mask); +diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c +index d468643a68b2..73f6bcb44ea8 100644 +--- a/fs/fuse/inode.c ++++ b/fs/fuse/inode.c +@@ -461,6 +461,17 @@ static const match_table_t tokens = { + {OPT_ERR, NULL} + }; + ++static int fuse_match_uint(substring_t *s, unsigned int *res) ++{ ++ int err = -ENOMEM; ++ char *buf = match_strdup(s); ++ if (buf) { ++ err = kstrtouint(buf, 10, res); ++ kfree(buf); ++ } ++ return err; ++} ++ + static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) + { + char *p; +@@ -471,6 +482,7 @@ static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) + while ((p = strsep(&opt, ",")) != NULL) { + int token; + int value; ++ unsigned uv; + substring_t args[MAX_OPT_ARGS]; + if (!*p) + continue; +@@ -494,18 +506,18 @@ static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev) + break; + + case OPT_USER_ID: +- if (match_int(&args[0], &value)) ++ if (fuse_match_uint(&args[0], &uv)) + return 0; +- d->user_id = make_kuid(current_user_ns(), value); ++ d->user_id = make_kuid(current_user_ns(), uv); + if 
(!uid_valid(d->user_id)) + return 0; + d->user_id_present = 1; + break; + + case OPT_GROUP_ID: +- if (match_int(&args[0], &value)) ++ if (fuse_match_uint(&args[0], &uv)) + return 0; +- d->group_id = make_kgid(current_user_ns(), value); ++ d->group_id = make_kgid(current_user_ns(), uv); + if (!gid_valid(d->group_id)) + return 0; + d->group_id_present = 1; +diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c +index cfc8dcc16043..ce87c9007b0f 100644 +--- a/fs/quota/dquot.c ++++ b/fs/quota/dquot.c +@@ -702,6 +702,7 @@ dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) + struct dquot *dquot; + unsigned long freed = 0; + ++ spin_lock(&dq_list_lock); + head = free_dquots.prev; + while (head != &free_dquots && sc->nr_to_scan) { + dquot = list_entry(head, struct dquot, dq_free); +@@ -713,6 +714,7 @@ dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) + freed++; + head = free_dquots.prev; + } ++ spin_unlock(&dq_list_lock); + return freed; + } + +diff --git a/include/net/sock.h b/include/net/sock.h +index 57c31dd15e64..2f7bc435c93d 100644 +--- a/include/net/sock.h ++++ b/include/net/sock.h +@@ -1755,8 +1755,8 @@ sk_dst_get(struct sock *sk) + + rcu_read_lock(); + dst = rcu_dereference(sk->sk_dst_cache); +- if (dst) +- dst_hold(dst); ++ if (dst && !atomic_inc_not_zero(&dst->__refcnt)) ++ dst = NULL; + rcu_read_unlock(); + return dst; + } +@@ -1793,9 +1793,11 @@ __sk_dst_set(struct sock *sk, struct dst_entry *dst) + static inline void + sk_dst_set(struct sock *sk, struct dst_entry *dst) + { +- spin_lock(&sk->sk_dst_lock); +- __sk_dst_set(sk, dst); +- spin_unlock(&sk->sk_dst_lock); ++ struct dst_entry *old_dst; ++ ++ sk_tx_queue_clear(sk); ++ old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst); ++ dst_release(old_dst); + } + + static inline void +@@ -1807,9 +1809,7 @@ __sk_dst_reset(struct sock *sk) + static inline void + sk_dst_reset(struct sock *sk) + { +- spin_lock(&sk->sk_dst_lock); +- __sk_dst_reset(sk); +- 
spin_unlock(&sk->sk_dst_lock); ++ sk_dst_set(sk, NULL); + } + + struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie); +diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks +index d2b32ac27a39..ecee67a00f5f 100644 +--- a/kernel/Kconfig.locks ++++ b/kernel/Kconfig.locks +@@ -220,6 +220,9 @@ config INLINE_WRITE_UNLOCK_IRQRESTORE + + endif + ++config ARCH_SUPPORTS_ATOMIC_RMW ++ bool ++ + config MUTEX_SPIN_ON_OWNER + def_bool y +- depends on SMP && !DEBUG_MUTEXES ++ depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW +diff --git a/kernel/events/core.c b/kernel/events/core.c +index 0e7fea78f565..f774e9365a03 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -2311,7 +2311,7 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn, + next_parent = rcu_dereference(next_ctx->parent_ctx); + + /* If neither context have a parent context; they cannot be clones. */ +- if (!parent && !next_parent) ++ if (!parent || !next_parent) + goto unlock; + + if (next_parent == ctx || next_ctx == parent || next_parent == parent) { +diff --git a/kernel/power/process.c b/kernel/power/process.c +index 06ec8869dbf1..14f9a8d4725d 100644 +--- a/kernel/power/process.c ++++ b/kernel/power/process.c +@@ -184,6 +184,7 @@ void thaw_processes(void) + + printk("Restarting tasks ... 
"); + ++ __usermodehelper_set_disable_depth(UMH_FREEZING); + thaw_workqueues(); + + read_lock(&tasklist_lock); +diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c +index dd52e7ffb10e..183e8e5c38ba 100644 +--- a/kernel/sched/debug.c ++++ b/kernel/sched/debug.c +@@ -608,7 +608,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m) + + avg_atom = p->se.sum_exec_runtime; + if (nr_switches) +- do_div(avg_atom, nr_switches); ++ avg_atom = div64_ul(avg_atom, nr_switches); + else + avg_atom = -1LL; + +diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c +index 88c9c65a430d..fe75444ae7ec 100644 +--- a/kernel/time/alarmtimer.c ++++ b/kernel/time/alarmtimer.c +@@ -585,9 +585,14 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, + struct itimerspec *new_setting, + struct itimerspec *old_setting) + { ++ ktime_t exp; ++ + if (!rtcdev) + return -ENOTSUPP; + ++ if (flags & ~TIMER_ABSTIME) ++ return -EINVAL; ++ + if (old_setting) + alarm_timer_get(timr, old_setting); + +@@ -597,8 +602,16 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, + + /* start the timer */ + timr->it.alarm.interval = timespec_to_ktime(new_setting->it_interval); +- alarm_start(&timr->it.alarm.alarmtimer, +- timespec_to_ktime(new_setting->it_value)); ++ exp = timespec_to_ktime(new_setting->it_value); ++ /* Convert (if necessary) to absolute time */ ++ if (flags != TIMER_ABSTIME) { ++ ktime_t now; ++ ++ now = alarm_bases[timr->it.alarm.alarmtimer.type].gettime(); ++ exp = ktime_add(now, exp); ++ } ++ ++ alarm_start(&timr->it.alarm.alarmtimer, exp); + return 0; + } + +@@ -730,6 +743,9 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags, + if (!alarmtimer_get_rtcdev()) + return -ENOTSUPP; + ++ if (flags & ~TIMER_ABSTIME) ++ return -EINVAL; ++ + if (!capable(CAP_WAKE_ALARM)) + return -EPERM; + +diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c +index 868633e61b43..e3be87edde33 100644 +--- a/kernel/trace/ftrace.c ++++ 
b/kernel/trace/ftrace.c +@@ -331,12 +331,12 @@ static void update_ftrace_function(void) + func = ftrace_ops_list_func; + } + ++ update_function_graph_func(); ++ + /* If there's no change, then do nothing more here */ + if (ftrace_trace_function == func) + return; + +- update_function_graph_func(); +- + /* + * If we are using the list function, it doesn't care + * about the function_trace_ops. +diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c +index 04202d9aa514..0954450df7dc 100644 +--- a/kernel/trace/ring_buffer.c ++++ b/kernel/trace/ring_buffer.c +@@ -616,10 +616,6 @@ int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu, + struct ring_buffer_per_cpu *cpu_buffer; + struct rb_irq_work *work; + +- if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) || +- (cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu))) +- return POLLIN | POLLRDNORM; +- + if (cpu == RING_BUFFER_ALL_CPUS) + work = &buffer->irq_work; + else { +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index 922657f30723..7e259b2bdf44 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -454,6 +454,12 @@ int __trace_puts(unsigned long ip, const char *str, int size) + struct print_entry *entry; + unsigned long irq_flags; + int alloc; ++ int pc; ++ ++ if (!(trace_flags & TRACE_ITER_PRINTK)) ++ return 0; ++ ++ pc = preempt_count(); + + if (unlikely(tracing_selftest_running || tracing_disabled)) + return 0; +@@ -463,7 +469,7 @@ int __trace_puts(unsigned long ip, const char *str, int size) + local_save_flags(irq_flags); + buffer = global_trace.trace_buffer.buffer; + event = trace_buffer_lock_reserve(buffer, TRACE_PRINT, alloc, +- irq_flags, preempt_count()); ++ irq_flags, pc); + if (!event) + return 0; + +@@ -480,6 +486,7 @@ int __trace_puts(unsigned long ip, const char *str, int size) + entry->buf[size] = '\0'; + + __buffer_unlock_commit(buffer, event); ++ ftrace_trace_stack(buffer, irq_flags, 4, pc); + + return size; + } +@@ -497,6 
+504,12 @@ int __trace_bputs(unsigned long ip, const char *str) + struct bputs_entry *entry; + unsigned long irq_flags; + int size = sizeof(struct bputs_entry); ++ int pc; ++ ++ if (!(trace_flags & TRACE_ITER_PRINTK)) ++ return 0; ++ ++ pc = preempt_count(); + + if (unlikely(tracing_selftest_running || tracing_disabled)) + return 0; +@@ -504,7 +517,7 @@ int __trace_bputs(unsigned long ip, const char *str) + local_save_flags(irq_flags); + buffer = global_trace.trace_buffer.buffer; + event = trace_buffer_lock_reserve(buffer, TRACE_BPUTS, size, +- irq_flags, preempt_count()); ++ irq_flags, pc); + if (!event) + return 0; + +@@ -513,6 +526,7 @@ int __trace_bputs(unsigned long ip, const char *str) + entry->str = str; + + __buffer_unlock_commit(buffer, event); ++ ftrace_trace_stack(buffer, irq_flags, 4, pc); + + return 1; + } +diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c +index 7b16d40bd64d..e4c4efc4ba0d 100644 +--- a/kernel/trace/trace_events.c ++++ b/kernel/trace/trace_events.c +@@ -439,6 +439,7 @@ static void remove_event_file_dir(struct ftrace_event_file *file) + + list_del(&file->list); + remove_subsystem(file->system); ++ free_event_filter(file->filter); + kmem_cache_free(file_cachep, file); + } + +diff --git a/mm/shmem.c b/mm/shmem.c +index 1f18c9d0d93e..ff85863587ee 100644 +--- a/mm/shmem.c ++++ b/mm/shmem.c +@@ -80,11 +80,12 @@ static struct vfsmount *shm_mnt; + #define SHORT_SYMLINK_LEN 128 + + /* +- * shmem_fallocate and shmem_writepage communicate via inode->i_private +- * (with i_mutex making sure that it has only one user at a time): +- * we would prefer not to enlarge the shmem inode just for that. ++ * shmem_fallocate communicates with shmem_fault or shmem_writepage via ++ * inode->i_private (with i_mutex making sure that it has only one user at ++ * a time): we would prefer not to enlarge the shmem inode just for that. 
+ */ + struct shmem_falloc { ++ wait_queue_head_t *waitq; /* faults into hole wait for punch to end */ + pgoff_t start; /* start of range currently being fallocated */ + pgoff_t next; /* the next page offset to be fallocated */ + pgoff_t nr_falloced; /* how many new pages have been fallocated */ +@@ -533,22 +534,19 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + return; + + index = start; +- for ( ; ; ) { ++ while (index < end) { + cond_resched(); + pvec.nr = shmem_find_get_pages_and_swap(mapping, index, + min(end - index, (pgoff_t)PAGEVEC_SIZE), + pvec.pages, indices); + if (!pvec.nr) { +- if (index == start || unfalloc) ++ /* If all gone or hole-punch or unfalloc, we're done */ ++ if (index == start || end != -1) + break; ++ /* But if truncating, restart to make sure all gone */ + index = start; + continue; + } +- if ((index == start || unfalloc) && indices[0] >= end) { +- shmem_deswap_pagevec(&pvec); +- pagevec_release(&pvec); +- break; +- } + mem_cgroup_uncharge_start(); + for (i = 0; i < pagevec_count(&pvec); i++) { + struct page *page = pvec.pages[i]; +@@ -560,8 +558,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + if (radix_tree_exceptional_entry(page)) { + if (unfalloc) + continue; +- nr_swaps_freed += !shmem_free_swap(mapping, +- index, page); ++ if (shmem_free_swap(mapping, index, page)) { ++ /* Swap was replaced by page: retry */ ++ index--; ++ break; ++ } ++ nr_swaps_freed++; + continue; + } + +@@ -570,6 +572,11 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend, + if (page->mapping == mapping) { + VM_BUG_ON_PAGE(PageWriteback(page), page); + truncate_inode_page(mapping, page); ++ } else { ++ /* Page was replaced by swap: retry */ ++ unlock_page(page); ++ index--; ++ break; + } + } + unlock_page(page); +@@ -824,6 +831,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc) + spin_lock(&inode->i_lock); + shmem_falloc = 
inode->i_private; + if (shmem_falloc && ++ !shmem_falloc->waitq && + index >= shmem_falloc->start && + index < shmem_falloc->next) + shmem_falloc->nr_unswapped++; +@@ -1298,6 +1306,64 @@ static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) + int error; + int ret = VM_FAULT_LOCKED; + ++ /* ++ * Trinity finds that probing a hole which tmpfs is punching can ++ * prevent the hole-punch from ever completing: which in turn ++ * locks writers out with its hold on i_mutex. So refrain from ++ * faulting pages into the hole while it's being punched. Although ++ * shmem_undo_range() does remove the additions, it may be unable to ++ * keep up, as each new page needs its own unmap_mapping_range() call, ++ * and the i_mmap tree grows ever slower to scan if new vmas are added. ++ * ++ * It does not matter if we sometimes reach this check just before the ++ * hole-punch begins, so that one fault then races with the punch: ++ * we just need to make racing faults a rare case. ++ * ++ * The implementation below would be much simpler if we just used a ++ * standard mutex or completion: but we cannot take i_mutex in fault, ++ * and bloating every shmem inode for this unlikely case would be sad. 
++ */ ++ if (unlikely(inode->i_private)) { ++ struct shmem_falloc *shmem_falloc; ++ ++ spin_lock(&inode->i_lock); ++ shmem_falloc = inode->i_private; ++ if (shmem_falloc && ++ shmem_falloc->waitq && ++ vmf->pgoff >= shmem_falloc->start && ++ vmf->pgoff < shmem_falloc->next) { ++ wait_queue_head_t *shmem_falloc_waitq; ++ DEFINE_WAIT(shmem_fault_wait); ++ ++ ret = VM_FAULT_NOPAGE; ++ if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) && ++ !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) { ++ /* It's polite to up mmap_sem if we can */ ++ up_read(&vma->vm_mm->mmap_sem); ++ ret = VM_FAULT_RETRY; ++ } ++ ++ shmem_falloc_waitq = shmem_falloc->waitq; ++ prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait, ++ TASK_UNINTERRUPTIBLE); ++ spin_unlock(&inode->i_lock); ++ schedule(); ++ ++ /* ++ * shmem_falloc_waitq points into the shmem_fallocate() ++ * stack of the hole-punching task: shmem_falloc_waitq ++ * is usually invalid by the time we reach here, but ++ * finish_wait() does not dereference it in that case; ++ * though i_lock needed lest racing with wake_up_all(). ++ */ ++ spin_lock(&inode->i_lock); ++ finish_wait(shmem_falloc_waitq, &shmem_fault_wait); ++ spin_unlock(&inode->i_lock); ++ return ret; ++ } ++ spin_unlock(&inode->i_lock); ++ } ++ + error = shmem_getpage(inode, vmf->pgoff, &vmf->page, SGP_CACHE, &ret); + if (error) + return ((error == -ENOMEM) ? 
VM_FAULT_OOM : VM_FAULT_SIGBUS);
+@@ -1817,12 +1883,25 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
+ 		struct address_space *mapping = file->f_mapping;
+ 		loff_t unmap_start = round_up(offset, PAGE_SIZE);
+ 		loff_t unmap_end = round_down(offset + len, PAGE_SIZE) - 1;
++		DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq);
++
++		shmem_falloc.waitq = &shmem_falloc_waitq;
++		shmem_falloc.start = unmap_start >> PAGE_SHIFT;
++		shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
++		spin_lock(&inode->i_lock);
++		inode->i_private = &shmem_falloc;
++		spin_unlock(&inode->i_lock);
+ 
+ 		if ((u64)unmap_end > (u64)unmap_start)
+ 			unmap_mapping_range(mapping, unmap_start,
+ 					    1 + unmap_end - unmap_start, 0);
+ 		shmem_truncate_range(inode, offset, offset + len - 1);
+ 		/* No need to unmap again: hole-punching leaves COWed pages */
++
++		spin_lock(&inode->i_lock);
++		inode->i_private = NULL;
++		wake_up_all(&shmem_falloc_waitq);
++		spin_unlock(&inode->i_lock);
+ 		error = 0;
+ 		goto out;
+ 	}
+@@ -1840,6 +1919,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
+ 		goto out;
+ 	}
+ 
++	shmem_falloc.waitq = NULL;
+ 	shmem_falloc.start = start;
+ 	shmem_falloc.next = start;
+ 	shmem_falloc.nr_falloced = 0;
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 6ef876cae8f1..6ef484f0777f 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1540,19 +1540,18 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
+ 		 * If dirty pages are scanned that are not queued for IO, it
+ 		 * implies that flushers are not keeping up. In this case, flag
+ 		 * the zone ZONE_TAIL_LRU_DIRTY and kswapd will start writing
+-		 * pages from reclaim context. It will forcibly stall in the
+-		 * next check.
++		 * pages from reclaim context.
+ 		 */
+ 		if (nr_unqueued_dirty == nr_taken)
+ 			zone_set_flag(zone, ZONE_TAIL_LRU_DIRTY);
+ 
+ 		/*
+-		 * In addition, if kswapd scans pages marked marked for
+-		 * immediate reclaim and under writeback (nr_immediate), it
+-		 * implies that pages are cycling through the LRU faster than
++		 * If kswapd scans pages marked marked for immediate
++		 * reclaim and under writeback (nr_immediate), it implies
++		 * that pages are cycling through the LRU faster than
+ 		 * they are written so also forcibly stall.
+ 		 */
+-		if (nr_unqueued_dirty == nr_taken || nr_immediate)
++		if (nr_immediate)
+ 			congestion_wait(BLK_RW_ASYNC, HZ/10);
+ 	}
+ 
+diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c
+index 6ee48aac776f..7e57135c7cc4 100644
+--- a/net/8021q/vlan_core.c
++++ b/net/8021q/vlan_core.c
+@@ -108,8 +108,11 @@ EXPORT_SYMBOL(vlan_dev_vlan_id);
+ 
+ static struct sk_buff *vlan_reorder_header(struct sk_buff *skb)
+ {
+-	if (skb_cow(skb, skb_headroom(skb)) < 0)
++	if (skb_cow(skb, skb_headroom(skb)) < 0) {
++		kfree_skb(skb);
+ 		return NULL;
++	}
++
+ 	memmove(skb->data - ETH_HLEN, skb->data - VLAN_ETH_HLEN, 2 * ETH_ALEN);
+ 	skb->mac_header += VLAN_HLEN;
+ 	return skb;
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index cc0d21895420..1f26a1b8c576 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -635,8 +635,6 @@ static void vlan_dev_uninit(struct net_device *dev)
+ 	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
+ 	int i;
+ 
+-	free_percpu(vlan->vlan_pcpu_stats);
+-	vlan->vlan_pcpu_stats = NULL;
+ 	for (i = 0; i < ARRAY_SIZE(vlan->egress_priority_map); i++) {
+ 		while ((pm = vlan->egress_priority_map[i]) != NULL) {
+ 			vlan->egress_priority_map[i] = pm->next;
+@@ -796,6 +794,15 @@ static const struct net_device_ops vlan_netdev_ops = {
+ 	.ndo_get_lock_subclass  = vlan_dev_get_lock_subclass,
+ };
+ 
++static void vlan_dev_free(struct net_device *dev)
++{
++	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
++
++	free_percpu(vlan->vlan_pcpu_stats);
++	vlan->vlan_pcpu_stats = NULL;
++	free_netdev(dev);
++}
++
+ void vlan_setup(struct net_device *dev)
+ {
+ 	ether_setup(dev);
+@@ -805,7 +812,7 @@ void vlan_setup(struct net_device *dev)
+ 	dev->tx_queue_len = 0;
+ 
+ 	dev->netdev_ops = &vlan_netdev_ops;
+-	dev->destructor = free_netdev;
++	dev->destructor = vlan_dev_free;
+ 	dev->ethtool_ops = &vlan_ethtool_ops;
+ 
+ 	memset(dev->broadcast, 0, ETH_ALEN);
+diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
+index 02806c6b2ff3..0c769cc65f25 100644
+--- a/net/appletalk/ddp.c
++++ b/net/appletalk/ddp.c
+@@ -1489,8 +1489,6 @@ static int atalk_rcv(struct sk_buff *skb, struct net_device *dev,
+ 		goto drop;
+ 
+ 	/* Queue packet (standard) */
+-	skb->sk = sock;
+-
+ 	if (sock_queue_rcv_skb(sock, skb) < 0)
+ 		goto drop;
+ 
+@@ -1644,7 +1642,6 @@ static int atalk_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr
+ 	if (!skb)
+ 		goto out;
+ 
+-	skb->sk = sk;
+ 	skb_reserve(skb, ddp_dl->header_length);
+ 	skb_reserve(skb, dev->hard_header_len);
+ 	skb->dev = dev;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 4c1b483f7c07..37bddf729e77 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -148,6 +148,9 @@ struct list_head ptype_all __read_mostly;	/* Taps */
+ static struct list_head offload_base __read_mostly;
+ 
+ static int netif_rx_internal(struct sk_buff *skb);
++static int call_netdevice_notifiers_info(unsigned long val,
++					 struct net_device *dev,
++					 struct netdev_notifier_info *info);
+ 
+ /*
+  * The @dev_base_head list is protected by @dev_base_lock and the rtnl
+@@ -1207,7 +1210,11 @@ EXPORT_SYMBOL(netdev_features_change);
+ void netdev_state_change(struct net_device *dev)
+ {
+ 	if (dev->flags & IFF_UP) {
+-		call_netdevice_notifiers(NETDEV_CHANGE, dev);
++		struct netdev_notifier_change_info change_info;
++
++		change_info.flags_changed = 0;
++		call_netdevice_notifiers_info(NETDEV_CHANGE, dev,
++					      &change_info.info);
+ 		rtmsg_ifinfo(RTM_NEWLINK, dev, 0, GFP_KERNEL);
+ 	}
+ }
+@@ -4051,6 +4058,8 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
+ 	skb->vlan_tci = 0;
+ 	skb->dev = napi->dev;
+ 	skb->skb_iif = 0;
++	skb->encapsulation = 0;
++	skb_shinfo(skb)->gso_type = 0;
+ 	skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
+ 
+ 	napi->skb = skb;
+diff --git a/net/core/dst.c b/net/core/dst.c
+index ca4231ec7347..15b6792e6ebb 100644
+--- a/net/core/dst.c
++++ b/net/core/dst.c
+@@ -267,6 +267,15 @@ again:
+ }
+ EXPORT_SYMBOL(dst_destroy);
+ 
++static void dst_destroy_rcu(struct rcu_head *head)
++{
++	struct dst_entry *dst = container_of(head, struct dst_entry, rcu_head);
++
++	dst = dst_destroy(dst);
++	if (dst)
++		__dst_free(dst);
++}
++
+ void dst_release(struct dst_entry *dst)
+ {
+ 	if (dst) {
+@@ -274,11 +283,8 @@ void dst_release(struct dst_entry *dst)
+ 
+ 		newrefcnt = atomic_dec_return(&dst->__refcnt);
+ 		WARN_ON(newrefcnt < 0);
+-		if (unlikely(dst->flags & DST_NOCACHE) && !newrefcnt) {
+-			dst = dst_destroy(dst);
+-			if (dst)
+-				__dst_free(dst);
+-		}
++		if (unlikely(dst->flags & DST_NOCACHE) && !newrefcnt)
++			call_rcu(&dst->rcu_head, dst_destroy_rcu);
+ 	}
+ }
+ EXPORT_SYMBOL(dst_release);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index e5ae776ee9b4..7f2e1fce706e 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -2881,12 +2881,13 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
+ 	int pos;
+ 	int dummy;
+ 
++	__skb_push(head_skb, doffset);
+ 	proto = skb_network_protocol(head_skb, &dummy);
+ 	if (unlikely(!proto))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	csum = !!can_checksum_protocol(features, proto);
+-	__skb_push(head_skb, doffset);
++
+ 	headroom = skb_headroom(head_skb);
+ 	pos = skb_headlen(head_skb);
+ 
+diff --git a/net/dns_resolver/dns_query.c b/net/dns_resolver/dns_query.c
+index e7b6d53eef88..f005cc760535 100644
+--- a/net/dns_resolver/dns_query.c
++++ b/net/dns_resolver/dns_query.c
+@@ -149,7 +149,9 @@ int dns_query(const char *type, const char *name, size_t namelen,
+ 	if (!*_result)
+ 		goto put;
+ 
+-	memcpy(*_result, upayload->data, len + 1);
++	memcpy(*_result, upayload->data, len);
++	(*_result)[len] = '\0';
++
+ 	if (_expiry)
+ 		*_expiry = rkey->expiry;
+ 
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index 19ab78aca547..07bd8edef417 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1434,6 +1434,9 @@ static int inet_gro_complete(struct sk_buff *skb, int nhoff)
+ 	int proto = iph->protocol;
+ 	int err = -ENOSYS;
+ 
++	if (skb->encapsulation)
++		skb_set_inner_network_header(skb, nhoff);
++
+ 	csum_replace2(&iph->check, iph->tot_len, newlen);
+ 	iph->tot_len = newlen;
+ 
+diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
+index f1d32280cb54..2d24f293f977 100644
+--- a/net/ipv4/gre_offload.c
++++ b/net/ipv4/gre_offload.c
+@@ -255,6 +255,9 @@ static int gre_gro_complete(struct sk_buff *skb, int nhoff)
+ 	int err = -ENOENT;
+ 	__be16 type;
+ 
++	skb->encapsulation = 1;
++	skb_shinfo(skb)->gso_type = SKB_GSO_GRE;
++
+ 	type = greh->protocol;
+ 	if (greh->flags & GRE_KEY)
+ 		grehlen += GRE_HEADER_SECTION;
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index 0134663fdbce..1e4aa8354f93 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -732,8 +732,6 @@ static void icmp_unreach(struct sk_buff *skb)
+ 			/* fall through */
+ 		case 0:
+ 			info = ntohs(icmph->un.frag.mtu);
+-			if (!info)
+-				goto out;
+ 		}
+ 		break;
+ 	case ICMP_SR_FAILED:
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 97e4d1655d26..9db3b877fcaf 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -1952,6 +1952,10 @@ int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr)
+ 
+ 	rtnl_lock();
+ 	in_dev = ip_mc_find_dev(net, imr);
++	if (!in_dev) {
++		ret = -ENODEV;
++		goto out;
++	}
+ 	ifindex = imr->imr_ifindex;
+ 	for (imlp = &inet->mc_list;
+ 	     (iml = rtnl_dereference(*imlp)) != NULL;
+@@ -1969,16 +1973,14 @@ int ip_mc_leave_group(struct sock *sk, struct ip_mreqn *imr)
+ 
+ 		*imlp = iml->next_rcu;
+ 
+-		if (in_dev)
+-			ip_mc_dec_group(in_dev, group);
++		ip_mc_dec_group(in_dev, group);
+ 		rtnl_unlock();
+ 		/* decrease mem now to avoid the memleak warning */
+ 		atomic_sub(sizeof(*iml), &sk->sk_omem_alloc);
+ 		kfree_rcu(iml, rcu);
+ 		return 0;
+ 	}
+-	if (!in_dev)
+-		ret = -ENODEV;
++out:
+ 	rtnl_unlock();
+ 	return ret;
+ }
+diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
+index f4ab72e19af9..96f90b89df32 100644
+--- a/net/ipv4/ip_options.c
++++ b/net/ipv4/ip_options.c
+@@ -288,6 +288,10 @@ int ip_options_compile(struct net *net,
+ 			optptr++;
+ 			continue;
+ 		}
++		if (unlikely(l < 2)) {
++			pp_ptr = optptr;
++			goto error;
++		}
+ 		optlen = optptr[1];
+ 		if (optlen < 2 || optlen > l) {
+ 			pp_ptr = optptr;
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 0c3a5d17b4a9..62cd9e0ae35b 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -73,12 +73,7 @@ static void __tunnel_dst_set(struct ip_tunnel_dst *idst,
+ {
+ 	struct dst_entry *old_dst;
+ 
+-	if (dst) {
+-		if (dst->flags & DST_NOCACHE)
+-			dst = NULL;
+-		else
+-			dst_clone(dst);
+-	}
++	dst_clone(dst);
+ 	old_dst = xchg((__force struct dst_entry **)&idst->dst, dst);
+ 	dst_release(old_dst);
+ }
+@@ -108,13 +103,14 @@ static struct rtable *tunnel_rtable_get(struct ip_tunnel *t, u32 cookie)
+ 
+ 	rcu_read_lock();
+ 	dst = rcu_dereference(this_cpu_ptr(t->dst_cache)->dst);
++	if (dst && !atomic_inc_not_zero(&dst->__refcnt))
++		dst = NULL;
+ 	if (dst) {
+ 		if (dst->obsolete && dst->ops->check(dst, cookie) == NULL) {
+-			rcu_read_unlock();
+ 			tunnel_dst_reset(t);
+-			return NULL;
++			dst_release(dst);
++			dst = NULL;
+ 		}
+-		dst_hold(dst);
+ 	}
+ 	rcu_read_unlock();
+ 	return (struct rtable *)dst;
+@@ -173,6 +169,7 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn,
+ 
+ 	hlist_for_each_entry_rcu(t, head, hash_node) {
+ 		if (remote != t->parms.iph.daddr ||
++		    t->parms.iph.saddr != 0 ||
+ 		    !(t->dev->flags & IFF_UP))
+ 			continue;
+ 
+@@ -189,10 +186,11 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn,
+ 	head = &itn->tunnels[hash];
+ 
+ 	hlist_for_each_entry_rcu(t, head, hash_node) {
+-		if ((local != t->parms.iph.saddr &&
+-		     (local != t->parms.iph.daddr ||
+-		      !ipv4_is_multicast(local))) ||
+-		    !(t->dev->flags & IFF_UP))
++		if ((local != t->parms.iph.saddr || t->parms.iph.daddr != 0) &&
++		    (local != t->parms.iph.daddr || !ipv4_is_multicast(local)))
++			continue;
++
++		if (!(t->dev->flags & IFF_UP))
+ 			continue;
+ 
+ 		if (!ip_tunnel_key_match(&t->parms, flags, key))
+@@ -209,6 +207,8 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn,
+ 
+ 	hlist_for_each_entry_rcu(t, head, hash_node) {
+ 		if (t->parms.i_key != key ||
++		    t->parms.iph.saddr != 0 ||
++		    t->parms.iph.daddr != 0 ||
+ 		    !(t->dev->flags & IFF_UP))
+ 			continue;
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 134437309b1e..031553f8a306 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1029,7 +1029,7 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
+ 	const struct iphdr *iph = (const struct iphdr *) skb->data;
+ 	struct flowi4 fl4;
+ 	struct rtable *rt;
+-	struct dst_entry *dst;
++	struct dst_entry *odst = NULL;
+ 	bool new = false;
+ 
+ 	bh_lock_sock(sk);
+@@ -1037,16 +1037,17 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
+ 	if (!ip_sk_accept_pmtu(sk))
+ 		goto out;
+ 
+-	rt = (struct rtable *) __sk_dst_get(sk);
++	odst = sk_dst_get(sk);
+ 
+-	if (sock_owned_by_user(sk) || !rt) {
++	if (sock_owned_by_user(sk) || !odst) {
+ 		__ipv4_sk_update_pmtu(skb, sk, mtu);
+ 		goto out;
+ 	}
+ 
+ 	__build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0);
+ 
+-	if (!__sk_dst_check(sk, 0)) {
++	rt = (struct rtable *)odst;
++	if (odst->obsolete && odst->ops->check(odst, 0) == NULL) {
+ 		rt = ip_route_output_flow(sock_net(sk), &fl4, sk);
+ 		if (IS_ERR(rt))
+ 			goto out;
+@@ -1056,8 +1057,7 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
+ 
+ 	__ip_rt_update_pmtu((struct rtable *) rt->dst.path, &fl4, mtu);
+ 
+-	dst = dst_check(&rt->dst, 0);
+-	if (!dst) {
++	if (!dst_check(&rt->dst, 0)) {
+ 		if (new)
+ 			dst_release(&rt->dst);
+ 
+@@ -1069,10 +1069,11 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
+ 	}
+ 
+ 	if (new)
+-		__sk_dst_set(sk, &rt->dst);
++		sk_dst_set(sk, &rt->dst);
+ 
+ out:
+ 	bh_unlock_sock(sk);
++	dst_release(odst);
+ }
+ EXPORT_SYMBOL_GPL(ipv4_sk_update_pmtu);
+ 
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 97c8f5620c43..b48fba0aaa92 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1108,7 +1108,7 @@ int tcp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ 	if (unlikely(tp->repair)) {
+ 		if (tp->repair_queue == TCP_RECV_QUEUE) {
+ 			copied = tcp_send_rcvq(sk, msg, size);
+-			goto out;
++			goto out_nopush;
+ 		}
+ 
+ 		err = -EINVAL;
+@@ -1282,6 +1282,7 @@ wait_for_memory:
+ out:
+ 	if (copied)
+ 		tcp_push(sk, flags, mss_now, tp->nonagle, size_goal);
++out_nopush:
+ 	release_sock(sk);
+ 	return copied + copied_syn;
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index e3647465138b..3898694d0300 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -1113,7 +1113,7 @@ static bool tcp_check_dsack(struct sock *sk, const struct sk_buff *ack_skb,
+ 	}
+ 
+ 	/* D-SACK for already forgotten data... Do dumb counting. */
+-	if (dup_sack && tp->undo_marker && tp->undo_retrans &&
++	if (dup_sack && tp->undo_marker && tp->undo_retrans > 0 &&
+ 	    !after(end_seq_0, prior_snd_una) &&
+ 	    after(end_seq_0, tp->undo_marker))
+ 		tp->undo_retrans--;
+@@ -1169,7 +1169,7 @@ static int tcp_match_skb_to_sack(struct sock *sk, struct sk_buff *skb,
+ 			unsigned int new_len = (pkt_len / mss) * mss;
+ 			if (!in_sack && new_len < pkt_len) {
+ 				new_len += mss;
+-				if (new_len > skb->len)
++				if (new_len >= skb->len)
+ 					return 0;
+ 			}
+ 			pkt_len = new_len;
+@@ -1193,7 +1193,7 @@ static u8 tcp_sacktag_one(struct sock *sk,
+ 
+ 		/* Account D-SACK for retransmitted packet. */
+ 		if (dup_sack && (sacked & TCPCB_RETRANS)) {
+-			if (tp->undo_marker && tp->undo_retrans &&
++			if (tp->undo_marker && tp->undo_retrans > 0 &&
+ 			    after(end_seq, tp->undo_marker))
+ 				tp->undo_retrans--;
+ 			if (sacked & TCPCB_SACKED_ACKED)
+@@ -1894,7 +1894,7 @@ static void tcp_clear_retrans_partial(struct tcp_sock *tp)
+ 	tp->lost_out = 0;
+ 
+ 	tp->undo_marker = 0;
+-	tp->undo_retrans = 0;
++	tp->undo_retrans = -1;
+ }
+ 
+ void tcp_clear_retrans(struct tcp_sock *tp)
+@@ -2663,7 +2663,7 @@ static void tcp_enter_recovery(struct sock *sk, bool ece_ack)
+ 
+ 	tp->prior_ssthresh = 0;
+ 	tp->undo_marker = tp->snd_una;
+-	tp->undo_retrans = tp->retrans_out;
++	tp->undo_retrans = tp->retrans_out ? : -1;
+ 
+ 	if (inet_csk(sk)->icsk_ca_state < TCP_CA_CWR) {
+ 		if (!ece_ack)
+diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
+index b92b81718ca4..c25953a386d0 100644
+--- a/net/ipv4/tcp_offload.c
++++ b/net/ipv4/tcp_offload.c
+@@ -310,7 +310,7 @@ static int tcp4_gro_complete(struct sk_buff *skb, int thoff)
+ 
+ 	th->check = ~tcp_v4_check(skb->len - thoff, iph->saddr,
+ 				  iph->daddr, 0);
+-	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
++	skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV4;
+ 
+ 	return tcp_gro_complete(skb);
+ }
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 17a11e65e57f..b3d1addd816b 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2448,8 +2448,6 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb)
+ 		if (!tp->retrans_stamp)
+ 			tp->retrans_stamp = TCP_SKB_CB(skb)->when;
+ 
+-		tp->undo_retrans += tcp_skb_pcount(skb);
+-
+ 		/* snd_nxt is stored to detect loss of retransmitted segment,
+ 		 * see tcp_input.c tcp_sacktag_write_queue().
+ 		 */
+@@ -2457,6 +2455,10 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb)
+ 	} else {
+ 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRETRANSFAIL);
+ 	}
++
++	if (tp->undo_retrans < 0)
++		tp->undo_retrans = 0;
++	tp->undo_retrans += tcp_skb_pcount(skb);
+ 	return err;
+ }
+ 
+diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c
+index 8517d3cd1aed..01b0ff9a0c2c 100644
+--- a/net/ipv6/tcpv6_offload.c
++++ b/net/ipv6/tcpv6_offload.c
+@@ -73,7 +73,7 @@ static int tcp6_gro_complete(struct sk_buff *skb, int thoff)
+ 
+ 	th->check = ~tcp_v6_check(skb->len - thoff, &iph->saddr,
+ 				  &iph->daddr, 0);
+-	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
++	skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV6;
+ 
+ 	return tcp_gro_complete(skb);
+ }
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 7f40fd25acae..0dfe894afd48 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -636,7 +636,7 @@ static unsigned int netlink_poll(struct file *file, struct socket *sock,
+ 		while (nlk->cb_running && netlink_dump_space(nlk)) {
+ 			err = netlink_dump(sk);
+ 			if (err < 0) {
+-				sk->sk_err = err;
++				sk->sk_err = -err;
+ 				sk->sk_error_report(sk);
+ 				break;
+ 			}
+@@ -2448,7 +2448,7 @@ static int netlink_recvmsg(struct kiocb *kiocb, struct socket *sock,
+ 	    atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) {
+ 		ret = netlink_dump(sk);
+ 		if (ret) {
+-			sk->sk_err = ret;
++			sk->sk_err = -ret;
+ 			sk->sk_error_report(sk);
+ 		}
+ 	}
+diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c
+index c82fdc1eab7c..dfa532f00d88 100644
+--- a/net/sctp/sysctl.c
++++ b/net/sctp/sysctl.c
+@@ -307,41 +307,40 @@ static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, int write,
+ 			     loff_t *ppos)
+ {
+ 	struct net *net = current->nsproxy->net_ns;
+-	char tmp[8];
+ 	struct ctl_table tbl;
+-	int ret;
+-	int changed = 0;
++	bool changed = false;
+ 	char *none = "none";
++	char tmp[8];
++	int ret;
+ 
+ 	memset(&tbl, 0, sizeof(struct ctl_table));
+ 
+ 	if (write) {
+ 		tbl.data = tmp;
+-		tbl.maxlen = 8;
++		tbl.maxlen = sizeof(tmp);
+ 	} else {
+ 		tbl.data = net->sctp.sctp_hmac_alg ? : none;
+ 		tbl.maxlen = strlen(tbl.data);
+ 	}
+-	ret = proc_dostring(&tbl, write, buffer, lenp, ppos);
+ 
+-	if (write) {
++	ret = proc_dostring(&tbl, write, buffer, lenp, ppos);
++	if (write && ret == 0) {
+ #ifdef CONFIG_CRYPTO_MD5
+ 		if (!strncmp(tmp, "md5", 3)) {
+ 			net->sctp.sctp_hmac_alg = "md5";
+-			changed = 1;
++			changed = true;
+ 		}
+ #endif
+ #ifdef CONFIG_CRYPTO_SHA1
+ 		if (!strncmp(tmp, "sha1", 4)) {
+ 			net->sctp.sctp_hmac_alg = "sha1";
+-			changed = 1;
++			changed = true;
+ 		}
+ #endif
+ 		if (!strncmp(tmp, "none", 4)) {
+ 			net->sctp.sctp_hmac_alg = NULL;
+-			changed = 1;
++			changed = true;
+ 		}
+-
+ 		if (!changed)
+ 			ret = -EINVAL;
+ 	}
+@@ -354,11 +353,10 @@ static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write,
+ 				loff_t *ppos)
+ {
+ 	struct net *net = current->nsproxy->net_ns;
+-	int new_value;
+-	struct ctl_table tbl;
+ 	unsigned int min = *(unsigned int *) ctl->extra1;
+ 	unsigned int max = *(unsigned int *) ctl->extra2;
+-	int ret;
++	struct ctl_table tbl;
++	int ret, new_value;
+ 
+ 	memset(&tbl, 0, sizeof(struct ctl_table));
+ 	tbl.maxlen = sizeof(unsigned int);
+@@ -367,12 +365,15 @@ static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write,
+ 		tbl.data = &new_value;
+ 	else
+ 		tbl.data = &net->sctp.rto_min;
++
+ 	ret = proc_dointvec(&tbl, write, buffer, lenp, ppos);
+-	if (write) {
+-		if (ret || new_value > max || new_value < min)
++	if (write && ret == 0) {
++		if (new_value > max || new_value < min)
+ 			return -EINVAL;
++
+ 		net->sctp.rto_min = new_value;
+ 	}
++
+ 	return ret;
+ }
+ 
+@@ -381,11 +382,10 @@ static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write,
+ 				loff_t *ppos)
+ {
+ 	struct net *net = current->nsproxy->net_ns;
+-	int new_value;
+-	struct ctl_table tbl;
+ 	unsigned int min = *(unsigned int *) ctl->extra1;
+ 	unsigned int max = *(unsigned int *) ctl->extra2;
+-	int ret;
++	struct ctl_table tbl;
++	int ret, new_value;
+ 
+ 	memset(&tbl, 0, sizeof(struct ctl_table));
+ 	tbl.maxlen = sizeof(unsigned int);
+@@ -394,12 +394,15 @@ static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write,
+ 		tbl.data = &new_value;
+ 	else
+ 		tbl.data = &net->sctp.rto_max;
++
+ 	ret = proc_dointvec(&tbl, write, buffer, lenp, ppos);
+-	if (write) {
+-		if (ret || new_value > max || new_value < min)
++	if (write && ret == 0) {
++		if (new_value > max || new_value < min)
+ 			return -EINVAL;
++
+ 		net->sctp.rto_max = new_value;
+ 	}
++
+ 	return ret;
+ }
+ 
+@@ -420,8 +423,7 @@ static int proc_sctp_do_auth(struct ctl_table *ctl, int write,
+ 		tbl.data = &net->sctp.auth_enable;
+ 
+ 	ret = proc_dointvec(&tbl, write, buffer, lenp, ppos);
+-
+-	if (write) {
++	if (write && ret == 0) {
+ 		struct sock *sk = net->sctp.ctl_sock;
+ 
+ 		net->sctp.auth_enable = new_value;
+diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c
+index 85c64658bd0b..b6842fdb53d4 100644
+--- a/net/sctp/ulpevent.c
++++ b/net/sctp/ulpevent.c
+@@ -366,9 +366,10 @@ fail:
+  * specification [SCTP] and any extensions for a list of possible
+  * error formats.
+  */
+-struct sctp_ulpevent *sctp_ulpevent_make_remote_error(
+-	const struct sctp_association *asoc, struct sctp_chunk *chunk,
+-	__u16 flags, gfp_t gfp)
++struct sctp_ulpevent *
++sctp_ulpevent_make_remote_error(const struct sctp_association *asoc,
++				struct sctp_chunk *chunk, __u16 flags,
++				gfp_t gfp)
+ {
+ 	struct sctp_ulpevent *event;
+ 	struct sctp_remote_error *sre;
+@@ -387,8 +388,7 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error(
+ 	/* Copy the skb to a new skb with room for us to prepend
+ 	 * notification with.
+ 	 */
+-	skb = skb_copy_expand(chunk->skb, sizeof(struct sctp_remote_error),
+-			      0, gfp);
++	skb = skb_copy_expand(chunk->skb, sizeof(*sre), 0, gfp);
+ 
+ 	/* Pull off the rest of the cause TLV from the chunk. */
+ 	skb_pull(chunk->skb, elen);
+@@ -399,62 +399,21 @@ struct sctp_ulpevent *sctp_ulpevent_make_remote_error(
+ 	event = sctp_skb2event(skb);
+ 	sctp_ulpevent_init(event, MSG_NOTIFICATION, skb->truesize);
+ 
+-	sre = (struct sctp_remote_error *)
+-		skb_push(skb, sizeof(struct sctp_remote_error));
++	sre = (struct sctp_remote_error *) skb_push(skb, sizeof(*sre));
+ 
+ 	/* Trim the buffer to the right length. */
+-	skb_trim(skb, sizeof(struct sctp_remote_error) + elen);
++	skb_trim(skb, sizeof(*sre) + elen);
+ 
+-	/* Socket Extensions for SCTP
+-	 * 5.3.1.3 SCTP_REMOTE_ERROR
+-	 *
+-	 * sre_type:
+-	 *   It should be SCTP_REMOTE_ERROR.
+-	 */
++	/* RFC6458, Section 6.1.3. SCTP_REMOTE_ERROR */
++	memset(sre, 0, sizeof(*sre));
+ 	sre->sre_type = SCTP_REMOTE_ERROR;
+-
+-	/*
+-	 * Socket Extensions for SCTP
+-	 * 5.3.1.3 SCTP_REMOTE_ERROR
+-	 *
+-	 * sre_flags: 16 bits (unsigned integer)
+-	 *   Currently unused.
+-	 */
+ 	sre->sre_flags = 0;
+-
+-	/* Socket Extensions for SCTP
+-	 * 5.3.1.3 SCTP_REMOTE_ERROR
+-	 *
+-	 * sre_length: sizeof (__u32)
+-	 *
+-	 * This field is the total length of the notification data,
+-	 * including the notification header.
+-	 */
+ 	sre->sre_length = skb->len;
+-
+-	/* Socket Extensions for SCTP
+-	 * 5.3.1.3 SCTP_REMOTE_ERROR
+-	 *
+-	 * sre_error: 16 bits (unsigned integer)
+-	 * This value represents one of the Operational Error causes defined in
+-	 * the SCTP specification, in network byte order.
+-	 */
+ 	sre->sre_error = cause;
+-
+-	/* Socket Extensions for SCTP
+-	 * 5.3.1.3 SCTP_REMOTE_ERROR
+-	 *
+-	 * sre_assoc_id: sizeof (sctp_assoc_t)
+-	 *
+-	 * The association id field, holds the identifier for the association.
+-	 * All notifications for a given association have the same association
+-	 * identifier. For TCP style socket, this field is ignored.
+-	 */
+ 	sctp_ulpevent_set_owner(event, asoc);
+ 	sre->sre_assoc_id = sctp_assoc2id(asoc);
+ 
+ 	return event;
+-
+ fail:
+ 	return NULL;
+ }
+@@ -899,7 +858,9 @@ __u16 sctp_ulpevent_get_notification_type(const struct sctp_ulpevent *event)
+ 	return notification->sn_header.sn_type;
+ }
+ 
+-/* Copy out the sndrcvinfo into a msghdr. */
++/* RFC6458, Section 5.3.2. SCTP Header Information Structure
++ * (SCTP_SNDRCV, DEPRECATED)
++ */
+ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event,
+ 				   struct msghdr *msghdr)
+ {
+@@ -908,74 +869,21 @@ void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event,
+ 	if (sctp_ulpevent_is_notification(event))
+ 		return;
+ 
+-	/* Sockets API Extensions for SCTP
+-	 * Section 5.2.2 SCTP Header Information Structure (SCTP_SNDRCV)
+-	 *
+-	 * sinfo_stream: 16 bits (unsigned integer)
+-	 *
+-	 * For recvmsg() the SCTP stack places the message's stream number in
+-	 * this value.
+-	 */
++	memset(&sinfo, 0, sizeof(sinfo));
+ 	sinfo.sinfo_stream = event->stream;
+-	/* sinfo_ssn: 16 bits (unsigned integer)
+-	 *
+-	 * For recvmsg() this value contains the stream sequence number that
+-	 * the remote endpoint placed in the DATA chunk. For fragmented
+-	 * messages this is the same number for all deliveries of the message
+-	 * (if more than one recvmsg() is needed to read the message).
+-	 */
+ 	sinfo.sinfo_ssn = event->ssn;
+-	/* sinfo_ppid: 32 bits (unsigned integer)
+-	 *
+-	 * In recvmsg() this value is
+-	 * the same information that was passed by the upper layer in the peer
+-	 * application. Please note that byte order issues are NOT accounted
+-	 * for and this information is passed opaquely by the SCTP stack from
+-	 * one end to the other.
+-	 */
+ 	sinfo.sinfo_ppid = event->ppid;
+-	/* sinfo_flags: 16 bits (unsigned integer)
+-	 *
+-	 * This field may contain any of the following flags and is composed of
+-	 * a bitwise OR of these values.
+-	 *
+-	 * recvmsg() flags:
+-	 *
+-	 * SCTP_UNORDERED - This flag is present when the message was sent
+-	 * non-ordered.
+-	 */
+ 	sinfo.sinfo_flags = event->flags;
+-	/* sinfo_tsn: 32 bit (unsigned integer)
+-	 *
+-	 * For the receiving side, this field holds a TSN that was
+-	 * assigned to one of the SCTP Data Chunks.
+-	 */
+ 	sinfo.sinfo_tsn = event->tsn;
+-	/* sinfo_cumtsn: 32 bit (unsigned integer)
+-	 *
+-	 * This field will hold the current cumulative TSN as
+-	 * known by the underlying SCTP layer. Note this field is
+-	 * ignored when sending and only valid for a receive
+-	 * operation when sinfo_flags are set to SCTP_UNORDERED.
+-	 */
+ 	sinfo.sinfo_cumtsn = event->cumtsn;
+-	/* sinfo_assoc_id: sizeof (sctp_assoc_t)
+-	 *
+-	 * The association handle field, sinfo_assoc_id, holds the identifier
+-	 * for the association announced in the COMMUNICATION_UP notification.
+-	 * All notifications for a given association have the same identifier.
+-	 * Ignored for one-to-one style sockets.
+-	 */
+ 	sinfo.sinfo_assoc_id = sctp_assoc2id(event->asoc);
+-
+-	/* context value that is set via SCTP_CONTEXT socket option. */
++	/* Context value that is set via SCTP_CONTEXT socket option. */
+ 	sinfo.sinfo_context = event->asoc->default_rcv_context;
+-
+ 	/* These fields are not used while receiving. */
+ 	sinfo.sinfo_timetolive = 0;
+ 
+ 	put_cmsg(msghdr, IPPROTO_SCTP, SCTP_SNDRCV,
+-		 sizeof(struct sctp_sndrcvinfo), (void *)&sinfo);
++		 sizeof(sinfo), &sinfo);
+ }
+ 
+ /* Do accounting for bytes received and hold a reference to the association
+diff --git a/net/tipc/bcast.c b/net/tipc/bcast.c
+index bf860d9e75af..3ca45bf5029f 100644
+--- a/net/tipc/bcast.c
++++ b/net/tipc/bcast.c
+@@ -537,6 +537,7 @@ receive:
+ 
+ 			buf = node->bclink.deferred_head;
+ 			node->bclink.deferred_head = buf->next;
++			buf->next = NULL;
+ 			node->bclink.deferred_size--;
+ 			goto receive;
+ 		}
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 22f7883fcb9a..7ec91424ba22 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2930,7 +2930,7 @@ static int azx_suspend(struct device *dev)
+ 	struct azx *chip = card->private_data;
+ 	struct azx_pcm *p;
+ 
+-	if (chip->disabled)
++	if (chip->disabled || chip->init_failed)
+ 		return 0;
+ 
+ 	snd_power_change_state(card, SNDRV_CTL_POWER_D3hot);
+@@ -2961,7 +2961,7 @@ static int azx_resume(struct device *dev)
+ 	struct snd_card *card = dev_get_drvdata(dev);
+ 	struct azx *chip = card->private_data;
+ 
+-	if (chip->disabled)
++	if (chip->disabled || chip->init_failed)
+ 		return 0;
+ 
+ 	if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL)
+@@ -2996,7 +2996,7 @@ static int azx_runtime_suspend(struct device *dev)
+ 	struct snd_card *card = dev_get_drvdata(dev);
+ 	struct azx *chip = card->private_data;
+ 
+-	if (chip->disabled)
++	if (chip->disabled || chip->init_failed)
+ 		return 0;
+ 
+ 	if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME))
+@@ -3022,7 +3022,7 @@ static int azx_runtime_resume(struct device *dev)
+ 	struct hda_codec *codec;
+ 	int status;
+ 
+-	if (chip->disabled)
++	if (chip->disabled || chip->init_failed)
+ 		return 0;
+ 
+ 	if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME))
+@@ -3057,7 +3057,7 @@ static int azx_runtime_idle(struct device *dev)
+ 	struct snd_card *card = dev_get_drvdata(dev);
+ 	struct azx *chip = card->private_data;
+ 
+-	if (chip->disabled)
++	if (chip->disabled || chip->init_failed)
+ 		return 0;
+ 
+ 	if (!power_save_controller ||
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 3abfe2a642ec..d135c906caff 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1062,6 +1062,7 @@ static void hdmi_pin_setup_infoframe(struct hda_codec *codec,
+ {
+ 	union audio_infoframe ai;
+ 
++	memset(&ai, 0, sizeof(ai));
+ 	if (conn_type == 0) { /* HDMI */
+ 		struct hdmi_audio_infoframe *hdmi_ai = &ai.hdmi;
+