From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	by finch.gentoo.org (Postfix) with ESMTP id CEBAB138A1A
	for ; Fri, 30 Jan 2015 12:51:32 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id 18DB3E091B;
	Fri, 30 Jan 2015 12:51:32 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id 9B4D6E091B
	for ; Fri, 30 Jan 2015 12:51:31 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id 5AF91340773
	for ; Fri, 30 Jan 2015 12:51:30 +0000 (UTC)
Received: from localhost.localdomain (localhost [127.0.0.1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id 4AD7810AEC
	for ; Fri, 30 Jan 2015 12:51:27 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1422621600.30b7838f41cfebf7103845bb41ef499afe0c5e1e.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:3.10 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1066_linux-3.10.67.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 30b7838f41cfebf7103845bb41ef499afe0c5e1e
X-VCS-Branch: 3.10
Date: Fri, 30 Jan 2015 12:51:27 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Archives-Salt: 9cdc7777-30b1-45cc-ad5b-a9cc5d7ecd50
X-Archives-Hash: 300d3dc1f5a87cdc6acba692581f786c

commit:     30b7838f41cfebf7103845bb41ef499afe0c5e1e
Author:     Mike Pagano <mpagano@gentoo.org>
AuthorDate: Fri Jan 30 12:40:00 2015 +0000
Commit:     Mike Pagano <mpagano@gentoo.org>
CommitDate: Fri Jan 30 12:40:00 2015 +0000
URL:        http://sources.gentoo.org/gitweb/?p=proj/linux-patches.git;a=commit;h=30b7838f

Linux patch 3.10.67

---
 0000_README              |    4 +
 1066_linux-3.10.67.patch | 2417 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2421 insertions(+)

diff --git a/0000_README b/0000_README
index 1f0923b..f5f4229 100644
--- a/0000_README
+++ b/0000_README
@@ -306,6 +306,10 @@ Patch: 1065_linux-3.10.66.patch
 From: http://www.kernel.org
 Desc: Linux 3.10.66
 
+Patch: 1066_linux-3.10.67.patch
+From: http://www.kernel.org
+Desc: Linux 3.10.67
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1066_linux-3.10.67.patch b/1066_linux-3.10.67.patch new file mode 100644 index 0000000..43fdd4f --- /dev/null +++ b/1066_linux-3.10.67.patch @@ -0,0 +1,2417 @@ +diff --git a/Makefile b/Makefile +index 12ae1ef5437a..7c6711fa3c3f 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 3 + PATCHLEVEL = 10 +-SUBLEVEL = 66 ++SUBLEVEL = 67 + EXTRAVERSION = + NAME = TOSSUG Baby Fish + +diff --git a/arch/arm/boot/dts/imx25.dtsi b/arch/arm/boot/dts/imx25.dtsi +index 82897e2d8d5a..97d1a550eb98 100644 +--- a/arch/arm/boot/dts/imx25.dtsi ++++ b/arch/arm/boot/dts/imx25.dtsi +@@ -335,7 +335,7 @@ + compatible = "fsl,imx25-pwm", "fsl,imx27-pwm"; + #pwm-cells = <2>; + reg = <0x53fa0000 0x4000>; +- clocks = <&clks 106>, <&clks 36>; ++ clocks = <&clks 106>, <&clks 52>; + clock-names = "ipg", "per"; + interrupts = <36>; + }; +@@ -354,7 +354,7 @@ + compatible = "fsl,imx25-pwm", "fsl,imx27-pwm"; + #pwm-cells = <2>; + reg = <0x53fa8000 0x4000>; +- clocks = <&clks 107>, <&clks 36>; ++ clocks = <&clks 107>, <&clks 52>; + clock-names = "ipg", "per"; + interrupts = <41>; + }; +@@ -394,7 +394,7 @@ + pwm4: pwm@53fc8000 { + compatible = "fsl,imx25-pwm", "fsl,imx27-pwm"; + reg = <0x53fc8000 0x4000>; +- clocks = <&clks 108>, <&clks 36>; ++ clocks = <&clks 108>, <&clks 52>; + clock-names = "ipg", "per"; + interrupts = <42>; + }; +@@ -439,7 +439,7 @@ + compatible = "fsl,imx25-pwm", "fsl,imx27-pwm"; + #pwm-cells = <2>; + reg = <0x53fe0000 0x4000>; +- clocks = <&clks 105>, <&clks 36>; ++ clocks = <&clks 105>, <&clks 52>; + clock-names = "ipg", "per"; + interrupts = <26>; + }; +diff --git a/arch/arm/crypto/aes_glue.c b/arch/arm/crypto/aes_glue.c +index 59f7877ead6a..e73ec2ab1316 100644 +--- a/arch/arm/crypto/aes_glue.c ++++ b/arch/arm/crypto/aes_glue.c +@@ -103,6 +103,6 @@ module_exit(aes_fini); + + MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm (ASM)"); + MODULE_LICENSE("GPL"); +-MODULE_ALIAS("aes"); +-MODULE_ALIAS("aes-asm"); ++MODULE_ALIAS_CRYPTO("aes"); ++MODULE_ALIAS_CRYPTO("aes-asm"); + MODULE_AUTHOR("David McCullough "); +diff --git a/arch/arm/crypto/sha1_glue.c b/arch/arm/crypto/sha1_glue.c +index 76cd976230bc..ace4cd67464c 100644 +--- a/arch/arm/crypto/sha1_glue.c ++++ b/arch/arm/crypto/sha1_glue.c +@@ -175,5 +175,5 @@ module_exit(sha1_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm (ARM)"); +-MODULE_ALIAS("sha1"); ++MODULE_ALIAS_CRYPTO("sha1"); + MODULE_AUTHOR("David McCullough "); +diff --git a/arch/powerpc/crypto/sha1.c b/arch/powerpc/crypto/sha1.c +index f9e8b9491efc..b51da9132744 100644 +--- a/arch/powerpc/crypto/sha1.c ++++ b/arch/powerpc/crypto/sha1.c +@@ -154,4 +154,5 @@ module_exit(sha1_powerpc_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm"); + +-MODULE_ALIAS("sha1-powerpc"); ++MODULE_ALIAS_CRYPTO("sha1"); ++MODULE_ALIAS_CRYPTO("sha1-powerpc"); +diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c +index fd104db9cea1..92eb4d6ad39d 100644 +--- a/arch/s390/crypto/aes_s390.c ++++ b/arch/s390/crypto/aes_s390.c +@@ -970,7 +970,7 @@ static void __exit aes_s390_fini(void) + module_init(aes_s390_init); + module_exit(aes_s390_fini); + +-MODULE_ALIAS("aes-all"); ++MODULE_ALIAS_CRYPTO("aes-all"); + + MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm"); + MODULE_LICENSE("GPL"); +diff --git a/arch/s390/crypto/des_s390.c b/arch/s390/crypto/des_s390.c +index f2d6cccddcf8..a89feffb22b5 100644 +--- a/arch/s390/crypto/des_s390.c ++++ b/arch/s390/crypto/des_s390.c +@@ -619,8 +619,8 @@ static void 
__exit des_s390_exit(void) + module_init(des_s390_init); + module_exit(des_s390_exit); + +-MODULE_ALIAS("des"); +-MODULE_ALIAS("des3_ede"); ++MODULE_ALIAS_CRYPTO("des"); ++MODULE_ALIAS_CRYPTO("des3_ede"); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("DES & Triple DES EDE Cipher Algorithms"); +diff --git a/arch/s390/crypto/ghash_s390.c b/arch/s390/crypto/ghash_s390.c +index d43485d142e9..7940dc90e80b 100644 +--- a/arch/s390/crypto/ghash_s390.c ++++ b/arch/s390/crypto/ghash_s390.c +@@ -160,7 +160,7 @@ static void __exit ghash_mod_exit(void) + module_init(ghash_mod_init); + module_exit(ghash_mod_exit); + +-MODULE_ALIAS("ghash"); ++MODULE_ALIAS_CRYPTO("ghash"); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("GHASH Message Digest Algorithm, s390 implementation"); +diff --git a/arch/s390/crypto/sha1_s390.c b/arch/s390/crypto/sha1_s390.c +index a1b3a9dc9d8a..5b2bee323694 100644 +--- a/arch/s390/crypto/sha1_s390.c ++++ b/arch/s390/crypto/sha1_s390.c +@@ -103,6 +103,6 @@ static void __exit sha1_s390_fini(void) + module_init(sha1_s390_init); + module_exit(sha1_s390_fini); + +-MODULE_ALIAS("sha1"); ++MODULE_ALIAS_CRYPTO("sha1"); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm"); +diff --git a/arch/s390/crypto/sha256_s390.c b/arch/s390/crypto/sha256_s390.c +index 9b853809a492..b74ff158108c 100644 +--- a/arch/s390/crypto/sha256_s390.c ++++ b/arch/s390/crypto/sha256_s390.c +@@ -143,7 +143,7 @@ static void __exit sha256_s390_fini(void) + module_init(sha256_s390_init); + module_exit(sha256_s390_fini); + +-MODULE_ALIAS("sha256"); +-MODULE_ALIAS("sha224"); ++MODULE_ALIAS_CRYPTO("sha256"); ++MODULE_ALIAS_CRYPTO("sha224"); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA256 and SHA224 Secure Hash Algorithm"); +diff --git a/arch/s390/crypto/sha512_s390.c b/arch/s390/crypto/sha512_s390.c +index 32a81383b69c..0c36989ba182 100644 +--- a/arch/s390/crypto/sha512_s390.c ++++ b/arch/s390/crypto/sha512_s390.c +@@ -86,7 +86,7 @@ static struct shash_alg sha512_alg = { + } + }; + +-MODULE_ALIAS("sha512"); ++MODULE_ALIAS_CRYPTO("sha512"); + + static int sha384_init(struct shash_desc *desc) + { +@@ -126,7 +126,7 @@ static struct shash_alg sha384_alg = { + } + }; + +-MODULE_ALIAS("sha384"); ++MODULE_ALIAS_CRYPTO("sha384"); + + static int __init init(void) + { +diff --git a/arch/sparc/crypto/aes_glue.c b/arch/sparc/crypto/aes_glue.c +index 503e6d96ad4e..ded4cee35318 100644 +--- a/arch/sparc/crypto/aes_glue.c ++++ b/arch/sparc/crypto/aes_glue.c +@@ -499,6 +499,6 @@ module_exit(aes_sparc64_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("AES Secure Hash Algorithm, sparc64 aes opcode accelerated"); + +-MODULE_ALIAS("aes"); ++MODULE_ALIAS_CRYPTO("aes"); + + #include "crop_devid.c" +diff --git a/arch/sparc/crypto/camellia_glue.c b/arch/sparc/crypto/camellia_glue.c +index 888f6260b4ec..641f55cb61c3 100644 +--- a/arch/sparc/crypto/camellia_glue.c ++++ b/arch/sparc/crypto/camellia_glue.c +@@ -322,6 +322,6 @@ module_exit(camellia_sparc64_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Camellia Cipher Algorithm, sparc64 camellia opcode accelerated"); + +-MODULE_ALIAS("aes"); ++MODULE_ALIAS_CRYPTO("aes"); + + #include "crop_devid.c" +diff --git a/arch/sparc/crypto/crc32c_glue.c b/arch/sparc/crypto/crc32c_glue.c +index 5162fad912ce..d1064e46efe8 100644 +--- a/arch/sparc/crypto/crc32c_glue.c ++++ b/arch/sparc/crypto/crc32c_glue.c +@@ -176,6 +176,6 @@ module_exit(crc32c_sparc64_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("CRC32c (Castagnoli), sparc64 crc32c opcode 
accelerated"); + +-MODULE_ALIAS("crc32c"); ++MODULE_ALIAS_CRYPTO("crc32c"); + + #include "crop_devid.c" +diff --git a/arch/sparc/crypto/des_glue.c b/arch/sparc/crypto/des_glue.c +index 3065bc61f9d3..d11500972994 100644 +--- a/arch/sparc/crypto/des_glue.c ++++ b/arch/sparc/crypto/des_glue.c +@@ -532,6 +532,6 @@ module_exit(des_sparc64_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("DES & Triple DES EDE Cipher Algorithms, sparc64 des opcode accelerated"); + +-MODULE_ALIAS("des"); ++MODULE_ALIAS_CRYPTO("des"); + + #include "crop_devid.c" +diff --git a/arch/sparc/crypto/md5_glue.c b/arch/sparc/crypto/md5_glue.c +index 09a9ea1dfb69..64c7ff5f72a9 100644 +--- a/arch/sparc/crypto/md5_glue.c ++++ b/arch/sparc/crypto/md5_glue.c +@@ -185,6 +185,6 @@ module_exit(md5_sparc64_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("MD5 Secure Hash Algorithm, sparc64 md5 opcode accelerated"); + +-MODULE_ALIAS("md5"); ++MODULE_ALIAS_CRYPTO("md5"); + + #include "crop_devid.c" +diff --git a/arch/sparc/crypto/sha1_glue.c b/arch/sparc/crypto/sha1_glue.c +index 6cd5f29e1e0d..1b3e47accc74 100644 +--- a/arch/sparc/crypto/sha1_glue.c ++++ b/arch/sparc/crypto/sha1_glue.c +@@ -180,6 +180,6 @@ module_exit(sha1_sparc64_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm, sparc64 sha1 opcode accelerated"); + +-MODULE_ALIAS("sha1"); ++MODULE_ALIAS_CRYPTO("sha1"); + + #include "crop_devid.c" +diff --git a/arch/sparc/crypto/sha256_glue.c b/arch/sparc/crypto/sha256_glue.c +index 04f555ab2680..41f27cca2a22 100644 +--- a/arch/sparc/crypto/sha256_glue.c ++++ b/arch/sparc/crypto/sha256_glue.c +@@ -237,7 +237,7 @@ module_exit(sha256_sparc64_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithm, sparc64 sha256 opcode accelerated"); + +-MODULE_ALIAS("sha224"); +-MODULE_ALIAS("sha256"); ++MODULE_ALIAS_CRYPTO("sha224"); ++MODULE_ALIAS_CRYPTO("sha256"); + + #include "crop_devid.c" +diff --git a/arch/sparc/crypto/sha512_glue.c b/arch/sparc/crypto/sha512_glue.c +index f04d1994d19a..9fff88541b8c 100644 +--- a/arch/sparc/crypto/sha512_glue.c ++++ b/arch/sparc/crypto/sha512_glue.c +@@ -222,7 +222,7 @@ module_exit(sha512_sparc64_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA-384 and SHA-512 Secure Hash Algorithm, sparc64 sha512 opcode accelerated"); + +-MODULE_ALIAS("sha384"); +-MODULE_ALIAS("sha512"); ++MODULE_ALIAS_CRYPTO("sha384"); ++MODULE_ALIAS_CRYPTO("sha512"); + + #include "crop_devid.c" +diff --git a/arch/x86/crypto/aes_glue.c b/arch/x86/crypto/aes_glue.c +index aafe8ce0d65d..e26984f7ab8d 100644 +--- a/arch/x86/crypto/aes_glue.c ++++ b/arch/x86/crypto/aes_glue.c +@@ -66,5 +66,5 @@ module_exit(aes_fini); + + MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, asm optimized"); + MODULE_LICENSE("GPL"); +-MODULE_ALIAS("aes"); +-MODULE_ALIAS("aes-asm"); ++MODULE_ALIAS_CRYPTO("aes"); ++MODULE_ALIAS_CRYPTO("aes-asm"); +diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c +index f80e668785c0..f89e7490d303 100644 +--- a/arch/x86/crypto/aesni-intel_glue.c ++++ b/arch/x86/crypto/aesni-intel_glue.c +@@ -1373,4 +1373,4 @@ module_exit(aesni_exit); + + MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, Intel AES-NI instructions optimized"); + MODULE_LICENSE("GPL"); +-MODULE_ALIAS("aes"); ++MODULE_ALIAS_CRYPTO("aes"); +diff --git a/arch/x86/crypto/blowfish_avx2_glue.c b/arch/x86/crypto/blowfish_avx2_glue.c +index 4417e9aea78d..183395bfc724 100644 +--- a/arch/x86/crypto/blowfish_avx2_glue.c ++++ 
b/arch/x86/crypto/blowfish_avx2_glue.c +@@ -581,5 +581,5 @@ module_exit(fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Blowfish Cipher Algorithm, AVX2 optimized"); +-MODULE_ALIAS("blowfish"); +-MODULE_ALIAS("blowfish-asm"); ++MODULE_ALIAS_CRYPTO("blowfish"); ++MODULE_ALIAS_CRYPTO("blowfish-asm"); +diff --git a/arch/x86/crypto/blowfish_glue.c b/arch/x86/crypto/blowfish_glue.c +index 3548d76dbaa9..9f7cc6bde5c8 100644 +--- a/arch/x86/crypto/blowfish_glue.c ++++ b/arch/x86/crypto/blowfish_glue.c +@@ -465,5 +465,5 @@ module_exit(fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Blowfish Cipher Algorithm, asm optimized"); +-MODULE_ALIAS("blowfish"); +-MODULE_ALIAS("blowfish-asm"); ++MODULE_ALIAS_CRYPTO("blowfish"); ++MODULE_ALIAS_CRYPTO("blowfish-asm"); +diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c b/arch/x86/crypto/camellia_aesni_avx2_glue.c +index 414fe5d7946b..da710fcf8631 100644 +--- a/arch/x86/crypto/camellia_aesni_avx2_glue.c ++++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c +@@ -582,5 +582,5 @@ module_exit(camellia_aesni_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Camellia Cipher Algorithm, AES-NI/AVX2 optimized"); +-MODULE_ALIAS("camellia"); +-MODULE_ALIAS("camellia-asm"); ++MODULE_ALIAS_CRYPTO("camellia"); ++MODULE_ALIAS_CRYPTO("camellia-asm"); +diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c +index 37fd0c0a81ea..883e1af10dc5 100644 +--- a/arch/x86/crypto/camellia_aesni_avx_glue.c ++++ b/arch/x86/crypto/camellia_aesni_avx_glue.c +@@ -574,5 +574,5 @@ module_exit(camellia_aesni_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Camellia Cipher Algorithm, AES-NI/AVX optimized"); +-MODULE_ALIAS("camellia"); +-MODULE_ALIAS("camellia-asm"); ++MODULE_ALIAS_CRYPTO("camellia"); ++MODULE_ALIAS_CRYPTO("camellia-asm"); +diff --git a/arch/x86/crypto/camellia_glue.c b/arch/x86/crypto/camellia_glue.c +index 5cb86ccd4acb..16d65b0d28d1 100644 +--- a/arch/x86/crypto/camellia_glue.c ++++ b/arch/x86/crypto/camellia_glue.c +@@ -1725,5 +1725,5 @@ module_exit(fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Camellia Cipher Algorithm, asm optimized"); +-MODULE_ALIAS("camellia"); +-MODULE_ALIAS("camellia-asm"); ++MODULE_ALIAS_CRYPTO("camellia"); ++MODULE_ALIAS_CRYPTO("camellia-asm"); +diff --git a/arch/x86/crypto/cast5_avx_glue.c b/arch/x86/crypto/cast5_avx_glue.c +index c6631813dc11..d416069e3184 100644 +--- a/arch/x86/crypto/cast5_avx_glue.c ++++ b/arch/x86/crypto/cast5_avx_glue.c +@@ -494,4 +494,4 @@ module_exit(cast5_exit); + + MODULE_DESCRIPTION("Cast5 Cipher Algorithm, AVX optimized"); + MODULE_LICENSE("GPL"); +-MODULE_ALIAS("cast5"); ++MODULE_ALIAS_CRYPTO("cast5"); +diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c +index 8d0dfb86a559..c19756265d4e 100644 +--- a/arch/x86/crypto/cast6_avx_glue.c ++++ b/arch/x86/crypto/cast6_avx_glue.c +@@ -611,4 +611,4 @@ module_exit(cast6_exit); + + MODULE_DESCRIPTION("Cast6 Cipher Algorithm, AVX optimized"); + MODULE_LICENSE("GPL"); +-MODULE_ALIAS("cast6"); ++MODULE_ALIAS_CRYPTO("cast6"); +diff --git a/arch/x86/crypto/crc32-pclmul_glue.c b/arch/x86/crypto/crc32-pclmul_glue.c +index 9d014a74ef96..1937fc1d8763 100644 +--- a/arch/x86/crypto/crc32-pclmul_glue.c ++++ b/arch/x86/crypto/crc32-pclmul_glue.c +@@ -197,5 +197,5 @@ module_exit(crc32_pclmul_mod_fini); + MODULE_AUTHOR("Alexander Boyko "); + MODULE_LICENSE("GPL"); + +-MODULE_ALIAS("crc32"); +-MODULE_ALIAS("crc32-pclmul"); ++MODULE_ALIAS_CRYPTO("crc32"); 
++MODULE_ALIAS_CRYPTO("crc32-pclmul"); +diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c +index 6812ad98355c..28640c3d6af7 100644 +--- a/arch/x86/crypto/crc32c-intel_glue.c ++++ b/arch/x86/crypto/crc32c-intel_glue.c +@@ -280,5 +280,5 @@ MODULE_AUTHOR("Austin Zhang , Kent Liu + #include + #include ++#include + #include + + struct crypto_fpu_ctx { +@@ -159,3 +160,5 @@ void __exit crypto_fpu_exit(void) + { + crypto_unregister_template(&crypto_fpu_tmpl); + } ++ ++MODULE_ALIAS_CRYPTO("fpu"); +diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c +index d785cf2c529c..a8d6f69f92a3 100644 +--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c ++++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c +@@ -341,4 +341,4 @@ module_exit(ghash_pclmulqdqni_mod_exit); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("GHASH Message Digest Algorithm, " + "acclerated by PCLMULQDQ-NI"); +-MODULE_ALIAS("ghash"); ++MODULE_ALIAS_CRYPTO("ghash"); +diff --git a/arch/x86/crypto/salsa20_glue.c b/arch/x86/crypto/salsa20_glue.c +index 5e8e67739bb5..399a29d067d6 100644 +--- a/arch/x86/crypto/salsa20_glue.c ++++ b/arch/x86/crypto/salsa20_glue.c +@@ -119,5 +119,5 @@ module_exit(fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION ("Salsa20 stream cipher algorithm (optimized assembly version)"); +-MODULE_ALIAS("salsa20"); +-MODULE_ALIAS("salsa20-asm"); ++MODULE_ALIAS_CRYPTO("salsa20"); ++MODULE_ALIAS_CRYPTO("salsa20-asm"); +diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c +index 23aabc6c20a5..cb57caf13ef7 100644 +--- a/arch/x86/crypto/serpent_avx2_glue.c ++++ b/arch/x86/crypto/serpent_avx2_glue.c +@@ -558,5 +558,5 @@ module_exit(fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Serpent Cipher Algorithm, AVX2 optimized"); +-MODULE_ALIAS("serpent"); +-MODULE_ALIAS("serpent-asm"); ++MODULE_ALIAS_CRYPTO("serpent"); ++MODULE_ALIAS_CRYPTO("serpent-asm"); +diff --git a/arch/x86/crypto/serpent_avx_glue.c b/arch/x86/crypto/serpent_avx_glue.c +index 9ae83cf8d21e..0a86e8b65e60 100644 +--- a/arch/x86/crypto/serpent_avx_glue.c ++++ b/arch/x86/crypto/serpent_avx_glue.c +@@ -617,4 +617,4 @@ module_exit(serpent_exit); + + MODULE_DESCRIPTION("Serpent Cipher Algorithm, AVX optimized"); + MODULE_LICENSE("GPL"); +-MODULE_ALIAS("serpent"); ++MODULE_ALIAS_CRYPTO("serpent"); +diff --git a/arch/x86/crypto/serpent_sse2_glue.c b/arch/x86/crypto/serpent_sse2_glue.c +index 97a356ece24d..279f3899c779 100644 +--- a/arch/x86/crypto/serpent_sse2_glue.c ++++ b/arch/x86/crypto/serpent_sse2_glue.c +@@ -618,4 +618,4 @@ module_exit(serpent_sse2_exit); + + MODULE_DESCRIPTION("Serpent Cipher Algorithm, SSE2 optimized"); + MODULE_LICENSE("GPL"); +-MODULE_ALIAS("serpent"); ++MODULE_ALIAS_CRYPTO("serpent"); +diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c +index 4a11a9d72451..29e1060e9001 100644 +--- a/arch/x86/crypto/sha1_ssse3_glue.c ++++ b/arch/x86/crypto/sha1_ssse3_glue.c +@@ -237,4 +237,4 @@ module_exit(sha1_ssse3_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm, Supplemental SSE3 accelerated"); + +-MODULE_ALIAS("sha1"); ++MODULE_ALIAS_CRYPTO("sha1"); +diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c +index 597d4da69656..ceafb01885ed 100644 +--- a/arch/x86/crypto/sha256_ssse3_glue.c ++++ b/arch/x86/crypto/sha256_ssse3_glue.c +@@ -272,4 +272,4 @@ module_exit(sha256_ssse3_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA256 
Secure Hash Algorithm, Supplemental SSE3 accelerated"); + +-MODULE_ALIAS("sha256"); ++MODULE_ALIAS_CRYPTO("sha256"); +diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c +index 9f5e71f06671..d1ee9f638d1c 100644 +--- a/arch/x86/crypto/sha512_ssse3_glue.c ++++ b/arch/x86/crypto/sha512_ssse3_glue.c +@@ -279,4 +279,4 @@ module_exit(sha512_ssse3_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA512 Secure Hash Algorithm, Supplemental SSE3 accelerated"); + +-MODULE_ALIAS("sha512"); ++MODULE_ALIAS_CRYPTO("sha512"); +diff --git a/arch/x86/crypto/twofish_avx2_glue.c b/arch/x86/crypto/twofish_avx2_glue.c +index ce33b5be64ee..bb1f0a194d97 100644 +--- a/arch/x86/crypto/twofish_avx2_glue.c ++++ b/arch/x86/crypto/twofish_avx2_glue.c +@@ -580,5 +580,5 @@ module_exit(fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Twofish Cipher Algorithm, AVX2 optimized"); +-MODULE_ALIAS("twofish"); +-MODULE_ALIAS("twofish-asm"); ++MODULE_ALIAS_CRYPTO("twofish"); ++MODULE_ALIAS_CRYPTO("twofish-asm"); +diff --git a/arch/x86/crypto/twofish_avx_glue.c b/arch/x86/crypto/twofish_avx_glue.c +index 2047a562f6b3..4a1f94422fbb 100644 +--- a/arch/x86/crypto/twofish_avx_glue.c ++++ b/arch/x86/crypto/twofish_avx_glue.c +@@ -589,4 +589,4 @@ module_exit(twofish_exit); + + MODULE_DESCRIPTION("Twofish Cipher Algorithm, AVX optimized"); + MODULE_LICENSE("GPL"); +-MODULE_ALIAS("twofish"); ++MODULE_ALIAS_CRYPTO("twofish"); +diff --git a/arch/x86/crypto/twofish_glue.c b/arch/x86/crypto/twofish_glue.c +index 0a5202303501..77e06c2da83d 100644 +--- a/arch/x86/crypto/twofish_glue.c ++++ b/arch/x86/crypto/twofish_glue.c +@@ -96,5 +96,5 @@ module_exit(fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION ("Twofish Cipher Algorithm, asm optimized"); +-MODULE_ALIAS("twofish"); +-MODULE_ALIAS("twofish-asm"); ++MODULE_ALIAS_CRYPTO("twofish"); ++MODULE_ALIAS_CRYPTO("twofish-asm"); +diff --git a/arch/x86/crypto/twofish_glue_3way.c b/arch/x86/crypto/twofish_glue_3way.c +index 13e63b3e1dfb..56d8a08ee479 100644 +--- a/arch/x86/crypto/twofish_glue_3way.c ++++ b/arch/x86/crypto/twofish_glue_3way.c +@@ -495,5 +495,5 @@ module_exit(fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Twofish Cipher Algorithm, 3-way parallel asm optimized"); +-MODULE_ALIAS("twofish"); +-MODULE_ALIAS("twofish-asm"); ++MODULE_ALIAS_CRYPTO("twofish"); ++MODULE_ALIAS_CRYPTO("twofish-asm"); +diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h +index 8bf1c06070d5..23fb67e6f845 100644 +--- a/arch/x86/include/asm/desc.h ++++ b/arch/x86/include/asm/desc.h +@@ -251,7 +251,8 @@ static inline void native_load_tls(struct thread_struct *t, unsigned int cpu) + gdt[GDT_ENTRY_TLS_MIN + i] = t->tls_array[i]; + } + +-#define _LDT_empty(info) \ ++/* This intentionally ignores lm, since 32-bit apps don't have that field. */ ++#define LDT_empty(info) \ + ((info)->base_addr == 0 && \ + (info)->limit == 0 && \ + (info)->contents == 0 && \ +@@ -261,11 +262,18 @@ static inline void native_load_tls(struct thread_struct *t, unsigned int cpu) + (info)->seg_not_present == 1 && \ + (info)->useable == 0) + +-#ifdef CONFIG_X86_64 +-#define LDT_empty(info) (_LDT_empty(info) && ((info)->lm == 0)) +-#else +-#define LDT_empty(info) (_LDT_empty(info)) +-#endif ++/* Lots of programs expect an all-zero user_desc to mean "no segment at all". 
*/ ++static inline bool LDT_zero(const struct user_desc *info) ++{ ++ return (info->base_addr == 0 && ++ info->limit == 0 && ++ info->contents == 0 && ++ info->read_exec_only == 0 && ++ info->seg_32bit == 0 && ++ info->limit_in_pages == 0 && ++ info->seg_not_present == 0 && ++ info->useable == 0); ++} + + static inline void clear_LDT(void) + { +diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c +index 8f4be53ea04b..1853659820e0 100644 +--- a/arch/x86/kernel/cpu/mshyperv.c ++++ b/arch/x86/kernel/cpu/mshyperv.c +@@ -60,6 +60,7 @@ static struct clocksource hyperv_cs = { + .rating = 400, /* use this when running on Hyperv*/ + .read = read_hv_clock, + .mask = CLOCKSOURCE_MASK(64), ++ .flags = CLOCK_SOURCE_IS_CONTINUOUS, + }; + + static void __init ms_hyperv_init_platform(void) +diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c +index 4e942f31b1a7..7fc5e843f247 100644 +--- a/arch/x86/kernel/tls.c ++++ b/arch/x86/kernel/tls.c +@@ -29,7 +29,28 @@ static int get_free_idx(void) + + static bool tls_desc_okay(const struct user_desc *info) + { +- if (LDT_empty(info)) ++ /* ++ * For historical reasons (i.e. no one ever documented how any ++ * of the segmentation APIs work), user programs can and do ++ * assume that a struct user_desc that's all zeros except for ++ * entry_number means "no segment at all". This never actually ++ * worked. In fact, up to Linux 3.19, a struct user_desc like ++ * this would create a 16-bit read-write segment with base and ++ * limit both equal to zero. ++ * ++ * That was close enough to "no segment at all" until we ++ * hardened this function to disallow 16-bit TLS segments. Fix ++ * it up by interpreting these zeroed segments the way that they ++ * were almost certainly intended to be interpreted. ++ * ++ * The correct way to ask for "no segment at all" is to specify ++ * a user_desc that satisfies LDT_empty. To keep everything ++ * working, we accept both. ++ * ++ * Note that there's a similar kludge in modify_ldt -- look at ++ * the distinction between modes 1 and 0x11. ++ */ ++ if (LDT_empty(info) || LDT_zero(info)) + return true; + + /* +@@ -71,7 +92,7 @@ static void set_tls_desc(struct task_struct *p, int idx, + cpu = get_cpu(); + + while (n-- > 0) { +- if (LDT_empty(info)) ++ if (LDT_empty(info) || LDT_zero(info)) + desc->a = desc->b = 0; + else + fill_ldt(desc, info); +diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c +index 332cafe909eb..0010ed7c3ec2 100644 +--- a/arch/x86/kernel/traps.c ++++ b/arch/x86/kernel/traps.c +@@ -362,7 +362,7 @@ exit: + * for scheduling or signal handling. 
The actual stack switch is done in + * entry.S + */ +-asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs) ++asmlinkage notrace __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs) + { + struct pt_regs *regs = eregs; + /* Did already sync */ +@@ -387,7 +387,7 @@ struct bad_iret_stack { + struct pt_regs regs; + }; + +-asmlinkage __visible ++asmlinkage __visible notrace __kprobes + struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s) + { + /* +diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c +index 4e27ba53c40c..27e3a14fc917 100644 +--- a/arch/x86/kernel/tsc.c ++++ b/arch/x86/kernel/tsc.c +@@ -380,7 +380,7 @@ static unsigned long quick_pit_calibrate(void) + goto success; + } + } +- pr_err("Fast TSC calibration failed\n"); ++ pr_info("Fast TSC calibration failed\n"); + return 0; + + success: +diff --git a/crypto/842.c b/crypto/842.c +index 65c7a89cfa09..b48f4f108c47 100644 +--- a/crypto/842.c ++++ b/crypto/842.c +@@ -180,3 +180,4 @@ module_exit(nx842_mod_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("842 Compression Algorithm"); ++MODULE_ALIAS_CRYPTO("842"); +diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c +index 47f2e5c71759..e138ad85bd83 100644 +--- a/crypto/aes_generic.c ++++ b/crypto/aes_generic.c +@@ -1474,4 +1474,5 @@ module_exit(aes_fini); + + MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm"); + MODULE_LICENSE("Dual BSD/GPL"); +-MODULE_ALIAS("aes"); ++MODULE_ALIAS_CRYPTO("aes"); ++MODULE_ALIAS_CRYPTO("aes-generic"); +diff --git a/crypto/algapi.c b/crypto/algapi.c +index 7a1ae87f1683..00d8d939733b 100644 +--- a/crypto/algapi.c ++++ b/crypto/algapi.c +@@ -495,8 +495,8 @@ static struct crypto_template *__crypto_lookup_template(const char *name) + + struct crypto_template *crypto_lookup_template(const char *name) + { +- return try_then_request_module(__crypto_lookup_template(name), "%s", +- name); ++ return try_then_request_module(__crypto_lookup_template(name), ++ "crypto-%s", name); + } + EXPORT_SYMBOL_GPL(crypto_lookup_template); + +diff --git a/crypto/ansi_cprng.c b/crypto/ansi_cprng.c +index 666f1962a160..6f5bebc9bf01 100644 +--- a/crypto/ansi_cprng.c ++++ b/crypto/ansi_cprng.c +@@ -476,4 +476,5 @@ module_param(dbg, int, 0); + MODULE_PARM_DESC(dbg, "Boolean to enable debugging (0/1 == off/on)"); + module_init(prng_mod_init); + module_exit(prng_mod_fini); +-MODULE_ALIAS("stdrng"); ++MODULE_ALIAS_CRYPTO("stdrng"); ++MODULE_ALIAS_CRYPTO("ansi_cprng"); +diff --git a/crypto/anubis.c b/crypto/anubis.c +index 008c8a4fb67c..4bb187c2a902 100644 +--- a/crypto/anubis.c ++++ b/crypto/anubis.c +@@ -704,3 +704,4 @@ module_exit(anubis_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Anubis Cryptographic Algorithm"); ++MODULE_ALIAS_CRYPTO("anubis"); +diff --git a/crypto/api.c b/crypto/api.c +index 37c4c7213de0..335abea14f19 100644 +--- a/crypto/api.c ++++ b/crypto/api.c +@@ -216,11 +216,11 @@ struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask) + + alg = crypto_alg_lookup(name, type, mask); + if (!alg) { +- request_module("%s", name); ++ request_module("crypto-%s", name); + + if (!((type ^ CRYPTO_ALG_NEED_FALLBACK) & mask & + CRYPTO_ALG_NEED_FALLBACK)) +- request_module("%s-all", name); ++ request_module("crypto-%s-all", name); + + alg = crypto_alg_lookup(name, type, mask); + } +diff --git a/crypto/arc4.c b/crypto/arc4.c +index 5a772c3657d5..f1a81925558f 100644 +--- a/crypto/arc4.c ++++ b/crypto/arc4.c +@@ -166,3 +166,4 @@ module_exit(arc4_exit); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("ARC4 
Cipher Algorithm"); + MODULE_AUTHOR("Jon Oberheide "); ++MODULE_ALIAS_CRYPTO("arc4"); +diff --git a/crypto/authenc.c b/crypto/authenc.c +index 528b00bc4769..a2cfae251dd5 100644 +--- a/crypto/authenc.c ++++ b/crypto/authenc.c +@@ -709,3 +709,4 @@ module_exit(crypto_authenc_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Simple AEAD wrapper for IPsec"); ++MODULE_ALIAS_CRYPTO("authenc"); +diff --git a/crypto/authencesn.c b/crypto/authencesn.c +index ab53762fc309..16c225cb28c2 100644 +--- a/crypto/authencesn.c ++++ b/crypto/authencesn.c +@@ -832,3 +832,4 @@ module_exit(crypto_authenc_esn_module_exit); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Steffen Klassert "); + MODULE_DESCRIPTION("AEAD wrapper for IPsec with extended sequence numbers"); ++MODULE_ALIAS_CRYPTO("authencesn"); +diff --git a/crypto/blowfish_generic.c b/crypto/blowfish_generic.c +index 8baf5447d35b..87b392a77a93 100644 +--- a/crypto/blowfish_generic.c ++++ b/crypto/blowfish_generic.c +@@ -138,4 +138,5 @@ module_exit(blowfish_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Blowfish Cipher Algorithm"); +-MODULE_ALIAS("blowfish"); ++MODULE_ALIAS_CRYPTO("blowfish"); ++MODULE_ALIAS_CRYPTO("blowfish-generic"); +diff --git a/crypto/camellia_generic.c b/crypto/camellia_generic.c +index 75efa2052305..029587f808f4 100644 +--- a/crypto/camellia_generic.c ++++ b/crypto/camellia_generic.c +@@ -1098,4 +1098,5 @@ module_exit(camellia_fini); + + MODULE_DESCRIPTION("Camellia Cipher Algorithm"); + MODULE_LICENSE("GPL"); +-MODULE_ALIAS("camellia"); ++MODULE_ALIAS_CRYPTO("camellia"); ++MODULE_ALIAS_CRYPTO("camellia-generic"); +diff --git a/crypto/cast5_generic.c b/crypto/cast5_generic.c +index 5558f630a0eb..df5c72629383 100644 +--- a/crypto/cast5_generic.c ++++ b/crypto/cast5_generic.c +@@ -549,4 +549,5 @@ module_exit(cast5_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Cast5 Cipher Algorithm"); +-MODULE_ALIAS("cast5"); ++MODULE_ALIAS_CRYPTO("cast5"); ++MODULE_ALIAS_CRYPTO("cast5-generic"); +diff --git a/crypto/cast6_generic.c b/crypto/cast6_generic.c +index de732528a430..058c8d755d03 100644 +--- a/crypto/cast6_generic.c ++++ b/crypto/cast6_generic.c +@@ -291,4 +291,5 @@ module_exit(cast6_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Cast6 Cipher Algorithm"); +-MODULE_ALIAS("cast6"); ++MODULE_ALIAS_CRYPTO("cast6"); ++MODULE_ALIAS_CRYPTO("cast6-generic"); +diff --git a/crypto/cbc.c b/crypto/cbc.c +index 61ac42e1e32b..780ee27b2d43 100644 +--- a/crypto/cbc.c ++++ b/crypto/cbc.c +@@ -289,3 +289,4 @@ module_exit(crypto_cbc_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("CBC block cipher algorithm"); ++MODULE_ALIAS_CRYPTO("cbc"); +diff --git a/crypto/ccm.c b/crypto/ccm.c +index ed009b77e67d..c569c9c6afe3 100644 +--- a/crypto/ccm.c ++++ b/crypto/ccm.c +@@ -879,5 +879,6 @@ module_exit(crypto_ccm_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Counter with CBC MAC"); +-MODULE_ALIAS("ccm_base"); +-MODULE_ALIAS("rfc4309"); ++MODULE_ALIAS_CRYPTO("ccm_base"); ++MODULE_ALIAS_CRYPTO("rfc4309"); ++MODULE_ALIAS_CRYPTO("ccm"); +diff --git a/crypto/chainiv.c b/crypto/chainiv.c +index 834d8dd3d4fc..22b7e55b0e1b 100644 +--- a/crypto/chainiv.c ++++ b/crypto/chainiv.c +@@ -359,3 +359,4 @@ module_exit(chainiv_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Chain IV Generator"); ++MODULE_ALIAS_CRYPTO("chainiv"); +diff --git a/crypto/cmac.c b/crypto/cmac.c +index 50880cf17fad..7a8bfbd548f6 100644 +--- a/crypto/cmac.c ++++ b/crypto/cmac.c +@@ -313,3 +313,4 @@ 
module_exit(crypto_cmac_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("CMAC keyed hash algorithm"); ++MODULE_ALIAS_CRYPTO("cmac"); +diff --git a/crypto/crc32.c b/crypto/crc32.c +index 9d1c41569898..187ded28cb0b 100644 +--- a/crypto/crc32.c ++++ b/crypto/crc32.c +@@ -156,3 +156,4 @@ module_exit(crc32_mod_fini); + MODULE_AUTHOR("Alexander Boyko "); + MODULE_DESCRIPTION("CRC32 calculations wrapper for lib/crc32"); + MODULE_LICENSE("GPL"); ++MODULE_ALIAS_CRYPTO("crc32"); +diff --git a/crypto/cryptd.c b/crypto/cryptd.c +index 7bdd61b867c8..75c415d37086 100644 +--- a/crypto/cryptd.c ++++ b/crypto/cryptd.c +@@ -955,3 +955,4 @@ module_exit(cryptd_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Software async crypto daemon"); ++MODULE_ALIAS_CRYPTO("cryptd"); +diff --git a/crypto/crypto_null.c b/crypto/crypto_null.c +index fee7265cd35d..7b39fa3deac2 100644 +--- a/crypto/crypto_null.c ++++ b/crypto/crypto_null.c +@@ -149,9 +149,9 @@ static struct crypto_alg null_algs[3] = { { + .coa_decompress = null_compress } } + } }; + +-MODULE_ALIAS("compress_null"); +-MODULE_ALIAS("digest_null"); +-MODULE_ALIAS("cipher_null"); ++MODULE_ALIAS_CRYPTO("compress_null"); ++MODULE_ALIAS_CRYPTO("digest_null"); ++MODULE_ALIAS_CRYPTO("cipher_null"); + + static int __init crypto_null_mod_init(void) + { +diff --git a/crypto/ctr.c b/crypto/ctr.c +index f2b94f27bb2c..2386f7313952 100644 +--- a/crypto/ctr.c ++++ b/crypto/ctr.c +@@ -466,4 +466,5 @@ module_exit(crypto_ctr_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("CTR Counter block mode"); +-MODULE_ALIAS("rfc3686"); ++MODULE_ALIAS_CRYPTO("rfc3686"); ++MODULE_ALIAS_CRYPTO("ctr"); +diff --git a/crypto/cts.c b/crypto/cts.c +index 042223f8e733..60b9da3fa7c1 100644 +--- a/crypto/cts.c ++++ b/crypto/cts.c +@@ -350,3 +350,4 @@ module_exit(crypto_cts_module_exit); + + MODULE_LICENSE("Dual BSD/GPL"); + MODULE_DESCRIPTION("CTS-CBC CipherText Stealing for CBC"); ++MODULE_ALIAS_CRYPTO("cts"); +diff --git a/crypto/deflate.c b/crypto/deflate.c +index b57d70eb156b..95d8d37c5021 100644 +--- a/crypto/deflate.c ++++ b/crypto/deflate.c +@@ -222,4 +222,4 @@ module_exit(deflate_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Deflate Compression Algorithm for IPCOMP"); + MODULE_AUTHOR("James Morris "); +- ++MODULE_ALIAS_CRYPTO("deflate"); +diff --git a/crypto/des_generic.c b/crypto/des_generic.c +index f6cf63f88468..3ec6071309d9 100644 +--- a/crypto/des_generic.c ++++ b/crypto/des_generic.c +@@ -971,8 +971,6 @@ static struct crypto_alg des_algs[2] = { { + .cia_decrypt = des3_ede_decrypt } } + } }; + +-MODULE_ALIAS("des3_ede"); +- + static int __init des_generic_mod_init(void) + { + return crypto_register_algs(des_algs, ARRAY_SIZE(des_algs)); +@@ -989,4 +987,7 @@ module_exit(des_generic_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("DES & Triple DES EDE Cipher Algorithms"); + MODULE_AUTHOR("Dag Arne Osvik "); +-MODULE_ALIAS("des"); ++MODULE_ALIAS_CRYPTO("des"); ++MODULE_ALIAS_CRYPTO("des-generic"); ++MODULE_ALIAS_CRYPTO("des3_ede"); ++MODULE_ALIAS_CRYPTO("des3_ede-generic"); +diff --git a/crypto/ecb.c b/crypto/ecb.c +index 935cfef4aa84..12011aff0971 100644 +--- a/crypto/ecb.c ++++ b/crypto/ecb.c +@@ -185,3 +185,4 @@ module_exit(crypto_ecb_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("ECB block cipher algorithm"); ++MODULE_ALIAS_CRYPTO("ecb"); +diff --git a/crypto/eseqiv.c b/crypto/eseqiv.c +index 42ce9f570aec..388f582ab0b9 100644 +--- a/crypto/eseqiv.c ++++ b/crypto/eseqiv.c +@@ -267,3 +267,4 @@ 
module_exit(eseqiv_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Encrypted Sequence Number IV Generator"); ++MODULE_ALIAS_CRYPTO("eseqiv"); +diff --git a/crypto/fcrypt.c b/crypto/fcrypt.c +index 3b2cf569c684..300f5b80a074 100644 +--- a/crypto/fcrypt.c ++++ b/crypto/fcrypt.c +@@ -420,3 +420,4 @@ module_exit(fcrypt_mod_fini); + MODULE_LICENSE("Dual BSD/GPL"); + MODULE_DESCRIPTION("FCrypt Cipher Algorithm"); + MODULE_AUTHOR("David Howells "); ++MODULE_ALIAS_CRYPTO("fcrypt"); +diff --git a/crypto/gcm.c b/crypto/gcm.c +index 43e1fb05ea54..b4c252066f7b 100644 +--- a/crypto/gcm.c ++++ b/crypto/gcm.c +@@ -1441,6 +1441,7 @@ module_exit(crypto_gcm_module_exit); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Galois/Counter Mode"); + MODULE_AUTHOR("Mikko Herranen "); +-MODULE_ALIAS("gcm_base"); +-MODULE_ALIAS("rfc4106"); +-MODULE_ALIAS("rfc4543"); ++MODULE_ALIAS_CRYPTO("gcm_base"); ++MODULE_ALIAS_CRYPTO("rfc4106"); ++MODULE_ALIAS_CRYPTO("rfc4543"); ++MODULE_ALIAS_CRYPTO("gcm"); +diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c +index 9d3f0c69a86f..bac70995e064 100644 +--- a/crypto/ghash-generic.c ++++ b/crypto/ghash-generic.c +@@ -172,4 +172,5 @@ module_exit(ghash_mod_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("GHASH Message Digest Algorithm"); +-MODULE_ALIAS("ghash"); ++MODULE_ALIAS_CRYPTO("ghash"); ++MODULE_ALIAS_CRYPTO("ghash-generic"); +diff --git a/crypto/hmac.c b/crypto/hmac.c +index 8d9544cf8169..ade790b454e9 100644 +--- a/crypto/hmac.c ++++ b/crypto/hmac.c +@@ -271,3 +271,4 @@ module_exit(hmac_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("HMAC hash algorithm"); ++MODULE_ALIAS_CRYPTO("hmac"); +diff --git a/crypto/khazad.c b/crypto/khazad.c +index 60e7cd66facc..873eb5ded6d7 100644 +--- a/crypto/khazad.c ++++ b/crypto/khazad.c +@@ -880,3 +880,4 @@ module_exit(khazad_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Khazad Cryptographic Algorithm"); ++MODULE_ALIAS_CRYPTO("khazad"); +diff --git a/crypto/krng.c b/crypto/krng.c +index a2d2b72fc135..0224841b6579 100644 +--- a/crypto/krng.c ++++ b/crypto/krng.c +@@ -62,4 +62,5 @@ module_exit(krng_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Kernel Random Number Generator"); +-MODULE_ALIAS("stdrng"); ++MODULE_ALIAS_CRYPTO("stdrng"); ++MODULE_ALIAS_CRYPTO("krng"); +diff --git a/crypto/lrw.c b/crypto/lrw.c +index ba42acc4deba..6f9908a7ebcb 100644 +--- a/crypto/lrw.c ++++ b/crypto/lrw.c +@@ -400,3 +400,4 @@ module_exit(crypto_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("LRW block cipher mode"); ++MODULE_ALIAS_CRYPTO("lrw"); +diff --git a/crypto/lzo.c b/crypto/lzo.c +index 1c2aa69c54b8..d1ff69404353 100644 +--- a/crypto/lzo.c ++++ b/crypto/lzo.c +@@ -103,3 +103,4 @@ module_exit(lzo_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("LZO Compression Algorithm"); ++MODULE_ALIAS_CRYPTO("lzo"); +diff --git a/crypto/md4.c b/crypto/md4.c +index 0477a6a01d58..3515af425cc9 100644 +--- a/crypto/md4.c ++++ b/crypto/md4.c +@@ -255,4 +255,4 @@ module_exit(md4_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("MD4 Message Digest Algorithm"); +- ++MODULE_ALIAS_CRYPTO("md4"); +diff --git a/crypto/md5.c b/crypto/md5.c +index 7febeaab923b..36f5e5b103f3 100644 +--- a/crypto/md5.c ++++ b/crypto/md5.c +@@ -168,3 +168,4 @@ module_exit(md5_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("MD5 Message Digest Algorithm"); ++MODULE_ALIAS_CRYPTO("md5"); +diff --git a/crypto/michael_mic.c b/crypto/michael_mic.c +index 079b761bc70d..46195e0d0f4d 100644 +--- 
a/crypto/michael_mic.c ++++ b/crypto/michael_mic.c +@@ -184,3 +184,4 @@ module_exit(michael_mic_exit); + MODULE_LICENSE("GPL v2"); + MODULE_DESCRIPTION("Michael MIC"); + MODULE_AUTHOR("Jouni Malinen "); ++MODULE_ALIAS_CRYPTO("michael_mic"); +diff --git a/crypto/pcbc.c b/crypto/pcbc.c +index d1b8bdfb5855..f654965f0933 100644 +--- a/crypto/pcbc.c ++++ b/crypto/pcbc.c +@@ -295,3 +295,4 @@ module_exit(crypto_pcbc_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("PCBC block cipher algorithm"); ++MODULE_ALIAS_CRYPTO("pcbc"); +diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c +index b2c99dc1c5e2..61ff946db748 100644 +--- a/crypto/pcrypt.c ++++ b/crypto/pcrypt.c +@@ -565,3 +565,4 @@ module_exit(pcrypt_exit); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Steffen Klassert "); + MODULE_DESCRIPTION("Parallel crypto wrapper"); ++MODULE_ALIAS_CRYPTO("pcrypt"); +diff --git a/crypto/rmd128.c b/crypto/rmd128.c +index 8a0f68b7f257..049486ede938 100644 +--- a/crypto/rmd128.c ++++ b/crypto/rmd128.c +@@ -327,3 +327,4 @@ module_exit(rmd128_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Adrian-Ken Rueegsegger "); + MODULE_DESCRIPTION("RIPEMD-128 Message Digest"); ++MODULE_ALIAS_CRYPTO("rmd128"); +diff --git a/crypto/rmd160.c b/crypto/rmd160.c +index 525d7bb752cf..de585e51d455 100644 +--- a/crypto/rmd160.c ++++ b/crypto/rmd160.c +@@ -371,3 +371,4 @@ module_exit(rmd160_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Adrian-Ken Rueegsegger "); + MODULE_DESCRIPTION("RIPEMD-160 Message Digest"); ++MODULE_ALIAS_CRYPTO("rmd160"); +diff --git a/crypto/rmd256.c b/crypto/rmd256.c +index 69293d9b56e0..4ec02a754e09 100644 +--- a/crypto/rmd256.c ++++ b/crypto/rmd256.c +@@ -346,3 +346,4 @@ module_exit(rmd256_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Adrian-Ken Rueegsegger "); + MODULE_DESCRIPTION("RIPEMD-256 Message Digest"); ++MODULE_ALIAS_CRYPTO("rmd256"); +diff --git a/crypto/rmd320.c b/crypto/rmd320.c +index 09f97dfdfbba..770f2cb369f8 100644 +--- a/crypto/rmd320.c ++++ b/crypto/rmd320.c +@@ -395,3 +395,4 @@ module_exit(rmd320_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Adrian-Ken Rueegsegger "); + MODULE_DESCRIPTION("RIPEMD-320 Message Digest"); ++MODULE_ALIAS_CRYPTO("rmd320"); +diff --git a/crypto/salsa20_generic.c b/crypto/salsa20_generic.c +index 9a4770c02284..f550b5d94630 100644 +--- a/crypto/salsa20_generic.c ++++ b/crypto/salsa20_generic.c +@@ -248,4 +248,5 @@ module_exit(salsa20_generic_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION ("Salsa20 stream cipher algorithm"); +-MODULE_ALIAS("salsa20"); ++MODULE_ALIAS_CRYPTO("salsa20"); ++MODULE_ALIAS_CRYPTO("salsa20-generic"); +diff --git a/crypto/seed.c b/crypto/seed.c +index 9c904d6d2151..c6ba8438be43 100644 +--- a/crypto/seed.c ++++ b/crypto/seed.c +@@ -476,3 +476,4 @@ module_exit(seed_fini); + MODULE_DESCRIPTION("SEED Cipher Algorithm"); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Hye-Shik Chang , Kim Hyun "); ++MODULE_ALIAS_CRYPTO("seed"); +diff --git a/crypto/seqiv.c b/crypto/seqiv.c +index f2cba4ed6f25..49a4069ff453 100644 +--- a/crypto/seqiv.c ++++ b/crypto/seqiv.c +@@ -362,3 +362,4 @@ module_exit(seqiv_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Sequence Number IV Generator"); ++MODULE_ALIAS_CRYPTO("seqiv"); +diff --git a/crypto/serpent_generic.c b/crypto/serpent_generic.c +index 7ddbd7e88859..94970a794975 100644 +--- a/crypto/serpent_generic.c ++++ b/crypto/serpent_generic.c +@@ -665,5 +665,6 @@ module_exit(serpent_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Serpent and tnepres 
(kerneli compatible serpent reversed) Cipher Algorithm"); + MODULE_AUTHOR("Dag Arne Osvik "); +-MODULE_ALIAS("tnepres"); +-MODULE_ALIAS("serpent"); ++MODULE_ALIAS_CRYPTO("tnepres"); ++MODULE_ALIAS_CRYPTO("serpent"); ++MODULE_ALIAS_CRYPTO("serpent-generic"); +diff --git a/crypto/sha1_generic.c b/crypto/sha1_generic.c +index 42794803c480..fdf7c00de4b0 100644 +--- a/crypto/sha1_generic.c ++++ b/crypto/sha1_generic.c +@@ -153,4 +153,5 @@ module_exit(sha1_generic_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm"); + +-MODULE_ALIAS("sha1"); ++MODULE_ALIAS_CRYPTO("sha1"); ++MODULE_ALIAS_CRYPTO("sha1-generic"); +diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c +index 543366779524..136381bdd48d 100644 +--- a/crypto/sha256_generic.c ++++ b/crypto/sha256_generic.c +@@ -384,5 +384,7 @@ module_exit(sha256_generic_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithm"); + +-MODULE_ALIAS("sha224"); +-MODULE_ALIAS("sha256"); ++MODULE_ALIAS_CRYPTO("sha224"); ++MODULE_ALIAS_CRYPTO("sha224-generic"); ++MODULE_ALIAS_CRYPTO("sha256"); ++MODULE_ALIAS_CRYPTO("sha256-generic"); +diff --git a/crypto/sha512_generic.c b/crypto/sha512_generic.c +index 4c5862095679..fb2d7b8f163f 100644 +--- a/crypto/sha512_generic.c ++++ b/crypto/sha512_generic.c +@@ -285,5 +285,7 @@ module_exit(sha512_generic_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("SHA-512 and SHA-384 Secure Hash Algorithms"); + +-MODULE_ALIAS("sha384"); +-MODULE_ALIAS("sha512"); ++MODULE_ALIAS_CRYPTO("sha384"); ++MODULE_ALIAS_CRYPTO("sha384-generic"); ++MODULE_ALIAS_CRYPTO("sha512"); ++MODULE_ALIAS_CRYPTO("sha512-generic"); +diff --git a/crypto/tea.c b/crypto/tea.c +index 0a572323ee4a..b70b441c7d1e 100644 +--- a/crypto/tea.c ++++ b/crypto/tea.c +@@ -270,8 +270,9 @@ static void __exit tea_mod_fini(void) + crypto_unregister_algs(tea_algs, ARRAY_SIZE(tea_algs)); + } + +-MODULE_ALIAS("xtea"); +-MODULE_ALIAS("xeta"); ++MODULE_ALIAS_CRYPTO("tea"); ++MODULE_ALIAS_CRYPTO("xtea"); ++MODULE_ALIAS_CRYPTO("xeta"); + + module_init(tea_mod_init); + module_exit(tea_mod_fini); +diff --git a/crypto/tgr192.c b/crypto/tgr192.c +index 87403556fd0b..f7ed2fba396c 100644 +--- a/crypto/tgr192.c ++++ b/crypto/tgr192.c +@@ -676,8 +676,9 @@ static void __exit tgr192_mod_fini(void) + crypto_unregister_shashes(tgr_algs, ARRAY_SIZE(tgr_algs)); + } + +-MODULE_ALIAS("tgr160"); +-MODULE_ALIAS("tgr128"); ++MODULE_ALIAS_CRYPTO("tgr192"); ++MODULE_ALIAS_CRYPTO("tgr160"); ++MODULE_ALIAS_CRYPTO("tgr128"); + + module_init(tgr192_mod_init); + module_exit(tgr192_mod_fini); +diff --git a/crypto/twofish_generic.c b/crypto/twofish_generic.c +index 2d5000552d0f..ebf7a3efb572 100644 +--- a/crypto/twofish_generic.c ++++ b/crypto/twofish_generic.c +@@ -211,4 +211,5 @@ module_exit(twofish_mod_fini); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION ("Twofish Cipher Algorithm"); +-MODULE_ALIAS("twofish"); ++MODULE_ALIAS_CRYPTO("twofish"); ++MODULE_ALIAS_CRYPTO("twofish-generic"); +diff --git a/crypto/vmac.c b/crypto/vmac.c +index 2eb11a30c29c..bf2d3a89845f 100644 +--- a/crypto/vmac.c ++++ b/crypto/vmac.c +@@ -713,3 +713,4 @@ module_exit(vmac_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("VMAC hash algorithm"); ++MODULE_ALIAS_CRYPTO("vmac"); +diff --git a/crypto/wp512.c b/crypto/wp512.c +index 180f1d6e03f4..253db94b5479 100644 +--- a/crypto/wp512.c ++++ b/crypto/wp512.c +@@ -1167,8 +1167,9 @@ static void __exit wp512_mod_fini(void) + crypto_unregister_shashes(wp_algs, ARRAY_SIZE(wp_algs)); 
+ } + +-MODULE_ALIAS("wp384"); +-MODULE_ALIAS("wp256"); ++MODULE_ALIAS_CRYPTO("wp512"); ++MODULE_ALIAS_CRYPTO("wp384"); ++MODULE_ALIAS_CRYPTO("wp256"); + + module_init(wp512_mod_init); + module_exit(wp512_mod_fini); +diff --git a/crypto/xcbc.c b/crypto/xcbc.c +index a5fbdf3738cf..df90b332554c 100644 +--- a/crypto/xcbc.c ++++ b/crypto/xcbc.c +@@ -286,3 +286,4 @@ module_exit(crypto_xcbc_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("XCBC keyed hash algorithm"); ++MODULE_ALIAS_CRYPTO("xcbc"); +diff --git a/crypto/xts.c b/crypto/xts.c +index ca1608f44cb5..f6fd43f100c8 100644 +--- a/crypto/xts.c ++++ b/crypto/xts.c +@@ -362,3 +362,4 @@ module_exit(crypto_module_exit); + + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("XTS block cipher mode"); ++MODULE_ALIAS_CRYPTO("xts"); +diff --git a/crypto/zlib.c b/crypto/zlib.c +index 06b62e5cdcc7..d98078835281 100644 +--- a/crypto/zlib.c ++++ b/crypto/zlib.c +@@ -378,3 +378,4 @@ module_exit(zlib_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Zlib Compression Algorithm"); + MODULE_AUTHOR("Sony Corporation"); ++MODULE_ALIAS_CRYPTO("zlib"); +diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c +index 37acda6fa7e4..136803c47cdb 100644 +--- a/drivers/ata/libata-sff.c ++++ b/drivers/ata/libata-sff.c +@@ -1333,7 +1333,19 @@ void ata_sff_flush_pio_task(struct ata_port *ap) + DPRINTK("ENTER\n"); + + cancel_delayed_work_sync(&ap->sff_pio_task); ++ ++ /* ++ * We wanna reset the HSM state to IDLE. If we do so without ++ * grabbing the port lock, critical sections protected by it which ++ * expect the HSM state to stay stable may get surprised. For ++ * example, we may set IDLE in between the time ++ * __ata_sff_port_intr() checks for HSM_ST_IDLE and before it calls ++ * ata_sff_hsm_move() causing ata_sff_hsm_move() to BUG(). 
++ */ ++ spin_lock_irq(ap->lock); + ap->hsm_task_state = HSM_ST_IDLE; ++ spin_unlock_irq(ap->lock); ++ + ap->sff_pio_task_link = NULL; + + if (ata_msg_ctl(ap)) +diff --git a/drivers/ata/sata_dwc_460ex.c b/drivers/ata/sata_dwc_460ex.c +index 2e391730e8be..776b59fbe861 100644 +--- a/drivers/ata/sata_dwc_460ex.c ++++ b/drivers/ata/sata_dwc_460ex.c +@@ -797,7 +797,7 @@ static int dma_dwc_init(struct sata_dwc_device *hsdev, int irq) + if (err) { + dev_err(host_pvt.dwc_dev, "%s: dma_request_interrupts returns" + " %d\n", __func__, err); +- goto error_out; ++ return err; + } + + /* Enabe DMA */ +@@ -808,11 +808,6 @@ static int dma_dwc_init(struct sata_dwc_device *hsdev, int irq) + sata_dma_regs); + + return 0; +- +-error_out: +- dma_dwc_exit(hsdev); +- +- return err; + } + + static int sata_dwc_scr_read(struct ata_link *link, unsigned int scr, u32 *val) +@@ -1662,7 +1657,7 @@ static int sata_dwc_probe(struct platform_device *ofdev) + char *ver = (char *)&versionr; + u8 *base = NULL; + int err = 0; +- int irq, rc; ++ int irq; + struct ata_host *host; + struct ata_port_info pi = sata_dwc_port_info[0]; + const struct ata_port_info *ppi[] = { &pi, NULL }; +@@ -1725,7 +1720,7 @@ static int sata_dwc_probe(struct platform_device *ofdev) + if (irq == NO_IRQ) { + dev_err(&ofdev->dev, "no SATA DMA irq\n"); + err = -ENODEV; +- goto error_out; ++ goto error_iomap; + } + + /* Get physical SATA DMA register base address */ +@@ -1734,14 +1729,16 @@ static int sata_dwc_probe(struct platform_device *ofdev) + dev_err(&ofdev->dev, "ioremap failed for AHBDMA register" + " address\n"); + err = -ENODEV; +- goto error_out; ++ goto error_iomap; + } + + /* Save dev for later use in dev_xxx() routines */ + host_pvt.dwc_dev = &ofdev->dev; + + /* Initialize AHB DMAC */ +- dma_dwc_init(hsdev, irq); ++ err = dma_dwc_init(hsdev, irq); ++ if (err) ++ goto error_dma_iomap; + + /* Enable SATA Interrupts */ + sata_dwc_enable_interrupts(hsdev); +@@ -1759,9 +1756,8 @@ static int sata_dwc_probe(struct platform_device *ofdev) + * device discovery process, invoking our port_start() handler & + * error_handler() to execute a dummy Softreset EH session + */ +- rc = ata_host_activate(host, irq, sata_dwc_isr, 0, &sata_dwc_sht); +- +- if (rc != 0) ++ err = ata_host_activate(host, irq, sata_dwc_isr, 0, &sata_dwc_sht); ++ if (err) + dev_err(&ofdev->dev, "failed to activate host"); + + dev_set_drvdata(&ofdev->dev, host); +@@ -1770,7 +1766,8 @@ static int sata_dwc_probe(struct platform_device *ofdev) + error_out: + /* Free SATA DMA resources */ + dma_dwc_exit(hsdev); +- ++error_dma_iomap: ++ iounmap((void __iomem *)host_pvt.sata_dma_regs); + error_iomap: + iounmap(base); + error_kmalloc: +@@ -1791,6 +1788,7 @@ static int sata_dwc_remove(struct platform_device *ofdev) + /* Free SATA DMA resources */ + dma_dwc_exit(hsdev); + ++ iounmap((void __iomem *)host_pvt.sata_dma_regs); + iounmap(hsdev->reg_base); + kfree(hsdev); + kfree(host); +diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c +index c24379ffd4e3..b2ae184a637c 100644 +--- a/drivers/block/drbd/drbd_req.c ++++ b/drivers/block/drbd/drbd_req.c +@@ -1309,6 +1309,7 @@ int drbd_merge_bvec(struct request_queue *q, struct bvec_merge_data *bvm, struct + struct request_queue * const b = + mdev->ldev->backing_bdev->bd_disk->queue; + if (b->merge_bvec_fn) { ++ bvm->bi_bdev = mdev->ldev->backing_bdev; + backing_limit = b->merge_bvec_fn(b, bvm, bvec); + limit = min(limit, backing_limit); + } +diff --git a/drivers/bus/mvebu-mbus.c b/drivers/bus/mvebu-mbus.c +index 
5dcc8305abd1..711dcf4a0313 100644 +--- a/drivers/bus/mvebu-mbus.c ++++ b/drivers/bus/mvebu-mbus.c +@@ -209,12 +209,25 @@ static void mvebu_mbus_disable_window(struct mvebu_mbus_state *mbus, + } + + /* Checks whether the given window number is available */ ++ ++/* On Armada XP, 375 and 38x the MBus window 13 has the remap ++ * capability, like windows 0 to 7. However, the mvebu-mbus driver ++ * isn't currently taking into account this special case, which means ++ * that when window 13 is actually used, the remap registers are left ++ * to 0, making the device using this MBus window unavailable. The ++ * quick fix for stable is to not use window 13. A follow up patch ++ * will correctly handle this window. ++*/ + static int mvebu_mbus_window_is_free(struct mvebu_mbus_state *mbus, + const int win) + { + void __iomem *addr = mbus->mbuswins_base + + mbus->soc->win_cfg_offset(win); + u32 ctrl = readl(addr + WIN_CTRL_OFF); ++ ++ if (win == 13) ++ return false; ++ + return !(ctrl & WIN_CTRL_ENABLE); + } + +diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c +index b7960185919d..3dfa3e5e3705 100644 +--- a/drivers/clocksource/exynos_mct.c ++++ b/drivers/clocksource/exynos_mct.c +@@ -94,8 +94,8 @@ static void exynos4_mct_write(unsigned int value, unsigned long offset) + __raw_writel(value, reg_base + offset); + + if (likely(offset >= EXYNOS4_MCT_L_BASE(0))) { +- stat_addr = (offset & ~EXYNOS4_MCT_L_MASK) + MCT_L_WSTAT_OFFSET; +- switch (offset & EXYNOS4_MCT_L_MASK) { ++ stat_addr = (offset & EXYNOS4_MCT_L_MASK) + MCT_L_WSTAT_OFFSET; ++ switch (offset & ~EXYNOS4_MCT_L_MASK) { + case MCT_L_TCON_OFFSET: + mask = 1 << 3; /* L_TCON write status */ + break; +diff --git a/drivers/crypto/padlock-aes.c b/drivers/crypto/padlock-aes.c +index 633ba945e153..c178ed8c3908 100644 +--- a/drivers/crypto/padlock-aes.c ++++ b/drivers/crypto/padlock-aes.c +@@ -563,4 +563,4 @@ MODULE_DESCRIPTION("VIA PadLock AES algorithm support"); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Michal Ludvig"); + +-MODULE_ALIAS("aes"); ++MODULE_ALIAS_CRYPTO("aes"); +diff --git a/drivers/crypto/padlock-sha.c b/drivers/crypto/padlock-sha.c +index 9266c0e25492..93d7753ab38a 100644 +--- a/drivers/crypto/padlock-sha.c ++++ b/drivers/crypto/padlock-sha.c +@@ -593,7 +593,7 @@ MODULE_DESCRIPTION("VIA PadLock SHA1/SHA256 algorithms support."); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Michal Ludvig"); + +-MODULE_ALIAS("sha1-all"); +-MODULE_ALIAS("sha256-all"); +-MODULE_ALIAS("sha1-padlock"); +-MODULE_ALIAS("sha256-padlock"); ++MODULE_ALIAS_CRYPTO("sha1-all"); ++MODULE_ALIAS_CRYPTO("sha256-all"); ++MODULE_ALIAS_CRYPTO("sha1-padlock"); ++MODULE_ALIAS_CRYPTO("sha256-padlock"); +diff --git a/drivers/crypto/ux500/cryp/cryp_core.c b/drivers/crypto/ux500/cryp/cryp_core.c +index 3833bd71cc5d..e08275de37ef 100644 +--- a/drivers/crypto/ux500/cryp/cryp_core.c ++++ b/drivers/crypto/ux500/cryp/cryp_core.c +@@ -1775,7 +1775,7 @@ module_exit(ux500_cryp_mod_fini); + module_param(cryp_mode, int, 0); + + MODULE_DESCRIPTION("Driver for ST-Ericsson UX500 CRYP crypto engine."); +-MODULE_ALIAS("aes-all"); +-MODULE_ALIAS("des-all"); ++MODULE_ALIAS_CRYPTO("aes-all"); ++MODULE_ALIAS_CRYPTO("des-all"); + + MODULE_LICENSE("GPL"); +diff --git a/drivers/crypto/ux500/hash/hash_core.c b/drivers/crypto/ux500/hash/hash_core.c +index cf5508967539..6789c1653913 100644 +--- a/drivers/crypto/ux500/hash/hash_core.c ++++ b/drivers/crypto/ux500/hash/hash_core.c +@@ -1998,7 +1998,7 @@ module_exit(ux500_hash_mod_fini); + MODULE_DESCRIPTION("Driver for ST-Ericsson 
UX500 HASH engine."); + MODULE_LICENSE("GPL"); + +-MODULE_ALIAS("sha1-all"); +-MODULE_ALIAS("sha256-all"); +-MODULE_ALIAS("hmac-sha1-all"); +-MODULE_ALIAS("hmac-sha256-all"); ++MODULE_ALIAS_CRYPTO("sha1-all"); ++MODULE_ALIAS_CRYPTO("sha256-all"); ++MODULE_ALIAS_CRYPTO("hmac-sha1-all"); ++MODULE_ALIAS_CRYPTO("hmac-sha256-all"); +diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c +index c2534d62911c..1d74a80e031e 100644 +--- a/drivers/gpio/gpiolib.c ++++ b/drivers/gpio/gpiolib.c +@@ -362,7 +362,7 @@ static ssize_t gpio_value_store(struct device *dev, + return status; + } + +-static const DEVICE_ATTR(value, 0644, ++static DEVICE_ATTR(value, 0644, + gpio_value_show, gpio_value_store); + + static irqreturn_t gpio_sysfs_irq(int irq, void *priv) +@@ -580,17 +580,17 @@ static ssize_t gpio_active_low_store(struct device *dev, + return status ? : size; + } + +-static const DEVICE_ATTR(active_low, 0644, ++static DEVICE_ATTR(active_low, 0644, + gpio_active_low_show, gpio_active_low_store); + +-static const struct attribute *gpio_attrs[] = { ++static struct attribute *gpio_attrs[] = { + &dev_attr_value.attr, + &dev_attr_active_low.attr, + NULL, + }; + + static const struct attribute_group gpio_attr_group = { +- .attrs = (struct attribute **) gpio_attrs, ++ .attrs = gpio_attrs, + }; + + /* +@@ -627,7 +627,7 @@ static ssize_t chip_ngpio_show(struct device *dev, + } + static DEVICE_ATTR(ngpio, 0444, chip_ngpio_show, NULL); + +-static const struct attribute *gpiochip_attrs[] = { ++static struct attribute *gpiochip_attrs[] = { + &dev_attr_base.attr, + &dev_attr_label.attr, + &dev_attr_ngpio.attr, +@@ -635,7 +635,7 @@ static const struct attribute *gpiochip_attrs[] = { + }; + + static const struct attribute_group gpiochip_attr_group = { +- .attrs = (struct attribute **) gpiochip_attrs, ++ .attrs = gpiochip_attrs, + }; + + /* +@@ -806,20 +806,24 @@ static int gpiod_export(struct gpio_desc *desc, bool direction_may_change) + if (direction_may_change) { + status = device_create_file(dev, &dev_attr_direction); + if (status) +- goto fail_unregister_device; ++ goto fail_remove_attr_group; + } + + if (gpiod_to_irq(desc) >= 0 && (direction_may_change || + !test_bit(FLAG_IS_OUT, &desc->flags))) { + status = device_create_file(dev, &dev_attr_edge); + if (status) +- goto fail_unregister_device; ++ goto fail_remove_attr_direction; + } + + set_bit(FLAG_EXPORT, &desc->flags); + mutex_unlock(&sysfs_lock); + return 0; + ++fail_remove_attr_direction: ++ device_remove_file(dev, &dev_attr_direction); ++fail_remove_attr_group: ++ sysfs_remove_group(&dev->kobj, &gpio_attr_group); + fail_unregister_device: + device_unregister(dev); + fail_unlock: +@@ -971,6 +975,9 @@ static void gpiod_unexport(struct gpio_desc *desc) + mutex_unlock(&sysfs_lock); + + if (dev) { ++ device_remove_file(dev, &dev_attr_edge); ++ device_remove_file(dev, &dev_attr_direction); ++ sysfs_remove_group(&dev->kobj, &gpio_attr_group); + device_unregister(dev); + put_device(dev); + } +@@ -1036,6 +1043,7 @@ static void gpiochip_unexport(struct gpio_chip *chip) + mutex_lock(&sysfs_lock); + dev = class_find_device(&gpio_class, NULL, chip, match_export); + if (dev) { ++ sysfs_remove_group(&dev->kobj, &gpiochip_attr_group); + put_device(dev); + device_unregister(dev); + chip->exported = 0; +diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c +index 0a30088178b0..0b71a0aaf4fc 100644 +--- a/drivers/gpu/drm/i915/i915_gem.c ++++ b/drivers/gpu/drm/i915/i915_gem.c +@@ -4449,7 +4449,7 @@ static bool mutex_is_locked_by(struct mutex 
*mutex, struct task_struct *task) + if (!mutex_is_locked(mutex)) + return false; + +-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES) ++#if defined(CONFIG_SMP) && !defined(CONFIG_DEBUG_MUTEXES) + return mutex->owner == task; + #else + /* Since UP may be pre-empted, we cannot assume that we own the lock */ +diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c +index de737ba1d351..b361ce4ce511 100644 +--- a/drivers/md/dm-cache-metadata.c ++++ b/drivers/md/dm-cache-metadata.c +@@ -88,6 +88,9 @@ struct cache_disk_superblock { + } __packed; + + struct dm_cache_metadata { ++ atomic_t ref_count; ++ struct list_head list; ++ + struct block_device *bdev; + struct dm_block_manager *bm; + struct dm_space_map *metadata_sm; +@@ -634,10 +637,10 @@ static void unpack_value(__le64 value_le, dm_oblock_t *block, unsigned *flags) + + /*----------------------------------------------------------------*/ + +-struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev, +- sector_t data_block_size, +- bool may_format_device, +- size_t policy_hint_size) ++static struct dm_cache_metadata *metadata_open(struct block_device *bdev, ++ sector_t data_block_size, ++ bool may_format_device, ++ size_t policy_hint_size) + { + int r; + struct dm_cache_metadata *cmd; +@@ -648,6 +651,7 @@ struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev, + return NULL; + } + ++ atomic_set(&cmd->ref_count, 1); + init_rwsem(&cmd->root_lock); + cmd->bdev = bdev; + cmd->data_block_size = data_block_size; +@@ -670,10 +674,95 @@ struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev, + return cmd; + } + ++/* ++ * We keep a little list of ref counted metadata objects to prevent two ++ * different target instances creating separate bufio instances. This is ++ * an issue if a table is reloaded before the suspend. 
++ */ ++static DEFINE_MUTEX(table_lock); ++static LIST_HEAD(table); ++ ++static struct dm_cache_metadata *lookup(struct block_device *bdev) ++{ ++ struct dm_cache_metadata *cmd; ++ ++ list_for_each_entry(cmd, &table, list) ++ if (cmd->bdev == bdev) { ++ atomic_inc(&cmd->ref_count); ++ return cmd; ++ } ++ ++ return NULL; ++} ++ ++static struct dm_cache_metadata *lookup_or_open(struct block_device *bdev, ++ sector_t data_block_size, ++ bool may_format_device, ++ size_t policy_hint_size) ++{ ++ struct dm_cache_metadata *cmd, *cmd2; ++ ++ mutex_lock(&table_lock); ++ cmd = lookup(bdev); ++ mutex_unlock(&table_lock); ++ ++ if (cmd) ++ return cmd; ++ ++ cmd = metadata_open(bdev, data_block_size, may_format_device, policy_hint_size); ++ if (cmd) { ++ mutex_lock(&table_lock); ++ cmd2 = lookup(bdev); ++ if (cmd2) { ++ mutex_unlock(&table_lock); ++ __destroy_persistent_data_objects(cmd); ++ kfree(cmd); ++ return cmd2; ++ } ++ list_add(&cmd->list, &table); ++ mutex_unlock(&table_lock); ++ } ++ ++ return cmd; ++} ++ ++static bool same_params(struct dm_cache_metadata *cmd, sector_t data_block_size) ++{ ++ if (cmd->data_block_size != data_block_size) { ++ DMERR("data_block_size (%llu) different from that in metadata (%llu)\n", ++ (unsigned long long) data_block_size, ++ (unsigned long long) cmd->data_block_size); ++ return false; ++ } ++ ++ return true; ++} ++ ++struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev, ++ sector_t data_block_size, ++ bool may_format_device, ++ size_t policy_hint_size) ++{ ++ struct dm_cache_metadata *cmd = lookup_or_open(bdev, data_block_size, ++ may_format_device, policy_hint_size); ++ if (cmd && !same_params(cmd, data_block_size)) { ++ dm_cache_metadata_close(cmd); ++ return NULL; ++ } ++ ++ return cmd; ++} ++ + void dm_cache_metadata_close(struct dm_cache_metadata *cmd) + { +- __destroy_persistent_data_objects(cmd); +- kfree(cmd); ++ if (atomic_dec_and_test(&cmd->ref_count)) { ++ mutex_lock(&table_lock); ++ list_del(&cmd->list); ++ mutex_unlock(&table_lock); ++ ++ __destroy_persistent_data_objects(cmd); ++ kfree(cmd); ++ } + } + + int dm_cache_resize(struct dm_cache_metadata *cmd, dm_cblock_t new_cache_size) +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index 2332b5ced0dd..4daf5c03b33b 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -2678,7 +2678,8 @@ static int fetch_block(struct stripe_head *sh, struct stripe_head_state *s, + (s->failed >= 2 && fdev[1]->toread) || + (sh->raid_conf->level <= 5 && s->failed && fdev[0]->towrite && + !test_bit(R5_OVERWRITE, &fdev[0]->flags)) || +- (sh->raid_conf->level == 6 && s->failed && s->to_write))) { ++ ((sh->raid_conf->level == 6 || sh->sector >= sh->raid_conf->mddev->recovery_cp) ++ && s->failed && s->to_write))) { + /* we would like to get this block, possibly by computing it, + * otherwise read it if the backing disk is insync + */ +diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c +index 9bf47a064cdf..a4694aa20a3e 100644 +--- a/drivers/net/can/dev.c ++++ b/drivers/net/can/dev.c +@@ -643,10 +643,14 @@ static int can_changelink(struct net_device *dev, + if (dev->flags & IFF_UP) + return -EBUSY; + cm = nla_data(data[IFLA_CAN_CTRLMODE]); +- if (cm->flags & ~priv->ctrlmode_supported) ++ ++ /* check whether changed bits are allowed to be modified */ ++ if (cm->mask & ~priv->ctrlmode_supported) + return -EOPNOTSUPP; ++ ++ /* clear bits to be modified and copy the flag values */ + priv->ctrlmode &= ~cm->mask; +- priv->ctrlmode |= cm->flags; ++ priv->ctrlmode |= (cm->flags & 
cm->mask); + } + + if (data[IFLA_CAN_BITTIMING]) { +diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c +index bb7ee9cb00b1..9c9fc69a01b3 100644 +--- a/drivers/pinctrl/core.c ++++ b/drivers/pinctrl/core.c +@@ -1693,14 +1693,15 @@ void pinctrl_unregister(struct pinctrl_dev *pctldev) + if (pctldev == NULL) + return; + +- mutex_lock(&pinctrldev_list_mutex); + mutex_lock(&pctldev->mutex); +- + pinctrl_remove_device_debugfs(pctldev); ++ mutex_unlock(&pctldev->mutex); + + if (!IS_ERR(pctldev->p)) + pinctrl_put(pctldev->p); + ++ mutex_lock(&pinctrldev_list_mutex); ++ mutex_lock(&pctldev->mutex); + /* TODO: check that no pinmuxes are still active? */ + list_del(&pctldev->node); + /* Destroy descriptor tree */ +diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c +index 9de41aa14896..6f512fa4fa03 100644 +--- a/drivers/s390/crypto/ap_bus.c ++++ b/drivers/s390/crypto/ap_bus.c +@@ -44,6 +44,7 @@ + #include + #include + #include ++#include + + #include "ap_bus.h" + +diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c +index 0ff37a5e286c..f7732f3b9804 100644 +--- a/drivers/scsi/ipr.c ++++ b/drivers/scsi/ipr.c +@@ -645,6 +645,7 @@ static void ipr_init_ipr_cmnd(struct ipr_cmnd *ipr_cmd, + ipr_reinit_ipr_cmnd(ipr_cmd); + ipr_cmd->u.scratch = 0; + ipr_cmd->sibling = NULL; ++ ipr_cmd->eh_comp = NULL; + ipr_cmd->fast_done = fast_done; + init_timer(&ipr_cmd->timer); + } +@@ -810,6 +811,8 @@ static void ipr_scsi_eh_done(struct ipr_cmnd *ipr_cmd) + + scsi_dma_unmap(ipr_cmd->scsi_cmd); + scsi_cmd->scsi_done(scsi_cmd); ++ if (ipr_cmd->eh_comp) ++ complete(ipr_cmd->eh_comp); + list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); + } + +@@ -4767,6 +4770,84 @@ static int ipr_slave_alloc(struct scsi_device *sdev) + return rc; + } + ++/** ++ * ipr_match_lun - Match function for specified LUN ++ * @ipr_cmd: ipr command struct ++ * @device: device to match (sdev) ++ * ++ * Returns: ++ * 1 if command matches sdev / 0 if command does not match sdev ++ **/ ++static int ipr_match_lun(struct ipr_cmnd *ipr_cmd, void *device) ++{ ++ if (ipr_cmd->scsi_cmd && ipr_cmd->scsi_cmd->device == device) ++ return 1; ++ return 0; ++} ++ ++/** ++ * ipr_wait_for_ops - Wait for matching commands to complete ++ * @ipr_cmd: ipr command struct ++ * @device: device to match (sdev) ++ * @match: match function to use ++ * ++ * Returns: ++ * SUCCESS / FAILED ++ **/ ++static int ipr_wait_for_ops(struct ipr_ioa_cfg *ioa_cfg, void *device, ++ int (*match)(struct ipr_cmnd *, void *)) ++{ ++ struct ipr_cmnd *ipr_cmd; ++ int wait; ++ unsigned long flags; ++ struct ipr_hrr_queue *hrrq; ++ signed long timeout = IPR_ABORT_TASK_TIMEOUT; ++ DECLARE_COMPLETION_ONSTACK(comp); ++ ++ ENTER; ++ do { ++ wait = 0; ++ ++ for_each_hrrq(hrrq, ioa_cfg) { ++ spin_lock_irqsave(hrrq->lock, flags); ++ list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) { ++ if (match(ipr_cmd, device)) { ++ ipr_cmd->eh_comp = &comp; ++ wait++; ++ } ++ } ++ spin_unlock_irqrestore(hrrq->lock, flags); ++ } ++ ++ if (wait) { ++ timeout = wait_for_completion_timeout(&comp, timeout); ++ ++ if (!timeout) { ++ wait = 0; ++ ++ for_each_hrrq(hrrq, ioa_cfg) { ++ spin_lock_irqsave(hrrq->lock, flags); ++ list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) { ++ if (match(ipr_cmd, device)) { ++ ipr_cmd->eh_comp = NULL; ++ wait++; ++ } ++ } ++ spin_unlock_irqrestore(hrrq->lock, flags); ++ } ++ ++ if (wait) ++ dev_err(&ioa_cfg->pdev->dev, "Timed out waiting for aborted commands\n"); ++ LEAVE; ++ return wait ?
FAILED : SUCCESS; ++ } ++ } ++ } while (wait); ++ ++ LEAVE; ++ return SUCCESS; ++} ++ + static int ipr_eh_host_reset(struct scsi_cmnd *cmd) + { + struct ipr_ioa_cfg *ioa_cfg; +@@ -4985,11 +5066,17 @@ static int __ipr_eh_dev_reset(struct scsi_cmnd *scsi_cmd) + static int ipr_eh_dev_reset(struct scsi_cmnd *cmd) + { + int rc; ++ struct ipr_ioa_cfg *ioa_cfg; ++ ++ ioa_cfg = (struct ipr_ioa_cfg *) cmd->device->host->hostdata; + + spin_lock_irq(cmd->device->host->host_lock); + rc = __ipr_eh_dev_reset(cmd); + spin_unlock_irq(cmd->device->host->host_lock); + ++ if (rc == SUCCESS) ++ rc = ipr_wait_for_ops(ioa_cfg, cmd->device, ipr_match_lun); ++ + return rc; + } + +@@ -5167,13 +5254,18 @@ static int ipr_eh_abort(struct scsi_cmnd *scsi_cmd) + { + unsigned long flags; + int rc; ++ struct ipr_ioa_cfg *ioa_cfg; + + ENTER; + ++ ioa_cfg = (struct ipr_ioa_cfg *) scsi_cmd->device->host->hostdata; ++ + spin_lock_irqsave(scsi_cmd->device->host->host_lock, flags); + rc = ipr_cancel_op(scsi_cmd); + spin_unlock_irqrestore(scsi_cmd->device->host->host_lock, flags); + ++ if (rc == SUCCESS) ++ rc = ipr_wait_for_ops(ioa_cfg, scsi_cmd->device, ipr_match_lun); + LEAVE; + return rc; + } +diff --git a/drivers/scsi/ipr.h b/drivers/scsi/ipr.h +index 07a85ce41782..535f57328a72 100644 +--- a/drivers/scsi/ipr.h ++++ b/drivers/scsi/ipr.h +@@ -1578,6 +1578,7 @@ struct ipr_cmnd { + struct scsi_device *sdev; + } u; + ++ struct completion *eh_comp; + struct ipr_hrr_queue *hrrq; + struct ipr_ioa_cfg *ioa_cfg; + }; +diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c +index 301b08496478..1d94316f0ea4 100644 +--- a/drivers/xen/swiotlb-xen.c ++++ b/drivers/xen/swiotlb-xen.c +@@ -390,7 +390,7 @@ static void xen_unmap_single(struct device *hwdev, dma_addr_t dev_addr, + + /* NOTE: We use dev_addr here, not paddr! 
*/ + if (is_xen_swiotlb_buffer(dev_addr)) { +- swiotlb_tbl_unmap_single(hwdev, dev_addr, size, dir); ++ swiotlb_tbl_unmap_single(hwdev, paddr, size, dir); + return; + } + +diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h +index e4c4ac07cc32..2a71466b0115 100644 +--- a/fs/ext4/ext4.h ++++ b/fs/ext4/ext4.h +@@ -589,6 +589,7 @@ enum { + #define EXT4_FREE_BLOCKS_NO_QUOT_UPDATE 0x0008 + #define EXT4_FREE_BLOCKS_NOFREE_FIRST_CLUSTER 0x0010 + #define EXT4_FREE_BLOCKS_NOFREE_LAST_CLUSTER 0x0020 ++#define EXT4_FREE_BLOCKS_RESERVE 0x0040 + + /* + * Flags used by ext4_discard_partial_page_buffers +diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c +index 84d817b842a8..7fbd1c5b74af 100644 +--- a/fs/ext4/extents.c ++++ b/fs/ext4/extents.c +@@ -1722,7 +1722,8 @@ static void ext4_ext_try_to_merge_up(handle_t *handle, + + brelse(path[1].p_bh); + ext4_free_blocks(handle, inode, NULL, blk, 1, +- EXT4_FREE_BLOCKS_METADATA | EXT4_FREE_BLOCKS_FORGET); ++ EXT4_FREE_BLOCKS_METADATA | EXT4_FREE_BLOCKS_FORGET | ++ EXT4_FREE_BLOCKS_RESERVE); + } + + /* +diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c +index 162b80d527a0..df5050f9080b 100644 +--- a/fs/ext4/mballoc.c ++++ b/fs/ext4/mballoc.c +@@ -4610,6 +4610,7 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode, + struct buffer_head *gd_bh; + ext4_group_t block_group; + struct ext4_sb_info *sbi; ++ struct ext4_inode_info *ei = EXT4_I(inode); + struct ext4_buddy e4b; + unsigned int count_clusters; + int err = 0; +@@ -4808,7 +4809,6 @@ do_more: + ext4_block_bitmap_csum_set(sb, block_group, gdp, bitmap_bh); + ext4_group_desc_csum_set(sb, block_group, gdp); + ext4_unlock_group(sb, block_group); +- percpu_counter_add(&sbi->s_freeclusters_counter, count_clusters); + + if (sbi->s_log_groups_per_flex) { + ext4_group_t flex_group = ext4_flex_group(sbi, block_group); +@@ -4816,10 +4816,23 @@ do_more: + &sbi->s_flex_groups[flex_group].free_clusters); + } + +- ext4_mb_unload_buddy(&e4b); +- +- if (!(flags & EXT4_FREE_BLOCKS_NO_QUOT_UPDATE)) ++ if (flags & EXT4_FREE_BLOCKS_RESERVE && ei->i_reserved_data_blocks) { ++ percpu_counter_add(&sbi->s_dirtyclusters_counter, ++ count_clusters); ++ spin_lock(&ei->i_block_reservation_lock); ++ if (flags & EXT4_FREE_BLOCKS_METADATA) ++ ei->i_reserved_meta_blocks += count_clusters; ++ else ++ ei->i_reserved_data_blocks += count_clusters; ++ spin_unlock(&ei->i_block_reservation_lock); ++ if (!(flags & EXT4_FREE_BLOCKS_NO_QUOT_UPDATE)) ++ dquot_reclaim_block(inode, ++ EXT4_C2B(sbi, count_clusters)); ++ } else if (!(flags & EXT4_FREE_BLOCKS_NO_QUOT_UPDATE)) + dquot_free_block(inode, EXT4_C2B(sbi, count_clusters)); ++ percpu_counter_add(&sbi->s_freeclusters_counter, count_clusters); ++ ++ ext4_mb_unload_buddy(&e4b); + + /* We dirtied the bitmap block */ + BUFFER_TRACE(bitmap_bh, "dirtied bitmap block"); +diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c +index 7a10e047bc33..4f7f451ca70d 100644 +--- a/fs/quota/dquot.c ++++ b/fs/quota/dquot.c +@@ -1102,6 +1102,14 @@ static void dquot_claim_reserved_space(struct dquot *dquot, qsize_t number) + dquot->dq_dqb.dqb_rsvspace -= number; + } + ++static void dquot_reclaim_reserved_space(struct dquot *dquot, qsize_t number) ++{ ++ if (WARN_ON_ONCE(dquot->dq_dqb.dqb_curspace < number)) ++ number = dquot->dq_dqb.dqb_curspace; ++ dquot->dq_dqb.dqb_rsvspace += number; ++ dquot->dq_dqb.dqb_curspace -= number; ++} ++ + static inline + void dquot_free_reserved_space(struct dquot *dquot, qsize_t number) + { +@@ -1536,6 +1544,15 @@ void inode_claim_rsv_space(struct inode *inode, qsize_t number) + } + 
EXPORT_SYMBOL(inode_claim_rsv_space); + ++void inode_reclaim_rsv_space(struct inode *inode, qsize_t number) ++{ ++ spin_lock(&inode->i_lock); ++ *inode_reserved_space(inode) += number; ++ __inode_sub_bytes(inode, number); ++ spin_unlock(&inode->i_lock); ++} ++EXPORT_SYMBOL(inode_reclaim_rsv_space); ++ + void inode_sub_rsv_space(struct inode *inode, qsize_t number) + { + spin_lock(&inode->i_lock); +@@ -1710,6 +1727,35 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number) + EXPORT_SYMBOL(dquot_claim_space_nodirty); + + /* ++ * Convert allocated space back to in-memory reserved quotas ++ */ ++void dquot_reclaim_space_nodirty(struct inode *inode, qsize_t number) ++{ ++ int cnt; ++ ++ if (!dquot_active(inode)) { ++ inode_reclaim_rsv_space(inode, number); ++ return; ++ } ++ ++ down_read(&sb_dqopt(inode->i_sb)->dqptr_sem); ++ spin_lock(&dq_data_lock); ++ /* Claim reserved quotas to allocated quotas */ ++ for (cnt = 0; cnt < MAXQUOTAS; cnt++) { ++ if (inode->i_dquot[cnt]) ++ dquot_reclaim_reserved_space(inode->i_dquot[cnt], ++ number); ++ } ++ /* Update inode bytes */ ++ inode_reclaim_rsv_space(inode, number); ++ spin_unlock(&dq_data_lock); ++ mark_all_dquot_dirty(inode->i_dquot); ++ up_read(&sb_dqopt(inode->i_sb)->dqptr_sem); ++ return; ++} ++EXPORT_SYMBOL(dquot_reclaim_space_nodirty); ++ ++/* + * This operation can block, but only after everything is updated + */ + void __dquot_free_space(struct inode *inode, qsize_t number, int flags) +diff --git a/fs/stat.c b/fs/stat.c +index 04ce1ac20d20..d0ea7ef75e26 100644 +--- a/fs/stat.c ++++ b/fs/stat.c +@@ -447,9 +447,8 @@ void inode_add_bytes(struct inode *inode, loff_t bytes) + + EXPORT_SYMBOL(inode_add_bytes); + +-void inode_sub_bytes(struct inode *inode, loff_t bytes) ++void __inode_sub_bytes(struct inode *inode, loff_t bytes) + { +- spin_lock(&inode->i_lock); + inode->i_blocks -= bytes >> 9; + bytes &= 511; + if (inode->i_bytes < bytes) { +@@ -457,6 +456,14 @@ void inode_sub_bytes(struct inode *inode, loff_t bytes) + inode->i_bytes += 512; + } + inode->i_bytes -= bytes; ++} ++ ++EXPORT_SYMBOL(__inode_sub_bytes); ++ ++void inode_sub_bytes(struct inode *inode, loff_t bytes) ++{ ++ spin_lock(&inode->i_lock); ++ __inode_sub_bytes(inode, bytes); + spin_unlock(&inode->i_lock); + } + +diff --git a/include/linux/crypto.h b/include/linux/crypto.h +index b92eadf92d72..2b00d92a6e6f 100644 +--- a/include/linux/crypto.h ++++ b/include/linux/crypto.h +@@ -26,6 +26,19 @@ + #include + + /* ++ * Autoloaded crypto modules should only use a prefixed name to avoid allowing ++ * arbitrary modules to be loaded. Loading from userspace may still need the ++ * unprefixed names, so retains those aliases as well. ++ * This uses __MODULE_INFO directly instead of MODULE_ALIAS because pre-4.3 ++ * gcc (e.g. avr32 toolchain) uses __LINE__ for uniqueness, and this macro ++ * expands twice on the same line. Instead, use a separate base name for the ++ * alias. ++ */ ++#define MODULE_ALIAS_CRYPTO(name) \ ++ __MODULE_INFO(alias, alias_userspace, name); \ ++ __MODULE_INFO(alias, alias_crypto, "crypto-" name) ++ ++/* + * Algorithm masks and types. 
+ */ + #define CRYPTO_ALG_TYPE_MASK 0x0000000f +diff --git a/include/linux/fs.h b/include/linux/fs.h +index 65c2be22b601..d57bc5df7225 100644 +--- a/include/linux/fs.h ++++ b/include/linux/fs.h +@@ -2489,6 +2489,7 @@ extern void generic_fillattr(struct inode *, struct kstat *); + extern int vfs_getattr(struct path *, struct kstat *); + void __inode_add_bytes(struct inode *inode, loff_t bytes); + void inode_add_bytes(struct inode *inode, loff_t bytes); ++void __inode_sub_bytes(struct inode *inode, loff_t bytes); + void inode_sub_bytes(struct inode *inode, loff_t bytes); + loff_t inode_get_bytes(struct inode *inode); + void inode_set_bytes(struct inode *inode, loff_t bytes); +diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h +index 1c50093ae656..6965fe394c3b 100644 +--- a/include/linux/quotaops.h ++++ b/include/linux/quotaops.h +@@ -41,6 +41,7 @@ void __quota_error(struct super_block *sb, const char *func, + void inode_add_rsv_space(struct inode *inode, qsize_t number); + void inode_claim_rsv_space(struct inode *inode, qsize_t number); + void inode_sub_rsv_space(struct inode *inode, qsize_t number); ++void inode_reclaim_rsv_space(struct inode *inode, qsize_t number); + + void dquot_initialize(struct inode *inode); + void dquot_drop(struct inode *inode); +@@ -59,6 +60,7 @@ int dquot_alloc_inode(const struct inode *inode); + + int dquot_claim_space_nodirty(struct inode *inode, qsize_t number); + void dquot_free_inode(const struct inode *inode); ++void dquot_reclaim_space_nodirty(struct inode *inode, qsize_t number); + + int dquot_disable(struct super_block *sb, int type, unsigned int flags); + /* Suspend quotas on remount RO */ +@@ -238,6 +240,13 @@ static inline int dquot_claim_space_nodirty(struct inode *inode, qsize_t number) + return 0; + } + ++static inline int dquot_reclaim_space_nodirty(struct inode *inode, ++ qsize_t number) ++{ ++ inode_sub_bytes(inode, number); ++ return 0; ++} ++ + static inline int dquot_disable(struct super_block *sb, int type, + unsigned int flags) + { +@@ -336,6 +345,12 @@ static inline int dquot_claim_block(struct inode *inode, qsize_t nr) + return ret; + } + ++static inline void dquot_reclaim_block(struct inode *inode, qsize_t nr) ++{ ++ dquot_reclaim_space_nodirty(inode, nr << inode->i_blkbits); ++ mark_inode_dirty_sync(inode); ++} ++ + static inline void dquot_free_space_nodirty(struct inode *inode, qsize_t nr) + { + __dquot_free_space(inode, nr, 0); +diff --git a/include/linux/time.h b/include/linux/time.h +index d5d229b2e5af..7d532a32ff3a 100644 +--- a/include/linux/time.h ++++ b/include/linux/time.h +@@ -173,6 +173,19 @@ extern void getboottime(struct timespec *ts); + extern void monotonic_to_bootbased(struct timespec *ts); + extern void get_monotonic_boottime(struct timespec *ts); + ++static inline bool timeval_valid(const struct timeval *tv) ++{ ++ /* Dates before 1970 are bogus */ ++ if (tv->tv_sec < 0) ++ return false; ++ ++ /* Can't have more microseconds then a second */ ++ if (tv->tv_usec < 0 || tv->tv_usec >= USEC_PER_SEC) ++ return false; ++ ++ return true; ++} ++ + extern struct timespec timespec_trunc(struct timespec t, unsigned gran); + extern int timekeeping_valid_for_hres(void); + extern u64 timekeeping_max_deferment(void); +diff --git a/kernel/time.c b/kernel/time.c +index d21398e6da87..31ec845d0e80 100644 +--- a/kernel/time.c ++++ b/kernel/time.c +@@ -195,6 +195,10 @@ SYSCALL_DEFINE2(settimeofday, struct timeval __user *, tv, + if (tv) { + if (copy_from_user(&user_tv, tv, sizeof(*tv))) + return -EFAULT; ++ ++ if 
(!timeval_valid(&user_tv)) ++ return -EINVAL; ++ + new_ts.tv_sec = user_tv.tv_sec; + new_ts.tv_nsec = user_tv.tv_usec * NSEC_PER_USEC; + } +diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c +index af8d1d4f3d55..28db9bedc857 100644 +--- a/kernel/time/ntp.c ++++ b/kernel/time/ntp.c +@@ -631,6 +631,13 @@ int ntp_validate_timex(struct timex *txc) + if ((txc->modes & ADJ_SETOFFSET) && (!capable(CAP_SYS_TIME))) + return -EPERM; + ++ if (txc->modes & ADJ_FREQUENCY) { ++ if (LONG_MIN / PPM_SCALE > txc->freq) ++ return -EINVAL; ++ if (LONG_MAX / PPM_SCALE < txc->freq) ++ return -EINVAL; ++ } ++ + return 0; + } + +diff --git a/net/netfilter/ipvs/ip_vs_ftp.c b/net/netfilter/ipvs/ip_vs_ftp.c +index 77c173282f38..4a662f15eaee 100644 +--- a/net/netfilter/ipvs/ip_vs_ftp.c ++++ b/net/netfilter/ipvs/ip_vs_ftp.c +@@ -183,6 +183,8 @@ static int ip_vs_ftp_out(struct ip_vs_app *app, struct ip_vs_conn *cp, + struct nf_conn *ct; + struct net *net; + ++ *diff = 0; ++ + #ifdef CONFIG_IP_VS_IPV6 + /* This application helper doesn't work with IPv6 yet, + * so turn this into a no-op for IPv6 packets +@@ -191,8 +193,6 @@ static int ip_vs_ftp_out(struct ip_vs_app *app, struct ip_vs_conn *cp, + return 1; + #endif + +- *diff = 0; +- + /* Only useful for established sessions */ + if (cp->state != IP_VS_TCP_S_ESTABLISHED) + return 1; +@@ -321,6 +321,9 @@ static int ip_vs_ftp_in(struct ip_vs_app *app, struct ip_vs_conn *cp, + struct ip_vs_conn *n_cp; + struct net *net; + ++ /* no diff required for incoming packets */ ++ *diff = 0; ++ + #ifdef CONFIG_IP_VS_IPV6 + /* This application helper doesn't work with IPv6 yet, + * so turn this into a no-op for IPv6 packets +@@ -329,9 +332,6 @@ static int ip_vs_ftp_in(struct ip_vs_app *app, struct ip_vs_conn *cp, + return 1; + #endif + +- /* no diff required for incoming packets */ +- *diff = 0; +- + /* Only useful for established sessions */ + if (cp->state != IP_VS_TCP_S_ESTABLISHED) + return 1; +diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl +index 858966ab019c..679218b56ede 100755 +--- a/scripts/recordmcount.pl ++++ b/scripts/recordmcount.pl +@@ -262,7 +262,6 @@ if ($arch eq "x86_64") { + # force flags for this arch + $ld .= " -m shlelf_linux"; + $objcopy .= " -O elf32-sh-linux"; +- $cc .= " -m32"; + + } elsif ($arch eq "powerpc") { + $local_regex = "^[0-9a-fA-F]+\\s+t\\s+(\\.?\\S+)"; +diff --git a/security/keys/gc.c b/security/keys/gc.c +index d67c97bb1025..797818695c87 100644 +--- a/security/keys/gc.c ++++ b/security/keys/gc.c +@@ -201,12 +201,12 @@ static noinline void key_gc_unused_keys(struct list_head *keys) + if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags)) + atomic_dec(&key->user->nikeys); + +- key_user_put(key->user); +- + /* now throw away the key memory */ + if (key->type->destroy) + key->type->destroy(key); + ++ key_user_put(key->user); ++ + kfree(key->description); + + #ifdef KEY_DEBUGGING +diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c +index be4db47cb2d9..061be0e5fa5a 100644 +--- a/sound/usb/mixer.c ++++ b/sound/usb/mixer.c +@@ -886,6 +886,7 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval, + case USB_ID(0x046d, 0x0807): /* Logitech Webcam C500 */ + case USB_ID(0x046d, 0x0808): + case USB_ID(0x046d, 0x0809): ++ case USB_ID(0x046d, 0x0819): /* Logitech Webcam C210 */ + case USB_ID(0x046d, 0x081b): /* HD Webcam c310 */ + case USB_ID(0x046d, 0x081d): /* HD Webcam c510 */ + case USB_ID(0x046d, 0x0825): /* HD Webcam c270 */