- 15 Sep, 2015 3 commits
-
-
Dennis Rassmann authored
-
Dennis Rassmann authored
Conflicts: build.config
-
Dennis Rassmann authored
-
- 01 Jul, 2015 2 commits
-
-
Patrick Tjin authored
This reverts commit 7555e42c. The Mako kernel should build with the 4.6 toolchain.
-
David S. Miller authored
If we don't do that, then the poison value is left in the ->pprev backlink. This can cause crashes if we do a disconnect, followed by a connect().
Tested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Wen Xu <hotdog3645@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bug: 20770158
Change-Id: I944eb20fddea190892c2da681d934801d268096a
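A minimal user-space model of the bug may help (all names here are illustrative; the kernel uses hlist_nulls nodes and the LIST_POISON2 debug value): deleting a node from the hash poisons its ->pprev backlink, so the node must be re-initialized before a disconnect()/connect() sequence hashes the socket again.

```c
#include <assert.h>
#include <stddef.h>

/* Toy hash-list node modeled loosely on the kernel's hlist. */
struct hnode { struct hnode *next, **pprev; };
struct hhead { struct hnode *first; };

#define NODE_POISON ((struct hnode **)0x200) /* stands in for LIST_POISON2 */

static void node_add(struct hhead *h, struct hnode *n)
{
    n->next = h->first;
    if (h->first)
        h->first->pprev = &n->next;
    h->first = n;
    n->pprev = &h->first;
}

static void node_del(struct hnode *n)
{
    struct hnode *next = n->next, **pprev = n->pprev;

    *pprev = next;
    if (next)
        next->pprev = pprev;
    n->pprev = NODE_POISON; /* debug poison left behind, as in the kernel */
}

/* The fix: clear the poisoned backlink so the node reads as unhashed
 * and can safely be re-added on the next connect(). */
static void node_init(struct hnode *n)
{
    n->pprev = NULL;
}

static int node_unhashed(const struct hnode *n)
{
    return n->pprev == NULL;
}
```

Without the node_init() step, a later node_add() would have left a stale 0x200 pointer reachable through the list, which is the crash the commit describes.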
-
- 11 May, 2015 1 commit
-
-
Dennis Rassmann authored
-
- 02 May, 2015 1 commit
-
-
Dennis Rassmann authored
-
- 05 Apr, 2015 1 commit
-
-
Dennis Rassmann authored
-
- 23 Mar, 2015 3 commits
-
-
Chad Jones authored
Change-Id: I4b38e29a4bdaca50e64473420dfa97e89f53da81
-
Naveen Ramaraj authored
Attempting to unregister it again will cause a panic.
Bug: 18759663
Change-Id: Iff2adbc79136c55141f35b872cb70584d303a689
Signed-off-by: Naveen Ramaraj <nramaraj@codeaurora.org>
-
Naveen Ramaraj authored
smd_pkt_release() relies on checks that would pass by default (reference count == 0) to proceed with the release even if the open was unsuccessful in the first place.
Bug: b/18759663
Signed-off-by: Naveen Ramaraj <nramaraj@codeaurora.org>
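A sketch of the guard the commit describes (the struct, fields, and function names are hypothetical, not the real smd_pkt driver API): release must not tear down channel state when the preceding open never succeeded.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the device state smd_pkt_release() checks. */
struct smd_pkt_dev {
    int  ref_cnt;  /* opens that actually succeeded */
    bool ch_open;  /* channel was brought up */
    int  released; /* times teardown ran (for this sketch only) */
};

static int smd_pkt_open(struct smd_pkt_dev *d, bool channel_ok)
{
    if (!channel_ok)
        return -1; /* failed open: there is no state to tear down later */
    d->ref_cnt++;
    d->ch_open = true;
    return 0;
}

static void smd_pkt_release(struct smd_pkt_dev *d)
{
    /* The guard: without it, a release following a failed open would
     * see ref_cnt == 0 by default and still proceed with teardown. */
    if (d->ref_cnt == 0 || !d->ch_open)
        return;
    if (--d->ref_cnt == 0) {
        d->ch_open = false;
        d->released++;
    }
}
```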
-
- 21 Mar, 2015 2 commits
-
-
Naveen Ramaraj authored
Attempting to unregister it again will cause a panic.
Bug: 18759663
Change-Id: Iff2adbc79136c55141f35b872cb70584d303a689
Signed-off-by: Naveen Ramaraj <nramaraj@codeaurora.org>
-
Naveen Ramaraj authored
smd_pkt_release() relies on checks that would pass by default (reference count == 0) to proceed with the release even if the open was unsuccessful in the first place.
Bug: b/18759663
Signed-off-by: Naveen Ramaraj <nramaraj@codeaurora.org>
-
- 13 Mar, 2015 2 commits
-
-
Chad Jones authored
Bug: 19661035 Change-Id: I859169880dea38bd16a33050957fb9c8dcd3538c
-
Chad Jones authored
Bug: 19661035,19715067 Change-Id: I9efc9521df282b7866059c15d4b20b9219a32364
-
- 12 Mar, 2015 4 commits
-
-
Rom Lemarchand authored
Make oom_adj and oom_score_adj user read-only.
Bug: 19636629
Conflicts: fs/proc/base.c
Signed-off-by: Rom Lemarchand <romlem@google.com>
Signed-off-by: Patrick Tjin <pattjin@google.com>
-
Tejun Heo authored
commit 6cdae7416a1c45c2ce105a78187d9b7e8feb9e24 upstream.

The iteration logic of idr_get_next() is borrowed mostly verbatim from idr_for_each(). It walks down the tree looking for the slot matching the current ID. If the matching slot is not found, the ID is incremented by the distance of a single slot at the given level and repeats.

The implementation assumes that during the whole iteration the id is aligned to the layer boundaries of the level closest to the leaf, which is true for all iterations starting from zero or an existing element and thus is fine for idr_for_each(). However, idr_get_next() may be given any point, and if the starting id hits in the middle of a non-existent layer, the increment to the next layer will end up skipping the same offset into it. For example, an IDR with IDs filled between [64, 127] would look like the following.

          [  0  64 ... ]
       /----/    |
       |         |
      NULL   [ 64 ... 127 ]

If idr_get_next() is called with 63 as the starting point, it will try to follow down the pointer from 0. As it is NULL, it will then try to proceed to the next slot in the same level by adding the slot distance at that level, which is 64 - making the next try 127. It goes around the loop and finds and returns 127, skipping [64, 126].

Note that this bug also triggers in idr_for_each_entry() loops which delete during iteration, as deletions can make layers go away, leaving the iteration with an unaligned ID into missing layers.

Fix it by ensuring that proceeding to the next slot doesn't carry over the unaligned offset - i.e. use round_up(id + 1, slot_distance) instead of id += slot_distance.

Bug: 19665182
Bug: 18069309
Bug: 19236185
Change-Id: Iddd4b6fb27c39d7607bc778fc00bafe6ec289478
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: David Teigland <teigland@redhat.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
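The arithmetic at the heart of the fix can be checked in isolation (a sketch, not the kernel's idr code; the kernel's round_up works on powers of two, the generic form is used here):

```c
#include <assert.h>

/* Round x up to the next multiple of y (generic form of the kernel macro). */
#define round_up(x, y) ((((x) + (y) - 1) / (y)) * (y))

/* slot_distance is the ID span covered by one slot at the current level
 * (64 in the commit's example). */
static int next_id_old(int id, int slot_distance)
{
    return id + slot_distance;              /* carries the unaligned offset */
}

static int next_id_fixed(int id, int slot_distance)
{
    return round_up(id + 1, slot_distance); /* realigns to the layer boundary */
}
```

Starting from the commit's example of id 63 with a slot distance of 64, the old computation jumps straight to 127 and skips the populated [64, 126] range, while the fixed one lands on 64.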
-
Patrick Tjin authored
This reverts commit e89b006a.
Bug: 19636629
-
Ed Tam authored
Bug: 18759528
-
- 05 Feb, 2015 1 commit
-
-
Praveen Chavan authored
For streams with height less than the minimum required height (96), the decoder initiates a reconfig to indicate the new size to the client. For clients that calculate output buffer size based on frame dimensions, the resulting buffer sizes could be small and eventually rejected.
Bug: b/18528130
Change-Id: I17cf166a40f77fcea74e7d0c19af801e6e6244d5
Signed-off-by: Praveen Chavan <pchavan@codeaurora.org>
Signed-off-by: c_sridur <sridur@codeaurora.org>
-
- 28 Jan, 2015 1 commit
-
-
Naseer Ahmed authored
For rotator version 2.0 and above, there is support for fast YUV, which gives better rotator performance. However, there are a few height-related constraints under which fast YUV cannot be enabled when 90-degree rotation is enabled. Make sure to add proper checks before enabling fast YUV.
Change-Id: Iafd204a6e4ef09e1151d01d8fde35359b7c3b7a5
Signed-off-by: Padmanabhan Komanduru <pkomandu@codeaurora.org>
Signed-off-by: Naseer Ahmed <naseer@codeaurora.org>
Bug: 19043183
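The shape of such a gate might look like the following. The version threshold comes from the commit message, but the specific height limit here is an assumed placeholder, not the actual MDSS constraint:

```c
#include <assert.h>
#include <stdbool.h>

#define ROT_MIN_FAST_YUV_MAJOR 2
#define ROT_90_MAX_SRC_HEIGHT  2048 /* assumed limit for illustration only */

/* Hypothetical check: fast YUV needs rotator >= 2.0, and the 90-degree
 * path additionally has height constraints. */
static bool fast_yuv_allowed(int hw_major, bool rot_90, int src_height)
{
    if (hw_major < ROT_MIN_FAST_YUV_MAJOR)
        return false; /* pre-2.0 rotators have no fast YUV support */
    if (rot_90 && src_height > ROT_90_MAX_SRC_HEIGHT)
        return false; /* 90-degree rotation: height constraint applies */
    return true;
}
```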
-
- 26 Jan, 2015 1 commit
-
-
Sabrina Dubroca authored
If we try to rmmod the driver for an interface while sockets with setsockopt(JOIN_ANYCAST) are alive, some refcounts aren't cleaned up and we get stuck on:

    unregister_netdevice: waiting for ens3 to become free. Usage count = 1

If we LEAVE_ANYCAST/close everything before rmmod'ing, there is no problem. We need to perform a cleanup similar to the one for multicast in addrconf_ifdown(how == 1).
Bug: 18902601
Bug: 19100303
Change-Id: I6d51aed5755eb5738fcba91950e7773a1c985d2e
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Patrick Tjin <pattjin@google.com>
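A toy model of the leak (illustrative names, not the real ipv6 structures): each anycast join holds a device reference, and ifdown must drop the lingering ones, as it already did for multicast, or the device's usage count never returns to its baseline and unregister_netdevice() waits forever.

```c
#include <assert.h>

struct toy_dev {
    int refcnt;       /* 1 == only the stack's own reference remains */
    int anycast_refs; /* live JOIN_ANYCAST memberships */
};

static void join_anycast(struct toy_dev *d)
{
    d->anycast_refs++;
    d->refcnt++; /* each membership pins the device */
}

static void leave_anycast(struct toy_dev *d)
{
    d->anycast_refs--;
    d->refcnt--;
}

/* The fix, in miniature: drop leftover anycast refs on ifdown so the
 * device can actually be released. */
static void ifdown_cleanup(struct toy_dev *d)
{
    while (d->anycast_refs > 0)
        leave_anycast(d);
}
```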
-
- 21 Jan, 2015 1 commit
-
-
Naseer Ahmed authored
Fully verify that the input arguments from the user client are safe to use.
Change-Id: Ie14332443b187951009c63ebfb78456dcd9ba60f
Signed-off-by: Raghavendra Ambadas <rambad@codeaurora.org>
Signed-off-by: Naseer Ahmed <naseer@codeaurora.org>
Bug: 19091590
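The generic shape of this kind of hardening is worth sketching (the request struct and limits below are hypothetical, not the actual display-driver ioctl layout): validate every user-supplied dimension, including the sum, with overflow-safe comparisons.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct blit_req { uint32_t x, y, w, h; }; /* hypothetical user request */

#define FB_MAX_W 4096u
#define FB_MAX_H 4096u

static bool blit_req_valid(const struct blit_req *r)
{
    if (r->w == 0 || r->h == 0 || r->w > FB_MAX_W || r->h > FB_MAX_H)
        return false;
    /* Overflow-safe bounds check: compare the offset against the limit
     * minus the size instead of computing x + w, which could wrap. */
    if (r->x > FB_MAX_W - r->w || r->y > FB_MAX_H - r->h)
        return false;
    return true;
}
```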
-
- 07 Jan, 2015 13 commits
-
-
Iliyan Malchev authored
Change-Id: I976d5aaa7676196a94914215f388dcee26492932
Signed-off-by: Iliyan Malchev <malchev@google.com>
-
Jussi Kivilinna authored
This patch adds an ARM NEON assembly implementation of the SHA-512 and SHA-384 algorithms.

tcrypt benchmark results on Cortex-A8, sha512-generic vs sha512-neon-asm:

    block-size  bytes/update  old-vs-new
    16          16            2.99x
    64          16            2.67x
    64          64            3.00x
    256         16            2.64x
    256         64            3.06x
    256         256           3.33x
    1024        16            2.53x
    1024        256           3.39x
    1024        1024          3.52x
    2048        16            2.50x
    2048        256           3.41x
    2048        1024          3.54x
    2048        2048          3.57x
    4096        16            2.49x
    4096        256           3.42x
    4096        1024          3.56x
    4096        4096          3.59x
    8192        16            2.48x
    8192        256           3.42x
    8192        1024          3.56x
    8192        4096          3.60x
    8192        8192          3.60x

Change-Id: Ibc318f8c9136507f57e2bb8d8f51b4714d8ed70b
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Iliyan Malchev <malchev@google.com>
-
Jussi Kivilinna authored
This patch adds an ARM NEON assembly implementation of the SHA-1 algorithm.

tcrypt benchmark results on Cortex-A8, sha1-arm-asm vs sha1-neon-asm:

    block-size  bytes/update  old-vs-new
    16          16            1.04x
    64          16            1.02x
    64          64            1.05x
    256         16            1.03x
    256         64            1.04x
    256         256           1.30x
    1024        16            1.03x
    1024        256           1.36x
    1024        1024          1.52x
    2048        16            1.03x
    2048        256           1.39x
    2048        1024          1.55x
    2048        2048          1.59x
    4096        16            1.03x
    4096        256           1.40x
    4096        1024          1.57x
    4096        4096          1.62x
    8192        16            1.03x
    8192        256           1.40x
    8192        1024          1.58x
    8192        4096          1.63x
    8192        8192          1.63x

Change-Id: I6df3c0a9ba8d450d034cf78785b6ce80a72bef4a
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Iliyan Malchev <malchev@google.com>
-
Jussi Kivilinna authored
Common SHA-1 structures are defined in <crypto/sha.h> for code sharing. This patch changes the SHA-1/ARM glue code to use these structures.
Change-Id: Iedcc2210314d52d7e13bf5d2b535052a18f04e49
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Mikulas Patocka authored
Fix the same alignment bug as in arm64 - we need to pass the residue of unprocessed bytes as the last argument to blkcipher_walk_done.
Change-Id: I8d49b8a190327b46801a3db4884e2b309138525b
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org # 3.13+
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
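The contract being fixed can be shown with a small arithmetic sketch (not the kernel API itself): each walk step processes only whole cipher blocks and must report the leftover byte count back, because passing 0 would tell the walk machinery that everything was consumed and the tail bytes would be silently dropped.

```c
#include <assert.h>

/* Sketch: process the whole blocks in a chunk and return the residue of
 * unprocessed bytes - the value that, in the kernel, must be passed as
 * the last argument to blkcipher_walk_done(). */
static unsigned int process_blocks(unsigned int nbytes, unsigned int bsize)
{
    unsigned int blocks = nbytes / bsize;

    /* ... encrypt `blocks` full blocks of `bsize` bytes here ... */
    (void)blocks;

    return nbytes - blocks * bsize; /* residue the walk must keep */
}
```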
-
Russell King authored
Building a multi-arch kernel results in:

    arch/arm/crypto/built-in.o: In function `aesbs_xts_decrypt':
    sha1_glue.c:(.text+0x15c8): undefined reference to `bsaes_xts_decrypt'
    arch/arm/crypto/built-in.o: In function `aesbs_xts_encrypt':
    sha1_glue.c:(.text+0x1664): undefined reference to `bsaes_xts_encrypt'
    arch/arm/crypto/built-in.o: In function `aesbs_ctr_encrypt':
    sha1_glue.c:(.text+0x184c): undefined reference to `bsaes_ctr32_encrypt_blocks'
    arch/arm/crypto/built-in.o: In function `aesbs_cbc_decrypt':
    sha1_glue.c:(.text+0x19b4): undefined reference to `bsaes_cbc_encrypt'

This code is already runtime-conditional on NEON being supported, so there's no point compiling it out depending on the minimum build architecture.
Change-Id: I219dc496b3ad60754f95a6db2a71ce73d037a6e0
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
This avoids this file being incorrectly added to git.
Change-Id: Ibafeec2c5d3ca806737f8d865716d3b2ea419e93
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Ard Biesheuvel authored
Bit sliced AES gives around a 45% speedup on Cortex-A15 for encryption and around 25% for decryption. This implementation of the AES algorithm does not rely on any lookup tables, so it is believed to be invulnerable to cache timing attacks.

This algorithm processes up to 8 blocks in parallel in constant time. This means that it is not usable by chaining modes that are strictly sequential in nature, such as CBC encryption. CBC decryption, however, can benefit from this implementation and runs about 25% faster. The other chaining modes implemented in this module, XTS and CTR, can execute fully in parallel in both directions.

The core code has been adopted from the OpenSSL project (in collaboration with the original author, on cc). For ease of maintenance, this version is identical to the upstream OpenSSL code, i.e., all modifications that were required to make it suitable for inclusion into the kernel have been made upstream. The original can be found here: http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=6f6a6130

Note to integrators: While this implementation is significantly faster than the existing table based ones (generic or ARM asm), especially in CTR mode, the effects on power efficiency are unclear as of yet. This code does fundamentally more work, by calculating values that the table based code obtains by a simple lookup; only by doing all of that work in a SIMD fashion does it manage to perform better.

Change-Id: I936dc7142b91133c55c7cf0af6a565d219d62e11
Cc: Andy Polyakov <appro@openssl.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Ard Biesheuvel authored
Put the struct definitions for AES keys and the asm function prototypes in a separate header and export the asm functions from the module. This allows other drivers to use them directly.
Change-Id: I5ce0cf285e2981755adb55b66a846eb738cedd58
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Ard Biesheuvel authored
commit 40190c85f427dcfdbab5dbef4ffd2510d649da1f upstream.

Patch 638591c enabled building the AES assembler code in Thumb2 mode. However, this code used arithmetic involving the PC rather than adr{l} instructions to generate PC-relative references to the lookup tables, and this needs to take into account the different PC offset when running in Thumb mode.
Change-Id: Iadf37cb5db3a826ced7b99e5ee6d298479355cbd
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ard Biesheuvel authored
Make the SHA1 asm code ABI conformant by making sure all stack accesses occur above the stack pointer.

Origin: http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=1a9d60d2
Change-Id: I1f17f23f168d40de14b907f470476b7fd9bdd274
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Dave Martin authored
This patch fixes aes-armv4.S and sha1-armv4-large.S to work natively in Thumb. This allows ARM/Thumb interworking workarounds to be removed. I also take the opportunity to convert some explicit assembler directives for exported functions to the standard ENTRY()/ENDPROC().

For the code itself:

* In sha1_block_data_order, use of TEQ with sp is deprecated in ARMv7 and not supported in Thumb. For the branches back to .L_00_15 and .L_40_59, the TEQ is converted to a CMP, under the assumption that clobbering the C flag here will not cause incorrect behaviour. For the first branch back to .L_20_39_or_60_79 the C flag is important, so sp is moved temporarily into another register so that TEQ can be used for the comparison.

* In the AES code, most forms of register-indexed addressing with shifts and rotates are not permitted for loads and stores in Thumb, so the address calculation is done using a separate instruction for the Thumb case. The resulting code is unlikely to be optimally scheduled, but it should not have a large impact given the overall size of the code. I haven't run any benchmarks.

Change-Id: I8b015aa239e5513d43680d82aeb93db07c5adf9f
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Tested-by: David McCullough <ucdevel@gmail.com> (ARM only)
Acked-by: David McCullough <ucdevel@gmail.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
David McCullough authored
Add assembler versions of AES and SHA1 for ARM platforms. This has provided up to a 50% improvement in IPsec/TCP throughput for tunnels using AES128/SHA1.

    Platform  CPU Speed  Endian  Before (bps)  After (bps)  Improvement
    IXP425    533 MHz    big     11217042      15566294     ~38%
    KS8695    166 MHz    little  3828549       5795373      ~51%

Change-Id: I6e950d8c858ef1134352bf959804eeaf5b879d7e
Signed-off-by: David McCullough <ucdevel@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 06 Jan, 2015 2 commits
-
-
Peter Boonstoppel authored
migrate_tasks() uses _pick_next_task_rt() to get tasks from the real-time runqueues to be migrated. When rt_rq is throttled, _pick_next_task_rt() won't return anything, in which case migrate_tasks() can't move all threads over and gets stuck in an infinite loop. Instead, unthrottle rt runqueues before migrating tasks.

Additionally: move unthrottle_offline_cfs_rqs() to rq_offline_fair().

Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/5FBF8E85CA34454794F0F7ECBA79798F379D3648B7@HQMAIL04.nvidia.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: I4d73bc0a556882bdb9ade15a661d665c6c2c8df4
Bug: 18729519
-
Mike Galbraith authored
With multiple instances of task_groups, for_each_rt_rq() is a noop, no task groups having been added to the rt.c list instance. This renders __enable/disable_runtime() and print_rt_stats() noop, the user (non-)visible effect being that rt task groups are missing from /proc/sched_debug.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: stable@kernel.org # v3.3+
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1344308413.6846.7.camel@marge.simpson.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Bug: 18729519
Change-Id: I415af0d95b2cb2e6bf9910513bc7470056b0cfa9
-
- 23 Dec, 2014 2 commits
-
-
Dennis Rassmann authored
Signed-off-by: Dennis Rassmann <showp1984@gmail.com>
-
Dennis Rassmann authored
-