|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6 |
|
| #
e3cd33ab |
| 07-Mar-2025 |
Takashi Iwai <[email protected]> |
ALSA: seq: Improve data consistency at polling
snd_seq_poll() calls snd_seq_write_pool_allocated(), which reads out a field of the client->pool object while it can be updated concurrently via ioctls, as reported by syzbot. The data race itself is harmless, as it's merely a poll() call and the state is volatile. OTOH, reading the pool state from the caller side is fragile, and it is better left to snd_seq_pool_poll_wait() alone.
A similar pattern is seen in snd_seq_kernel_client_write_poll(), too, which is called from the OSS sequencer.
This patch drops the pool checks from the caller side and adds the pool->lock in snd_seq_pool_poll_wait() for better data consistency.
Reported-by: [email protected]
Closes: https://lore.kernel.org/[email protected]
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
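A minimal sketch of the resulting pattern, assuming the guard()-based locking already used in this file and the snd_seq_output_ok() pool check (the actual hunk may differ in detail):

int snd_seq_pool_poll_wait(struct snd_seq_pool *pool, struct file *file,
                           poll_table *wait)
{
        poll_wait(file, &pool->output_sleep, wait);
        /* read the pool state under pool->lock so callers need not peek
         * at pool internals themselves */
        guard(spinlock_irq)(&pool->lock);
        return snd_seq_output_ok(pool);
}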
|
|
Revision tags: v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7 |
|
| #
6768bd10 |
| 27-Feb-2024 |
Takashi Iwai <[email protected]> |
ALSA: seq: memory: Use guard() for locking
We can simplify the code gracefully with the new guard() macro and co. for automatic cleanup of locks.
Only code refactoring; no functional changes.
Signed-off-by: Takashi Iwai <[email protected]> Link: https://lore.kernel.org/r/[email protected]
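As a rough illustration of the guard() conversion pattern (the helper below is hypothetical, not a function from seq_memory.c):

#include <linux/cleanup.h>
#include <linux/spinlock.h>

/* hypothetical helper reading a pool field under the lock; the guard
 * releases pool->lock automatically on every return path, so no explicit
 * spin_unlock_irqrestore() is needed */
static int pool_configured_size(struct snd_seq_pool *pool)
{
        guard(spinlock_irqsave)(&pool->lock);
        return pool->size;
}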
|
|
Revision tags: v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7 |
|
| #
126c18a4 |
| 21-Dec-2023 |
Dmitry Antipov <[email protected]> |
ALSA: seq: fix kvmalloc_array() arguments order
When compiling with gcc version 14.0.0 20231220 (experimental) and W=1, I've noticed the following warning:
sound/core/seq/seq_memory.c: In function 'snd_seq_pool_init':
sound/core/seq/seq_memory.c:445:41: warning: 'kvmalloc_array' sizes specified with 'sizeof' in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
  445 |         cellptr = kvmalloc_array(sizeof(struct snd_seq_event_cell), pool->size,
      |                                  ^~~~~~
Since the 'n' and 'size' arguments of 'kvmalloc_array()' are multiplied to calculate the final size, their actual order doesn't affect the result, so this is not a bug. But it's still worth fixing.
Signed-off-by: Dmitry Antipov <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Takashi Iwai <[email protected]>
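The fix amounts to swapping the two arguments so that the element count comes first, roughly (GFP flag assumed for illustration):

cellptr = kvmalloc_array(pool->size,
                         sizeof(struct snd_seq_event_cell),
                         GFP_KERNEL);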
|
|
Revision tags: v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1 |
|
| #
86496fd4 |
| 05-Sep-2023 |
Takashi Iwai <[email protected]> |
ALSA: seq: Fix snd_seq_expand_var_event() call to user-space
The recent fix to clear the padding bytes in snd_seq_expand_var_event() broke the read to user-space with the in_kernel=0 parameter. For a user-space address, it has to use clear_user() instead of memset().
Fixes: f80e6d60d677 ("ALSA: seq: Clear padded bytes at expanding events")
Reported-and-tested-by: Ash Holland <[email protected]>
Closes: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
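A hedged sketch of the distinction; the buffer and length variables are illustrative, not the actual names in snd_seq_expand_var_event():

if (!in_kernel) {
        /* destination is a user-space pointer: clear the pad bytes with
         * clear_user(), since memset() on a __user address is invalid */
        if (clear_user((void __user *)buf + len, newlen - len))
                return -EFAULT;
} else {
        memset(buf + len, 0, newlen - len);
}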
|
|
Revision tags: v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4 |
|
| #
46397622 |
| 23-May-2023 |
Takashi Iwai <[email protected]> |
ALSA: seq: Add UMP support
Starting from this commit, we add the basic support of UMP (Universal MIDI Packet) events to the ALSA sequencer infrastructure. The biggest change here is that, for transferring UMP packets of up to 128 bits, we extend the data payload of the ALSA sequencer event record when the client declares support for the new UMP events.
A new event flag bit, SNDRV_SEQ_EVENT_UMP, is defined; it shall be set for UMP packet events, which carry the larger 128-bit payload and are defined as struct snd_seq_ump_event.
For controlling the UMP feature enablement in the kernel, a new Kconfig option, CONFIG_SND_SEQ_UMP, is introduced. The extended event for UMP is available only when this Kconfig item is set. Similarly, the size of the internal snd_seq_event_cell also increases (by 4 bytes) when the Kconfig item is set. (But the size increase is effective only for 32-bit architectures; 64-bit archs already have padding there.) Overall, when CONFIG_SND_SEQ_UMP isn't set, there is no change in the event and cell, keeping the old sizes.
For applications that want to access the UMP packets, first of all, a sequencer client has to declare a user protocol matching the latest one via the new SNDRV_SEQ_IOCTL_USER_PVERSION; otherwise it's treated as a legacy client without UMP support.
Then the client can switch to the new UMP mode (MIDI 1.0 or MIDI 2.0) with a new field, midi_version, in snd_seq_client_info. When switched to UMP mode (midi_version = 1 or 2), the client can write UMP events with the SNDRV_SEQ_EVENT_UMP flag. For reads, the alignment size is changed from snd_seq_event (28 bytes) to snd_seq_ump_event (32 bytes). When a UMP sequencer event is delivered to a legacy sequencer client, it's ignored or handled as an error.
Conceptually, ALSA sequencer client and port correspond to the UMP Endpoint and Group, respectively; each client may have multiple ports and each port has the fixed number (16) of channels, total up to 256 channels.
As of this commit, ALSA sequencer core just sends and receives the UMP events as-is from/to clients. The automatic conversions between the legacy events and the new UMP events will be implemented in a later patch.
Along with this commit, bump the sequencer protocol version to 1.0.3.
Reviewed-by: Jaroslav Kysela <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Takashi Iwai <[email protected]>
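A rough user-space sketch of the opt-in sequence described above, using raw ioctls on the sequencer device (error handling trimmed; alsa-lib wraps this differently):

#include <sys/ioctl.h>
#include <sound/asequencer.h>

static int enable_ump_mode(int seq_fd, int my_client)
{
        int pversion = SNDRV_SEQ_VERSION;       /* declare the latest user protocol */
        struct snd_seq_client_info info = { .client = my_client };

        if (ioctl(seq_fd, SNDRV_SEQ_IOCTL_USER_PVERSION, &pversion) < 0)
                return -1;
        if (ioctl(seq_fd, SNDRV_SEQ_IOCTL_GET_CLIENT_INFO, &info) < 0)
                return -1;
        info.midi_version = 2;                  /* switch to MIDI 2.0 UMP mode */
        return ioctl(seq_fd, SNDRV_SEQ_IOCTL_SET_CLIENT_INFO, &info);
}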
|
| #
ea46f797 |
| 23-May-2023 |
Takashi Iwai <[email protected]> |
ALSA: seq: Add snd_seq_expand_var_event_at() helper
Create a new variant of snd_seq_expand_var_event() for expanding the data starting from the given byte offset. It'll be used by the new UMP sequencer code later.
Reviewed-by: Jaroslav Kysela <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Takashi Iwai <[email protected]>
|
| #
f80e6d60 |
| 23-May-2023 |
Takashi Iwai <[email protected]> |
ALSA: seq: Clear padded bytes at expanding events
There can be a small memory hole that may not be cleared when expanding an event of the variable-length type. Make sure to clear it.
Reviewed-by: Jaroslav Kysela <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Takashi Iwai <[email protected]>
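The idea in miniature, with illustrative names (this kernel-side memset is what the later commit 86496fd4, listed above, restricts to in-kernel destinations):

/* zero the gap between the copied payload and the aligned length */
if (aligned_len > payload_len)
        memset(buf + payload_len, 0, aligned_len - payload_len);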
|
|
Revision tags: v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6 |
|
| #
05530ef7 |
| 18-Nov-2022 |
Kees Cook <[email protected]> |
ALSA: seq: Fix function prototype mismatch in snd_seq_expand_var_event
With clang's kernel control flow integrity (kCFI, CONFIG_CFI_CLANG), indirect call targets are validated against the expected function pointer prototype to make sure the call target is valid to help mitigate ROP attacks. If they are not identical, there is a failure at run time, which manifests as either a kernel panic or thread getting killed.
seq_copy_in_user() and seq_copy_in_kernel() did not have prototypes matching snd_seq_dump_func_t. Adjust this and remove the casts. There are no resulting differences in binary output.
This was found as a result of Clang's new -Wcast-function-type-strict flag, which is more sensitive than the simpler -Wcast-function-type, which only checks for type width mismatches.
Reported-by: kernel test robot <[email protected]>
Link: https://lore.kernel.org/lkml/[email protected]
Cc: Jaroslav Kysela <[email protected]>
Cc: Takashi Iwai <[email protected]>
Cc: "Gustavo A. R. Silva" <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
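A simplified sketch of the pattern, assuming snd_seq_dump_func_t is roughly int (*)(void *ptr, void *buf, int count); the callback now matches that shape exactly, so no function-pointer cast is needed at the call site:

static int seq_copy_in_kernel(void *ptr, void *src, int size)
{
        char **bufptr = ptr;    /* previously passed as char ** directly */

        memcpy(*bufptr, src, size);
        *bufptr += size;
        return 0;
}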
|
|
Revision tags: v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6 |
|
| #
f9a6bb84 |
| 08-Jun-2021 |
Takashi Iwai <[email protected]> |
ALSA: seq: Fix assignment in if condition
There are lots of places doing assignments in an if condition in the ALSA sequencer core, which is a bad coding style that may confuse readers and occasionally lead to bugs.
This patch is merely for coding-style fixes, no functional changes.
Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Takashi Iwai <[email protected]>
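Illustrative before/after of the style change, using a made-up alloc_cell() helper rather than a real seq_memory.c function:

/* before: assignment buried inside the condition */
if ((cell = alloc_cell(pool)) == NULL)
        return -ENOMEM;

/* after: assignment and test separated */
cell = alloc_cell(pool);
if (!cell)
        return -ENOMEM;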
|
|
Revision tags: v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1 |
|
| #
afcfbcb3 |
| 23-Dec-2020 |
Lars-Peter Clausen <[email protected]> |
ALSA: core: Use DIV_ROUND_UP() instead of open-coding it
Use DIV_ROUND_UP() instead of open-coding it. This documents intent and makes it more clear what is going on for the casual reviewer.
Generated using the following Coccinelle semantic patch.
// <smpl>
@@
expression x, y;
@@
-(((x) + (y) - 1) / (y))
+DIV_ROUND_UP(x, y)
// </smpl>
Signed-off-by: Lars-Peter Clausen <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Takashi Iwai <[email protected]>
|
|
Revision tags: v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3 |
|
| #
1a59d1b8 |
| 27-May-2019 |
Thomas Gleixner <[email protected]> |
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 59 temple place suite 330 boston ma 02111 1307 usa
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 1334 file(s).
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Allison Randal <[email protected]>
Reviewed-by: Richard Fontana <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
Revision tags: v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3 |
|
| #
f823b8a7 |
| 28-Mar-2019 |
Takashi Iwai <[email protected]> |
ALSA: seq: Remove superfluous irqsave flags
spin_lock_irqsave() is used unnecessarily in various places in sequencer core code although it's pretty obvious that the context is sleepable. Remove irqsave and use the plain spin_lock_irq() in such places for simplicity.
Signed-off-by: Takashi Iwai <[email protected]>
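The simplification, schematically ('pool' stands in for whichever spinlock-protected object a given hunk touches):

/* before: saving/restoring IRQ flags although the context can sleep,
 * i.e. interrupts are known to be enabled here anyway */
unsigned long flags;
spin_lock_irqsave(&pool->lock, flags);
/* ... critical section ... */
spin_unlock_irqrestore(&pool->lock, flags);

/* after: */
spin_lock_irq(&pool->lock);
/* ... critical section ... */
spin_unlock_irq(&pool->lock);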
|
| #
4b24b960 |
| 28-Mar-2019 |
Takashi Iwai <[email protected]> |
ALSA: seq: Align temporary re-locking with irqsave version
In a few places in the sequencer core, we temporarily unlock / re-lock the pool spin lock while waiting for an allocation in blocking mode. There, spin_unlock_irq() / spin_lock_irq() pairs are called although spin_lock_irqsave() is used initially (and spin_unlock_irqrestore() at the end of the function). This is likely OK for now, but it's a bit confusing and error-prone.
This patch replaces these temporary relocking lines with the irqsave variant to make the lock/unlock sequence more consistent.
Signed-off-by: Takashi Iwai <[email protected]>
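A hedged sketch of the blocking-wait shape after the change; the condition helper is hypothetical and the real wait bookkeeping differs in detail:

unsigned long flags;
DEFINE_WAIT(wait);

spin_lock_irqsave(&pool->lock, flags);
while (!cell_available(pool)) {         /* hypothetical condition */
        prepare_to_wait(&pool->output_sleep, &wait, TASK_INTERRUPTIBLE);
        /* drop and retake the same irqsave lock around schedule() */
        spin_unlock_irqrestore(&pool->lock, flags);
        schedule();
        spin_lock_irqsave(&pool->lock, flags);
}
finish_wait(&pool->output_sleep, &wait);
spin_unlock_irqrestore(&pool->lock, flags);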
|
| #
fd7ae83d |
| 28-Mar-2019 |
Takashi Iwai <[email protected]> |
ALSA: seq: Use kvmalloc() for cell pools
Use kvmalloc() for allocating cell pools, since the pool size can be relatively small and may be covered better by the slab allocator.
Signed-off-by: Takashi Iwai <[email protected]>
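A sketch of the allocation switch (pool->ptr and pool->size follow the pool bookkeeping described in the surrounding entries; the exact hunk may differ):

/* kvmalloc tries kmalloc first and falls back to vmalloc only when the
 * slab allocator cannot serve the request */
pool->ptr = kvmalloc_array(pool->size, sizeof(struct snd_seq_event_cell),
                           GFP_KERNEL);
if (!pool->ptr)
        return -ENOMEM;

/* ... and kvfree() on the release path frees either kind of buffer */
kvfree(pool->ptr);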
|
|
Revision tags: v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1, v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8 |
|
| #
fc4bfd9a |
| 01-Aug-2018 |
Takashi Iwai <[email protected]> |
ALSA: seq: Remove dead codes
There are a few functions that have been commented out for ages, and also functions that are nothing but placeholders. Let's kill them.
Signed-off-by: Takashi Iwai <[email protected]>
|
|
Revision tags: v4.18-rc7, v4.18-rc6, v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1 |
|
| #
42bc47b3 |
| 12-Jun-2018 |
Kees Cook <[email protected]> |
treewide: Use array_size() in vmalloc()
The vmalloc() function has no 2-factor argument form, so multiplication factors need to be wrapped in array_size(). This patch replaces cases of:
vmalloc(a * b)
with:
vmalloc(array_size(a, b))
as well as handling cases of:
vmalloc(a * b * c)
with:
vmalloc(array3_size(a, b, c))
This does, however, attempt to ignore constant size factors like:
vmalloc(4 * 1024)
though any constants defined via macros get caught up in the conversion.
Any factors with a sizeof() of "unsigned char", "char", and "u8" were dropped, since they're redundant.
The Coccinelle script used for this was:
// Fix redundant parens around sizeof(). @@ type TYPE; expression THING, E; @@
( vmalloc( - (sizeof(TYPE)) * E + sizeof(TYPE) * E , ...) | vmalloc( - (sizeof(THING)) * E + sizeof(THING) * E , ...) )
// Drop single-byte sizes and redundant parens. @@ expression COUNT; typedef u8; typedef __u8; @@
( vmalloc( - sizeof(u8) * (COUNT) + COUNT , ...) | vmalloc( - sizeof(__u8) * (COUNT) + COUNT , ...) | vmalloc( - sizeof(char) * (COUNT) + COUNT , ...) | vmalloc( - sizeof(unsigned char) * (COUNT) + COUNT , ...) | vmalloc( - sizeof(u8) * COUNT + COUNT , ...) | vmalloc( - sizeof(__u8) * COUNT + COUNT , ...) | vmalloc( - sizeof(char) * COUNT + COUNT , ...) | vmalloc( - sizeof(unsigned char) * COUNT + COUNT , ...) )
// 2-factor product with sizeof(type/expression) and identifier or constant. @@ type TYPE; expression THING; identifier COUNT_ID; constant COUNT_CONST; @@
( vmalloc( - sizeof(TYPE) * (COUNT_ID) + array_size(COUNT_ID, sizeof(TYPE)) , ...) | vmalloc( - sizeof(TYPE) * COUNT_ID + array_size(COUNT_ID, sizeof(TYPE)) , ...) | vmalloc( - sizeof(TYPE) * (COUNT_CONST) + array_size(COUNT_CONST, sizeof(TYPE)) , ...) | vmalloc( - sizeof(TYPE) * COUNT_CONST + array_size(COUNT_CONST, sizeof(TYPE)) , ...) | vmalloc( - sizeof(THING) * (COUNT_ID) + array_size(COUNT_ID, sizeof(THING)) , ...) | vmalloc( - sizeof(THING) * COUNT_ID + array_size(COUNT_ID, sizeof(THING)) , ...) | vmalloc( - sizeof(THING) * (COUNT_CONST) + array_size(COUNT_CONST, sizeof(THING)) , ...) | vmalloc( - sizeof(THING) * COUNT_CONST + array_size(COUNT_CONST, sizeof(THING)) , ...) )
// 2-factor product, only identifiers. @@ identifier SIZE, COUNT; @@
vmalloc( - SIZE * COUNT + array_size(COUNT, SIZE) , ...)
// 3-factor product with 1 sizeof(type) or sizeof(expression), with // redundant parens removed. @@ expression THING; identifier STRIDE, COUNT; type TYPE; @@
( vmalloc( - sizeof(TYPE) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | vmalloc( - sizeof(TYPE) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | vmalloc( - sizeof(TYPE) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | vmalloc( - sizeof(TYPE) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | vmalloc( - sizeof(THING) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | vmalloc( - sizeof(THING) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | vmalloc( - sizeof(THING) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | vmalloc( - sizeof(THING) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) )
// 3-factor product with 2 sizeof(variable), with redundant parens removed. @@ expression THING1, THING2; identifier COUNT; type TYPE1, TYPE2; @@
( vmalloc( - sizeof(TYPE1) * sizeof(TYPE2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | vmalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | vmalloc( - sizeof(THING1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | vmalloc( - sizeof(THING1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | vmalloc( - sizeof(TYPE1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) | vmalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) )
// 3-factor product, only identifiers, with redundant parens removed. @@ identifier STRIDE, SIZE, COUNT; @@
( vmalloc( - (COUNT) * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | vmalloc( - COUNT * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | vmalloc( - COUNT * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | vmalloc( - (COUNT) * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | vmalloc( - COUNT * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | vmalloc( - (COUNT) * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | vmalloc( - (COUNT) * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | vmalloc( - COUNT * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) )
// Any remaining multi-factor products, first at least 3-factor products // when they're not all constants... @@ expression E1, E2, E3; constant C1, C2, C3; @@
( vmalloc(C1 * C2 * C3, ...) | vmalloc( - E1 * E2 * E3 + array3_size(E1, E2, E3) , ...) )
// And then all remaining 2 factors products when they're not all constants. @@ expression E1, E2; constant C1, C2; @@
( vmalloc(C1 * C2, ...) | vmalloc( - E1 * E2 + array_size(E1, E2) , ...) )
Signed-off-by: Kees Cook <[email protected]>
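An illustrative instance of the conversion in this file's terms (sketch; the real hunk may name things differently):

#include <linux/overflow.h>     /* array_size() saturates instead of overflowing */

/* before: the open-coded multiplication can overflow silently */
cellptr = vmalloc(sizeof(struct snd_seq_event_cell) * pool->size);

/* after: an overflowing product becomes SIZE_MAX and the allocation fails */
cellptr = vmalloc(array_size(pool->size, sizeof(struct snd_seq_event_cell)));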
|
|
Revision tags: v4.17, v4.17-rc7, v4.17-rc6, v4.17-rc5, v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5 |
|
| #
7bd80091 |
| 05-Mar-2018 |
Takashi Iwai <[email protected]> |
ALSA: seq: More protection for concurrent write and ioctl races
This patch is an attempt for further hardening against races between the concurrent write and ioctls. The previous fix d15d662e89fc ("ALSA: seq: Fix racy pool initializations") covered the race of the pool initialization at writer and the pool resize ioctl by the client->ioctl_mutex (CVE-2018-1000004). However, basically this mutex should be applied more widely to the whole write operation for avoiding the unexpected pool operations by another thread.
The only change outside snd_seq_write() is the additional mutex argument to helper functions, so that we can unlock / relock the given mutex temporarily during schedule() call for blocking write.
Fixes: d15d662e89fc ("ALSA: seq: Fix racy pool initializations")
Reported-by: 范龙飞 <[email protected]>
Reported-by: Nicolai Stange <[email protected]>
Reviewed-and-tested-by: Nicolai Stange <[email protected]>
Cc: <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
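The shape of the change, hedged (the mutex parameter follows the description of "the additional mutex argument"; the surrounding wait loop is omitted):

/* the write path enters with client->ioctl_mutex held; while a blocking
 * allocation sleeps, release it temporarily so ioctls (e.g. a pool
 * resize) can still make progress */
if (mutexp)
        mutex_unlock(mutexp);
schedule();
if (mutexp)
        mutex_lock(mutexp);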
|
|
Revision tags: v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9, v4.15-rc8, v4.15-rc7, v4.15-rc6, v4.15-rc5, v4.15-rc4, v4.15-rc3, v4.15-rc2, v4.15-rc1, v4.14, v4.14-rc8, v4.14-rc7, v4.14-rc6, v4.14-rc5, v4.14-rc4, v4.14-rc3, v4.14-rc2, v4.14-rc1, v4.13, v4.13-rc7, v4.13-rc6, v4.13-rc5, v4.13-rc4, v4.13-rc3, v4.13-rc2, v4.13-rc1, v4.12, v4.12-rc7 |
|
| #
ac6424b9 |
| 20-Jun-2017 |
Ingo Molnar <[email protected]> |
sched/wait: Rename wait_queue_t => wait_queue_entry_t
Rename:
wait_queue_t => wait_queue_entry_t
'wait_queue_t' was always a slight misnomer: its name implies that it's a "queue", but in reality it's a queue *entry*. The 'real' queue is the wait queue head, which had to carry the name.
Start sorting this out by renaming it to 'wait_queue_entry_t'.
This also allows the real structure name 'struct __wait_queue' to lose its double underscore and become 'struct wait_queue_entry', which is the more canonical nomenclature for such data types.
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Signed-off-by: Ingo Molnar <[email protected]>
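Schematically, the effect on wait users such as this file is only the declaration rename:

wait_queue_entry_t wait;        /* was: wait_queue_t wait; */
init_waitqueue_entry(&wait, current);
add_wait_queue(&pool->output_sleep, &wait);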
|
|
Revision tags: v4.12-rc6 |
|
| #
9c8ddd10 |
| 16-Jun-2017 |
Takashi Iwai <[email protected]> |
ALSA: seq: Follow standard EXPORT_SYMBOL() declarations
Just a tidy up to follow the standard EXPORT_SYMBOL*() declarations in order to improve grep-ability.
- Move EXPORT_SYMBOL*() to the position right after its definition
- Remove superfluous blank line before EXPORT_SYMBOL*() lines
Signed-off-by: Takashi Iwai <[email protected]>
|
|
Revision tags: v4.12-rc5, v4.12-rc4, v4.12-rc3, v4.12-rc2, v4.12-rc1, v4.11, v4.11-rc8, v4.11-rc7, v4.11-rc6, v4.11-rc5, v4.11-rc4 |
|
| #
c520ff3d |
| 21-Mar-2017 |
Takashi Iwai <[email protected]> |
ALSA: seq: Fix racy cell insertions during snd_seq_pool_done()
When snd_seq_pool_done() is called, it marks the closing flag to refuse further cell insertions. But snd_seq_pool_done() itself doesn't clear the cells; it just waits until all cells are cleared by the caller side. That is, it's racy, and this leads to an endless stall, as syzkaller spotted.
This patch addresses the race by splitting the setup of the pool->closing flag out of snd_seq_pool_done() and calling it properly before snd_seq_pool_done().
BugLink: http://lkml.kernel.org/r/CACT4Y+aqqy8bZA1fFieifNxR2fAfFQQABcBHj801+u5ePV0URw@mail.gmail.com
Reported-and-tested-by: Dmitry Vyukov <[email protected]>
Cc: <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
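The resulting call order, sketched (assuming the split-out helper is named snd_seq_pool_mark_closing(); caller context simplified):

/* refuse further cell insertions first ... */
snd_seq_pool_mark_closing(client->pool);
/* ... then it is safe to wait for the in-flight cells to drain */
snd_seq_pool_done(client->pool);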
|
|
Revision tags: v4.11-rc3, v4.11-rc2, v4.11-rc1, v4.10, v4.10-rc8, v4.10-rc7 |
|
| #
174cd4b1 |
| 02-Feb-2017 |
Ingo Molnar <[email protected]> |
sched/headers: Prepare to move signal wakeup & sigpending methods from <linux/sched.h> into <linux/sched/signal.h>
Fix up affected files that include this signal functionality via sched.h.
Acked-by: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
| #
37a7ea4a |
| 06-Feb-2017 |
Takashi Iwai <[email protected]> |
ALSA: seq: Don't handle loop timeout at snd_seq_pool_done()
snd_seq_pool_done() syncs with the closing of all opened threads, but it aborts the wait loop with a timeout and proceeds to release the resources even if not all threads have been closed. The timeout was 5 seconds, and if you run crazy stuff it can be exceeded easily, which may result in access to an invalid memory address -- this is what syzkaller detected in a bug report.
As a fix, let the code graduate from its naiveness and simply remove the loop timeout.
BugLink: http://lkml.kernel.org/r/CACT4Y+YdhDV2H5LLzDTJDVF-qiYHUHhtRaW4rbb4gUhTCQB81w@mail.gmail.com
Reported-by: Dmitry Vyukov <[email protected]>
Cc: <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
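In spirit the change looks like this (sketch only; the field and loop details follow my reading of the description, not the literal diff):

/* before: give up after roughly 5 seconds and free the pool anyway */
max_count = 5 * HZ;
while (atomic_read(&pool->counter) > 0) {
        schedule_timeout_uninterruptible(1);
        if (--max_count == 0)
                break;          /* other threads may still hold cells! */
}

/* after: wait until every cell has actually been returned */
while (atomic_read(&pool->counter) > 0)
        schedule_timeout_uninterruptible(1);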
|
|
Revision tags: v4.10-rc6, v4.10-rc5, v4.10-rc4, v4.10-rc3, v4.10-rc2, v4.10-rc1, v4.9, v4.9-rc8, v4.9-rc7, v4.9-rc6, v4.9-rc5, v4.9-rc4, v4.9-rc3, v4.9-rc2, v4.9-rc1, v4.8, v4.8-rc8, v4.8-rc7, v4.8-rc6, v4.8-rc5, v4.8-rc4, v4.8-rc3, v4.8-rc2, v4.8-rc1, v4.7, v4.7-rc7, v4.7-rc6, v4.7-rc5, v4.7-rc4, v4.7-rc3, v4.7-rc2, v4.7-rc1, v4.6, v4.6-rc7, v4.6-rc6, v4.6-rc5, v4.6-rc4, v4.6-rc3, v4.6-rc2, v4.6-rc1, v4.5, v4.5-rc7, v4.5-rc6, v4.5-rc5 |
|
| #
d99a36f4 |
| 15-Feb-2016 |
Takashi Iwai <[email protected]> |
ALSA: seq: Fix leak of pool buffer at concurrent writes
When multiple concurrent writes happen on the ALSA sequencer device right after the open, it may try to allocate a vmalloc buffer for each write and leak some of them. That's because the presence check and the assignment of the buffer are done outside the spinlock for the pool.
The fix is to move the check and the assignment into the spinlock.
(The current implementation is suboptimal, as there can be multiple unnecessary vmallocs because the allocation is done before the check in the spinlock. But the pool size is already checked beforehand, so this isn't a big problem; that is, the only possible path is the multiple writes before any pool assignment, and practically seen, the current coverage should be "good enough".)
The issue was triggered by syzkaller fuzzer.
BugLink: http://lkml.kernel.org/r/CACT4Y+bSzazpXNvtAr=WXaL8hptqjHwqEyFA+VN2AWEx=aurkg@mail.gmail.com
Reported-by: Dmitry Vyukov <[email protected]>
Tested-by: Dmitry Vyukov <[email protected]>
Cc: <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
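A hedged sketch of the fix pattern in snd_seq_pool_init(): allocate outside the lock, but publish the buffer only under pool->lock, with the loser of the race freeing its duplicate:

cellptr = vmalloc(sizeof(struct snd_seq_event_cell) * pool->size);
if (!cellptr)
        return -ENOMEM;

spin_lock_irq(&pool->lock);
if (pool->ptr) {                /* another writer already set up the pool */
        spin_unlock_irq(&pool->lock);
        vfree(cellptr);
        return 0;
}
pool->ptr = cellptr;
/* ... build the free-cell list here, still under the lock ... */
spin_unlock_irq(&pool->lock);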
|
|
Revision tags: v4.5-rc4, v4.5-rc3, v4.5-rc2, v4.5-rc1, v4.4, v4.4-rc8, v4.4-rc7, v4.4-rc6, v4.4-rc5, v4.4-rc4, v4.4-rc3, v4.4-rc2, v4.4-rc1, v4.3, v4.3-rc7, v4.3-rc6, v4.3-rc5, v4.3-rc4, v4.3-rc3, v4.3-rc2, v4.3-rc1, v4.2, v4.2-rc8, v4.2-rc7, v4.2-rc6, v4.2-rc5, v4.2-rc4, v4.2-rc3, v4.2-rc2, v4.2-rc1, v4.1, v4.1-rc8, v4.1-rc7, v4.1-rc6, v4.1-rc5, v4.1-rc4, v4.1-rc3, v4.1-rc2, v4.1-rc1, v4.0, v4.0-rc7, v4.0-rc6, v4.0-rc5, v4.0-rc4 |
|
| #
24db8bba |
| 10-Mar-2015 |
Takashi Iwai <[email protected]> |
ALSA: seq: Drop superfluous error/debug messages after malloc failures
The kernel memory allocators already report errors when a requested allocation fails, so we don't need to warn again on each caller side.
Signed-off-by: Takashi Iwai <[email protected]>
|
|
Revision tags: v4.0-rc3, v4.0-rc2, v4.0-rc1, v3.19, v3.19-rc7, v3.19-rc6, v3.19-rc5, v3.19-rc4, v3.19-rc3, v3.19-rc2, v3.19-rc1, v3.18, v3.18-rc7, v3.18-rc6, v3.18-rc5, v3.18-rc4, v3.18-rc3, v3.18-rc2, v3.18-rc1, v3.17, v3.17-rc7, v3.17-rc6, v3.17-rc5, v3.17-rc4, v3.17-rc3, v3.17-rc2, v3.17-rc1, v3.16, v3.16-rc7, v3.16-rc6, v3.16-rc5, v3.16-rc4, v3.16-rc3 |
|
| #
b245a822 |
| 23-Jun-2014 |
Rasmus Villemoes <[email protected]> |
ALSA: seq: seq_memory.c: Fix closing brace followed by if
Add a newline and, while at it, remove a space and redundant braces.
Signed-off-by: Rasmus Villemoes <[email protected]> Signed-off-by: Takashi Iwai <[email protected]>
|