Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6 |
# f551103c | 15-Dec-2023 | Kent Overstreet <[email protected]>
sched.h: move pid helpers to pid.h
This is needed to kill the sched.h dependency on rcupdate.h, and pid.h is a better place for this code anyway.
Signed-off-by: Kent Overstreet <[email protected]>
Revision tags: v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6 |
# da27f796 | 27-Jan-2023 | Rik van Riel <[email protected]>
ipc,namespace: batch free ipc_namespace structures
Instead of waiting for an RCU grace period between each ipc_namespace structure that is being freed, wait an RCU grace period for every batch of ipc_namespace structures.
Thanks to Al Viro for the suggestion of the helper function.
This speeds up the run time of the test case that allocates ipc_namespaces in a loop from 6 minutes to a little over 1 second:

    real    0m1.192s
    user    0m0.038s
    sys     0m1.152s

Signed-off-by: Rik van Riel <[email protected]>
Reported-by: Chris Mason <[email protected]>
Tested-by: Giuseppe Scrivano <[email protected]>
Suggested-by: Al Viro <[email protected]>
Signed-off-by: Al Viro <[email protected]>
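A hedged sketch of the batching pattern the changelog describes is below; the helper and field names (free_ipc_list, mnt_llist, free_ipc_ns) are assumptions based on the text, not necessarily the literal ipc/namespace.c code.

    static LLIST_HEAD(free_ipc_list);

    static void free_ipc(struct work_struct *unused)
    {
            struct llist_node *node = llist_del_all(&free_ipc_list);
            struct ipc_namespace *ns, *n;

            /* one grace period for the whole batch, not one per namespace */
            synchronize_rcu();

            llist_for_each_entry_safe(ns, n, node, mnt_llist)
                    free_ipc_ns(ns);
    }
    static DECLARE_WORK(free_ipc_work, free_ipc);

    /* called when the last reference to an ipc_namespace is dropped */
    static void schedule_free_ipc(struct ipc_namespace *ns)
    {
            /* llist_add() returns true if the list was previously empty */
            if (llist_add(&ns->mnt_llist, &free_ipc_list))
                    schedule_work(&free_ipc_work);
    }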
Revision tags: v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6 |
# 72d1e611 | 13-Sep-2022 | Jiebin Sun <[email protected]>
ipc/msg: mitigate the lock contention with percpu counter
The msg_bytes and msg_hdrs atomic counters are frequently updated when an IPC msg queue is in heavy use, causing heavy cache bouncing and overhead. Changing them to percpu_counter greatly improves performance. Since there is one percpu struct per namespace, the additional memory cost is minimal. The count is only read in the msgctl call, which is infrequent, so the need to sum up the per-CPU counts is also infrequent.
Apply the patch and test with pts/stress-ng-1.4.0 -- system v message passing (160 threads).

Score gain: 3.99x

CPU: ICX 8380 x 2 sockets
Core number: 40 x 2 physical cores
Benchmark: pts/stress-ng-1.4.0 -- system v message passing (160 threads)
[[email protected]: coding-style cleanups]
[[email protected]: avoid negative value by overflow in msginfo]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: fix min() warnings]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Jiebin Sun <[email protected]>
Reviewed-by: Tim Chen <[email protected]>
Cc: Alexander Mikhalitsyn <[email protected]>
Cc: Alexey Gladkov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: "Eric W . Biederman" <[email protected]>
Cc: Manfred Spraul <[email protected]>
Cc: Shakeel Butt <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Vasily Averin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
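A minimal sketch of the counter change, assuming simplified helper names (msg_account_send, msg_total_hdrs) rather than the exact ipc/msg.c code:

    #include <linux/percpu_counter.h>

    static struct percpu_counter msg_bytes, msg_hdrs;

    /* hot path: account one queued message with a cheap per-CPU add */
    static void msg_account_send(size_t msgsz)
    {
            percpu_counter_add(&msg_bytes, msgsz);
            percpu_counter_inc(&msg_hdrs);
    }

    /* cold path: MSG_INFO in msgctl() sums all CPUs, clamped to INT_MAX */
    static int msg_total_hdrs(void)
    {
            return min_t(s64, percpu_counter_sum_positive(&msg_hdrs), INT_MAX);
    }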
Revision tags: v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1 |
# b869d5be | 01-Jul-2021 | Manfred Spraul <[email protected]>
ipc/util.c: use binary search for max_idx
If semctl(), msgctl() and shmctl() are called with IPC_INFO, SEM_INFO, MSG_INFO or SHM_INFO, then the return value is the highest used index in the kernel's internal array recording information about all SysV objects of the requested type for the current namespace. (This information can be used with repeated ..._STAT or ..._STAT_ANY operations to obtain information about all SysV objects on the system.)
There is a cache for this value. But when the cache needs to be updated, the highest used index is determined by looping over all possible values. With the introduction of IPCMNI_EXTEND_SHIFT, this could be a loop over 16 million entries. And due to /proc/sys/kernel/*next_id, the index values do not need to be consecutive.
With <write 16000000 to msg_next_id>, msgget(), msgctl(,IPC_RMID) in a loop, I have observed a performance increase of around factor 13000.
As there is no get_last() function for idr structures: Implement a "get_last()" using a binary search.
As far as I see, ipc is the only user that needs get_last(), thus implement it in ipc/util.c and not in a central location.
[[email protected]: tweak comment, fix typo]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Manfred Spraul <[email protected]>
Acked-by: Davidlohr Bueso <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
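The binary-search idea can be sketched as follows; this is an illustration of the approach (the function name is made up), not the literal ipc/util.c helper:

    /* Find the highest allocated index in an idr without scanning all
     * IPCMNI (possibly 16 million) slots.  Returns -1 if the idr is empty. */
    static int ipc_max_idx_sketch(struct idr *idr, int limit)
    {
            int lo = 0, hi = limit;

            if (idr_is_empty(idr))
                    return -1;

            while (lo < hi) {
                    int mid = lo + (hi - lo + 1) / 2;
                    int next = mid;

                    if (idr_get_next(idr, &next))
                            lo = next;      /* an entry exists at or above mid */
                    else
                            hi = mid - 1;   /* nothing allocated at or above mid */
            }
            return lo;
    }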
Revision tags: v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8 |
# fb377eb8 | 05-Sep-2019 | Arnd Bergmann <[email protected]>
ipc: fix sparc64 ipc() wrapper
Matt bisected a sparc64 specific issue with semctl, shmctl and msgctl to a commit from my y2038 series in linux-5.1, as I missed the custom sys_ipc() wrapper that sparc64 uses in place of the generic version that I patched.
The problem is that the sys_{sem,shm,msg}ctl() functions in the kernel now do not allow being called with the IPC_64 flag any more, resulting in a -EINVAL error when they don't recognize the command.
Instead, the correct way to do this now is to call the internal ksys_old_{sem,shm,msg}ctl() functions to select the API version.
As we generally move towards these functions anyway, change all of sparc_ipc() to consistently use those in place of the sys_*() versions, and move the required ksys_*() declarations into linux/syscalls.h
The IS_ENABLED(CONFIG_SYSVIPC) check is required to avoid link errors when ipc is disabled.
Reported-by: Matt Turner <[email protected]>
Fixes: 275f22148e87 ("ipc: rename old-style shmctl/semctl/msgctl syscalls")
Cc: [email protected]
Tested-by: Matt Turner <[email protected]>
Tested-by: Anatoly Pugachev <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
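A heavily simplified sketch of the resulting dispatch inside sparc_ipc(); argument marshalling and the remaining IPC calls are omitted, and the exact signatures should be treated as an approximation:

    switch (call) {
    case SEMCTL:
            /* ksys_old_semctl() still parses the IPC_64 flag itself */
            err = ksys_old_semctl(first, second, third, (unsigned long)ptr);
            break;
    case MSGCTL:
            err = ksys_old_msgctl(first, second, (struct msqid_ds __user *)ptr);
            break;
    case SHMCTL:
            err = ksys_old_shmctl(first, second, (struct shmid_ds __user *)ptr);
            break;
    default:
            err = -EINVAL;
            break;
    }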
Revision tags: v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3, v5.2-rc2, v5.2-rc1 |
# 99db46ea | 14-May-2019 | Manfred Spraul <[email protected]>
ipc: do cyclic id allocation for the ipc object.
For ipcmni_extend mode, the sequence number space is only 7 bits. So the chance of id reuse is relatively high compared with the non-extended mode.
To alleviate this id reuse problem, this patch enables cyclic allocation for the index to the radix tree (idx). The disadvantage is that this can cause a slight slow-down of the fast path, as the radix tree could be higher than necessary.
To limit the radix tree height, I have chosen the following limits:
1) The cycling is done over in_use*1.5.
2) At least, the cycling is done over:
   "normal" ipcmni mode: RADIX_TREE_MAP_SIZE elements
   "ipcmni_extended": 4096 elements
Result:
- for normal mode: No change for <= 42 active ipc elements. With more than 42 active ipc elements, a 2nd level would be added to the radix tree. Without cyclic allocation, a 2nd level would be added only with more than 63 active elements.
- for extended mode: Cycling always creates at least a 2-level radix tree. With more than 2730 active objects, a 3rd level would be added, instead of > 4095 active objects until the 3rd level is added without cyclic allocation.
For a 2-level radix tree compared to a 1-level radix tree, I have observed < 1% performance impact.
Notes: 1) Normal "x=semget();y=semget();" is unaffected: Then the idx is e.g. a and a+1, regardless if idr_alloc() or idr_alloc_cyclic() is used.
2) The -1% happens in a microbenchmark after this situation: x=semget(); for(i=0;i<4000;i++) {t=semget();semctl(t,0,IPC_RMID);} y=semget(); Now perform semget calls on x and y that do not sleep.
3) The worst-case reuse cycle time is unfortunately unaffected: If you have 2^24-1 ipc objects allocated, and get/remove the last possible element in a loop, then the id is reused after 128 get/remove pairs.
Performance check: A microbenchmark that performs no-op semop() randomly on two IDs, with only these two IDs allocated. The IDs were set using /proc/sys/kernel/sem_next_id. The test was run 5 times, averages are shown.
1 & 2:        Base (6.22 seconds for 10.000.000 semops)
1 & 40:       -0.2%
1 & 3348:     -0.8%
1 & 27348:    -1.6%
1 & 15777204: -3.2%
Or: ~12.6 cpu cycles per additional radix tree level. The cpu is an Intel I3-5010U. ~1300 cpu cycles/syscall is slower than what I remember (spectre impact?).
V2 of the patch:
- use "min" and "max"
- use RADIX_TREE_MAP_SIZE * RADIX_TREE_MAP_SIZE instead of (2<<12).
[[email protected]: fix max() warning]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Manfred Spraul <[email protected]>
Acked-by: Waiman Long <[email protected]>
Cc: "Luis R. Rodriguez" <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: "Eric W . Biederman" <[email protected]>
Cc: Takashi Iwai <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
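In code, the policy above comes down to something like the following sketch; the real logic lives in ipc_addid()/ipc_idr_alloc(), and ipc_cycle_limit, ipc_min_cycle and ipc_mni are used here as stand-ins for the limits described in the changelog:

    /* how far the cyclic allocator may roam before wrapping around */
    static int ipc_cycle_limit(struct ipc_ids *ids)
    {
            int max_idx = max(ids->in_use * 3 / 2, ipc_min_cycle);

            return min(max_idx, ipc_mni);           /* never beyond IPCMNI */
    }

    /* the index wanders forward through [0, limit) instead of always
     * reusing the lowest free slot, so a just-freed idx (and therefore a
     * just-freed user-space id) is not handed out again immediately */
    idx = idr_alloc_cyclic(&ids->ipcs_idr, new, 0, ipc_cycle_limit(ids),
                           GFP_NOWAIT);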
# 3278a2c2 | 14-May-2019 | Manfred Spraul <[email protected]>
ipc: conserve sequence numbers in ipcmni_extend mode
Rewrite, based on the patch from Waiman Long:
The mixing in of a sequence number into the IPC IDs is probably to avoid ID reuse in userspace as much as possible. With ipcmni_extend mode, the number of usable sequence numbers is greatly reduced leading to higher chance of ID reuse.
To address this issue, we need to conserve the sequence number space as much as possible. Right now, the sequence number is incremented for every new ID created. In reality, we only need to increment the sequence number when new allocated ID is not greater than the last one allocated. It is in such case that the new ID may collide with an existing one. This is being done irrespective of the ipcmni mode.
In order to avoid any races, the index is first allocated and then the pointer is replaced.
Changes compared to the initial patch:
- Handle failures from idr_alloc().
- Avoid that concurrent operations can see the wrong sequence number. (This is achieved by using idr_replace()).
- IPCMNI_SEQ_SHIFT is not a constant, thus renamed to ipcmni_seq_shift().
- IPCMNI_SEQ_MAX is not a constant, thus renamed to ipcmni_seq_max().
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Manfred Spraul <[email protected]>
Signed-off-by: Waiman Long <[email protected]>
Suggested-by: Matthew Wilcox <[email protected]>
Acked-by: Waiman Long <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: "Eric W . Biederman" <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: "Luis R. Rodriguez" <[email protected]>
Cc: Takashi Iwai <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
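The core of the change can be sketched like this; ids->last_idx is the bookkeeping field implied by the description, and ipcmni_seq_shift()/ipcmni_seq_max() are the helpers named above (an illustration, not the verbatim ipc/util.c code):

    /* bump the sequence number only when the idr hands back an index that
     * is not above the previous one, i.e. when a slot is being reused and
     * a stale user-space id could otherwise come back to life */
    if (idx <= ids->last_idx) {
            ids->seq++;
            if (ids->seq >= ipcmni_seq_max())
                    ids->seq = 0;
    }
    ids->last_idx = idx;

    new->seq = ids->seq;
    new->id  = (new->seq << ipcmni_seq_shift()) + idx;  /* user-visible id */

    /* the index was reserved first with a placeholder entry; publish the
     * fully initialised object only now */
    idr_replace(&ids->ipcs_idr, new, idx);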
# 5ac893b8 | 14-May-2019 | Waiman Long <[email protected]>
ipc: allow boot time extension of IPCMNI from 32k to 16M
The maximum number of unique System V IPC identifiers was limited to 32k. That limit should be big enough for most use cases.
However, there are some users out there requesting more, especially those migrating from Solaris, which uses 24 bits for unique identifiers. To satisfy the need of those users, a new boot-time kernel option "ipcmni_extend" is added to extend the IPCMNI value to 16M. This is a 512X increase, which should be big enough for users out there that need a large number of unique IPC identifiers.
The use of this new option will change the pattern of the IPC identifiers returned by functions like shmget(2). An application that depends on such pattern may not work properly. So it should only be used if the users really need more than 32k of unique IPC numbers.
This new option does have the side effect of reducing the maximum number of unique sequence numbers from 64k down to 128. So it is a trade-off.
The computation of a new IPC id is not done in the performance critical path. So a little bit of additional overhead shouldn't have any real performance impact.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Waiman Long <[email protected]>
Acked-by: Manfred Spraul <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: "Eric W . Biederman" <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: "Luis R. Rodriguez" <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Takashi Iwai <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
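A minimal sketch of how such a boot-time switch is typically wired up; the constants follow the shift values implied by the changelog (15 bits vs 24 bits), and the real patch additionally adjusts the sequence-number shift discussed in the commits above:

    #define IPCMNI_SHIFT            15              /* 32k ids (default)  */
    #define IPCMNI_EXTEND_SHIFT     24              /* 16M ids (extended) */

    int ipc_mni       = (1 << IPCMNI_SHIFT);
    int ipc_mni_shift = IPCMNI_SHIFT;

    static int __init setup_ipcmni_extend(char *unused)
    {
            ipc_mni       = (1 << IPCMNI_EXTEND_SHIFT);
            ipc_mni_shift = IPCMNI_EXTEND_SHIFT;
            return 0;
    }
    early_param("ipcmni_extend", setup_ipcmni_extend);

Booting with ipcmni_extend on the kernel command line then raises the limit for that boot only; leaving it off keeps the historical 32k behaviour and id pattern.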
Revision tags: v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1 |
# 275f2214 | 31-Dec-2018 | Arnd Bergmann <[email protected]>
ipc: rename old-style shmctl/semctl/msgctl syscalls
The behavior of these system calls is slightly different between architectures, as determined by the CONFIG_ARCH_WANT_IPC_PARSE_VERSION symbol. Most architectures that implement the split IPC syscalls don't set that symbol and only get the modern version, but alpha, arm, microblaze, mips-n32, mips-n64 and xtensa expect the caller to pass the IPC_64 flag.
For the architectures that so far only implement sys_ipc(), i.e. m68k, mips-o32, powerpc, s390, sh, sparc, and x86-32, we want the new behavior when adding the split syscalls, so we need to distinguish between the two groups of architectures.
The method I picked for this distinction is to have a separate system call entry point: sys_old_*ctl() now uses ipc_parse_version, while sys_*ctl() does not. The system call tables of the five architectures are changed accordingly.
As an additional benefit, we no longer need the configuration specific definition for ipc_parse_version(), it always does the same thing now, but simply won't get called on architectures with the modern interface.
A small downside is that on architectures that do set ARCH_WANT_IPC_PARSE_VERSION, we now have an extra set of entry points that are never called. They only add a few bytes of bloat, so it seems better to keep them compared to adding yet another Kconfig symbol. I considered adding new syscall numbers for the IPC_64 variants for consistency, but decided against that for now.
Signed-off-by: Arnd Bergmann <[email protected]>
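Roughly, the split looks like this for msgctl; msgctl_down_sketch() is a hypothetical stand-in for the shared implementation, while ipc_parse_version() is the existing helper that strips IPC_64 out of cmd:

    /* modern entry point: the IPC_64 layout is implied, no flag parsing */
    SYSCALL_DEFINE3(msgctl, int, msqid, int, cmd, struct msqid_ds __user *, buf)
    {
            return msgctl_down_sketch(msqid, cmd, buf, IPC_64);
    }

    /* old-style entry point, wired up only for alpha, arm, microblaze,
     * mips-n32/n64 and xtensa, which pass IPC_64 in cmd */
    SYSCALL_DEFINE3(old_msgctl, int, msqid, int, cmd, struct msqid_ds __user *, buf)
    {
            int version = ipc_parse_version(&cmd);

            return msgctl_down_sketch(msqid, cmd, buf, version);
    }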
Revision tags: v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1 |
# 8c81ddd2 | 30-Oct-2018 | Waiman Long <[email protected]>
ipc: IPCMNI limit check for semmni
For SysV semaphores, the semmni value is the last part of the 4-element sem number array. To make semmni behave in a similar way to msgmni and shmmni, we can't directly use the _minmax handler. Instead, a special sem specific handler is added to check the last argument to make sure that it is limited to the [0, IPCMNI] range. An error will be returned if this is not the case.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Waiman Long <[email protected]>
Reviewed-by: Davidlohr Bueso <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Luis R. Rodriguez <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Takashi Iwai <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
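A hedged sketch of such a handler, close in spirit to the sem-specific handler described above but simplified, and using the void __user * buffer signature of that era:

    static int sem_sysctl_sketch(struct ctl_table *table, int write,
                                 void __user *buffer, size_t *lenp, loff_t *ppos)
    {
            struct ipc_namespace *ns = current->nsproxy->ipc_ns;
            int ret, old_semmni = ns->sem_ctls[3];

            ret = proc_dointvec(table, write, buffer, lenp, ppos);

            /* only the 4th element (semmni) gets the [0, IPCMNI] check */
            if (!ret && write &&
                (ns->sem_ctls[3] < 0 || ns->sem_ctls[3] > IPCMNI)) {
                    ns->sem_ctls[3] = old_semmni;   /* roll the write back */
                    ret = -ERANGE;
            }
            return ret;
    }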
Revision tags: v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5 |
# 9afc5eee | 13-Jul-2018 | Arnd Bergmann <[email protected]>
y2038: globally rename compat_time to old_time32
Christoph Hellwig suggested a slightly different path for handling backwards compatibility with the 32-bit time_t based system calls:
Rather than simply reusing the compat_sys_* entry points on 32-bit architectures unchanged, we get rid of those entry points and the compat_time types by renaming them to something that makes more sense on 32-bit architectures (which don't have a compat mode otherwise), and then share the entry points under the new name with the 64-bit architectures that use them for implementing the compatibility.
The following types and interfaces are renamed here, and moved from linux/compat_time.h to linux/time32.h:
old                          new
---                          ---
compat_time_t                old_time32_t
struct compat_timeval        struct old_timeval32
struct compat_timespec       struct old_timespec32
struct compat_itimerspec     struct old_itimerspec32
ns_to_compat_timeval()       ns_to_old_timeval32()
get_compat_itimerspec64()    get_old_itimerspec32()
put_compat_itimerspec64()    put_old_itimerspec32()
compat_get_timespec64()      get_old_timespec32()
compat_put_timespec64()      put_old_timespec32()
As we already have aliases in place, this patch addresses only the instances that are relevant to the system call interface in particular, not those that occur in device drivers and other modules. Those will get handled separately, while providing the 64-bit version of the respective interfaces.
I'm not renaming the timex, rusage and itimerval structures, as we are still debating what the new interface will look like, and whether we will need a replacement at all.
This also doesn't change the names of the syscall entry points, which can be done more easily when we actually switch over the 32-bit architectures to use them, at that point we need to change COMPAT_SYSCALL_DEFINEx to SYSCALL_DEFINEx with a new name, e.g. with a _time32 suffix.
Suggested-by: Christoph Hellwig <[email protected]>
Link: https://lore.kernel.org/lkml/[email protected]/
Signed-off-by: Arnd Bergmann <[email protected]>
# 2a9d6481 | 22-Aug-2018 | Manfred Spraul <[email protected]>
ipc/util.c: update return value of ipc_getref from int to bool
ipc_getref still has a return value of type "int", matching the atomic_t interface of atomic_inc_not_zero()/atomic_add_unless().
ipc_getref now uses refcount_inc_not_zero, which has a return value of type "bool".
Therefore, update the return code to avoid implicit conversions.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Manfred Spraul <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
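The resulting helper is tiny; a sketch of its shape after the change (shown under the name ipc_rcu_getref, which is how ipc/util.c spells the helper abbreviated as "ipc_getref" in the changelog):

    /* refcount_inc_not_zero() already returns bool, so the wrapper's return
     * type now matches instead of being implicitly converted to int */
    bool ipc_rcu_getref(struct kern_ipc_perm *ptr)
    {
            return refcount_inc_not_zero(&ptr->refcount);
    }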
# 27c331a1 | 22-Aug-2018 | Manfred Spraul <[email protected]>
ipc/util.c: further variable name cleanups
The variable names have become a mess, thus standardize them again:
id:  user space id. Called semid, shmid, msgid if the type is known. Most functions use "id" already.
idx: "index" for the idr lookup. Right now, some functions use lid, ipc_addid() already uses idx as the variable name.
seq: sequence number, to avoid quick collisions of the user space id.
key: user space key, used for the rhash tree.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Manfred Spraul <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
# eae04d25 | 22-Aug-2018 | Davidlohr Bueso <[email protected]>
ipc: simplify ipc initialization
Now that we know that rhashtable_init() will not fail, we can get rid of a lot of the unnecessary cleanup paths when the call errored out.
[[email protected]: variable name added to util.h to resolve checkpatch warning]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Davidlohr Bueso <[email protected]>
Signed-off-by: Manfred Spraul <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
# 82061c57 | 22-Aug-2018 | Davidlohr Bueso <[email protected]>
ipc: drop ipc_lock()
ipc/util.c contains multiple functions to get the ipc object pointer given an id number.
There are two sets of functions: one set verifies the sequence counter part of the id number, the other functions do not check the sequence counter.
The standard for function names in ipc/util.c is:
- ..._check() functions verify the sequence counter
- ..._idr() functions do not verify the sequence counter
ipc_lock() is an exception: It does not verify the sequence counter value, but this is not obvious from the function name.
Furthermore, shm.c is the only user of this helper. Thus, we can simply move the logic into shm_lock() and get rid of the function altogether.
[[email protected]: most of changelog]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Davidlohr Bueso <[email protected]>
Signed-off-by: Manfred Spraul <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
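A sketch of what folding the lookup into shm_lock() looks like; this is simplified, and the real shm.c version keeps additional handling for a racing IPC_RMID:

    static inline struct shmid_kernel *shm_lock_sketch(struct ipc_namespace *ns,
                                                       int id)
    {
            struct kern_ipc_perm *ipcp;

            rcu_read_lock();
            /* _idr lookup: the sequence counter is NOT checked here */
            ipcp = ipc_obtain_object_idr(&shm_ids(ns), id);
            if (IS_ERR(ipcp))
                    goto err;

            ipc_lock_object(ipcp);
            /* recheck under the lock: the slot may have been freed and reused */
            if (ipc_valid_object(ipcp))
                    return container_of(ipcp, struct shmid_kernel, shm_perm);

            ipc_unlock_object(ipcp);
            ipcp = ERR_PTR(-EIDRM);
    err:
            rcu_read_unlock();
            return ERR_CAST(ipcp);
    }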
# 4241c1a3 | 22-Aug-2018 | Manfred Spraul <[email protected]>
ipc: rename ipcctl_pre_down_nolock()
Both the comment and the name of ipcctl_pre_down_nolock() are misleading: the function must be called while holding the rw semaphore.
Therefore the patch renames the function to ipcctl_obtain_check(): This name matches the other names used in util.c:
- "obtain" function look up a pointer in the idr, without acquiring the object lock. - The caller is responsible for locking. - _check means that the sequence number is checked.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Manfred Spraul <[email protected]>
Reviewed-by: Davidlohr Bueso <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Revision tags: v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1, v4.17, v4.17-rc7, v4.17-rc6, v4.17-rc5, v4.17-rc4, v4.17-rc3, v4.17-rc2, v4.17-rc1 |
# b0d17578 | 13-Apr-2018 | Arnd Bergmann <[email protected]>
y2038: ipc: Enable COMPAT_32BIT_TIME
Three ipc syscalls (mq_timedsend, mq_timedreceive and semtimedop) take a timespec argument. After we move 32-bit architectures over to using 64-bit time_t based syscalls, we need separate entry points for the old 32-bit based interfaces.
This changes the #ifdef guards for the existing 32-bit compat syscalls to check for CONFIG_COMPAT_32BIT_TIME instead, which will then be enabled on all existing 32-bit architectures.
Signed-off-by: Arnd Bergmann <[email protected]>
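The mechanical part of the change is just the preprocessor guard around the 32-bit-time entry points, shown here diff-style for semtimedop as a sketch:

    -#ifdef CONFIG_COMPAT
    +#ifdef CONFIG_COMPAT_32BIT_TIME
     COMPAT_SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsems,
                            unsigned int, nsops,
                            const struct compat_timespec __user *, timeout)
     {
             return compat_ksys_semtimedop(semid, tsems, nsops, timeout);
     }
     #endif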
# 21fc538d | 13-Apr-2018 | Arnd Bergmann <[email protected]>
y2038: ipc: Use __kernel_timespec
This is a preparation for changing over __kernel_timespec to 64-bit times, which involves assigning new system call numbers for mq_timedsend(), mq_timedreceive() and semtimedop() for compatibility with future y2038-proof user space.
The existing ABIs will remain available through compat code.
Signed-off-by: Arnd Bergmann <[email protected]>
Revision tags: v4.16, v4.16-rc7 |
# 31c213f2 | 20-Mar-2018 | Dominik Brodowski <[email protected]>
ipc: add msgsnd syscall/compat_syscall wrappers
Provide ksys_msgsnd() and compat_ksys_msgsnd() wrappers to avoid in-kernel calls to these syscalls. The ksys_ prefix denotes that these functions are meant as a drop-in replacement for the syscalls. In particular, they use the same calling convention as sys_msgsnd() and compat_sys_msgsnd().
This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/[email protected]
Cc: Al Viro <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Dominik Brodowski <[email protected]>
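The pattern is the same for every wrapper in this series (see the msgrcv, msgctl, shmctl, shmdt, shmget and msgget commits below); sketched here for msgsnd, with the body simplified:

    /* in-kernel callers (e.g. the sys_ipc() multiplexer) use this directly */
    long ksys_msgsnd(int msqid, struct msgbuf __user *msgp, size_t msgsz,
                     int msgflg)
    {
            long mtype;

            if (get_user(mtype, &msgp->mtype))
                    return -EFAULT;
            return do_msgsnd(msqid, mtype, msgp->mtext, msgsz, msgflg);
    }

    /* the syscall itself becomes a thin wrapper with the same convention */
    SYSCALL_DEFINE4(msgsnd, int, msqid, struct msgbuf __user *, msgp,
                    size_t, msgsz, int, msgflg)
    {
            return ksys_msgsnd(msqid, msgp, msgsz, msgflg);
    }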
# 078faac9 | 20-Mar-2018 | Dominik Brodowski <[email protected]>
ipc: add msgrcv syscall/compat_syscall wrappers
Provide ksys_msgrcv() and compat_ksys_msgrcv() wrappers to avoid in-kernel calls to these syscalls. The ksys_ prefix denotes that these functions are meant as a drop-in replacement for the syscalls. In particular, they use the same calling convention as sys_msgrcv() and compat_sys_msgrcv().
This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/[email protected]
Cc: Al Viro <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Dominik Brodowski <[email protected]>
# e340db56 | 20-Mar-2018 | Dominik Brodowski <[email protected]>
ipc: add msgctl syscall/compat_syscall wrappers
Provide ksys_msgctl() and compat_ksys_msgctl() wrappers to avoid in-kernel calls to these syscalls. The ksys_ prefix denotes that these functions are meant as a drop-in replacement for the syscalls. In particular, they use the same calling convention as sys_msgctl() and compat_sys_msgctl().
This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/[email protected]
Cc: Al Viro <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Dominik Brodowski <[email protected]>
# c84d0791 | 20-Mar-2018 | Dominik Brodowski <[email protected]>
ipc: add shmctl syscall/compat_syscall wrappers
Provide ksys_shmctl() and compat_ksys_shmctl() wrappers to avoid in-kernel calls to these syscalls. The ksys_ prefix denotes that these functions are meant as a drop-in replacement for the syscalls. In particular, they use the same calling convention as sys_shmctl() and compat_sys_shmctl().
This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/[email protected]
Cc: Al Viro <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Dominik Brodowski <[email protected]>
# da1e2744 | 20-Mar-2018 | Dominik Brodowski <[email protected]>
ipc: add shmdt syscall wrapper
Provide ksys_shmdt() wrapper to avoid in-kernel calls to this syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_shmdt().
This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/[email protected]
Cc: Al Viro <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Dominik Brodowski <[email protected]>
# 65749e0b | 20-Mar-2018 | Dominik Brodowski <[email protected]>
ipc: add shmget syscall wrapper
Provide ksys_shmget() wrapper to avoid in-kernel calls to this syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_shmget().
This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/[email protected]
Cc: Al Viro <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Dominik Brodowski <[email protected]>
# 3d65661a | 20-Mar-2018 | Dominik Brodowski <[email protected]>
ipc: add msgget syscall wrapper
Provide ksys_msgget() wrapper to avoid in-kernel calls to this syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_msgget().
This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/[email protected]
Cc: Al Viro <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Dominik Brodowski <[email protected]>