Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7 |

# d2444986 | 07-Sep-2024 | Helge Deller <[email protected]>
parisc: Fix 64-bit userspace syscall path
Currently glibc isn't yet ported to 64-bit hppa, so there is no usable 64-bit userspace available. But it's possible to manually build a static 64-bit binary and run it for testing. One such 64-bit test program is available at http://ftp.parisc-linux.org/src/64bit.tar.gz, and it shows various issues with the existing 64-bit syscall path in the kernel. This patch fixes those issues.
Signed-off-by: Helge Deller <[email protected]> Cc: [email protected] # v4.19+
Revision tags: v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6 |

# a0f4b787 | 09-Aug-2023 | Helge Deller <[email protected]>
parisc: Fix lightweight spinlock checks to not break futexes
The lightweight spinlock checks verify that a spinlock either has the value 0 (spinlock locked) or that no bits other than those in __ARCH_SPIN_LOCK_UNLOCKED_VAL are set.
This breaks the current LWS code, which writes the address of the lock into the lock word to unlock it, which was an optimization to save one assembler instruction.
Fix it by making spinlock_types.h accessible to asm code, changing the LWS spinlock-unlocking code to write __ARCH_SPIN_LOCK_UNLOCKED_VAL into the lock word, and adding some missing lightweight spinlock checks to the LWS path. Finally, make the spinlock checks dependent on DEBUG_KERNEL.
Noticed-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]> Tested-by: John David Anglin <[email protected]> Cc: [email protected] # v6.4+ Fixes: 15e64ef6520e ("parisc: Add lightweight spinlock checks")
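
For readers following the check in C rather than assembly, it amounts to roughly the sketch below. The constant value and the helper name are illustrative assumptions, not the actual parisc definitions.

    #include <linux/bug.h>
    #include <linux/types.h>

    /* Placeholder: on parisc the "unlocked" lock word is a small magic
     * constant rather than plain 1, so a corrupted lock word can be
     * detected.  The value used here is only an example. */
    #define __ARCH_SPIN_LOCK_UNLOCKED_VAL   0x1a46

    /* Hypothetical helper: a lock word is sane if it is 0 (locked) or if
     * it contains no bits outside the unlocked magic value.  The old LWS
     * unlock path stored the *address* of the lock into the word, which
     * is why it tripped these checks. */
    static inline void lws_check_lock_word(u32 lock_word)
    {
            if (IS_ENABLED(CONFIG_DEBUG_KERNEL))
                    WARN_ON(lock_word != 0 &&
                            (lock_word & ~__ARCH_SPIN_LOCK_UNLOCKED_VAL));
    }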
Revision tags: v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16 |

# 72c3dd82 | 04-Jan-2022 | John David Anglin <[email protected]>
parisc: Add lws_atomic_xchg and lws_atomic_store syscalls
This patch adds two new LWS routines - lws_atomic_xchg and lws_atomic_store.
These are simpler than the CAS routines. Currently, we use the CAS routines for atomic stores. This is inefficient since it requires both winning the spinlock and a successful CAS operation.
Change has been tested on c8000 and rp3440.
In v2, I moved the code to disable/enable page faults inside the spinlocks.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
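
Expressed as C pseudocode, the kernel side of the new exchange routine does roughly the following; the real routine is hand-written assembly in syscall.S, and the lws_lock_for() helper plus the exact error code are assumptions made for illustration.

    #include <linux/uaccess.h>
    #include <linux/spinlock.h>

    static int lws_atomic_xchg32(u32 __user *uaddr, u32 newval, u32 *oldval)
    {
            arch_spinlock_t *lock = lws_lock_for(uaddr);    /* hypothetical hash lookup */
            int ret = 0;

            arch_spin_lock(lock);
            pagefault_disable();            /* no sleeping inside the critical region */
            if (get_user(*oldval, uaddr) || put_user(newval, uaddr))
                    ret = -EAGAIN;          /* fault in the critical region: caller retries */
            pagefault_enable();
            arch_spin_unlock(lock);
            return ret;
    }

Only one trip through the per-address lock is needed, which is why this is cheaper than emulating a store with the CAS routine (win the lock, then also succeed in the compare step).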

# d0585d74 | 04-Jan-2022 | John David Anglin <[email protected]>
parisc: Rewrite light-weight syscall and futex code
The parisc architecture lacks general hardware support for compare and swap. Particularly for userspace, it is difficult to implement software atomic support. Page faults in critical regions can cause processes to sleep and block the forward progress of other processes. Thus, it is essential that page faults be disabled in critical regions. For performance reasons, we also need to disable external interrupts in critical regions.
In order to do this, we need a mechanism to trigger COW breaks outside the critical region. Fortunately, parisc has the "stbys,e" instruction. When the leftmost byte of a word is addressed, this instruction triggers all the exceptions of a normal store but it does not write to memory. Thus, we can use it to trigger COW breaks outside the critical region without modifying the data that is to be updated atomically.
COW breaks occur randomly. So even if we have previously executed a "stbys,e" instruction, we still need to disable page faults around the critical region. If a fault occurs in the critical region, we return -EAGAIN. I had to add a wrapper around _arch_futex_atomic_op_inuser() as I found in testing that returning -EAGAIN caused problems for some processes even though it is listed as a possible return value.
The patch implements the above. The code no longer attempts to sleep with interrupts disabled and I haven't seen any stalls with the change.
I have attempted to merge common code and streamline the fast path. In the futex code, we only compute the spinlock address once.
I eliminated some debug code in the original CAS routine that just made the flow more complicated.
I don't clip the arguments when called from wide mode. As a result, the LWS routines should work when called from 64-bit processes.
I defined TASK_PAGEFAULT_DISABLED offset for use in the lws_pagefault_disable and lws_pagefault_enable macros.
Since we now disable interrupts on the gateway page where necessary, it might be possible to allow processes to be scheduled when they are on the gateway page.
Change has been tested on c8000 and rp3440. It improves glibc build and test time by about 10%.
In v2, I removed the lws_atomic_xchg and lws_atomic_store calls. I also removed the bug fixes that were not directly related to this patch.
In v3, I removed the code to force interruptions from arch_futex_atomic_op_inuser(). It is always called with page faults disabled, so this code had no effect.
In v4, I fixed a typo in the depi_safe line.
In v5, I moved the code to disable/enable page faults inside the spinlocks.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
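
The wrapper mentioned for _arch_futex_atomic_op_inuser() can be pictured like this; it is a sketch only, and the exact names and signature in arch/parisc/include/asm/futex.h may differ.

    static inline int
    arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
    {
            int ret;

            /* -EAGAIN from the inner helper means a fault interrupted the
             * critical region; retrying here hides that transient result
             * from callers that do not cope well with it. */
            do {
                    ret = _arch_futex_atomic_op_inuser(op, oparg, oval, uaddr);
            } while (ret == -EAGAIN);

            return ret;
    }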
Revision tags: v5.16-rc8, v5.16-rc7 |

# 8f66fce0 | 21-Dec-2021 | John David Anglin <[email protected]>
parisc: Correct completer in lws start
The completer in the "or,ev %r1,%r30,%r30" instruction is reversed, so we are not clipping the LWS number when we are called from a 32-bit process (W=0). We need to nullify the following depdi instruction when the least-significant bit of %r30 is 1.
If the %r20 register is not clipped, a user process could perform a LWS call that would branch to an undefined location in the kernel and potentially crash the machine.
Signed-off-by: John David Anglin <[email protected]> Cc: [email protected] # 4.19+ Signed-off-by: Helge Deller <[email protected]>
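
The safety requirement is easier to see in C: the LWS number coming from userspace must be clipped for narrow (32-bit) callers and bounds-checked before it selects a branch target. The sketch below is illustrative only; the real code does this with the depdi instruction and nullification on the gateway page.

    #include <linux/types.h>

    static void *lws_dispatch(unsigned long lws_num, void *lws_table[],
                              unsigned long nr_entries, bool wide)
    {
            if (!wide)
                    lws_num = (u32)lws_num; /* narrow caller: discard upper 32 bits */
            if (lws_num >= nr_entries)
                    return NULL;            /* reject out-of-range LWS numbers */
            return lws_table[lws_num];      /* safe branch target */
    }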
Revision tags: v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2 |

# df2ffeda | 19-Nov-2021 | John David Anglin <[email protected]>
parisc: Fix extraction of hash lock bits in syscall.S
The extru instruction leaves the most significant 32 bits of the target register in an undefined state on PA 2.0 systems. If any of these bits are nonzero, this will break the calculation of the lock pointer.
Fix this by using the extrd,u instruction via the extru_safe macro on 64-bit kernels.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
Revision tags: v5.16-rc1 |

# 7e992711 | 03-Nov-2021 | Dave Anglin <[email protected]>
parisc: Don't disable interrupts in cmpxchg and futex operations
I no longer think interrupts can be disabled in the futex and cmpxchg operations because of COW breaks. This is not ideal, but I suspect it's the best we can do.
For the cmpxchg operations in syscall.S, we rely on the code to not schedule off the gateway page. For the futex, I added code to disable preemption.
So far, I haven't seen the warnings with the attached change but the change is only lightly tested.
Signed-off-by: Dave Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
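
For the futex side, "code to disable preemption" boils down to bracketing the hash-lock helpers roughly as below. The helper names are made up for illustration; treat this as a sketch rather than the exact contents of arch/parisc/include/asm/futex.h.

    #include <linux/preempt.h>
    #include <linux/spinlock.h>

    static inline void _futex_spin_lock(arch_spinlock_t *lock)
    {
            preempt_disable();      /* don't schedule away while holding the lock */
            arch_spin_lock(lock);   /* interrupts stay enabled */
    }

    static inline void _futex_spin_unlock(arch_spinlock_t *lock)
    {
            arch_spin_unlock(lock);
            preempt_enable();
    }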
Revision tags: v5.15, v5.15-rc7, v5.15-rc6 |

# fdc9e4e0 | 17-Oct-2021 | Helge Deller <[email protected]>
parisc: Use PRIV_USER in syscall.S
Signed-off-by: Helge Deller <[email protected]>

# 2214c0e7 | 15-Oct-2021 | Helge Deller <[email protected]>
parisc: Move thread_info into task struct
This implements the CONFIG_THREAD_INFO_IN_TASK option.
With this change:
- before, thread_info was part of the stack and located at the beginning of the stack
- now the thread_info struct is moved and located inside the task_struct structure
- the stack is allocated and handled like on other major platforms
- drop the cpu field of thread_info and use the one in task_struct instead
Signed-off-by: Helge Deller <[email protected]> Signed-off-by: Sven Schnelle <[email protected]>
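
For context, CONFIG_THREAD_INFO_IN_TASK means roughly the following layout change (a simplified sketch, not the actual parisc structure definitions):

    /* Before: thread_info sat at the base of the kernel stack, so the
     * stack pointer was needed to find it. */
    struct thread_info {
            unsigned long flags;            /* low-level flags */
            /* the cpu field that used to live here is dropped */
    };

    /* After: thread_info is embedded as the first member of task_struct,
     * the stack is allocated separately like on other architectures, and
     * the cpu number comes from task_struct itself. */
    struct task_struct {
            struct thread_info thread_info; /* must remain the first member */
            unsigned int cpu;
            /* ... */
    };

    #define task_thread_info(tsk)   (&(tsk)->thread_info)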
Revision tags: v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2 |

# df86ddbb | 01-Mar-2021 | Masahiro Yamada <[email protected]>
parisc: syscalls: switch to generic syscalltbl.sh
Many architectures duplicate similar shell scripts.
This commit converts parisc to use scripts/syscalltbl.sh. This also unifies syscall_table_64.h and syscall_table_c32.h.
Signed-off-by: Masahiro Yamada <[email protected]> Acked-by: Helge Deller <[email protected]> Signed-off-by: Helge Deller <[email protected]>
Revision tags: v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8 |

# 53a42b63 | 02-Oct-2020 | John David Anglin <[email protected]>
parisc: Switch to more fine grained lws locks
Increase the number of lws locks to 256 entries (instead of 16) and choose the lock entry based on bits 3-11 (instead of 4-7) of the relevant address. With this change we achieve more fine-grained locking in futex syscalls and thus reduce the number of possible stalls.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
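
In C terms the new selection is simply a wider hash of the futex address into a bigger lock array. The sketch keeps the idea but not the exact bit numbering used by the assembly.

    #define LWS_LOCK_COUNT  256     /* previously 16 */

    /* Illustrative hash: use more address bits so that unrelated futexes
     * are far less likely to contend on the same lock. */
    static inline unsigned int lws_lock_index(unsigned long uaddr)
    {
            return (uaddr >> 4) & (LWS_LOCK_COUNT - 1);
    }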
Revision tags: v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8 |

# 157e9afc | 28-Jul-2020 | Helge Deller <[email protected]>
Revert "parisc: Revert "Release spinlocks using ordered store""
This reverts commit 86d4d068df573a8c2105554624796c086d6bec3d.
Signed-off-by: Helge Deller <[email protected]> Cc: <[email protected]> # v5.0+

# 6e9f06ee | 28-Jul-2020 | Helge Deller <[email protected]>
Revert "parisc: Use ldcw instruction for SMP spinlock release barrier"
This reverts commit 9e5c602186a692a7e848c0da17aed40f49d30519. There is no need to use the ldcw instruction as the SMP spinlock release barrier. Revert it to gain back speed.
Signed-off-by: Helge Deller <[email protected]> Cc: <[email protected]> # v5.2+

# 462fb756 | 28-Jul-2020 | Helge Deller <[email protected]>
Revert "parisc: Drop LDCW barrier in CAS code when running UP"
This reverts commit e6eb5fe9123f05dcbf339ae5c0b6d32fcc0685d5. We need to optimize it differently. A follow-up patch will correct it.
Signed-off-by: Helge Deller <[email protected]> Cc: <[email protected]> # v5.2+
Revision tags: v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1 |

# 106c9092 | 02-Jan-2019 | Firoz Khan <[email protected]>
parisc: remove nargs from __SYSCALL
The __SYSCALL macro's arguments are system call number, system call entry name and number of arguments for the system call.
The nargs argument in __SYSCALL(nr, entry, nargs) is neither calculated nor used anywhere, so it is better to keep the implementation as __SYSCALL(nr, entry). This also unifies the implementation with some other architectures.
Signed-off-by: Firoz Khan <[email protected]> Signed-off-by: Helge Deller <[email protected]>
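
The change is easiest to see on the macro itself. The expansion shown below is a made-up illustration, not the literal parisc definition.

    /* Old form: nargs was passed everywhere but never evaluated. */
    #define __SYSCALL(nr, entry, nargs)     [nr] = (void *)entry,
    /* e.g.  __SYSCALL(42, sys_example, 3)  -- hypothetical entry */

    #undef __SYSCALL

    /* New form: only the syscall number and the entry point remain,
     * matching what several other architectures already do. */
    #define __SYSCALL(nr, entry)            [nr] = (void *)entry,
    /* e.g.  __SYSCALL(42, sys_example) */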

# e6eb5fe9 | 07-May-2019 | Helge Deller <[email protected]>
parisc: Drop LDCW barrier in CAS code when running UP
When running an SMP kernel on a single-CPU machine, we can speed up the CAS code by replacing the LDCW sync barrier with NOP.
Signed-off-by: Helge Deller <[email protected]>

# 1829dda0 | 05-May-2019 | Helge Deller <[email protected]>
parisc: Rename LEVEL to PA_ASM_LEVEL to avoid name clash with DRBD code
LEVEL is a very common word, and now after many years it suddenly clashed with another LEVEL define in the DRBD code. Rename it to PA_ASM_LEVEL instead.
Reported-by: kbuild test robot <[email protected]> Signed-off-by: Helge Deller <[email protected]> Cc: <[email protected]>

# 9e5c6021 | 14-Apr-2019 | John David Anglin <[email protected]>
parisc: Use ldcw instruction for SMP spinlock release barrier
There are only a couple of instructions that can function as a memory barrier on parisc. Currently, we use the sync instruction as a memory barrier when releasing a spinlock. However, the ldcw instruction is a better barrier when we have a handy memory location since it operates in the cache on coherent machines.
This patch updates the spinlock release code to use ldcw. I also changed the "stw,ma" instructions to plain "stw" instructions, since "stw,ma" is not an adequate barrier.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
Revision tags: v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4 |

# 575afc4d | 19-Nov-2018 | Firoz Khan <[email protected]>
parisc: generate uapi header and system call table files
The system call table generation script must be run to generate the unistd_32/64.h and syscall_table_32/64/c32.h files. This patch contains the changes which invoke the script.
This patch generates the unistd_32/64.h and syscall_table_32/64/c32.h files via the syscall table generation script invoked by parisc/Makefile, and the generated files must be identical to the removed files.
The generated uapi header file will be included in uapi/asm/unistd.h and the generated system call table header file will be included by the kernel/syscall.S file.
Signed-off-by: Firoz Khan <[email protected]> Acked-by: Helge Deller <[email protected]> Signed-off-by: Helge Deller <[email protected]>
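
The input to the generation script is a plain-text table with one syscall per row. The layout sketched below follows the usual <number> <abi> <name> <entry point> [<compat entry point>] format; the example row is hypothetical, not copied from the parisc file.

    # syscall.tbl: number  abi     name            entry point         compat entry point
    350                    common  example_call    sys_example_call    compat_sys_example_call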
Revision tags: v4.20-rc3, v4.20-rc2 |

# 86d4d068 | 06-Nov-2018 | John David Anglin <[email protected]>
parisc: Revert "Release spinlocks using ordered store"
This reverts commit d27dfa13b9f77ae7e6ed09d70a0426ed26c1a8f9.
Unfortunately, this patch needs to be reverted. We need the full sync barrier and not the limited barrier provided by using an ordered store. The sync ensures that all accesses and cache purge instructions that follow the sync are performed after all such instructions prior to the sync instruction have completed executing.
The patch breaks the rwlock implementation in glibc. This caused the test-lock application in the libprelude testsuite to hang. With the change reverted, the test runs correctly and the libprelude package builds successfully.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
Revision tags: v4.20-rc1, v4.19 |

# d27dfa13 | 17-Oct-2018 | John David Anglin <[email protected]>
parisc: Release spinlocks using ordered store
This patch updates the spin unlock code to use an ordered store with release semantics. All prior accesses are guaranteed to be performed before an ordered store is performed.
Using an ordered store is significantly faster than using the sync memory barrier.
Signed-off-by: John David Anglin <[email protected]> Signed-off-by: Helge Deller <[email protected]>
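
The two unlock flavours can be sketched with inline assembly as below. Treat this as an illustration of the idea rather than the exact code in asm/spinlock.h; in particular, using "stw,ma" with a zero displacement as the ordered-store encoding is an assumption carried over from kernel sources of that era.

    /* Unlock with a full sync barrier followed by a plain store. */
    static inline void spin_unlock_with_sync(volatile unsigned int *a)
    {
            asm volatile("sync" : : : "memory");    /* order everything before the release */
            *a = 1;
    }

    /* Unlock with an ordered store carrying release semantics; cheaper
     * than sync, but a weaker guarantee (see the revert above). */
    static inline void spin_unlock_ordered(volatile unsigned int *a)
    {
            asm volatile("stw,ma %0,0(%1)" : : "r" (1), "r" (a) : "memory");
    }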
Revision tags: v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1 |

# 3d0186bb | 16-Jun-2018 | Matthew Wilcox <[email protected]>
Update email address
Redirect some older email addresses that are in the git logs.
Signed-off-by: Matthew Wilcox <[email protected]>

# 54c770da | 16-Aug-2018 | Helge Deller <[email protected]>
parisc: Update comments in syscall.S regarding wide userland
We do support running 64-bit userspace processes, although there isn't yet full gcc and glibc support. Anyway, fix the comments to reflect reality.
Signed-off-by: Helge Deller <[email protected]>

# b6fc0ccc | 16-Aug-2018 | Helge Deller <[email protected]>
parisc: Fix ptraced 64-bit applications to call 64-bit syscalls
Fix the strace code path to call 64-bit syscalls in case the traced process is a 64-bit application.
Signed-off-by: Helge Deller <[email protected]>

# 7797167f | 12-Aug-2018 | John David Anglin <[email protected]>
parisc: Remove ordered stores from syscall.S
Now that we use a sync prior to releasing the locks in syscall.S, we don't need the PA 2.0 ordered stores used to release some locks. Using an ordered store potentially slows the release and subsequent code.
There are a number of other ordered stores and loads that serve no purpose. I have converted these to normal stores.
Signed-off-by: John David Anglin <[email protected]> Cc: [email protected] # 4.0+ Signed-off-by: Helge Deller <[email protected]>