/linux-6.15/Documentation/locking/
preempt-locking.rst
    35: protect these situations by disabling preemption around them.
    37: You can also use put_cpu() and get_cpu(), which will disable preemption.
    44: Under preemption, the state of the CPU must be protected. This is arch-
    51: preemption must be disabled around such regions.
    54: kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
    72: Data protection under preemption is achieved by disabling preemption for the
    86: preemption is not enabled.
    125: Preventing preemption using interrupt disabling
    132: in doubt, rely on locking or explicit preemption disabling.
    137: These may be used to protect from preemption, however, on exit, if preemption
    [all …]
|
locktypes.rst
    60: mechanisms, disabling preemption or interrupts are pure CPU local
    103: PI has limitations on non-PREEMPT_RT kernels due to preemption and
    106: PI clearly cannot preempt preemption-disabled or interrupt-disabled
    162: by disabling preemption or interrupts.
    217: preemption or interrupts is required, for example, to safely access
    244: Non-PREEMPT_RT kernels disable preemption to get this effect.
    247: preemption enabled. The lock disables softirq handlers and also
    248: prevents reentrancy due to task preemption.
    430: preemption. The following substitution works on both kernels::
    473: Acquiring a raw_spinlock_t disables preemption and possibly also
    [all …]
|
seqlock.rst
    47: preemption, preemption must be explicitly disabled before entering the
    72: /* Serialized context with disabled preemption */
    107: For lock types which do not implicitly disable preemption, preemption
|
hwspinlock.rst
    95: Upon a successful return from this function, preemption is disabled so
    111: Upon a successful return from this function, preemption and the local
    127: Upon a successful return from this function, preemption is disabled,
    178: Upon a successful return from this function, preemption is disabled so
    195: Upon a successful return from this function, preemption and the local
    211: Upon a successful return from this function, preemption is disabled,
    268: Upon a successful return from this function, preemption and local
    280: Upon a successful return from this function, preemption is reenabled,
|
/linux-6.15/Documentation/gpu/
msm-preemption.rst
    12: When preemption is enabled 4 rings are initialized, corresponding to different
    16: requesting preemption. When certain conditions are met, depending on the
    32: configured by changing the preemption level, this allows to compromise between
    33: latency (ie. the time that passes between when the kernel requests preemption
    43: preemption of any kind.
    58: expected to set the state that isn't preserved whenever preemption occurs which
    60: before and after preemption.
    66: being executed. There are different kinds of preemption records and most of
    67: those require one buffer per ring. This is because preemption never occurs
    76: preemption.
    [all …]
|
drm-compute.rst
    17: not even to force preemption. The driver is simply forced to unmap a BO
    36: If job preemption and recoverable pagefaults are not available, those are the
|
/linux-6.15/kernel/
Kconfig.preempt
    26: This is the traditional Linux preemption model, geared towards
    43: "explicit preemption points" to the kernel code. These new
    44: preemption points have been selected to reduce the maximum
    66: otherwise not be about to reach a natural preemption point.
    76: bool "Scheduler controlled preemption model"
    81: This option provides a scheduler driven preemption model that
    82: is fundamentally similar to full preemption, but is less
    84: reduce lock holder preemption and recover some of the performance
    85: gains seen from using Voluntary preemption.
    120: This option allows to define the preemption model on the kernel
    [all …]
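The last Kconfig match refers to CONFIG_PREEMPT_DYNAMIC, which lets the preemption model be chosen at boot time instead of being fixed at build time. On such a kernel the model is selected with the documented `preempt=` command-line parameter, for example:

```
# Kernel command line, with CONFIG_PREEMPT_DYNAMIC=y:
preempt=none        # no forced preemption of kernel code
preempt=voluntary   # preempt only at explicit preemption points
preempt=full        # preempt kernel code unless preemption is disabled
```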
|
/linux-6.15/Documentation/core-api/
entry.rst
    167: irq_enter_rcu() updates the preemption count which makes in_hardirq()
    172: irq_exit_rcu() handles interrupt time accounting, undoes the preemption
    175: In theory, the preemption count could be updated in irqentry_enter(). In
    176: practice, deferring this update to irq_enter_rcu() allows the preemption-count
    180: preemption count has not yet been updated with the HARDIRQ_OFFSET state.
    182: Note that irq_exit_rcu() must remove HARDIRQ_OFFSET from the preemption count
    185: also requires that HARDIRQ_OFFSET has been removed from the preemption count.
    223: Note that the update of the preemption counter has to be the first
    226: preemption count modification in the NMI entry/exit case must not be
|
local_ops.rst
    42: making sure that we modify it from within a preemption safe context. It is
    76: preemption already disabled. I suggest, however, to explicitly
    77: disable preemption anyway to make sure it will still work correctly on
    104: local atomic operations: it makes sure that preemption is disabled around write
    110: If you are already in a preemption-safe context, you can use
    161: * preemptible context (it disables preemption) :
|
this_cpu_ops.rst
    20: necessary to disable preemption or interrupts to ensure that the
    44: The following this_cpu() operations with implied preemption protection
    46: preemption and interrupts::
    110: reserved for a specific processor. Without disabling preemption in the
    142: smp_processor_id() may be used, for example, where preemption has been
    144: critical section. When preemption is re-enabled this pointer is usually
    240: preemption. If a per cpu variable is not used in an interrupt context
|
/linux-6.15/Documentation/trace/rv/
monitor_wip.rst
    13: preemption disabled::
    30: The wakeup event always takes place with preemption disabled because
|
monitor_sched.rst
    108: The schedule called with preemption disabled (scpd) monitor ensures schedule is
    109: called with preemption disabled::
    130: does not enable preemption::
|
/linux-6.15/Documentation/RCU/
NMI-RCU.rst
    45: The do_nmi() function processes each NMI. It first disables preemption
    50: preemption is restored.
    95: CPUs complete any preemption-disabled segments of code that they were
    97: Since NMI handlers disable preemption, synchronize_rcu() is guaranteed
|
rcubarrier.rst
    330: disables preemption, which acted as an RCU read-side critical
    364: Therefore, on_each_cpu() disables preemption across its call
    367: preemption-disabled regions of code as RCU read-side critical
    373: But if on_each_cpu() ever decides to forgo disabling preemption,
|
/linux-6.15/Documentation/virt/kvm/devices/
arm-vgic.rst
    99: maximum possible 128 preemption levels. The semantics of the register
    100: indicate if any interrupts in a given preemption level are in the active
    103: Thus, preemption level X has one or more active interrupts if and only if:
    107: Bits for undefined preemption levels are RAZ/WI.
|
/linux-6.15/arch/arc/kernel/
entry-compact.S
    152: ; if L2 IRQ interrupted a L1 ISR, disable preemption
    157: ; -preemption off IRQ, user task in syscall picked to run
    172: ; bump thread_info->preempt_count (Disable preemption)
    352: ; decrement thread_info->preempt_count (re-enable preemption)
|
/linux-6.15/kernel/trace/rv/monitors/scpd/
Kconfig
    11: Monitor to ensure schedule is called with preemption disabled.
|
/linux-6.15/Documentation/tools/rtla/
common_osnoise_description.rst
    3: time in a loop while with preemption, softirq and IRQs enabled, thus
|
/linux-6.15/Documentation/tools/rv/
rv-mon-wip.rst
    21: checks if the wakeup events always take place with preemption disabled.
|
rv-mon-sched.rst
    49: * scpd: schedule called with preemption disabled
|
/linux-6.15/Documentation/mm/
highmem.rst
    66: CPU while the mapping is active. Although preemption is never disabled by
    73: As said, pagefaults and preemption are never disabled. There is no need to
    74: disable preemption because, when context switches to a different task, the
    110: effects of atomic mappings, i.e. disabling page faults or preemption, or both.
    141: restrictions on preemption or migration. It comes with an overhead as mapping
|
/linux-6.15/drivers/gpu/drm/i915/
Kconfig.profile
    59: How long to wait (in milliseconds) for a preemption event to occur
    77: How long to wait (in milliseconds) for a preemption event to occur
|
/linux-6.15/drivers/gpu/drm/xe/
Kconfig.profile
    30: How long to wait (in microseconds) for a preemption event to occur
|
/linux-6.15/Documentation/translations/zh_CN/core-api/
local_ops.rst
    155: * preemptible context (it disables preemption) :
|
/linux-6.15/Documentation/arch/arm/
kernel_mode_neon.rst
    14: preemption disabled
    58: * NEON/VFP code is executed with preemption disabled.
|