<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="/rss.xsl.xml"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
    <title>Changes in Kconfig.preempt</title>
    <description></description>
    <language>en</language>
    <copyright>Copyright 2015</copyright>
<generator>Java</generator>
<item>
        <title>fe9beaaa - sched: No PREEMPT_RT=y for all{yes,mod}config</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#fe9beaaa</link>
        <description>sched: No PREEMPT_RT=y for all{yes,mod}config

While PREEMPT_RT is undoubtedly totally awesome, it does not, at this
time, make sense to have all{yes,mod}config select it.

Reported-by: Stephen Rothwell &lt;sfr@canb.auug.org.au&gt;
Fixes: 35772d627b55 (&quot;sched: Enable PREEMPT_DYNAMIC for PREEMPT_RT&quot;)
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Thu, 07 Nov 2024 14:21:54 +0000</pubDate>
        <dc:creator>Peter Zijlstra &lt;peterz@infradead.org&gt;</dc:creator>
    </item>
<item>
        <title>35772d62 - sched: Enable PREEMPT_DYNAMIC for PREEMPT_RT</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#35772d62</link>
        <description>sched: Enable PREEMPT_DYNAMIC for PREEMPT_RT

In order to enable PREEMPT_DYNAMIC for PREEMPT_RT, remove PREEMPT_RT
from the &apos;Preemption Model&apos; choice. Strictly speaking PREEMPT_RT is
not a change in how preemption works, but rather it makes a ton more
code preemptible.

Notably, take away NONE and VOLUNTARY options for PREEMPT_RT, they make
no sense (but are technically possible).

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Sebastian Andrzej Siewior &lt;bigeasy@linutronix.de&gt;
Link: https://lkml.kernel.org/r/20241007075055.441622332@infradead.org

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Fri, 04 Oct 2024 12:46:56 +0000</pubDate>
        <dc:creator>Peter Zijlstra &lt;peterz@infradead.org&gt;</dc:creator>
    </item>
<item>
        <title>7c70cb94 - sched: Add Lazy preemption model</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#7c70cb94</link>
        <description>sched: Add Lazy preemption model

Change fair to use resched_curr_lazy(), which, when the lazy
preemption model is selected, will set TIF_NEED_RESCHED_LAZY.

This LAZY bit will be promoted to the full NEED_RESCHED bit on tick.
As such, the average delay between setting LAZY and actually
rescheduling will be TICK_NSEC/2.

In short, Lazy preemption will delay preemption for fair class but
will function as Full preemption for all the other classes, most
notably the realtime (RR/FIFO/DEADLINE) classes.

The goal is to bridge the performance gap with Voluntary, such that we
might eventually remove that option entirely.

Suggested-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Sebastian Andrzej Siewior &lt;bigeasy@linutronix.de&gt;
Link: https://lkml.kernel.org/r/20241007075055.331243614@infradead.org

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Fri, 04 Oct 2024 12:46:58 +0000</pubDate>
        <dc:creator>Peter Zijlstra &lt;peterz@infradead.org&gt;</dc:creator>
    </item>
<item>
        <title>a2f4b16e - sched_ext: Build fix on !CONFIG_STACKTRACE[_SUPPORT]</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#a2f4b16e</link>
        <description>sched_ext: Build fix on !CONFIG_STACKTRACE[_SUPPORT]

scx_dump_task() uses stack_trace_save_tsk() which is only available when
CONFIG_STACKTRACE. Make CONFIG_SCHED_CLASS_EXT select CONFIG_STACKTRACE if
the support is available and skip capturing stack trace if
!CONFIG_STACKTRACE.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reported-by: kernel test robot &lt;lkp@intel.com&gt;
Closes: https://lore.kernel.org/oe-kbuild-all/202407161844.reewQQrR-lkp@intel.com/
Acked-by: David Vernet &lt;void@manifault.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Wed, 31 Jul 2024 18:56:31 +0000</pubDate>
        <dc:creator>Tejun Heo &lt;tj@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>b5ba2e1a - sched_ext: add CONFIG_DEBUG_INFO_BTF dependency</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#b5ba2e1a</link>
        <description>sched_ext: add CONFIG_DEBUG_INFO_BTF dependency

Without BTF, attempting to load any sched_ext scheduler will result in
an error like the following:

  libbpf: kernel BTF is missing at &apos;/sys/kernel/btf/vmlinux&apos;, was CONFIG_DEBUG_INFO_BTF enabled?

This makes sched_ext pretty much unusable, so explicitly depend on
CONFIG_DEBUG_INFO_BTF to prevent these issues.

Signed-off-by: Andrea Righi &lt;andrea.righi@canonical.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Thu, 27 Jun 2024 18:45:22 +0000</pubDate>
        <dc:creator>Andrea Righi &lt;andrea.righi@canonical.com&gt;</dc:creator>
    </item>
<item>
        <title>fa48e8d2 - sched_ext: Documentation: scheduler: Document extensible scheduler class</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#fa48e8d2</link>
        <description>sched_ext: Documentation: scheduler: Document extensible scheduler class

Add Documentation/scheduler/sched-ext.rst which gives a high-level overview
and pointers to the examples.

v6: - Add paragraph explaining debug dump.

v5: - Updated to reflect /sys/kernel interface change. Kconfig options
      added.

v4: - README improved, reformatted in markdown and renamed to README.md.

v3: - Added tools/sched_ext/README.
    - Dropped _example prefix from scheduler names.

v2: - Apply minor edits suggested by Bagas. Caveats section dropped as all
      of them are addressed.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reviewed-by: David Vernet &lt;dvernet@meta.com&gt;
Acked-by: Josh Don &lt;joshdon@google.com&gt;
Acked-by: Hao Luo &lt;haoluo@google.com&gt;
Acked-by: Barret Rhoden &lt;brho@google.com&gt;
Cc: Bagas Sanjaya &lt;bagasdotme@gmail.com&gt;

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Tue, 18 Jun 2024 20:09:21 +0000</pubDate>
        <dc:creator>Tejun Heo &lt;tj@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>7b0888b7 - sched_ext: Implement core-sched support</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#7b0888b7</link>
        <description>sched_ext: Implement core-sched support

The core-sched support is composed of the following parts:

- task_struct-&gt;scx.core_sched_at is added. This is a timestamp which can be
  used to order tasks. Depending on whether the BPF scheduler implements
  custom ordering, it tracks either global FIFO ordering of all tasks or
  local-DSQ ordering within the dispatched tasks on a CPU.

- prio_less() is updated to call scx_prio_less() when comparing SCX tasks.
  scx_prio_less() calls ops.core_sched_before() if available or uses the
  core_sched_at timestamp. For global FIFO ordering, the BPF scheduler
  doesn&apos;t need to do anything. Otherwise, it should implement
  ops.core_sched_before() which reflects the ordering.

- When core-sched is enabled, balance_scx() balances all SMT siblings so
  that they all have tasks dispatched if necessary before pick_task_scx() is
  called. pick_task_scx() picks between the current task and the first
  dispatched task on the local DSQ based on availability and the
  core_sched_at timestamps. Note that FIFO ordering is expected among the
  already dispatched tasks whether running or on the local DSQ, so this path
  always compares core_sched_at instead of calling into
  ops.core_sched_before().

qmap_core_sched_before() is added to scx_qmap. It scales the
distances from the heads of the queues to compare the tasks across different
priority queues and seems to behave as expected.

v3: Fixed build error when !CONFIG_SCHED_SMT reported by Andrea Righi.

v2: Sched core added the const qualifiers to prio_less task arguments.
    Explicitly drop them for ops.core_sched_before() task arguments. BPF
    enforces access control through the verifier, so the qualifier isn&apos;t
    actually operative and only gets in the way when interacting with
    various helpers.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reviewed-by: David Vernet &lt;dvernet@meta.com&gt;
Reviewed-by: Josh Don &lt;joshdon@google.com&gt;
Cc: Andrea Righi &lt;andrea.righi@canonical.com&gt;

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Tue, 18 Jun 2024 20:09:20 +0000</pubDate>
        <dc:creator>Tejun Heo &lt;tj@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>f0e1a064 - sched_ext: Implement BPF extensible scheduler class</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#f0e1a064</link>
        <description>sched_ext: Implement BPF extensible scheduler class

Implement a new scheduler class sched_ext (SCX), which allows scheduling
policies to be implemented as BPF programs to achieve the following:

1. Ease of experimentation and exploration: Enabling rapid iteration of new
   scheduling policies.

2. Customization: Building application-specific schedulers which implement
   policies that are not applicable to general-purpose schedulers.

3. Rapid scheduler deployments: Non-disruptive swap outs of scheduling
   policies in production environments.

sched_ext leverages BPF&#8217;s struct_ops feature to define a structure which
exports function callbacks and flags to BPF programs that wish to implement
scheduling policies. The struct_ops structure exported by sched_ext is
struct sched_ext_ops, and is conceptually similar to struct sched_class. The
role of sched_ext is to map the complex sched_class callbacks to the more
simple and ergonomic struct sched_ext_ops callbacks.

For more detailed discussion on the motivations and overview, please refer
to the cover letter.

Later patches will also add several example schedulers and documentation.

This patch implements the minimum core framework to enable implementation of
BPF schedulers. Subsequent patches will gradually add functionalities
including safety guarantee mechanisms, nohz and cgroup support.

include/linux/sched/ext.h defines struct sched_ext_ops. With the comment on
top, each operation should be self-explanatory. The followings are worth
noting:

- Both &quot;sched_ext&quot; and its shorthand &quot;scx&quot; are used. If the identifier
  already has &quot;sched&quot; in it, &quot;ext&quot; is used; otherwise, &quot;scx&quot;.

- In sched_ext_ops, only .name is mandatory. Every operation is optional and
  if omitted a simple but functional default behavior is provided.

- A new policy constant SCHED_EXT is added and a task can select sched_ext
  by invoking sched_setscheduler(2) with the new policy constant. However,
  if the BPF scheduler is not loaded, SCHED_EXT is the same as SCHED_NORMAL
  and the task is scheduled by CFS. When the BPF scheduler is loaded, all
  tasks which have the SCHED_EXT policy are switched to sched_ext.

- To bridge the workflow imbalance between the scheduler core and
  sched_ext_ops callbacks, sched_ext uses simple FIFOs called dispatch
  queues (dsq&apos;s). By default, there is one global dsq (SCX_DSQ_GLOBAL), and
  one local per-CPU dsq (SCX_DSQ_LOCAL). SCX_DSQ_GLOBAL is provided for
  convenience and need not be used by a scheduler that doesn&apos;t require it.
  SCX_DSQ_LOCAL is the per-CPU FIFO that sched_ext pulls from when putting
  the next task on the CPU. The BPF scheduler can manage an arbitrary number
  of dsq&apos;s using scx_bpf_create_dsq() and scx_bpf_destroy_dsq().

- sched_ext guarantees system integrity no matter what the BPF scheduler
  does. To enable this, each task&apos;s ownership is tracked through
  p-&gt;scx.ops_state and all tasks are put on scx_tasks list. The disable path
  can always recover and revert all tasks back to CFS. See p-&gt;scx.ops_state
  and scx_tasks.

- A task is not tied to its rq while enqueued. This decouples CPU selection
  from queueing and allows sharing a scheduling queue across an arbitrary
  subset of CPUs. This adds some complexities as a task may need to be
  bounced between rq&apos;s right before it starts executing. See
  dispatch_to_local_dsq() and move_task_to_local_dsq().

- One complication that arises from the above weak association between task
  and rq is that synchronizing with dequeue() gets complicated as dequeue()
  may happen anytime while the task is enqueued and the dispatch path might
  need to release the rq lock to transfer the task. Solving this requires a
  bit of complexity. See the logic around p-&gt;scx.sticky_cpu and
  p-&gt;scx.ops_qseq.

- Both enable and disable paths are a bit complicated. The enable path
  switches all tasks without blocking to avoid issues which can arise from
  partially switched states (e.g. the switching task itself being starved).
  The disable path can&apos;t trust the BPF scheduler at all, so it also has to
  guarantee forward progress without blocking. See scx_ops_enable() and
  scx_ops_disable_workfn().

- When sched_ext is disabled, static_branches are used to shut down the
  entry points from hot paths.

v7: - scx_ops_bypass() was incorrectly and unnecessarily trying to grab
      scx_ops_enable_mutex which can lead to deadlocks in the disable path.
      Fixed.
    - Fixed TASK_DEAD handling bug in scx_ops_enable() path which could lead
      to use-after-free.
    - Consolidated per-cpu variable usages and other cleanups.

v6: - SCX_NR_ONLINE_OPS replaced with SCX_OPI_*_BEGIN/END so that multiple
      groups can be expressed. Later CPU hotplug operations are put into
      their own group.
    - SCX_OPS_DISABLING state is replaced with the new bypass mechanism
      which allows temporarily putting the system into simple FIFO
      scheduling mode bypassing the BPF scheduler. In addition to the shut
      down path, this will also be used to isolate the BPF scheduler across
      PM events. Enabling and disabling the bypass mode requires iterating
      all runnable tasks. rq-&gt;scx.runnable_list addition is moved from the
      later watchdog patch.
    - ops.prep_enable() is replaced with ops.init_task() and
      ops.enable/disable() are now called whenever the task enters and
      leaves sched_ext instead of when the task becomes schedulable on
      sched_ext and stops being so. A new operation - ops.exit_task() - is
      called when the task stops being schedulable on sched_ext.
    - scx_bpf_dispatch() can now be called from ops.select_cpu() too. This
      removes the need for communicating local dispatch decision made by
      ops.select_cpu() to ops.enqueue() via per-task storage.
      SCX_KF_SELECT_CPU is added to support the change.
    - SCX_TASK_ENQ_LOCAL which told the BPF scheduler that
      scx_select_cpu_dfl() wants the task to be dispatched to the local DSQ
      was removed. Instead, scx_bpf_select_cpu_dfl() now dispatches directly
      if it finds a suitable idle CPU. If such behavior is not desired,
      users can use scx_bpf_select_cpu_dfl() which returns the verdict in a
      bool out param.
    - scx_select_cpu_dfl() was mishandling WAKE_SYNC and could end up
      queueing many tasks on a local DSQ which makes tasks to execute in
      order while other CPUs stay idle which made some hackbench numbers
      really bad. Fixed.
    - The current state of sched_ext can now be monitored through files
      under /sys/sched_ext instead of /sys/kernel/debug/sched/ext. This is
      to enable monitoring on kernels which don&apos;t enable debugfs.
    - sched_ext wasn&apos;t telling BPF that ops.dispatch()&apos;s @prev argument may
      be NULL and a BPF scheduler which derefs the pointer without checking
      could crash the kernel. Tell BPF. This is currently a bit ugly. A
      better way to annotate this is expected in the future.
    - scx_exit_info updated to carry pointers to message buffers instead of
      embedding them directly. This decouples buffer sizes from API so that
      they can be changed without breaking compatibility.
    - exit_code added to scx_exit_info. This is used to indicate different
      exit conditions on non-error exits and will be used to handle e.g. CPU
      hotplugs.
    - The patch &quot;sched_ext: Allow BPF schedulers to switch all eligible
      tasks into sched_ext&quot; is folded in and the interface is changed so
      that partial switching is indicated with a new ops flag
      %SCX_OPS_SWITCH_PARTIAL. This makes scx_bpf_switch_all() unnecessary
      and in turn SCX_KF_INIT. ops.init() is now called with
      SCX_KF_SLEEPABLE.
    - Code reorganized so that only the parts necessary to integrate with
      the rest of the kernel are in the header files.
    - Changes to reflect the BPF and other kernel changes including the
      addition of bpf_sched_ext_ops.cfi_stubs.

v5: - To accommodate 32bit configs, p-&gt;scx.ops_state is now atomic_long_t
      instead of atomic64_t and scx_dsp_buf_ent.qseq which uses
      load_acquire/store_release is now unsigned long instead of u64.
    - Fix the bug where bpf_scx_btf_struct_access() was allowing write
      access to arbitrary fields.
    - Distinguish kfuncs which can be called from any sched_ext ops and from
      anywhere. e.g. scx_bpf_pick_idle_cpu() can now be called only from
      sched_ext ops.
    - Rename &quot;type&quot; to &quot;kind&quot; in scx_exit_info to make it easier to use on
      languages in which &quot;type&quot; is a reserved keyword.
    - Since cff9b2332ab7 (&quot;kernel/sched: Modify initial boot task idle
      setup&quot;), PF_IDLE is not set on idle tasks which haven&apos;t been online
      yet which made scx_task_iter_next_filtered() include those idle tasks
      in iterations leading to oopses. Update scx_task_iter_next_filtered()
      to directly test p-&gt;sched_class against idle_sched_class instead of
      using is_idle_task() which tests PF_IDLE.
    - Other updates to match upstream changes such as adding const to
      set_cpumask() param and renaming check_preempt_curr() to
      wakeup_preempt().

v4: - SCHED_CHANGE_BLOCK replaced with the previous
      sched_deq_and_put_task()/sched_enq_and_set_task() pair. This is
      because upstream is adopting a different generic cleanup mechanism.
      Once that lands, the code will be adapted accordingly.
    - task_on_scx() used to test whether a task should be switched into SCX,
      which is confusing. Renamed to task_should_scx(). task_on_scx() now
      tests whether a task is currently on SCX.
    - scx_has_idle_cpus is barely used anymore and replaced with direct
      check on the idle cpumask.
    - SCX_PICK_IDLE_CORE added and scx_pick_idle_cpu() improved to prefer
      fully idle cores.
    - ops.enable() now sees up-to-date p-&gt;scx.weight value.
    - ttwu_queue path is disabled for tasks on SCX to avoid confusing BPF
      schedulers expecting -&gt;select_cpu() call.
    - Use cpu_smt_mask() instead of topology_sibling_cpumask() like the rest
      of the scheduler.

v3: - ops.set_weight() added to allow BPF schedulers to track weight changes
      without polling p-&gt;scx.weight.
    - move_task_to_local_dsq() was losing SCX-specific enq_flags when
      enqueueing the task on the target dsq because it goes through
      activate_task() which loses the upper 32bit of the flags. Carry the
      flags through rq-&gt;scx.extra_enq_flags.
    - scx_bpf_dispatch(), scx_bpf_pick_idle_cpu(), scx_bpf_task_running()
      and scx_bpf_task_cpu() now use the new KF_RCU instead of
      KF_TRUSTED_ARGS to make it easier for BPF schedulers to call them.
    - The kfunc helper access control mechanism implemented through
      sched_ext_entity.kf_mask is improved. Now SCX_CALL_OP*() is always
      used when invoking scx_ops operations.

v2: - balance_scx_on_up() is dropped. Instead, on UP, balance_scx() is
      called from put_prev_task_scx() and pick_next_task_scx() as necessary.
      To determine whether balance_scx() should be called from
      put_prev_task_scx(), SCX_TASK_DEQD_FOR_SLEEP flag is added. See the
      comment in put_prev_task_scx() for details.
    - sched_deq_and_put_task() / sched_enq_and_set_task() sequences replaced
      with SCHED_CHANGE_BLOCK().
    - Unused all_dsqs list removed. This was a left-over from previous
      iterations.
    - p-&gt;scx.kf_mask is added to track and enforce which kfunc helpers are
      allowed. Also, init/exit sequences are updated to make some kfuncs
      always safe to call regardless of the current BPF scheduler state.
      Combined, this should make all the kfuncs safe.
    - BPF now supports sleepable struct_ops operations. Hacky workaround
      removed and operations and kfunc helpers are tagged appropriately.
    - BPF now supports bitmask / cpumask helpers. scx_bpf_get_idle_cpumask()
      and friends are added so that BPF schedulers can use the idle masks
      with the generic helpers. This replaces the hacky kfunc helpers added
      by a separate patch in V1.
    - CONFIG_SCHED_CLASS_EXT can no longer be enabled if SCHED_CORE is
      enabled. This restriction will be removed by a later patch which adds
      core-sched support.
    - Add MAINTAINERS entries and other misc changes.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Co-authored-by: David Vernet &lt;dvernet@meta.com&gt;
Acked-by: Josh Don &lt;joshdon@google.com&gt;
Acked-by: Hao Luo &lt;haoluo@google.com&gt;
Acked-by: Barret Rhoden &lt;brho@google.com&gt;
Cc: Andrea Righi &lt;andrea.righi@canonical.com&gt;

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Tue, 18 Jun 2024 20:09:17 +0000</pubDate>
        <dc:creator>Tejun Heo &lt;tj@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>7dd5ad2d - Revert &quot;signal, x86: Delay calling signals in atomic on RT enabled kernels&quot;</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#7dd5ad2d</link>
        <description>Revert &quot;signal, x86: Delay calling signals in atomic on RT enabled kernels&quot;

Revert commit bf9ad37dc8a. It needs to be better encapsulated and
generalized.

Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Cc: &quot;Eric W. Biederman&quot; &lt;ebiederm@xmission.com&gt;
Cc: Oleg Nesterov &lt;oleg@redhat.com&gt;
Cc: Sebastian Andrzej Siewior &lt;bigeasy@linutronix.de&gt;

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Thu, 31 Mar 2022 08:36:55 +0000</pubDate>
        <dc:creator>Thomas Gleixner &lt;tglx@linutronix.de&gt;</dc:creator>
    </item>
<item>
        <title>bf9ad37d - signal, x86: Delay calling signals in atomic on RT enabled kernels</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#bf9ad37d</link>
        <description>signal, x86: Delay calling signals in atomic on RT enabled kernels

On x86_64 we must disable preemption before we enable interrupts
for stack faults, int3 and debugging, because the current task is using
a per CPU debug stack defined by the IST. If we schedule out, another task
can come in and use the same stack and cause the stack to be corrupted
and crash the kernel on return.

When CONFIG_PREEMPT_RT is enabled, spinlock_t locks become sleeping, and
one of these is the spin lock used in signal handling.

Some of the debug code (int3) causes do_trap() to send a signal.
This function calls a spinlock_t lock that has been converted to a
sleeping lock. If this happens, the above issues with the corrupted
stack is possible.

Instead of calling the signal right away, for PREEMPT_RT and x86,
the signal information is stored on the stacks task_struct and
TIF_NOTIFY_RESUME is set. Then on exit of the trap, the signal resume
code will send the signal when preemption is enabled.

[ rostedt: Switched from #ifdef CONFIG_PREEMPT_RT to
  ARCH_RT_DELAYS_SIGNAL_SEND and added comments to the code. ]
[bigeasy: Add on 32bit as per Yang Shi, minor rewording. ]
[ tglx: Use a config option ]

Signed-off-by: Oleg Nesterov &lt;oleg@redhat.com&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Sebastian Andrzej Siewior &lt;bigeasy@linutronix.de&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: https://lore.kernel.org/r/Ygq5aBB/qMQw6aP5@linutronix.de

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Tue, 14 Jul 2015 12:26:34 +0000</pubDate>
        <dc:creator>Oleg Nesterov &lt;oleg@redhat.com&gt;</dc:creator>
    </item>
<item>
        <title>99cf983c - sched/preempt: Add PREEMPT_DYNAMIC using static keys</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#99cf983c</link>
        <description>sched/preempt: Add PREEMPT_DYNAMIC using static keys

Where an architecture selects HAVE_STATIC_CALL but not
HAVE_STATIC_CALL_INLINE, each static call has an out-of-line trampoline
which will either branch to a callee or return to the caller.

On such architectures, a number of constraints can conspire to make
those trampolines more complicated and potentially less useful than we&apos;d
like. For example:

* Hardware and software control flow integrity schemes can require the
  addition of &quot;landing pad&quot; instructions (e.g. `BTI` for arm64), which
  will also be present at the &quot;real&quot; callee.

* Limited branch ranges can require that trampolines generate or load an
  address into a register and perform an indirect branch (or at least
  have a slow path that does so). This loses some of the benefits of
  having a direct branch.

* Interaction with SW CFI schemes can be complicated and fragile, e.g.
  requiring that we can recognise idiomatic codegen and remove
  indirections understand, at least until clang proves more helpful
  mechanisms for dealing with this.

For PREEMPT_DYNAMIC, we don&apos;t need the full power of static calls, as we
really only need to enable/disable specific preemption functions. We can
achieve the same effect without a number of the pain points above by
using static keys to fold early returns into the preemption functions
themselves rather than in an out-of-line trampoline, effectively
inlining the trampoline into the start of the function.

For arm64, this results in good code generation. For example, the
dynamic_cond_resched() wrapper looks as follows when enabled. When
disabled, the first `B` is replaced with a `NOP`, resulting in an early
return.

| &lt;dynamic_cond_resched&gt;:
|        bti     c
|        b       &lt;dynamic_cond_resched+0x10&gt;     // or `nop`
|        mov     w0, #0x0
|        ret
|        mrs     x0, sp_el0
|        ldr     x0, [x0, #8]
|        cbnz    x0, &lt;dynamic_cond_resched+0x8&gt;
|        paciasp
|        stp     x29, x30, [sp, #-16]!
|        mov     x29, sp
|        bl      &lt;preempt_schedule_common&gt;
|        mov     w0, #0x1
|        ldp     x29, x30, [sp], #16
|        autiasp
|        ret

... compared to the regular form of the function:

| &lt;__cond_resched&gt;:
|        bti     c
|        mrs     x0, sp_el0
|        ldr     x1, [x0, #8]
|        cbz     x1, &lt;__cond_resched+0x18&gt;
|        mov     w0, #0x0
|        ret
|        paciasp
|        stp     x29, x30, [sp, #-16]!
|        mov     x29, sp
|        bl      &lt;preempt_schedule_common&gt;
|        mov     w0, #0x1
|        ldp     x29, x30, [sp], #16
|        autiasp
|        ret

Any architecture which implements static keys should be able to use this
to implement PREEMPT_DYNAMIC with similar cost to non-inlined static
calls. Since this is likely to have greater overhead than (inlined)
static calls, PREEMPT_DYNAMIC is only defaulted to enabled when
HAVE_PREEMPT_DYNAMIC_CALL is selected.

Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Acked-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Acked-by: Frederic Weisbecker &lt;frederic@kernel.org&gt;
Link: https://lore.kernel.org/r/20220214165216.2231574-6-mark.rutland@arm.com

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Mon, 14 Feb 2022 16:52:14 +0000</pubDate>
        <dc:creator>Mark Rutland &lt;mark.rutland@arm.com&gt;</dc:creator>
    </item>
<item>
        <title>a8b76910 - preempt: Restore preemption model selection configs</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#a8b76910</link>
        <description>preempt: Restore preemption model selection configs

Commit c597bfddc9e9 (&quot;sched: Provide Kconfig support for default dynamic
preempt mode&quot;) changed the selectable config names for the preemption
model. This means a config file must now select

  CONFIG_PREEMPT_BEHAVIOUR=y

rather than

  CONFIG_PREEMPT=y

to get a preemptible kernel. This means all arch config files would need to
be updated - right now they&apos;ll all end up with the default
CONFIG_PREEMPT_NONE_BEHAVIOUR.

Rather than touch a good hundred of config files, restore usage of
CONFIG_PREEMPT{_NONE, _VOLUNTARY}. Make them configure:

o The build-time preemption model when !PREEMPT_DYNAMIC
o The default boot-time preemption model when PREEMPT_DYNAMIC

Add siblings of those configs with the _BUILD suffix to unconditionally
designate the build-time preemption model (PREEMPT_DYNAMIC is built with
the &quot;highest&quot; preemption model it supports, aka PREEMPT). Downstream
configs should by now all be depending / selected by CONFIG_PREEMPTION
rather than CONFIG_PREEMPT, so only a few sites need patching up.

Signed-off-by: Valentin Schneider &lt;valentin.schneider@arm.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Acked-by: Marco Elver &lt;elver@google.com&gt;
Link: https://lore.kernel.org/r/20211110202448.4054153-2-valentin.schneider@arm.com

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Wed, 10 Nov 2021 20:24:44 +0000</pubDate>
        <dc:creator>Valentin Schneider &lt;valentin.schneider@arm.com&gt;</dc:creator>
    </item>
<item>
        <title>c597bfdd - sched: Provide Kconfig support for default dynamic preempt mode</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#c597bfdd</link>
        <description>sched: Provide Kconfig support for default dynamic preempt mode

Currently the boot defined preempt behaviour (aka dynamic preempt)
selects full preemption by default when the &quot;preempt=&quot; boot parameter
is omitted. However distros may rather want to default to either
no preemption or voluntary preemption.

To provide with this flexibility, make dynamic preemption a visible
Kconfig option and adapt the preemption behaviour selected by the user
to either static or dynamic preemption.

Signed-off-by: Frederic Weisbecker &lt;frederic@kernel.org&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://lkml.kernel.org/r/20210914103134.11309-1-frederic@kernel.org

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Tue, 14 Sep 2021 10:31:34 +0000</pubDate>
        <dc:creator>Frederic Weisbecker &lt;frederic@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>d2343cb8 - sched/core: Disable CONFIG_SCHED_CORE by default</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#d2343cb8</link>
        <description>sched/core: Disable CONFIG_SCHED_CORE by default

This option at minimum adds extra code to the scheduler - even if
it&apos;s default unused - and most users wouldn&apos;t want it.

Reported-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Mon, 28 Jun 2021 19:55:16 +0000</pubDate>
        <dc:creator>Ingo Molnar &lt;mingo@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>7b419f47 - sched: Add CONFIG_SCHED_CORE help text</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#7b419f47</link>
        <description>sched: Add CONFIG_SCHED_CORE help text

Hugh noted that the SCHED_CORE Kconfig option could do with a help text.

Requested-by: Hugh Dickins &lt;hughd@google.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: Randy Dunlap &lt;rdunlap@infradead.org&gt;
Acked-by: Hugh Dickins &lt;hughd@google.com&gt;
Link: https://lkml.kernel.org/r/YKyhtwhEgvtUDOyl@hirez.programming.kicks-ass.net

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Tue, 25 May 2021 06:53:28 +0000</pubDate>
        <dc:creator>Peter Zijlstra &lt;peterz@infradead.org&gt;</dc:creator>
    </item>
<item>
        <title>9edeaea1 - sched: Core-wide rq-&gt;lock</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#9edeaea1</link>
        <description>sched: Core-wide rq-&gt;lock

Introduce the basic infrastructure to have a core-wide rq-&gt;lock.

This relies on the rq-&gt;__lock order being in increasing CPU number (inside a core). It is also constrained to SMT8 per lockdep (and SMT256 per preempt_count).

Luckily SMT8 is the max supported SMT count for Linux (Mips, Sparc and Power are known to have this).

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Tested-by: Don Hiatt &lt;dhiatt@digitalocean.com&gt;
Tested-by: Hongyu Ning &lt;hongyu.ning@linux.intel.com&gt;
Tested-by: Vincent Guittot &lt;vincent.guittot@linaro.org&gt;
Link: https://lkml.kernel.org/r/YJUNfzSgptjX7tG6@hirez.programming.kicks-ass.net

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Tue, 17 Nov 2020 23:19:34 +0000</pubDate>
        <dc:creator>Peter Zijlstra &lt;peterz@infradead.org&gt;</dc:creator>
    </item>
<item>
        <title>6ef869e0 - preempt: Introduce CONFIG_PREEMPT_DYNAMIC</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#6ef869e0</link>
        <description>preempt: Introduce CONFIG_PREEMPT_DYNAMIC

Preemption mode selection is currently hardcoded on Kconfig choices. Introduce a dedicated option to tune the preemption flavour at boot time.

This will only be available on architectures that efficiently support static calls, in order not to burden the feature with additional overhead that might be prohibitive or undesirable.

CONFIG_PREEMPT_DYNAMIC is automatically selected by CONFIG_PREEMPT if the architecture provides the necessary support (CONFIG_STATIC_CALL_INLINE, CONFIG_GENERIC_ENTRY, and provides __preempt_schedule_function() / __preempt_schedule_notrace_function()).

Suggested-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Signed-off-by: Michal Hocko &lt;mhocko@suse.com&gt;
Signed-off-by: Frederic Weisbecker &lt;frederic@kernel.org&gt;
[peterz: relax requirement to HAVE_STATIC_CALL]
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Link: https://lkml.kernel.org/r/20210118141223.123667-5-frederic@kernel.org
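
The boot-time tuning is driven by the &quot;preempt=&quot; kernel command line parameter; a minimal usage sketch (values per this series, availability depends on the architecture support listed above):

```text
# Appended to the kernel command line, e.g. in GRUB_CMDLINE_LINUX:
preempt=none       # behave like CONFIG_PREEMPT_NONE
preempt=voluntary  # behave like CONFIG_PREEMPT_VOLUNTARY
preempt=full       # behave like CONFIG_PREEMPT (full preemption)
```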

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Mon, 18 Jan 2021 14:12:19 +0000</pubDate>
        <dc:creator>Michal Hocko &lt;mhocko@kernel.org&gt;</dc:creator>
    </item>
<item>
        <title>d61ca3c2 - sched/Kconfig: Fix spelling mistake in user-visible help text</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#d61ca3c2</link>
        <description>sched/Kconfig: Fix spelling mistake in user-visible help text

Fix a spelling mistake in the help text for PREEMPT_RT.

Signed-off-by: Srivatsa S. Bhat (VMware) &lt;srivatsa@csail.mit.edu&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: https://lkml.kernel.org/r/157204450499.10518.4542293884417101528.stgit@srivatsa-ubuntu

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Fri, 25 Oct 2019 23:02:07 +0000</pubDate>
        <dc:creator>Srivatsa S. Bhat (VMware) &lt;srivatsa@csail.mit.edu&gt;</dc:creator>
    </item>
<item>
        <title>b8d33498 - sched/rt, Kconfig: Unbreak def/oldconfig with CONFIG_PREEMPT=y</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#b8d33498</link>
        <description>sched/rt, Kconfig: Unbreak def/oldconfig with CONFIG_PREEMPT=y

The merge of the CONFIG_PREEMPT_RT stub renamed CONFIG_PREEMPT to CONFIG_PREEMPT_LL, which causes all defconfigs that have CONFIG_PREEMPT=y set to fall back to CONFIG_PREEMPT_NONE, because CONFIG_PREEMPT depends on the preemption mode choice, which defaults to NONE. This also affects oldconfig builds.

So rather than changing 114 defconfig files and being an annoyance to users, revert the rename and select a new config symbol PREEMPTION. That keeps everything working smoothly and the relevant ifdefs are going to be fixed up step by step.

Reported-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Fixes: a50a3f4b6a31 (&quot;sched/rt, Kconfig: Introduce CONFIG_PREEMPT_RT&quot;)
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Mon, 22 Jul 2019 15:59:19 +0000</pubDate>
        <dc:creator>Thomas Gleixner &lt;tglx@linutronix.de&gt;</dc:creator>
    </item>
<item>
        <title>a50a3f4b - sched/rt, Kconfig: Introduce CONFIG_PREEMPT_RT</title>
        <link>http://172.16.0.5:8080/history/linux-6.15/kernel/Kconfig.preempt#a50a3f4b</link>
        <description>sched/rt, Kconfig: Introduce CONFIG_PREEMPT_RT

Add a new entry to the preemption menu which enables real-time support for the kernel. The choice is only enabled when an architecture supports it.

It selects PREEMPT as the RT features depend on it. To achieve that, the existing PREEMPT choice is renamed to PREEMPT_LL, which selects PREEMPT as well.

No functional change.

Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Acked-by: Paul E. McKenney &lt;paulmck@linux.ibm.com&gt;
Acked-by: Steven Rostedt (VMware) &lt;rostedt@goodmis.org&gt;
Acked-by: Clark Williams &lt;williams@redhat.com&gt;
Acked-by: Daniel Bristot de Oliveira &lt;bristot@redhat.com&gt;
Acked-by: Frederic Weisbecker &lt;frederic@kernel.org&gt;
Acked-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Acked-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Acked-by: Daniel Wagner &lt;wagi@monom.org&gt;
Acked-by: Luis Claudio R. Goncalves &lt;lgoncalv@redhat.com&gt;
Acked-by: Julia Cartwright &lt;julia@ni.com&gt;
Acked-by: Tom Zanussi &lt;tom.zanussi@linux.intel.com&gt;
Acked-by: Gratian Crisan &lt;gratian.crisan@ni.com&gt;
Acked-by: Sebastian Siewior &lt;bigeasy@linutronix.de&gt;
Cc: Andrew Morton &lt;akpm@linuxfoundation.org&gt;
Cc: Christoph Hellwig &lt;hch@lst.de&gt;
Cc: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Lukas Bulwahn &lt;lukas.bulwahn@gmail.com&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Cc: Tejun Heo &lt;tj@kernel.org&gt;
Link: http://lkml.kernel.org/r/alpine.DEB.2.21.1907172200190.1778@nanos.tec.linutronix.de
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;

            List of files:
            /linux-6.15/kernel/Kconfig.preempt</description>
        <pubDate>Wed, 17 Jul 2019 20:01:49 +0000</pubDate>
        <dc:creator>Thomas Gleixner &lt;tglx@linutronix.de&gt;</dc:creator>
    </item>
</channel>
</rss>
