Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2 |

# f66b0acf | 05-Feb-2025 | Nam Cao <[email protected]>

time: Switch to hrtimer_setup()
hrtimer_setup() takes the callback function pointer as argument and initializes the timer completely.
Replace hrtimer_init() and the open coded initialization of hrtimer::function with the new setup mechanism.
Signed-off-by: Nam Cao <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/all/170bb691a0d59917c8268a98c80b607128fc9f7f.1738746821.git.namcao@linutronix.de
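For illustration, a minimal before/after sketch using this file's poll timer (timer and callback names are the ones used in sched_clock.c; the exact mode flag is assumed):

  /* before: two-step initialization */
  hrtimer_init(&sched_clock_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
  sched_clock_timer.function = sched_clock_poll;

  /* after: hrtimer_setup() takes the callback and initializes the timer completely */
  hrtimer_setup(&sched_clock_timer, sched_clock_poll, CLOCK_MONOTONIC,
                HRTIMER_MODE_REL_HARD);
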
Revision tags: v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7 |

# 93190bc3 | 04-Nov-2024 | Marco Elver <[email protected]>

seqlock, treewide: Switch to non-raw seqcount_latch interface
Switch all instrumentable users of the seqcount_latch interface over to the non-raw interface.
Co-developed-by: "Peter Zijlstra (Intel)" <[email protected]> Signed-off-by: "Peter Zijlstra (Intel)" <[email protected]> Signed-off-by: Marco Elver <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]

# 8ab40fc2 | 04-Nov-2024 | Marco Elver <[email protected]>

time/sched_clock: Broaden sched_clock()'s instrumentation coverage
Most of sched_clock()'s implementation is ineligible for instrumentation due to relying on sched_clock_noinstr().
Split the implementation off into an __always_inline function __sched_clock(), which is then used by the noinstr and instrumentable version, to allow more of sched_clock() to be covered by various instrumentation.
This will allow instrumentation with the various sanitizers (KASAN, KCSAN, KMSAN, UBSAN). For KCSAN, we know that raw seqcount_latch usage without annotations will result in false positive reports: tell it that all of __sched_clock() is "atomic" for the latch reader; later changes in this series will take care of the writers.
Co-developed-by: "Peter Zijlstra (Intel)" <[email protected]> Signed-off-by: "Peter Zijlstra (Intel)" <[email protected]> Signed-off-by: Marco Elver <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]

# 1139c71d | 04-Nov-2024 | Marco Elver <[email protected]>

time/sched_clock: Swap update_clock_read_data() latch writes
Swap the writes to the odd and even copies to make the writer critical section look like all other seqcount_latch writers.
Signed-off-by: Marco Elver <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
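For reference, the canonical seqcount_latch writer that update_clock_read_data() is being aligned with looks roughly like this (a generic sketch based on the seqlock.h documentation, not the exact diff):

  raw_write_seqcount_latch(&cd.seq);      /* steer readers to the odd copy */
  cd.read_data[0] = *rd;                  /* update the even copy */
  raw_write_seqcount_latch(&cd.seq);      /* steer readers back to the even copy */
  cd.read_data[1] = *rd;                  /* update the odd copy */
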
Revision tags: v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3 |

# 5949a68c | 19-May-2023 | Peter Zijlstra <[email protected]>

time/sched_clock: Provide sched_clock_noinstr()
With the intent to provide local_clock_noinstr(), a variant of local_clock() that's safe to be called from noinstr code (with the assumption that any such code will already be non-preemptible), prepare for things by providing a noinstr sched_clock_noinstr() function.
Specifically, preempt_enable_*() calls out to schedule(), which upsets noinstr validation efforts.
As such, pull out the preempt_{dis,en}able_notrace() requirements from the sched_clock_read() implementations by explicitly providing it in the sched_clock() function.
This further requires said sched_clock_read() functions to be noinstr themselves, for ARCH_WANTS_NO_INSTR users. See the next few patches.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Michael Kelley <[email protected]> # Hyper-V Link: https://lore.kernel.org/r/[email protected]
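In outline, the preemptible wrapper then reduces to something like (sketch):

  unsigned long long notrace sched_clock(void)
  {
          u64 ns;

          preempt_disable_notrace();      /* kept out of the noinstr path */
          ns = sched_clock_noinstr();
          preempt_enable_notrace();

          return ns;
  }
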

# d16317de | 19-May-2023 | Peter Zijlstra <[email protected]>

seqlock/latch: Provide raw_read_seqcount_latch_retry()
The read side of seqcount_latch consists of:
  do {
          seq = raw_read_seqcount_latch(&latch->seq);
          ...
  } while (read_seqcount_latch_retry(&latch->seq, seq));
which is asymmetric in the raw_ department, and sure enough, read_seqcount_latch_retry() includes (explicit) instrumentation where raw_read_seqcount_latch() does not.
This inconsistency becomes a problem when trying to use it from noinstr code. As such, fix it by renaming and re-implementing raw_read_seqcount_latch_retry() without the instrumentation.
Specifically, the instrumentation in question is kcsan_atomic_next(0) in do___read_seqcount_retry(). Losing this annotation is not a problem because raw_read_seqcount_latch() does not pass through kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX).
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Reviewed-by: Petr Mladek <[email protected]> Tested-by: Michael Kelley <[email protected]> # Hyper-V Link: https://lore.kernel.org/r/[email protected]
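After the rename, the read side above becomes symmetric in its raw_ usage:

  do {
          seq = raw_read_seqcount_latch(&latch->seq);
          ...
  } while (raw_read_seqcount_latch_retry(&latch->seq, seq));
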
Revision tags: v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4 |

# f4b62e1e | 24-Apr-2022 | Maciej W. Rozycki <[email protected]>

time/sched_clock: Fix formatting of frequency reporting code
Use flat rather than nested indentation for chained else/if clauses as per coding-style.rst:
  if (x == y) {
          ..
  } else if (x > y) {
          ...
  } else {
          ....
  }
This also improves readability.
Signed-off-by: Maciej W. Rozycki <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: John Stultz <[email protected]> Link: https://lore.kernel.org/r/[email protected]

# cc1b923a | 24-Apr-2022 | Maciej W. Rozycki <[email protected]>

time/sched_clock: Use Hz as the unit for clock rate reporting below 4kHz
The kernel uses kHz as the unit for clock rates reported between 1MHz (inclusive) and 4MHz (exclusive), e.g.:
sched_clock: 64 bits at 1000kHz, resolution 1000ns, wraps every 2199023255500ns
This reduces the amount of data lost due to rounding, but hasn't been replicated for the kHz range when support was added for proper reporting of sub-kHz clock rates. Take the same approach for rates between 1kHz (inclusive) and 4kHz (exclusive), which makes it consistent.
Signed-off-by: Maciej W. Rozycki <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lore.kernel.org/r/[email protected]
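Together with the round-to-nearest change in the next entry, the unit selection presumably ends up along these lines (a sketch; thresholds per the commit message):

  if (r >= 4000000) {
          r = DIV_ROUND_CLOSEST(r, 1000000);
          r_unit = 'M';
  } else if (r >= 4000) {
          r = DIV_ROUND_CLOSEST(r, 1000);
          r_unit = 'k';
  } else {
          r_unit = ' ';
  }
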

# 92067440 | 24-Apr-2022 | Maciej W. Rozycki <[email protected]>

time/sched_clock: Round the frequency reported to nearest rather than down
The frequency reported for clock sources are rounded down, which gives misleading figures, e.g.:
  I/O ASIC clock frequency 24999480Hz
  sched_clock: 32 bits at 24MHz, resolution 40ns, wraps every 85901132779ns
  MIPS counter frequency 59998512Hz
  sched_clock: 32 bits at 59MHz, resolution 16ns, wraps every 35792281591ns
Rounding to nearest is more adequate:
  I/O ASIC clock frequency 24999664Hz
  sched_clock: 32 bits at 25MHz, resolution 40ns, wraps every 85900499947ns
  MIPS counter frequency 59999728Hz
  sched_clock: 32 bits at 60MHz, resolution 16ns, wraps every 35791556599ns
Signed-off-by: Maciej W. Rozycki <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: John Stultz <[email protected]> Link: https://lore.kernel.org/r/[email protected]
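The difference is plain (truncating) division versus a round-to-nearest helper such as DIV_ROUND_CLOSEST(); worked through for the I/O ASIC figure above:

  unsigned long rate = 24999664;                             /* Hz */
  unsigned long down = rate / 1000000;                       /* 24 MHz, rounded down */
  unsigned long near = DIV_ROUND_CLOSEST(rate, 1000000);     /* 25 MHz, rounded to nearest */
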
Revision tags: v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8 |

# 4cd2bb12 | 29-Sep-2020 | Quanyang Wang <[email protected]>

time/sched_clock: Mark sched_clock_read_begin/retry() as notrace
Since sched_clock_read_begin() and sched_clock_read_retry() are called by notrace function sched_clock(), they shouldn't be traceable either, or else ftrace_graph_caller will run into a dead loop on the path as below (arm for instance):
  ftrace_graph_caller()
    prepare_ftrace_return()
      function_graph_enter()
        ftrace_push_return_trace()
          trace_clock_local()
            sched_clock()
              sched_clock_read_begin/retry()
Fixes: 1b86abc1c645 ("sched_clock: Expose struct clock_read_data") Signed-off-by: Quanyang Wang <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Peter Zijlstra (Intel) <[email protected]> Cc: [email protected] Link: https://lore.kernel.org/r/[email protected]
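A sketch of the annotated helpers (body shapes assumed from the file at that time):

  notrace struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
  {
          *seq = raw_read_seqcount_latch(&cd.seq);
          return cd.read_data + (*seq & 1);
  }

  notrace int sched_clock_read_retry(unsigned int seq)
  {
          return read_seqcount_latch_retry(&cd.seq, seq);
  }
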
Revision tags: v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3 |

# a690ed07 | 27-Aug-2020 | Ahmed S. Darwish <[email protected]>

time/sched_clock: Use seqcount_latch_t
Latch sequence counters have unique read and write APIs, and thus seqcount_latch_t was recently introduced at seqlock.h.
Use that new data type instead of plain seqcount_t. This adds the necessary type-safety and ensures only latching-safe seqcount APIs are to be used.
Signed-off-by: Ahmed S. Darwish <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
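A sketch of the affected structure (surrounding fields as assumed from the file):

  struct clock_data {
          seqcount_latch_t        seq;            /* was: seqcount_t */
          struct clock_read_data  read_data[2];
          ktime_t                 wrap_kt;
          unsigned long           rate;
          u64 (*actual_read_sched_clock)(void);
  };
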

# 58faf20a | 27-Aug-2020 | Ahmed S. Darwish <[email protected]>

time/sched_clock: Use raw_read_seqcount_latch() during suspend
sched_clock uses seqcount_t latching to switch between two storage places protected by the sequence counter. This allows it to have interruptible, NMI-safe, seqcount_t write side critical sections.
Since 7fc26327b756 ("seqlock: Introduce raw_read_seqcount_latch()"), raw_read_seqcount_latch() became the standardized way for seqcount_t latch read paths. Due to the dependent load, it has one read memory barrier less than the currently used raw_read_seqcount() API.
Use raw_read_seqcount_latch() for the suspend path.
Commit aadd6e5caaac ("time/sched_clock: Use raw_read_seqcount_latch()") missed changing that instance of raw_read_seqcount().
References: 1809bfa44e10 ("timers, sched/clock: Avoid deadlock during read from NMI") Signed-off-by: Ahmed S. Darwish <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
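The suspend-path reader in question is presumably the stand-in read function, roughly:

  static u64 notrace suspended_sched_clock_read(void)
  {
          unsigned int seq = raw_read_seqcount_latch(&cd.seq);    /* was: raw_read_seqcount() */

          return cd.read_data[seq & 1].epoch_cyc;
  }
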
Revision tags: v5.9-rc2, v5.9-rc1 |

# b0294f30 | 07-Aug-2020 | Randy Dunlap <[email protected]>

time: Delete repeated words in comments
Drop repeated words in kernel/time/. {when, one, into}
Signed-off-by: Randy Dunlap <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: John Stultz <[email protected]> Link: https://lore.kernel.org/r/[email protected]
Revision tags: v5.8, v5.8-rc7, v5.8-rc6 |

# aadd6e5c | 16-Jul-2020 | Ahmed S. Darwish <[email protected]>

time/sched_clock: Use raw_read_seqcount_latch()
sched_clock uses seqcount_t latching to switch between two storage places protected by the sequence counter. This allows it to have interruptible, NMI-safe, seqcount_t write side critical sections.
Since 7fc26327b756 ("seqlock: Introduce raw_read_seqcount_latch()"), raw_read_seqcount_latch() became the standardized way for seqcount_t latch read paths. Due to the dependent load, it also has one read memory barrier less than the currently used raw_read_seqcount() API.
Use raw_read_seqcount_latch() for the seqcount_t latch read path.
Signed-off-by: Ahmed S. Darwish <[email protected]> Signed-off-by: Leo Yan <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/r/[email protected] References: 1809bfa44e10 ("timers, sched/clock: Avoid deadlock during read from NMI") Signed-off-by: Will Deacon <[email protected]>

# 1b86abc1 | 16-Jul-2020 | Peter Zijlstra <[email protected]>

sched_clock: Expose struct clock_read_data
In order to support perf_event_mmap_page::cap_time features, an architecture needs, aside from a userspace readable counter register, to expose the exact clock data so that userspace can convert the counter register into a correct timestamp.
Provide struct clock_read_data and two (seqcount) helpers so that architectures (arm64 specifically) can expose the numbers to userspace.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Leo Yan <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Will Deacon <[email protected]>
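The exposed structure and helpers, plus a usage sketch (field layout as in <linux/sched_clock.h>; the loop is illustrative):

  struct clock_read_data {
          u64 epoch_ns;
          u64 epoch_cyc;
          u64 sched_clock_mask;
          u64 (*read_sched_clock)(void);
          u32 mult;
          u32 shift;
  };

  struct clock_read_data *sched_clock_read_begin(unsigned int *seq);
  int sched_clock_read_retry(unsigned int seq);

  /* snapshot the conversion data consistently */
  do {
          rd = sched_clock_read_begin(&seq);
          /* ... copy what is needed from *rd ... */
  } while (sched_clock_read_retry(seq));
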
Revision tags: v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6 |

# 2c8bd588 | 09-Mar-2020 | Ahmed S. Darwish <[email protected]>

time/sched_clock: Expire timer in hardirq context
To minimize latency, PREEMPT_RT kernels expire hrtimers in preemptible softirq context by default. This can be overridden by marking the timer's expiry with HRTIMER_MODE_HARD.
sched_clock_timer is missing this annotation: if its callback is preempted and the duration of the preemption exceeds the wrap around time of the underlying clocksource, sched clock will get out of sync.
Mark the sched_clock_timer for expiry in hard interrupt context.
Signed-off-by: Ahmed S. Darwish <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
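A sketch of the annotation (pre-hrtimer_setup() era; exact call sites assumed):

  hrtimer_init(&sched_clock_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
  sched_clock_timer.function = sched_clock_poll;
  hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL_HARD);
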
Revision tags: v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1, v5.5, v5.5-rc7, v5.5-rc6 |

# 27077455 | 07-Jan-2020 | Paul Cercueil <[email protected]>

time/sched_clock: Disable interrupts in sched_clock_register()
Instead of issuing a warning if sched_clock_register() is called from a context where IRQs are enabled, the code now ensures that IRQs are indeed disabled.
Signed-off-by: Paul Cercueil <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Daniel Lezcano <[email protected]> Link: https://lore.kernel.org/r/[email protected]
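In outline (a sketch, body elided):

  void __init sched_clock_register(u64 (*read)(void), int bits, unsigned long rate)
  {
          unsigned long flags;

          /* previously: a warning if IRQs were enabled */
          local_irq_save(flags);
          /* ... compute mult/shift, update clock_read_data, arm the wrap timer ... */
          local_irq_restore(flags);
  }
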
Revision tags: v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5 |

# 086ee46b | 22-Oct-2019 | Ben Dooks (Codethink) <[email protected]>

timers/sched_clock: Include local timekeeping.h for missing declarations
Include the timekeeping.h header to get the declaration of the sched_clock_{suspend,resume} functions. Fixes the following sparse warnings:
  kernel/time/sched_clock.c:275:5: warning: symbol 'sched_clock_suspend' was not declared. Should it be static?
  kernel/time/sched_clock.c:286:6: warning: symbol 'sched_clock_resume' was not declared. Should it be static?
Signed-off-by: Ben Dooks (Codethink) <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
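The fix is the local include at the top of kernel/time/sched_clock.c:

  #include "timekeeping.h"
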
Revision tags: v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5, v5.2-rc4, v5.2-rc3, v5.2-rc2, v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3 |

# 3f2552f7 | 29-Mar-2019 | Chang-An Chen <[email protected]>

timers/sched_clock: Prevent generic sched_clock wrap caused by tick_freeze()
tick_freeze(), introduced by suspend-to-idle in commit 124cf9117c5f ("PM / sleep: Make it possible to quiesce timers during suspend-to-idle"), uses timekeeping_suspend() instead of syscore_suspend() during suspend-to-idle. As a consequence, generic sched_clock will keep going because sched_clock_suspend() and sched_clock_resume() are not invoked during suspend-to-idle, which can result in a generic sched_clock wrap.
On an ARM system with suspend-to-idle enabled, sched_clock is registered as "56 bits at 13MHz, resolution 76ns, wraps every 4398046511101ns", which means the real wrapping duration is 8796093022202ns.
  [ 134.551779] suspend-to-idle suspend (timekeeping_suspend())
  [ 1204.912239] suspend-to-idle resume (timekeeping_resume())
  ......
  [ 1206.912239] suspend-to-idle suspend (timekeeping_suspend())
  [ 5880.502807] suspend-to-idle resume (timekeeping_resume())
  ......
  [ 6000.403724] suspend-to-idle suspend (timekeeping_suspend())
  [ 8035.753167] suspend-to-idle resume (timekeeping_resume())
  ......
  [ 8795.786684] (2)[321:charger_thread]......
  [ 8795.788387] (2)[321:charger_thread]......
  [ 0.057226] (0)[0:swapper/0]......
  [ 0.061447] (2)[0:swapper/2]......
sched_clock was not stopped during suspend-to-idle, and sched_clock_poll hrtimer was not expired because timekeeping_suspend() was invoked during suspend-to-idle. It makes sched_clock wrap at kernel time 8796s.
To prevent this, invoke sched_clock_suspend() and sched_clock_resume() in tick_freeze() together with timekeeping_suspend() and timekeeping_resume().
Fixes: 124cf9117c5f (PM / sleep: Make it possible to quiesce timers during suspend-to-idle) Signed-off-by: Chang-An Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: Matthias Brugger <[email protected]> Cc: John Stultz <[email protected]> Cc: Kees Cook <[email protected]> Cc: Corey Minyard <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: Stanley Chu <[email protected]> Cc: <[email protected]> Cc: <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
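A sketch of where the call lands in tick_freeze() (simplified; exact placement assumed, and tick_unfreeze() gets the matching sched_clock_resume()):

  void tick_freeze(void)
  {
          raw_spin_lock(&tick_freeze_lock);

          tick_freeze_depth++;
          if (tick_freeze_depth == num_online_cpus()) {
                  system_state = SYSTEM_SUSPEND;
                  sched_clock_suspend();          /* added by this fix */
                  timekeeping_suspend();
          } else {
                  tick_suspend_local();
          }

          raw_spin_unlock(&tick_freeze_lock);
  }
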

# d75f773c | 25-Mar-2019 | Sakari Ailus <[email protected]>

treewide: Switch printk users from %pf and %pF to %ps and %pS, respectively
%pF and %pf are functionally equivalent to %pS and %ps conversion specifiers. The former are deprecated, therefore switch the current users to use the preferred variant.
The changes have been produced by the following command:
  git grep -l '%p[fF]' | grep -v '^\(tools\|Documentation\)/' | \
          while read i; do perl -i -pe 's/%pf/%ps/g; s/%pF/%pS/g;' $i; done
And verifying the result.
Link: http://lkml.kernel.org/r/[email protected] Cc: Andy Shevchenko <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Signed-off-by: Sakari Ailus <[email protected]> Acked-by: David Sterba <[email protected]> (for btrfs) Acked-by: Mike Rapoport <[email protected]> (for mm/memblock.c) Acked-by: Bjorn Helgaas <[email protected]> (for drivers/pci) Acked-by: Rafael J. Wysocki <[email protected]> Signed-off-by: Petr Mladek <[email protected]>
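For this file the conversion amounts to one format specifier, e.g. (exact message string assumed):

  pr_debug("Registered %pS as sched_clock source\n", read);      /* was: %pF */
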
Revision tags: v5.1-rc2 |

# e1e41b6c | 18-Mar-2019 | Rasmus Villemoes <[email protected]>

timekeeping: Consistently use unsigned int for seqcount snapshot
The timekeeping code uses a random mix of "unsigned long" and "unsigned int" for the seqcount snapshots (ratio 14:12). Since the seqlock.h API is entirely based on unsigned int, use that throughout.
Signed-off-by: Rasmus Villemoes <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: John Stultz <[email protected]> Cc: Stephen Boyd <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
Revision tags: v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1 |

# 2fa6d420 | 31-Oct-2018 | Thomas Gleixner <[email protected]>

sched/clock: Remove license boilerplate
The SPDX identifier defines the license of the file already. No need for the boilerplate.
Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Kees Cook <[email protected]> Acked-by: Ingo Molnar <[email protected]> Acked-by: John Stultz <[email protected]> Acked-by: Corey Minyard <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Kate Stewart <[email protected]> Cc: Philippe Ombredanne <[email protected]> Cc: Peter Anvin <[email protected]> Cc: Russell King <[email protected]> Cc: Richard Cochran <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Nicolas Pitre <[email protected]> Cc: David Riley <[email protected]> Cc: Colin Cross <[email protected]> Cc: Mark Brown <[email protected]> Link: https://lkml.kernel.org/r/[email protected]

# 35728b82 | 31-Oct-2018 | Thomas Gleixner <[email protected]>

time: Add SPDX license identifiers
Update the time(r) core files with the correct SPDX license identifier based on the license text in the file itself. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boilerplate text.
This work is based on a script and data from Philippe Ombredanne, Kate Stewart and myself. The data has been created with two independent license scanners and manual inspection.
The following files do not contain any direct license information and have been omitted from the big initial SPDX changes:
  timeconst.bc: The .bc files were not touched
  time.c, timer.c, timekeeping.c: Licence was deduced from EXPORT_SYMBOL_GPL
As those files do not contain direct license references they fall under the project license, i.e. GPL V2 only.
Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Kees Cook <[email protected]> Acked-by: Ingo Molnar <[email protected]> Acked-by: John Stultz <[email protected]> Acked-by: Corey Minyard <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Kate Stewart <[email protected]> Cc: Philippe Ombredanne <[email protected]> Cc: Russell King <[email protected]> Cc: Richard Cochran <[email protected]> Cc: Nicolas Pitre <[email protected]> Cc: David Riley <[email protected]> Cc: Colin Cross <[email protected]> Cc: Mark Brown <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Paul E. McKenney <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
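For kernel/time/sched_clock.c this becomes a one-line tag at the top of the file (identifier assumed to be plain GPL-2.0, matching the old license text):

  // SPDX-License-Identifier: GPL-2.0
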

# 58c5fc2b | 31-Oct-2018 | Thomas Gleixner <[email protected]>

time: Remove useless filenames in top level comments
Remove the pointless filenames in the top level comments. They have no value at all and just occupy space. While at it tidy up some of the comments and remove a stale one.
Signed-off-by: Thomas Gleixner <[email protected]> Acked-by: Nicolas Pitre <[email protected]> Acked-by: Kees Cook <[email protected]> Acked-by: Ingo Molnar <[email protected]> Acked-by: John Stultz <[email protected]> Acked-by: Corey Minyard <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Kate Stewart <[email protected]> Cc: Philippe Ombredanne <[email protected]> Cc: Peter Anvin <[email protected]> Cc: Russell King <[email protected]> Cc: Richard Cochran <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: David Riley <[email protected]> Cc: Colin Cross <[email protected]> Cc: Mark Brown <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
Revision tags: v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6 |

# 5d2a4e91 | 19-Jul-2018 | Pavel Tatashin <[email protected]>

sched/clock: Move sched clock initialization and merge with generic clock
sched_clock_postinit() initializes a generic clock on systems where no other clock is provided. This function may be called only after timekeeping_init().
Rename sched_clock_postinit() to generic_sched_clock_init() and call it from sched_clock_init(). Move the call to sched_clock_init() until after time_init().
Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Pavel Tatashin <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]