|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7 |
|
# 4b7d6542 | 11-Mar-2025 | Ulf Hansson <[email protected]>
PM: s2idle: Extend comment in s2idle_enter()
The s2idle_lock must be held while checking for a pending wakeup and while moving into S2IDLE_STATE_ENTER, to make sure a wakeup doesn't get lost. Let's extend the comment in the code to make this clear.
Signed-off-by: Ulf Hansson <[email protected]> Link: https://patch.msgid.link/[email protected] [ rjw: Rewrote the new comment ] Signed-off-by: Rafael J. Wysocki <[email protected]>
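A minimal sketch of the locking pattern the extended comment documents, assuming the s2idle_lock/s2idle_state symbols from kernel/power/suspend.c (abridged, not the verbatim upstream function):

    static void s2idle_enter(void)
    {
            /*
             * Both the pending-wakeup check and the transition to
             * S2IDLE_STATE_ENTER must happen under s2idle_lock, so that a
             * wakeup arriving in between cannot be lost.
             */
            raw_spin_lock_irq(&s2idle_lock);
            if (pm_wakeup_pending())
                    goto out;

            s2idle_state = S2IDLE_STATE_ENTER;
            raw_spin_unlock_irq(&s2idle_lock);

            /* ... push all CPUs into idle and wait for a wakeup ... */

            raw_spin_lock_irq(&s2idle_lock);
    out:
            s2idle_state = S2IDLE_STATE_NONE;
            raw_spin_unlock_irq(&s2idle_lock);
    }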
|
# 0f42194c | 11-Mar-2025 | Ulf Hansson <[email protected]>
PM: s2idle: Drop redundant locks when entering s2idle
The calls to cpus_read_lock|unlock() protect us from CPUs being hotplugged while entering suspend-to-idle. However, when s2idle_enter() is called we should be far beyond the point at which CPUs may be hotplugged. Let's therefore simplify the code and drop the use of the lock.
Signed-off-by: Ulf Hansson <[email protected]> Link: https://patch.msgid.link/[email protected] [ rjw: Rewrote the new comment ] Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
Revision tags: v6.14-rc6, v6.14-rc5 |
|
# 63830aef | 26-Feb-2025 | Marcos Paulo de Souza <[email protected]>
printk: Rename resume_console to console_resume_all
The function resume_console has a misleading name, since it resumes all consoles, so rename it accordingly.
Signed-off-by: Marcos Paulo de Souza <[email protected]> Reviewed-by: Petr Mladek <[email protected]> Reviewed-by: John Ogness <[email protected]> Link: https://lore.kernel.org/r/[email protected] [[email protected]: Fixed typo in the commit message.] Signed-off-by: Petr Mladek <[email protected]>
|
# e9cec448 | 26-Feb-2025 | Marcos Paulo de Souza <[email protected]>
printk: Rename suspend_console to console_suspend_all
The function suspend_console has a misleading name, since it suspends all consoles, so rename it accordingly.
Signed-off-by: Marcos Paulo de Souza <[email protected]> Reviewed-by: Petr Mladek <[email protected]> Reviewed-by: John Ogness <[email protected]> Link: https://lore.kernel.org/r/[email protected] [[email protected]: Fixed typo in the commit message.] Signed-off-by: Petr Mladek <[email protected]>
|
|
Revision tags: v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4 |
|
# 3c89a068 | 08-Apr-2024 | Anna-Maria Behnsen <[email protected]>
PM: s2idle: Make sure CPUs will wakeup directly on resume
s2idle works like a regular suspend with freezing processes and freezing devices. All CPUs except the control CPU go into idle. Once this is completed the control CPU kicks all other CPUs out of idle, so that they reenter the idle loop and then enter s2idle state. The control CPU then issues an swait() on the suspend state and therefore enters the idle loop as well.
Due to being kicked out of idle, the other CPUs leave their NOHZ states, which means the tick is active and the corresponding hrtimer is programmed to the next jiffie.
On entering s2idle the CPUs shut down their local clockevent device to prevent wakeups. The last CPU which enters s2idle shuts down its local clockevent and freezes timekeeping.
On resume, one of the CPUs receives the wakeup interrupt, unfreezes timekeeping and its local clockevent and starts the resume process. At that point all other CPUs are still in s2idle with their clockevents switched off. They only resume when they are kicked by another CPU or after resuming devices and then receiving a device interrupt.
That means there is no guarantee that all CPUs will wakeup directly on resume. As a consequence there is no guarantee that timers which are queued on those CPUs and should expire directly after resume, are handled. Also timer list timers which are remotely queued to one of those CPUs after resume will not result in a reprogramming IPI as the tick is active. Queueing a hrtimer will also not result in a reprogramming IPI because the first hrtimer event is already in the past.
The recent introduction of the timer pull model (7ee988770326 ("timers: Implement the hierarchical pull model")) amplifies this problem, if the current migrator is one of the non woken up CPUs. When a non pinned timer list timer is queued and the queuing CPU goes idle, it relies on the still suspended migrator CPU to expire the timer which will happen by chance.
The problem exists since commit 8d89835b0467 ("PM: suspend: Do not pause cpuidle in the suspend-to-idle path"). There the cpuidle_pause() call which in turn invoked a wakeup for all idle CPUs was moved to a later point in the resume process. This might not be reached or reached very late because it waits on a timer of a still suspended CPU.
Address this by kicking all CPUs out of idle after the control CPU returns from swait() so that they resume their timers and restore consistent system state.
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218641 Fixes: 8d89835b0467 ("PM: suspend: Do not pause cpuidle in the suspend-to-idle path") Signed-off-by: Anna-Maria Behnsen <[email protected]> Reviewed-by: Thomas Gleixner <[email protected]> Tested-by: Mario Limonciello <[email protected]> Cc: 5.16+ <[email protected]> # 5.16+ Acked-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Ulf Hansson <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
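A sketch of where the fix lands in s2idle_enter() (kernel/power/suspend.c), assuming the existing wake_up_all_idle_cpus()/swait_event_exclusive() calls; the second kick is the new part:

    /* Push all the CPUs into the idle loop. */
    wake_up_all_idle_cpus();
    /* Make the current CPU wait so it can enter the idle loop too. */
    swait_event_exclusive(s2idle_wait_head,
                          s2idle_state == S2IDLE_STATE_WAKE);

    /*
     * Kick all online CPUs once more on resume so that they leave s2idle,
     * resume their timers and restore consistent system state.
     */
    wake_up_all_idle_cpus();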
|
|
Revision tags: v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7 |
|
# 9bc4ffd3 | 29-Feb-2024 | Maulik Shah <[email protected]>
PM: suspend: Set mem_sleep_current during kernel command line setup
psci_init_system_suspend() invokes suspend_set_ops() very early during bootup even before kernel command line for mem_sleep_default is setup. This leads to kernel command line mem_sleep_default=s2idle not working as mem_sleep_current gets changed to deep via suspend_set_ops() and never changes back to s2idle.
Set mem_sleep_current along with mem_sleep_default during kernel command line setup as default suspend mode.
Fixes: faf7ec4a92c0 ("drivers: firmware: psci: add system suspend support") CC: [email protected] # 5.4+ Signed-off-by: Maulik Shah <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
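A sketch of the change in the mem_sleep_default= early-parameter handler (kernel/power/main.c, abridged); setting mem_sleep_current here is the new part:

    static int __init mem_sleep_default_setup(char *str)
    {
            suspend_state_t state;

            for (state = PM_SUSPEND_TO_IDLE; state <= PM_SUSPEND_MEM; state++)
                    if (mem_sleep_labels[state] &&
                        !strcmp(str, mem_sleep_labels[state])) {
                            mem_sleep_default = state;
                            /*
                             * New: make it the current mode as well, so that an
                             * early suspend_set_ops() cannot override it.
                             */
                            mem_sleep_current = state;
                            break;
                    }

            return 1;
    }
    __setup("mem_sleep_default=", mem_sleep_default_setup);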
|
|
Revision tags: v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3 |
|
# 9ff544fa | 29-Jan-2024 | Rafael J. Wysocki <[email protected]>
PM: sleep: stats: Define suspend_stats next to the code using it
It is not necessary to define struct suspend_stats in a header file and the suspend_stats variable in the core device system-wide PM code. They both can be defined in kernel/power/main.c, next to the sysfs and debugfs code accessing suspend_stats, which can be static.
Modify the code in question in accordance with the above observation and replace the static inline functions manipulating suspend_stats with regular ones defined in kernel/power/main.c.
While at it, move the enum suspend_stat_step to the end of suspend.h which is a more suitable place for it.
No intentional functional impact.
Signed-off-by: Rafael J. Wysocki <[email protected]> Reviewed-by: Ulf Hansson <[email protected]>
|
# b730bab0 | 29-Jan-2024 | Rafael J. Wysocki <[email protected]>
PM: sleep: stats: Use an array of step failure counters
Instead of using a set of individual struct suspend_stats fields representing suspend step failure counters, use an array of counters indexed by enum suspend_stat_step for this purpose, which allows dpm_save_failed_step() to increment the appropriate counter automatically, so that its callers don't need to do that directly.
It also allows suspend_stats_show() to carry out a loop over the counters array to print their values.
Because the counters cannot become negative, use unsigned int for representing them.
The only user-observable impact of this change is a different ordering of entries in the suspend_stats debugfs file which is not expected to matter.
Signed-off-by: Rafael J. Wysocki <[email protected]> Reviewed-by: Stanislaw Gruszka <[email protected]> Reviewed-by: Ulf Hansson <[email protected]>
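A sketch of the reworked bookkeeping, with names modeled on kernel/power/main.c (abridged; the last-failed-step records are elided):

    struct suspend_stats {
            unsigned int step_failures[SUSPEND_NR_STEPS];
            unsigned int success;
            unsigned int fail;
            /* ... last failed step / error bookkeeping ... */
    };
    static struct suspend_stats suspend_stats;

    void dpm_save_failed_step(enum suspend_stat_step step)
    {
            /* Steps are numbered from 1, so the counter lives at step - 1. */
            suspend_stats.step_failures[step - 1]++;
    }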
|
|
Revision tags: v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4 |
|
# 811d59fd | 29-Aug-2022 | Mario Limonciello <[email protected]>
ACPI: s2idle: Add a new ->check() callback for platform_s2idle_ops
On some platforms it is found that Linux more aggressively enters s2idle than Windows enters Modern Standby and this uncovers some synchronization issues for the platform. To aid in debugging this class of problems in the future, add support for an extra optional callback intended for drivers to emit extra debugging.
Signed-off-by: Mario Limonciello <[email protected]> Acked-by: Rafael J. Wysocki <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Hans de Goede <[email protected]>
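A sketch of the new optional member in struct platform_s2idle_ops (include/linux/suspend.h); the my_platform_check() implementation below is purely hypothetical and only illustrates the intended use:

    struct platform_s2idle_ops {
            int (*begin)(void);
            int (*prepare)(void);
            int (*prepare_late)(void);
            void (*check)(void);            /* new: optional, for extra debugging */
            bool (*wake)(void);
            void (*restore_early)(void);
            void (*restore)(void);
            void (*end)(void);
    };

    /* Hypothetical driver hook dumping platform state right before s2idle entry. */
    static void my_platform_check(void)
    {
            pr_debug("s2idle: logging firmware sleep-readiness state\n");
    }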
|
|
Revision tags: v6.0-rc3 |
|
# 5950e5d5 | 22-Aug-2022 | Peter Zijlstra <[email protected]>
freezer: Have {,un}lock_system_sleep() save/restore flags
Rafael explained that the reason for having both PF_NOFREEZE and PF_FREEZER_SKIP is that {,un}lock_system_sleep() is callable from kthread context that has previously called set_freezable().
In preparation for merging the flags, have {,un}lock_system_sleep() save and restore current->flags.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Rafael J. Wysocki <[email protected]> Link: https://lore.kernel.org/r/[email protected]
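A sketch of the resulting calling convention; the PF_FREEZER_SKIP flag shown here is the one in use at the time of this commit (it was merged away later in the series), so treat the details as illustrative:

    unsigned int lock_system_sleep(void)
    {
            unsigned int flags = current->flags;

            current->flags |= PF_FREEZER_SKIP;
            mutex_lock(&system_transition_mutex);
            return flags;
    }

    void unlock_system_sleep(unsigned int flags)
    {
            /* Only clear the bit if the caller did not already have it set. */
            if (!(flags & PF_FREEZER_SKIP))
                    current->flags &= ~PF_FREEZER_SKIP;
            mutex_unlock(&system_transition_mutex);
    }

Callers save the returned flags and pass them back, i.e. flags = lock_system_sleep(); ... unlock_system_sleep(flags);.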
|
|
Revision tags: v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4 |
|
# 9ea4dcf4 | 22-Apr-2022 | Dan Williams <[email protected]>
PM: CXL: Disable suspend
The CXL specification claims S3 support at a hardware level, but at a system software level there are some missing pieces. Section 9.4 (CXL 2.0) rightly claims that "CXL mem adapters may need aux power to retain memory context across S3", but there is no enumeration mechanism for the OS to determine if a given adapter has that support. Moreover, the save state and resume image for the system may inadvertently end up in a CXL device that needs to be restored before the save state is recoverable. I.e. a circular dependency that is not resolvable without a third party save-area.
Arrange for the cxl_mem driver to fail S3 attempts. This still nominally allows for suspend, but requires unbinding all CXL memory devices before the suspend to ensure the typical DRAM flow is taken. The cxl_mem unbind flow is intended to also tear down all CXL memory regions associated with a given cxl_memdev.
It is reasonable to assume that any device participating in a System RAM range published in the EFI memory map is covered by aux power and save-area outside the device itself. So this restriction can be minimized in the future once pre-existing region enumeration support arrives, and perhaps a spec update to clarify if the EFI memory map is sufficient for determining the range of devices managed by platform-firmware for S3 support.
Per Rafael, if the CXL configuration prevents suspend then it should fail early before tasks are frozen, and mem_sleep should stop showing 'mem' as an option [1]. Effectively CXL augments the platform suspend ->valid() op since, for example, the ACPI ops are not aware of the CXL / PCI dependencies. Given the split role of platform firmware vs OS provisioned CXL memory it is up to the cxl_mem driver to determine if the CXL configuration has elements that platform firmware may not be prepared to restore.
Link: https://lore.kernel.org/r/CAJZ5v0hGVN_=3iU8OLpHY3Ak35T5+JcBM-qs8SbojKrpd0VXsA@mail.gmail.com [1] Cc: "Rafael J. Wysocki" <[email protected]> Cc: Pavel Machek <[email protected]> Cc: Len Brown <[email protected]> Reviewed-by: Rafael J. Wysocki <[email protected]> Link: https://lore.kernel.org/r/165066828317.3907920.5690432272182042556.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <[email protected]>
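On the PM-core side this boils down to one extra condition when deciding whether a non-s2idle sleep state is usable; a sketch assuming the cxl_mem_active() helper added by this series:

    /*
     * kernel/power/suspend.c (sketch): reject "mem"/"standby" while any CXL
     * memory device is bound; suspend-to-idle remains available.
     */
    static bool sleep_state_supported(suspend_state_t state)
    {
            return state == PM_SUSPEND_TO_IDLE ||
                   (valid_state(state) && !cxl_mem_active());
    }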
|
|
Revision tags: v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3 |
|
# cb1f65c1 | 04-Feb-2022 | Rafael J. Wysocki <[email protected]>
PM: s2idle: ACPI: Fix wakeup interrupts handling
After commit e3728b50cd9b ("ACPI: PM: s2idle: Avoid possible race related to the EC GPE") wakeup interrupts occurring immediately after the one discarded by acpi_s2idle_wake() may be missed. Moreover, if the SCI triggers again immediately after the rearming in acpi_s2idle_wake(), that wakeup may be missed too.
The problem is that pm_system_irq_wakeup() only calls pm_system_wakeup() when pm_wakeup_irq is 0, but that's not the case any more after the interrupt causing acpi_s2idle_wake() to run until pm_wakeup_irq is cleared by the pm_wakeup_clear() call in s2idle_loop(). However, there may be wakeup interrupts occurring in that time frame and if that happens, they will be missed.
To address that issue, first move the clearing of pm_wakeup_irq to the point at which it is known that the interrupt causing acpi_s2idle_wake() to run will be discarded, before rearming the SCI for wakeup. Moreover, because that only reduces the size of the time window in which the issue may manifest itself, allow pm_system_irq_wakeup() to register two wakeup interrupts in a row and, when discarding the first one, replace it with the second one. [Of course, this assumes that only one wakeup interrupt can be discarded in one go, but currently that is the case and I am not aware of any plans to change that.]
Fixes: e3728b50cd9b ("ACPI: PM: s2idle: Avoid possible race related to the EC GPE") Cc: 5.4+ <[email protected]> # 5.4+ Signed-off-by: Rafael J. Wysocki <[email protected]>
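A sketch of the two-slot bookkeeping this introduces in pm_system_irq_wakeup() (drivers/base/power/wakeup.c, abridged):

    static unsigned int wakeup_irq[2];

    void pm_system_irq_wakeup(unsigned int irq_number)
    {
            unsigned long flags;

            raw_spin_lock_irqsave(&wakeup_irq_lock, flags);

            if (wakeup_irq[0] == 0)
                    wakeup_irq[0] = irq_number;     /* first wakeup interrupt */
            else if (wakeup_irq[1] == 0)
                    wakeup_irq[1] = irq_number;     /* one spare slot */
            else
                    irq_number = 0;                 /* already have two, drop it */

            raw_spin_unlock_irqrestore(&wakeup_irq_lock, flags);

            if (irq_number)
                    pm_system_wakeup();
    }

When the first interrupt turns out to be a discarded (spurious) wakeup, the clearing path promotes wakeup_irq[1] into wakeup_irq[0] instead of dropping both.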
|
|
Revision tags: v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7 |
|
# 9f6abfcd | 22-Oct-2021 | Rafael J. Wysocki <[email protected]>
PM: suspend: Use valid_state() consistently
Make valid_state() check if the ->enter callback is present in suspend_ops (only PM_SUSPEND_TO_IDLE can be valid otherwise) and make sleep_state_supported() call valid_state() consistently to validate the states other than PM_SUSPEND_TO_IDLE.
While at it, clean up the comment in valid_state().
No expected functional impact.
Signed-off-by: Rafael J. Wysocki <[email protected]>
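A sketch of the resulting checks in kernel/power/suspend.c as of this commit (later changes add further conditions):

    static bool valid_state(suspend_state_t state)
    {
            /*
             * PM_SUSPEND_STANDBY and PM_SUSPEND_MEM require low-level support,
             * so they are only valid if suspend_ops provides both ->valid()
             * and ->enter() and ->valid() accepts the state.
             */
            return suspend_ops && suspend_ops->valid && suspend_ops->valid(state) &&
                    suspend_ops->enter;
    }

    static bool sleep_state_supported(suspend_state_t state)
    {
            return state == PM_SUSPEND_TO_IDLE || valid_state(state);
    }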
|
# 23f62d7a | 22-Oct-2021 | Rafael J. Wysocki <[email protected]>
PM: sleep: Pause cpuidle later and resume it earlier during system transitions
Commit 8651f97bd951 ("PM / cpuidle: System resume hang fix with cpuidle") that introduced cpuidle pausing during system suspend did that to work around a platform firmware issue causing systems to hang during resume if CPUs were allowed to enter idle states in the system suspend and resume code paths.
However, pausing cpuidle before the last phase of suspending devices is the source of an otherwise arbitrary difference between the suspend-to-idle path and other system suspend variants, so it is cleaner to do that later, before taking secondary CPUs offline (it is still safer to take secondary CPUs offline with cpuidle paused, though).
Modify the code accordingly, but in order to avoid code duplication, introduce new wrapper functions, pm_sleep_disable_secondary_cpus() and pm_sleep_enable_secondary_cpus(), to combine cpuidle_pause() and cpuidle_resume(), respectively, with the handling of secondary CPUs during system-wide transitions to sleep states.
Signed-off-by: Rafael J. Wysocki <[email protected]> Reviewed-by: Ulf Hansson <[email protected]> Tested-by: Ulf Hansson <[email protected]>
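A sketch of the new wrappers (kernel/power/power.h), which pair cpuidle pausing with taking secondary CPUs offline and the reverse on the way out:

    static inline int pm_sleep_disable_secondary_cpus(void)
    {
            cpuidle_pause();
            return suspend_disable_secondary_cpus();
    }

    static inline void pm_sleep_enable_secondary_cpus(void)
    {
            suspend_enable_secondary_cpus();
            cpuidle_resume();
    }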
|
# 8d89835b | 22-Oct-2021 | Rafael J. Wysocki <[email protected]>
PM: suspend: Do not pause cpuidle in the suspend-to-idle path
It is pointless to pause cpuidle in the suspend-to-idle path, because it is going to be resumed in the same path later and pausing it does not serve any particular purpose in that case.
Rework the code to avoid doing that.
Signed-off-by: Rafael J. Wysocki <[email protected]> Reviewed-by: Ulf Hansson <[email protected]> Tested-by: Ulf Hansson <[email protected]>
|
# c1bfc598 | 19-Oct-2021 | Rafael J. Wysocki <[email protected]>
Revert "PM: sleep: Do not assume that "mem" is always present"
Revert commit bfcc1e67ff1e ("PM: sleep: Do not assume that "mem" is always present"), because it breaks compatibility with user space utilities assuming that "mem" will always be present in /sys/power/state.
Fixes: bfcc1e67ff1e ("PM: sleep: Do not assume that "mem" is always present") Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
Revision tags: v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2 |
|
# bfcc1e67 | 15-Sep-2021 | Florian Fainelli <[email protected]>
PM: sleep: Do not assume that "mem" is always present
An implementation of suspend_ops is allowed to reject the PM_SUSPEND_MEM suspend type from its ->valid() callback, so we should not assume that it is always present, as this is not a correct reflection of what a firmware interface may support.
Fixes: 406e79385f32 ("PM / sleep: System sleep state selection interface rework") Signed-off-by: Florian Fainelli <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
Revision tags: v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5 |
|
# d2c8cce6 | 03-Aug-2021 | Sebastian Andrzej Siewior <[email protected]>
PM: sleep: s2idle: Replace deprecated CPU-hotplug functions
The functions get_online_cpus() and put_online_cpus() have been deprecated during the CPU hotplug rework. They map directly to cpus_read_lock() and cpus_read_unlock().
Replace deprecated CPU-hotplug functions with the official version. The behavior remains unchanged.
Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
Revision tags: v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4 |
|
# ab150c3f | 13-Nov-2020 | Alex Shi <[email protected]>
PM / suspend: fix kernel-doc markup
Add parameter explanation to fix kernel-doc marks:
kernel/power/suspend.c:233: warning: Function parameter or member 'state' not described in 'suspend_valid_only_mem'
kernel/power/suspend.c:344: warning: Function parameter or member 'state' not described in 'suspend_prepare'
Signed-off-by: Alex Shi <[email protected]> [ rjw: Change the proposed parameter descriptions. ] Signed-off-by: Rafael J. Wysocki <[email protected]>
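For example, the kernel-doc block for suspend_valid_only_mem() gains a @state line along these lines (the exact wording follows the maintainer's rewrite, so treat it as illustrative):

    /**
     * suspend_valid_only_mem - Generic memory-only valid callback.
     * @state: Target system sleep state.
     *
     * Platform drivers that implement mem suspend only and only need to check
     * for that in their .valid() callback can use this instead of rolling
     * their own .valid() callback.
     */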
|
|
Revision tags: v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2 |
|
# 70d93298 | 18-Aug-2020 | Peter Zijlstra <[email protected]>
notifier: Fix broken error handling pattern
The current notifiers have the following error handling pattern all over the place:
    int err, nr;

    err = __foo_notifier_call_chain(&chain, val_up, v, -1, &nr);
    if (err & NOTIFIER_STOP_MASK)
            __foo_notifier_call_chain(&chain, val_down, v, nr-1, NULL)
And aside from the endless repetition thereof, it is broken. Consider blocking notifiers; both calls take and drop the rwsem, this means that the notifier list can change in between the two calls, making @nr meaningless.
Fix this by replacing all the __foo_notifier_call_chain() functions with foo_notifier_call_chain_robust() that embeds the above pattern, but ensures it is inside a single lock region.
Note: I switched atomic_notifier_call_chain_robust() to use the spinlock, since RCU cannot provide the guarantee required for the recovery.
Note: software_resume() error handling was broken afaict.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Acked-by: Rafael J. Wysocki <[email protected]> Link: https://lore.kernel.org/r/[email protected]
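With the robust helpers, the open-coded pattern above collapses into a single call that performs the unwind inside one lock region; a sketch for the blocking variant (the foo chain and values are placeholders carried over from the example above):

    int ret;

    /*
     * Runs val_up across the chain; if a callback returns NOTIFY_BAD or
     * NOTIFY_STOP, the already-notified entries are replayed with val_down
     * without dropping the lock in between, so the list cannot change
     * mid-unwind.
     */
    ret = blocking_notifier_call_chain_robust(&chain, val_up, val_down, v);

    return notifier_to_errno(ret);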
|
|
Revision tags: v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2 |
|
# e3728b50 | 11-Feb-2020 | Rafael J. Wysocki <[email protected]>
ACPI: PM: s2idle: Avoid possible race related to the EC GPE
It is theoretically possible for the ACPI EC GPE to be set after the s2idle_ops->wake() called from s2idle_loop() has returned and before the subsequent pm_wakeup_pending() check is carried out. If that happens, the resulting wakeup event will cause the system to resume even though it may be a spurious one.
To avoid that race, first make the ->wake() callback in struct platform_s2idle_ops return a bool value indicating whether or not to let the system resume and rearrange s2idle_loop() to use that value instead of the direct pm_wakeup_pending() call if ->wake() is present.
Next, rework acpi_s2idle_wake() to process EC events and check pm_wakeup_pending() before re-arming the SCI for system wakeup to prevent it from triggering prematurely and add comments to that function to explain the rationale for the new code flow.
Fixes: 56b991849009 ("PM: sleep: Simplify suspend-to-idle control flow") Cc: 5.4+ <[email protected]> # 5.4+ Signed-off-by: Rafael J. Wysocki <[email protected]>
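A sketch of the reworked wakeup check in s2idle_loop() (kernel/power/suspend.c, abridged):

    for (;;) {
            /*
             * If the platform provides ->wake(), let it decide whether the
             * pending event is a genuine wakeup; otherwise fall back to the
             * generic check.
             */
            if (s2idle_ops && s2idle_ops->wake) {
                    if (s2idle_ops->wake())
                            break;
            } else if (pm_wakeup_pending()) {
                    break;
            }

            s2idle_enter();
    }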
|
|
Revision tags: v5.6-rc1, v5.5, v5.5-rc7 |
|
# c052bf82 | 16-Jan-2020 | Jonas Meurer <[email protected]>
PM: suspend: Add sysfs attribute to control the "sync on suspend" behavior
The sysfs attribute `/sys/power/sync_on_suspend` controls, whether or not filesystems are synced by the kernel before system suspend.
Congruously, the behaviour of the build-time switch CONFIG_SUSPEND_SKIP_SYNC is slightly changed: it now defines the run-time default for the new sysfs attribute `/sys/power/sync_on_suspend`.
The run-time attribute is added because the existing corresponding build-time Kconfig flag for (`CONFIG_SUSPEND_SKIP_SYNC`) is not flexible enough. E.g. Linux distributions that provide pre-compiled kernels usually want to stick with the default (sync filesystems before suspend) but under special conditions this needs to be changed.
One example for such a special condition is user-space handling of suspending block devices (e.g. using `cryptsetup luksSuspend` or `dmsetup suspend`) before system suspend. The kernel trying to sync filesystems after the underlying block device has already been suspended obviously leads to deadlocks. Be aware that you have to take care of the filesystem sync yourself before suspending the system in those scenarios.
Signed-off-by: Jonas Meurer <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
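A sketch of the new knob and the point where it is consulted (kernel/power/main.c and suspend.c, abridged; the store-side parsing is elided):

    /* Run-time default comes from the existing Kconfig switch. */
    bool sync_on_suspend_enabled = !IS_ENABLED(CONFIG_SUSPEND_SKIP_SYNC);

    static ssize_t sync_on_suspend_show(struct kobject *kobj,
                                        struct kobj_attribute *attr, char *buf)
    {
            return sprintf(buf, "%d\n", sync_on_suspend_enabled);
    }
    /* ..._store() parses "0"/"1" into sync_on_suspend_enabled ... */
    power_attr(sync_on_suspend);

    /* kernel/power/suspend.c, enter_state(): the pre-suspend sync is now conditional. */
    if (sync_on_suspend_enabled)
            ksys_sync_helper();

Writing 0 to /sys/power/sync_on_suspend then skips the sync, matching the block-device use case described above.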
|
|
Revision tags: v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4 |
|
# 11f26633 | 10-Aug-2019 | Rafael J. Wysocki <[email protected]>
PM: suspend: Fix platform_suspend_prepare_noirq()
After commit ac9eafbe930a ("ACPI: PM: s2idle: Execute LPS0 _DSM functions with suspended devices"), a NULL pointer may be dereferenced if suspend-to-idle is attempted on a platform without "traditional" suspend support due to invalid fall-through in platform_suspend_prepare_noirq().
Fix that and while at it add missing braces in platform_resume_noirq().
Fixes: ac9eafbe930a ("ACPI: PM: s2idle: Execute LPS0 _DSM functions with suspended devices") Reported-by: Marek Szyprowski <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
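A sketch of the fixed function: the suspend-to-idle branch now returns on its own instead of falling through to suspend_ops, which may be NULL on s2idle-only platforms:

    static int platform_suspend_prepare_noirq(suspend_state_t state)
    {
            if (state == PM_SUSPEND_TO_IDLE)
                    return s2idle_ops && s2idle_ops->prepare_late ?
                            s2idle_ops->prepare_late() : 0;

            return suspend_ops->prepare_late ? suspend_ops->prepare_late() : 0;
    }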
|
|
Revision tags: v5.3-rc3 |
|
# ac9eafbe | 01-Aug-2019 | Rafael J. Wysocki <[email protected]>
ACPI: PM: s2idle: Execute LPS0 _DSM functions with suspended devices
According to Section 3.5 of the "Intel Low Power S0 Idle" document [1], Function 5 of the LPS0 _DSM is expected to be invoked when the system configuration matches the criteria for entering the target low-power state of the platform. In particular, this means that all devices should be suspended and in low-power states already when that function is invoked.
This is not the case currently, however, because Function 5 of the LPS0 _DSM is invoked by it before the "noirq" phase of device suspend, which means that some devices may not have been put into low-power states yet at that point. That is a consequence of the previous design of the suspend-to-idle flow that allowed the "noirq" phase of device suspend and the "noirq" phase of device resume to be carried out for multiple times while "suspended" (if any spurious wakeup events were detected) and the point of the LPS0 _DSM Function 5 invocation was chosen so as to call it (and LPS0 _DSM Function 6 analogously) once per suspend-resume cycle (regardless of how many times the "noirq" phases of device suspend and resume were carried out while "suspended").
Now that the suspend-to-idle flow has been redesigned to carry out the "noirq" phases of device suspend and resume once in each cycle, the code can be reordered to follow the specification that it is based on more closely.
For this purpose, add ->prepare_late and ->restore_early platform callbacks for suspend-to-idle, to be executed, respectively, after the "noirq" phase of suspending devices and before the "noirq" phase of resuming them and make ACPI use them for the invocation of LPS0 _DSM functions as appropriate.
While at it, move the LPS0 entry requirements check to be made before invoking Functions 3 and 5 of the LPS0 _DSM (also once per cycle) as follows from the specification [1].
Link: https://uefi.org/sites/default/files/resources/Intel_ACPI_Low_Power_S0_Idle.pdf # [1] Signed-off-by: Rafael J. Wysocki <[email protected]> Tested-by: Kai-Heng Feng <[email protected]>
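A sketch of how the ACPI side hooks into the two new callbacks; member and function names follow drivers/acpi/sleep.c of that era, but the comments summarize rather than quote the implementation:

    static const struct platform_s2idle_ops acpi_s2idle_ops = {
            .begin = acpi_s2idle_begin,
            .prepare = acpi_s2idle_prepare,
            /*
             * New: runs after the "noirq" phase of device suspend, i.e. with
             * all devices in low-power states; invokes the LPS0 _DSM entry
             * functions.
             */
            .prepare_late = acpi_s2idle_prepare_late,
            .wake = acpi_s2idle_wake,
            /*
             * New: runs before the "noirq" phase of device resume; invokes the
             * LPS0 _DSM exit functions.
             */
            .restore_early = acpi_s2idle_restore_early,
            .restore = acpi_s2idle_restore,
            .end = acpi_s2idle_end,
    };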
|
|
Revision tags: v5.3-rc2, v5.3-rc1 |
|
# 8eb0fd3b | 15-Jul-2019 | Rafael J. Wysocki <[email protected]>
PM: sleep: Integrate suspend-to-idle with generic suspend flow
After previous changes the suspend-to-idle code flow can be integrated more tightly with the generic system suspend code flow by making suspend_enter() call s2idle_loop() later and removing the direct invocations of dpm_noirq_begin(), dpm_noirq_suspend_devices(), dpm_noirq_end(), and dpm_noirq_resume_devices() from the latter, so do that.
This change is not expected to alter functionality.
Signed-off-by: Rafael J. Wysocki <[email protected]> Acked-by: Thomas Gleixner <[email protected]>
|