Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1

8fa7292f | 05-Apr-2025 | Thomas Gleixner <[email protected]>
treewide: Switch/rename to timer_delete[_sync]()
timer_delete[_sync]() replaces del_timer[_sync](). Convert the whole tree over and remove the historical wrapper inlines.
Conversion was done with coccinelle plus manual fixups where necessary.
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
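For reference, the conversion is a mechanical rename; a minimal sketch of the change pattern (the device struct and its timer field are hypothetical):

```c
/* Before: the historical wrapper names. */
del_timer(&dev->poll_timer);
del_timer_sync(&dev->poll_timer);

/* After: the canonical timer API names. */
timer_delete(&dev->poll_timer);
timer_delete_sync(&dev->poll_timer);
```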

Revision tags: v6.14, v6.14-rc7, v6.14-rc6

6cc4c3aa | 04-Mar-2025 | Tang Yizhou <[email protected]>
writeback: fix calculations in trace_balance_dirty_pages() for cgwb
In commit dcc25ae76eb7 ("writeback: move global_dirty_limit into wb_domain") of the cgroup writeback backpressure propagation patchset, Tejun made some adaptations to trace_balance_dirty_pages() for cgroup writeback. However, this adaptation was incomplete, and the further adaptations needed in the subsequent patches were missed.
In the cgroup writeback scenario, if sdtc in balance_dirty_pages() is assigned to mdtc, then upon entering trace_balance_dirty_pages(), __entry->limit should be assigned based on the dirty_limit of the corresponding memcg's wb_domain, rather than global_wb_domain.
To address this issue and simplify the implementation, introduce a 'limit' field in struct dirty_throttle_control to store the hard_limit value computed in wb_position_ratio() by calling hard_dirty_limit(). This field will then be used in trace_balance_dirty_pages() to assign the value to __entry->limit.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: dcc25ae76eb7 ("writeback: move global_dirty_limit into wb_domain")
Signed-off-by: Tang Yizhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: "Masami Hiramatsu (Google)" <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Steven Rostedt <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
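A minimal sketch of the shape of the fix described above (member placement and the exact call sites are assumptions, not the upstream diff):

```c
/* include/linux/writeback.h (sketch): carry the computed hard limit in the dtc */
struct dirty_throttle_control {
	/* ... existing members ... */
	unsigned long	thresh;	/* dirty threshold */
	unsigned long	limit;	/* hard dirty limit, set in wb_position_ratio() */
};

/* mm/page-writeback.c (sketch): record the per-domain hard limit once computed ... */
dtc->limit = hard_dirty_limit(dtc_dom(dtc), dtc->thresh);

/* ... so the tracepoint no longer reaches for global_wb_domain */
__entry->limit = dtc->limit;
```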

f1ab2831 | 04-Mar-2025 | Tang Yizhou <[email protected]>
writeback: let trace_balance_dirty_pages() take struct dtc as parameter
Patch series "Fix calculations in trace_balance_dirty_pages() for cgwb", v2.
In my experiment, I found that the output of trace_balance_dirty_pages() in the cgroup writeback scenario was strange because trace_balance_dirty_pages() always uses global_wb_domain.dirty_limit for related calculations instead of the dirty_limit of the corresponding memcg's wb_domain.
The basic idea of the fix is to store the hard dirty limit value computed in wb_position_ratio() into struct dirty_throttle_control and use it for calculations in trace_balance_dirty_pages().
This patch (of 3):
Currently, trace_balance_dirty_pages() already has 12 parameters. In patch #3, I initially attempted to introduce an additional parameter. However, in include/linux/trace_events.h, bpf_trace_run12() only supports up to 12 parameters and bpf_trace_run13() does not exist.
To reduce the number of parameters in trace_balance_dirty_pages(), we can make it accept a pointer to struct dirty_throttle_control as a parameter. To achieve this, we need to move the definition of struct dirty_throttle_control from mm/page-writeback.c to include/linux/writeback.h.
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Tang Yizhou <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: "Masami Hiramatsu (Google)" <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Tang Yizhou <[email protected]>
Cc: Tejun Heo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
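A rough sketch of the resulting call-site change (argument lists abbreviated; exactly which scalars remain is an assumption):

```c
/* Before (sketch): a dozen scalars, already at the bpf_trace_run12() ceiling */
trace_balance_dirty_pages(wb, thresh, bg_thresh, dirty, bdi_thresh,
			  bdi_dirty, dirty_ratelimit, task_ratelimit,
			  pages_dirtied, period, pause, start_time);

/* After (sketch): the dtc pointer replaces the throttling scalars it carries */
trace_balance_dirty_pages(wb, dtc, dirty_ratelimit, task_ratelimit,
			  pages_dirtied, period, pause, start_time);
```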

Revision tags: v6.14-rc5

173a3dc0 | 26-Feb-2025 | Matthew Wilcox (Oracle) <[email protected]>
mm: assert the folio is locked in folio_start_writeback()
The folio must be locked when we start writeback in order to prevent writeback from being started twice on the same folio. I don't expect this to catch any problems, but it should be good documentation.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: "Darrick J. Wong" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
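The check itself is a one-line assertion; a sketch of what it looks like at the top of the writeback-start path (exact placement is an assumption):

```c
void __folio_start_writeback(struct folio *folio, bool keep_write)
{
	/* starting writeback twice on one folio is prevented by the folio lock */
	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
	/* ... existing writeback accounting ... */
}
```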

Revision tags: v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1

1751f872 | 28-Jan-2025 | Joel Granados <[email protected]>
treewide: const qualify ctl_tables where applicable
Add the const qualifier to all the ctl_tables in the tree except for watchdog_hardlockup_sysctl, memory_allocation_profiling_sysctls, loadpin_sysctl_table and the ones calling register_net_sysctl (./net, drivers/infiniband dirs). These are special cases, as they either use a registration function with a non-const qualified ctl_table argument or modify the arrays before passing them on to the registration function.
Constifying ctl_table structs will prevent the modification of proc_handler function pointers as the arrays would reside in .rodata. This is made possible after commit 78eb4ea25cd5 ("sysctl: treewide: constify the ctl_table argument of proc_handlers") constified all the proc_handlers.
Created this by running an spatch followed by a sed command:

Spatch:

virtual patch

@ depends on !(file in "net") disable optional_qualifier @
identifier table_name != {
	watchdog_hardlockup_sysctl,
	iwcm_ctl_table,
	ucma_ctl_table,
	memory_allocation_profiling_sysctls,
	loadpin_sysctl_table
};
@@

+ const struct ctl_table table_name [] = { ... };

sed:

sed --in-place \
	-e "s/struct ctl_table .table = &uts_kern/const struct ctl_table *table = \&uts_kern/" \
	kernel/utsname_sysctl.c
Reviewed-by: Song Liu <[email protected]>
Acked-by: Steven Rostedt (Google) <[email protected]> # for kernel/trace/
Reviewed-by: Martin K. Petersen <[email protected]> # SCSI
Reviewed-by: Darrick J. Wong <[email protected]> # xfs
Acked-by: Jani Nikula <[email protected]>
Acked-by: Corey Minyard <[email protected]>
Acked-by: Wei Liu <[email protected]>
Acked-by: Thomas Gleixner <[email protected]>
Reviewed-by: Bill O'Donnell <[email protected]>
Acked-by: Baoquan He <[email protected]>
Acked-by: Ashutosh Dixit <[email protected]>
Acked-by: Anna Schumaker <[email protected]>
Signed-off-by: Joel Granados <[email protected]>
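For illustration, a minimal sketch of a const-qualified table after this change (the table name, procname and backing variable are hypothetical):

```c
static int example_interval;

static const struct ctl_table example_sysctls[] = {
	{
		.procname	= "example_interval",
		.data		= &example_interval,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
};
```

Because the array is const, it can live in .rodata, so the proc_handler pointer inside it cannot be overwritten at runtime.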

Revision tags: v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1

6aeb991c | 21-Nov-2024 | Jim Zhao <[email protected]>
mm/page-writeback: consolidate wb_thresh bumping logic into __wb_calc_thresh
Address the feedback from 39ac99852fca ("mm/page-writeback: raise wb_thresh to prevent write blocking with strictlimit"). The wb_thresh bumping logic is scattered across wb_position_ratio, __wb_calc_thresh, and wb_update_dirty_ratelimit. For consistency, consolidate all wb_thresh bumping logic into __wb_calc_thresh.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Jim Zhao <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Kemeng Shi <[email protected]>
Cc: Guenter Roeck <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

4ce718f3 | 04-Jan-2025 | Stefan Roesch <[email protected]>
mm: fix div by zero in bdi_ratio_from_pages
During testing it was detected that a divide-by-zero error can occur in bdi_set_min_bytes(). The error is caused by bdi_ratio_from_pages(), which calls global_dirty_limits(); if the dirty threshold is 0, the divide by zero is raised. This can happen if the root user sets:
echo 0 > /proc/sys/vm/dirty_ratio
The following is a test case:
echo 0 > /proc/sys/vm/dirty_ratio
cd /sys/class/bdi/<device>
echo 1 > strict_limit
echo 8192 > min_bytes
==> error is raised.
The problem is addressed by returning -EINVAL if dirty_ratio or dirty_bytes is set to 0.
[[email protected]: check for -EINVAL in bdi_set_min_bytes() and bdi_set_max_bytes()]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: v3]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Stefan Roesch <[email protected]>
Reported-by: cheung wall <[email protected]>
Closes: https://lore.kernel.org/linux-mm/[email protected]/T/#t
Acked-by: David Hildenbrand <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Qiang Zhang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
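A minimal sketch of the guard, assuming the rough shape of bdi_ratio_from_pages() described above (the exact signature and the elided ratio arithmetic are assumptions):

```c
static long bdi_ratio_from_pages(unsigned long pages)
{
	unsigned long background_thresh;
	unsigned long dirty_thresh;

	global_dirty_limits(&background_thresh, &dirty_thresh);
	/* dirty_ratio or dirty_bytes set to 0 would divide by zero below */
	if (!dirty_thresh)
		return -EINVAL;
	/* ... convert pages into a ratio of dirty_thresh ... */
}
```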

Revision tags: v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5

568bcf41 | 25-Oct-2024 | Shakeel Butt <[email protected]>
memcg-v1: no need for memcg locking for writeback tracking
During the era of memcg charge migration, the kernel had to make sure that writeback stat updates did not race with the charge migration; otherwise it might update the writeback stats of the wrong memcg. Now, with the memcg charge migration gone, there is no more race for writeback stat updates and the previous locking can be removed.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Shakeel Butt <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Reviewed-by: Roman Gushchin <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Yosry Ahmed <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

a8cd9d4c | 25-Oct-2024 | Shakeel Butt <[email protected]>
memcg-v1: no need for memcg locking for dirty tracking
During the era of memcg charge migration, the kernel had to make sure that dirty stat updates did not race with the charge migration; otherwise it might update the dirty stats of the wrong memcg. Now, with the memcg charge migration gone, there is no more race for dirty stat updates and the previous locking can be removed.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Shakeel Butt <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Reviewed-by: Roman Gushchin <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Yosry Ahmed <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

39ac9985 | 23-Oct-2024 | Jim Zhao <[email protected]>
mm/page-writeback: raise wb_thresh to prevent write blocking with strictlimit
With the strictlimit flag, wb_thresh acts as a hard limit in balance_dirty_pages() and wb_position_ratio(). When device write operations are inactive, wb_thresh can drop to 0, causing writes to be blocked. The issue occasionally occurs on fuse filesystems, particularly with network backends, where the write thread is frequently blocked for a period of time. To address it, this patch raises the minimum wb_thresh to a controllable level, similar to the non-strictlimit case.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Jim Zhao <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
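An illustrative sketch of the idea, assuming the surrounding __wb_calc_thresh() context provides the global limit and dtc (the exact formula and scale factor are assumptions, not the upstream computation):

```c
if (unlikely(wb->bdi->capabilities & BDI_CAP_STRICTLIMIT)) {
	/* don't let an idle wb's threshold collapse to 0: grant it a small
	 * share of the remaining dirtyable headroom instead */
	if (limit > dtc->dirty)
		wb_thresh = max(wb_thresh, (limit - dtc->dirty) / 100);
}
```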

7fce207a | 24-Oct-2024 | Joanne Koong <[email protected]>
mm/writeback: add folio_mark_dirty_lock()
Add a new convenience helper folio_mark_dirty_lock() that grabs the folio lock before calling folio_mark_dirty().
Refactor set_page_dirty_lock() to directly use folio_mark_dirty_lock().
Signed-off-by: Joanne Koong <[email protected]>
Signed-off-by: Miklos Szeredi <[email protected]>
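A sketch of the helper as described above (the upstream version may differ in detail):

```c
bool folio_mark_dirty_lock(struct folio *folio)
{
	bool ret;

	/* take the folio lock so callers that don't hold it can dirty the folio */
	folio_lock(folio);
	ret = folio_mark_dirty(folio);
	folio_unlock(folio);
	return ret;
}
```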

Revision tags: v6.12-rc4, v6.12-rc3

98f3ac9b | 09-Oct-2024 | Tang Yizhou <[email protected]>
mm/page-writeback.c: Fix comment of wb_domain_writeout_add()
__bdi_writeout_inc() has undergone multiple renamings, but the comment within the function body has not been updated accordingly. Update it to reflect the latest name, wb_domain_writeout_add().
Signed-off-by: Tang Yizhou <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Jan Kara <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>

a54fc493 | 09-Oct-2024 | Tang Yizhou <[email protected]>
mm/page-writeback.c: Update comment for BANDWIDTH_INTERVAL
The name of the BANDWIDTH_INTERVAL macro is misleading, as it is not only used in the bandwidth update functions wb_update_bandwidth() and __wb_update_bandwidth(), but also in the dirty limit update function domain_update_dirty_limit().
Currently, we haven't found an ideal name, so update the comment only.
Signed-off-by: Tang Yizhou <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Christian Brauner <[email protected]>

Revision tags: v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5

0692fad5 | 21-Aug-2024 | Yuesong Li <[email protected]>
mm:page-writeback: use folio_next_index() helper in writeback_iter()
Simplify the 'folio->index + folio_nr_pages(folio)' code pattern by using the existing helper folio_next_index().
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yuesong Li <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
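For illustration, the equivalence the helper relies on (loop context omitted):

```c
/* index of the first page past this folio ... */
pgoff_t next = folio_next_index(folio);

/* ... identical to the open-coded pattern it replaces */
pgoff_t next_open_coded = folio->index + folio_nr_pages(folio);
```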

Revision tags: v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4

ae04f69d | 10-Jun-2024 | Qais Yousef <[email protected]>
sched/rt: Rename realtime_{prio, task}() to rt_or_dl_{prio, task}()
Some find the name realtime overloaded. Use rt_or_dl() as an alternative, hopefully better, name.
Suggested-by: Daniel Bristot de Oliveira <[email protected]>
Signed-off-by: Qais Yousef <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]

130fd056 | 10-Jun-2024 | Qais Yousef <[email protected]>
sched/rt: Clean up usage of rt_task()
rt_task() checks if a task has RT priority. But depending on your dictionary, this could mean that it belongs to the RT class, or that it is a 'realtime' task, which includes both the RT and DL classes.
Since this has already caused some confusion in discussion [1], a cleanup seemed due.
I define the usage of rt_task() to be tasks that belong to the RT class. Make sure that it returns true only for the RT class, audit the users, and replace the ones that required the old behavior with the new realtime_task(), which returns true for both RT and DL classes. Introduce a similar realtime_prio() to create the same distinction relative to rt_prio(), and update the users that required the old behavior to use the new function.
Move MAX_DL_PRIO to prio.h so it can be used in the new definitions.
Document the functions to make the difference between them more obvious. PI boosting is a factor that must be taken into account when choosing which function to use.
Rename task_is_realtime() to realtime_task_policy() as the old name is confusing against the new realtime_task().
No functional changes were intended.
[1] https://lore.kernel.org/lkml/[email protected]/
Signed-off-by: Qais Yousef <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Phil Auld <[email protected]>
Reviewed-by: "Steven Rostedt (Google)" <[email protected]>
Reviewed-by: Sebastian Andrzej Siewior <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
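A sketch of the resulting distinction (constants as described above; the exact bodies are assumptions):

```c
/* deadline priorities sit below MAX_DL_PRIO; RT occupies [MAX_DL_PRIO, MAX_RT_PRIO) */
static inline bool rt_prio(int prio)
{
	return prio >= MAX_DL_PRIO && prio < MAX_RT_PRIO;	/* RT class only */
}

static inline bool realtime_prio(int prio)
{
	return prio < MAX_RT_PRIO;				/* RT or DL class */
}
```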

78eb4ea2 | 24-Jul-2024 | Joel Granados <[email protected]>
sysctl: treewide: constify the ctl_table argument of proc_handlers
const qualify the struct ctl_table argument in the proc_handler function signatures. This is a prerequisite to moving the static ctl_table structs into .rodata, which will ensure that proc_handler function pointers cannot be modified.
This patch has been generated by the following coccinelle script:
```
virtual patch

@r1@
identifier ctl, write, buffer, lenp, ppos;
identifier func !~ "appldata_(timer|interval)_handler|sched_(rt|rr)_handler|rds_tcp_skbuf_handler|proc_sctp_do_(hmac_alg|rto_min|rto_max|udp_port|alpha_beta|auth|probe_interval)";
@@

int func(
- struct ctl_table *ctl
+ const struct ctl_table *ctl
  ,int write, void *buffer, size_t *lenp, loff_t *ppos);

@r2@
identifier func, ctl, write, buffer, lenp, ppos;
@@

int func(
- struct ctl_table *ctl
+ const struct ctl_table *ctl
  ,int write, void *buffer, size_t *lenp, loff_t *ppos)
{ ... }

@r3@
identifier func;
@@

int func(
- struct ctl_table *
+ const struct ctl_table *
  ,int , void *, size_t *, loff_t *);

@r4@
identifier func, ctl;
@@

int func(
- struct ctl_table *ctl
+ const struct ctl_table *ctl
  ,int , void *, size_t *, loff_t *);

@r5@
identifier func, write, buffer, lenp, ppos;
@@

int func(
- struct ctl_table *
+ const struct ctl_table *
  ,int write, void *buffer, size_t *lenp, loff_t *ppos);
```
* Code formatting was adjusted in xfs_sysctl.c to comply with code conventions. xfs_stats_clear_proc_handler, xfs_panic_mask_proc_handler and xfs_deprecated_dointvec_minmax were adjusted.
* The ctl_table argument in proc_watchdog_common was const qualified. This is called from a proc_handler itself and is calling back into another proc_handler, making it necessary to change it as part of the proc_handler migration.
Co-developed-by: Thomas Weißschuh <[email protected]>
Signed-off-by: Thomas Weißschuh <[email protected]>
Co-developed-by: Joel Granados <[email protected]>
Signed-off-by: Joel Granados <[email protected]>
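For illustration, a handler signature after the conversion (the handler itself is hypothetical):

```c
static int example_handler(const struct ctl_table *ctl, int write,
			   void *buffer, size_t *lenp, loff_t *ppos)
{
	/* the table entry is now read-only inside the handler */
	return proc_dointvec(ctl, write, buffer, lenp, ppos);
}
```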

68ed2a39 | 21-Jun-2024 | Jan Kara <[email protected]>
mm: avoid overflows in dirty throttling logic
The dirty throttling logic is interspersed with assumptions that dirty limits in PAGE_SIZE units fit into 32 bits (so that various multiplications fit into 64 bits). If limits end up being larger, we will hit overflows, possible divisions by 0, etc. Fix these problems by never allowing such large dirty limits, as they have dubious practical value anyway. For the dirty_bytes / dirty_background_bytes interfaces we can just refuse to set such large limits. For dirty_ratio / dirty_background_ratio it isn't so simple, as the dirty limit is computed from the amount of available memory, which can change due to memory hotplug etc. So when converting dirty limits from ratios to numbers of pages, we just don't allow the result to exceed UINT_MAX.
This is a root-only triggerable problem which occurs when the operator sets dirty limits to >16 TB.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Jan Kara <[email protected]>
Reported-by: Zach O'Keefe <[email protected]>
Reviewed-By: Zach O'Keefe <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
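A sketch of the two guards described above (placement and exact bounds are assumptions):

```c
/* refuse absurd dirty_bytes settings outright ... */
if (DIV_ROUND_UP(dirty_bytes, PAGE_SIZE) > UINT_MAX)
	return -ERANGE;

/* ... and clamp ratio-derived limits so page counts stay within 32 bits */
thresh = min(thresh, (unsigned long)UINT_MAX);
```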

8dfcffa3 | 21-Jun-2024 | Jan Kara <[email protected]>
Revert "mm/writeback: fix possible divide-by-zero in wb_dirty_limits(), again"
Patch series "mm: Avoid possible overflows in dirty throttling".
Dirty throttling logic assumes dirty limits in page u
Revert "mm/writeback: fix possible divide-by-zero in wb_dirty_limits(), again"
Patch series "mm: Avoid possible overflows in dirty throttling".
Dirty throttling logic assumes dirty limits in page units fit into 32-bits. This patch series makes sure this is true (see patch 2/2 for more details).
This patch (of 2):
This reverts commit 9319b647902cbd5cc884ac08a8a6d54ce111fc78.
The commit is broken in several ways. Firstly, removing the (u64) cast from the multiplication introduces a multiplication overflow on 32-bit archs if wb_thresh * bg_thresh >= 1<<32 (which is actually common - the default settings with 4GB of RAM will trigger this). Secondly, the div64_u64() is unnecessarily expensive on 32-bit archs. We have div64_ul() in case we want to be safe & cheap. Thirdly, if dirty thresholds are larger than 1<<32 pages, then dirty balancing is going to blow up in many other spectacular ways anyway, so trying to fix one possible overflow is just moot.
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 9319b647902c ("mm/writeback: fix possible divide-by-zero in wb_dirty_limits(), again")
Signed-off-by: Jan Kara <[email protected]>
Reviewed-By: Zach O'Keefe <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
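For illustration, the 32-bit overflow hazard described above (helper name and values are hypothetical):

```c
#include <linux/math64.h>

static unsigned long scaled_bg_thresh(unsigned long wb_thresh,
				      unsigned long bg_thresh,
				      unsigned long thresh)
{
	/* without widening, wb_thresh * bg_thresh (e.g. 70000 * 80000 ~= 5.6e9)
	 * wraps in a 32-bit unsigned long; the (u64) cast widens the product
	 * first, and div64_ul() stays cheap on 32-bit, unlike div64_u64() */
	return div64_ul((u64)wb_thresh * bg_thresh, thresh);
}
```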

Revision tags: v6.10-rc3, v6.10-rc2, v6.10-rc1

8246291e | 14-May-2024 | Kemeng Shi <[email protected]>
writeback: factor out balance_wb_limits to remove repeated code
Factor out balance_wb_limits to remove repeated code.
[[email protected]: add comment]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: s/fileds/fields/ in comment]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Kemeng Shi <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Tejun Heo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

236d0f16 | 14-May-2024 | Kemeng Shi <[email protected]>
writeback: factor out wb_dirty_exceeded to remove repeated code
Factor out wb_dirty_exceeded to remove repeated code.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Kemeng Shi <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

8c9918de | 14-May-2024 | Kemeng Shi <[email protected]>
writeback: factor out balance_domain_limits to remove repeated code
Factor out balance_domain_limits to remove repeated code.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Kemeng Shi <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

2530e239 | 14-May-2024 | Kemeng Shi <[email protected]>
writeback: factor out wb_dirty_freerun to remove more repeated freerun code
Factor out wb_dirty_freerun to remove more repeated freerun code.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Kemeng Shi <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

9bb48a70 | 14-May-2024 | Kemeng Shi <[email protected]>
writeback: factor out code of freerun to remove repeated code
Factor out code of freerun into new helper functions domain_poll_intv and domain_dirty_freerun to remove repeated code.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Kemeng Shi <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>

6e208329 | 14-May-2024 | Kemeng Shi <[email protected]>
writeback: factor out domain_over_bg_thresh to remove repeated code
Factor out domain_over_bg_thresh from wb_over_bg_thresh to remove repeated code.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Kemeng Shi <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>