|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13 |
|
| #
3194e364 |
| 16-Jan-2025 |
John Garry <[email protected]> |
dm-table: atomic writes support
Support stacking atomic write limits for DM devices.
All the pre-existing code in blk_stack_atomic_writes_limits() already takes care of finding the aggregate limits from the bottom devices.
Feature flag DM_TARGET_ATOMIC_WRITES is introduced so that atomic writes can be enabled selectively per target type. This ensures that atomic writes are only enabled once verified to work properly for a specific target type; in addition, atomic writes may simply not make sense for some target types, and the flag covers that case as well.
Signed-off-by: John Garry <[email protected]> Reviewed-by: Mike Snitzer <[email protected]> Signed-off-by: Mikulas Patocka <[email protected]>
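As a rough sketch of how a target type opts in (hedged; the "example" target and its callbacks are hypothetical, but DM target feature flags are advertised through target_type.features):

    static struct target_type example_target = {
            .name     = "example",
            .version  = {1, 0, 0},
            /* opt this target type in to atomic write limit stacking */
            .features = DM_TARGET_ATOMIC_WRITES,
            .module   = THIS_MODULE,
            .ctr      = example_ctr,
            .map      = example_map,
    };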
|
|
Revision tags: v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2 |
|
| #
013f510d |
| 01-Aug-2024 |
Yue Haibing <[email protected]> |
dm: Remove unused declaration dm_get_rq_mapinfo()
Commit ae6ad75e5c3c ("dm: remove unused dm_get_rq_mapinfo()") removed the implementation but left the declaration in place.
Signed-off-by: Yue Haibing <[email protected]> Reviewed-by: Damien Le Moal <[email protected]> Signed-off-by: Mikulas Patocka <[email protected]>
|
|
Revision tags: v6.11-rc1, v6.10 |
|
| #
61706974 |
| 10-Jul-2024 |
Mikulas Patocka <[email protected]> |
dm: introduce the target flag mempool_needs_integrity
This commit introduces the dm target flag mempool_needs_integrity. When the flag is set, device mapper will call bioset_integrity_create on its bio sets. The target can then call bio_integrity_alloc on the bios allocated from the table's mempool.
Signed-off-by: Mikulas Patocka <[email protected]> Signed-off-by: Mike Snitzer <[email protected]>
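A minimal sketch of the two halves of this contract (the example_ctr name is hypothetical; the flag and bio_integrity_alloc() are the kernel APIs named above):

    static int example_ctr(struct dm_target *ti, unsigned int argc, char **argv)
    {
            /* ask DM core to call bioset_integrity_create() on the
             * bio sets backing this table's mempools */
            ti->mempool_needs_integrity = true;
            return 0;
    }

    /* later, in the I/O path, integrity metadata can be attached to a
     * bio allocated from the table's mempool: */
    struct bio_integrity_payload *bip = bio_integrity_alloc(bio, GFP_NOIO, 1);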
|
|
Revision tags: v6.10-rc7 |
|
| #
a21f9edb |
| 04-Jul-2024 |
Benjamin Marzinski <[email protected]> |
dm: factor out helper function from dm_get_device
Factor out a helper function, dm_devt_from_path(), from dm_get_device() for use in dm targets.
Signed-off-by: Benjamin Marzinski <[email protected]> Signed-off-by: Mikulas Patocka <[email protected]>
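A target constructor would presumably use it roughly like this (a sketch; the error string is illustrative):

    dev_t dev;
    int r;

    r = dm_devt_from_path(path, &dev);
    if (r) {
            ti->error = "Device lookup failed";
            return r;
    }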
|
| #
9d45db03 |
| 04-Jul-2024 |
Damien Le Moal <[email protected]> |
dm: Remove max_secure_erase_granularity
The max_secure_erase_granularity boolean of struct dm_target is used in __process_abnormal_io() but never set by any target. Remove this field and the dead code using it.
Signed-off-by: Damien Le Moal <[email protected]> Signed-off-by: Mikulas Patocka <[email protected]>
|
| #
396a27e9 |
| 04-Jul-2024 |
Damien Le Moal <[email protected]> |
dm: Remove max_write_zeroes_granularity
The max_write_zeroes_granularity boolean of struct dm_target is used in __process_abnormal_io() but never set by any target. Remove this field and the dead code using it.
Signed-off-by: Damien Le Moal <[email protected]> Signed-off-by: Mikulas Patocka <[email protected]>
|
| #
81e77063 |
| 04-Jul-2024 |
Damien Le Moal <[email protected]> |
dm: handle REQ_OP_ZONE_RESET_ALL
This commit implements processing of the REQ_OP_ZONE_RESET_ALL operation for zoned mapped devices. Given that this operation always has a BIO sector of 0 and a 0 size, processing through the regular BIO __split_and_process_bio() function does not work because this function would always select the first target. Instead, handling of this operation is implemented using the function __send_zone_reset_all().
Similarly to the __send_empty_flush() function, the new __send_zone_reset_all() function manually goes through all targets of a mapped device table doing the following: 1) If the target can natively support REQ_OP_ZONE_RESET_ALL, __send_duplicate_bios() is used to forward the reset all operation to the target. This case is handled with the __send_zone_reset_all_native() function. 2) For other targets, the function __send_zone_reset_all_emulated() is executed to emulate the execution of REQ_OP_ZONE_RESET_ALL using regular REQ_OP_ZONE_RESET operations.
Targets that can natively support REQ_OP_ZONE_RESET_ALL are identified using the new target field zone_reset_all_supported. This boolean is set to true for targets that have reliable zone limits, that is, targets that map all sequential write required zones of their zoned device(s). Setting this field is handled in dm_set_zones_restrictions() and device_get_zone_resource_limits().
For targets with unreliable zone limits, REQ_OP_ZONE_RESET_ALL must be emulated (case 2 above). This is implemented with __send_zone_reset_all_emulated() and is similar to the block layer function blkdev_zone_reset_all_emulated(): first, a zone report is done for the zones of the target to identify zones that need reset, that is, any sequential write required zone that is not already empty. This is done using a bitmap and the function dm_zone_get_reset_bitmap(), which sets to 1 the bit corresponding to a zone that needs reset. Next, this zone bitmap is inspected and a clone BIO, modified to use the REQ_OP_ZONE_RESET operation, is issued for any zone with its bit set in the zone bitmap.
This implementation is more efficient than what the block layer does with blkdev_zone_reset_all_emulated(), which is always used for DM zoned devices currently: as we can natively use REQ_OP_ZONE_RESET_ALL on targets mapping all sequential write required zones, resetting all zones of a zoned mapped device can be much faster compared to always emulating this operation using regular per-zone resets. In the worst case, this implementation is as efficient as the block layer emulation. This reduction in the time it takes to reset all zones of a zoned mapped device depends directly on the mapped device's target mapping (reliable zone limits or not).
Signed-off-by: Damien Le Moal <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Reviewed-by: Johannes Thumshirn <[email protected]> Reviewed-by: Martin K. Petersen <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jens Axboe <[email protected]>
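The emulated path can be pictured roughly as follows (a simplified sketch, not the actual function; need_reset stands for the bitmap filled by dm_zone_get_reset_bitmap(), and bdev, bs, nr_zones and zone_sectors for surrounding context):

    unsigned int zno;

    /* issue one REQ_OP_ZONE_RESET clone per zone whose bit is set */
    for_each_set_bit(zno, need_reset, nr_zones) {
            struct bio *clone = bio_alloc_clone(bdev, bio, GFP_NOIO, bs);

            clone->bi_opf = REQ_OP_ZONE_RESET | REQ_SYNC;
            clone->bi_iter.bi_sector = zno * zone_sectors;
            submit_bio(clone);
    }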
|
|
Revision tags: v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2 |
|
| #
aaa53168 |
| 28-May-2024 |
Mikulas Patocka <[email protected]> |
dm: optimize flushes
Device mapper sends flush bios to all the targets and the targets forward them to the underlying device. That may be inefficient; for example, if a table contains 10 linear targets pointing to the same physical device, then device mapper would send 10 flush bios to that device - despite the fact that only one bio would be sufficient.
This commit optimizes the flush behavior. It introduces a per-target variable flush_bypasses_map - it is set when the target supports flush optimization - currently, the dm-linear and dm-stripe targets support it. When all the targets in a table have flush_bypasses_map, flush_bypasses_map on the table is set. __send_empty_flush tests whether the table has flush_bypasses_map - if it does, no flush bios are sent to the targets via the "map" method; instead, the list dm_table->devices is iterated and a flush bio is sent to each member of the list.
Signed-off-by: Mikulas Patocka <[email protected]> Reviewed-by: Mike Snitzer <[email protected]> Suggested-by: Yang Yang <[email protected]>
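The shape of the bypass in __send_empty_flush() is roughly the following (a simplified sketch; send_flush_to() is a hypothetical stand-in for the clone-and-submit step):

    if (t->flush_bypasses_map) {
            struct dm_dev_internal *dd;

            /* skip the targets' map method entirely: send one flush
             * per underlying device on the table's device list */
            list_for_each_entry(dd, &t->devices, list)
                    send_flush_to(dd->dm_dev->bdev);
            return;
    }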
|
|
Revision tags: v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2 |
|
| #
a28d893e |
| 23-Jan-2024 |
Christian Brauner <[email protected]> |
md: port block device access to file
Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Christoph Hellwig <[email protected]> Reviewed-by: Jan Kara <[email protected]> Signed-off-by: Christian Brauner <[email protected]>
|
|
Revision tags: v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4 |
|
| #
c2fce61f |
| 27-Sep-2023 |
Jan Kara <[email protected]> |
dm: Convert to bdev_open_by_dev()
Convert device mapper to use bdev_open_by_dev() and pass the handle around.
CC: Alasdair Kergon <[email protected]> CC: Mike Snitzer <[email protected]> CC: [email protected] Acked-by: Christoph Hellwig <[email protected]> Acked-by: Christian Brauner <[email protected]> Signed-off-by: Jan Kara <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Christian Brauner <[email protected]>
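Under this scheme, opening and closing a table device looks roughly like this (a sketch assuming the bdev_handle API introduced by this series):

    struct bdev_handle *handle;

    handle = bdev_open_by_dev(dev, BLK_OPEN_READ | BLK_OPEN_WRITE,
                              holder, NULL);
    if (IS_ERR(handle))
            return PTR_ERR(handle);

    /* I/O goes through handle->bdev; drop the reference with: */
    bdev_release(handle);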
|
|
Revision tags: v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6 |
|
| #
05bdb996 |
| 08-Jun-2023 |
Christoph Hellwig <[email protected]> |
block: replace fmode_t with a block-specific type for block open flags
The only overlap between the block open flags mapped into the fmode_t and other uses of fmode_t is FMODE_READ and FMODE_WRITE. Define a new blk_mode_t instead for use in blkdev_get_by_{dev,path}, ->open and ->ioctl and stop abusing fmode_t.
Signed-off-by: Christoph Hellwig <[email protected]> Acked-by: Jack Wang <[email protected]> [rnbd] Reviewed-by: Hannes Reinecke <[email protected]> Reviewed-by: Christian Brauner <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jens Axboe <[email protected]>
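The new type is a plain bitmask; a hedged sketch of its core values as introduced by this change (see blk_types.h for the authoritative list):

    typedef unsigned int __bitwise blk_mode_t;

    /* open for reading */
    #define BLK_OPEN_READ   ((__force blk_mode_t)(1 << 0))
    /* open for writing */
    #define BLK_OPEN_WRITE  ((__force blk_mode_t)(1 << 1))
    /* open exclusively */
    #define BLK_OPEN_EXCL   ((__force blk_mode_t)(1 << 2))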
|
|
Revision tags: v6.4-rc5 |
|
| #
d4a28d7d |
| 31-May-2023 |
Christoph Hellwig <[email protected]> |
dm: remove dm_get_dev_t
Open code dm_get_dev_t in the only remaining caller, and propagate the exact error code from lookup_bdev and early_lookup_bdev.
Signed-off-by: Christoph Hellwig <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jens Axboe <[email protected]>
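Inside dm_get_device(), the open-coded lookup presumably reduces to something like this (a sketch; the real caller's control flow differs in detail):

    dev_t dev;
    int r;

    r = lookup_bdev(path, &dev);
    if (r)
            r = early_lookup_bdev(path, &dev);  /* boot-time fallback */
    if (r)
            return r;  /* propagate the exact error code */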
|
|
Revision tags: v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7 |
|
| #
f7995089 |
| 14-Apr-2023 |
Mike Snitzer <[email protected]> |
dm: unexport dm_get_queue_limits()
There are no dm_get_queue_limits() callers outside of DM core and there shouldn't be.
Also, remove its BUG_ON(!atomic_read(&md->holders)) to micro-optimize __process_abnormal_io().
Signed-off-by: Mike Snitzer <[email protected]>
|
| #
13f6facf |
| 14-Apr-2023 |
Mike Snitzer <[email protected]> |
dm: allow targets to require splitting WRITE_ZEROES and SECURE_ERASE
Introduce max_write_zeroes_granularity and max_secure_erase_granularity flags in the dm_target struct.
If a target sets these flags, then DM core will split IO of these operation types accordingly (in terms of max_write_zeroes_sectors and max_secure_erase_sectors respectively).
Signed-off-by: Mike Snitzer <[email protected]>
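In a target constructor this is a pair of opt-in booleans (hedged sketch; example_ctr is hypothetical):

    static int example_ctr(struct dm_target *ti, unsigned int argc, char **argv)
    {
            /* have DM core split WRITE_ZEROES and SECURE_ERASE bios at
             * max_write_zeroes_sectors / max_secure_erase_sectors */
            ti->max_write_zeroes_granularity = true;
            ti->max_secure_erase_granularity = true;
            return 0;
    }

Note that the two 04-Jul-2024 commits earlier in this log (9d45db03, 396a27e9) removed these flags again once no target set them.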
|
|
Revision tags: v6.3-rc6 |
|
| #
3664ff82 |
| 09-Apr-2023 |
Yangtao Li <[email protected]> |
dm: add helper macro for simple DM target module init and exit
Eliminate duplicate boilerplate code for simple modules that contain a single DM target driver without any additional setup code.
Add a new module_dm() macro, which replaces the module_init() and module_exit() with template functions that call dm_register_target() and dm_unregister_target() respectively.
Signed-off-by: Yangtao Li <[email protected]> Signed-off-by: Mike Snitzer <[email protected]>
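Usage collapses a target module's boilerplate to a single line (sketch; zero_target stands for any struct target_type, and the macro derives the symbol name from its argument):

    static struct target_type zero_target = {
            .name = "zero",
            /* ... */
    };

    /* expands to module_init/module_exit stubs calling
     * dm_register_target(&zero_target) / dm_unregister_target(&zero_target) */
    module_dm(zero);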
|
|
Revision tags: v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1 |
|
| #
06961c48 |
| 01-Mar-2023 |
Mike Snitzer <[email protected]> |
dm: split discards further if target sets max_discard_granularity
The block core (bio_split_discard) will already split discards based on the 'discard_granularity' and 'max_discard_sectors' queue_limits. But the DM thin target also needs to ensure that it doesn't receive a discard that spans a 'max_discard_sectors' boundary.
Introduce a dm_target 'max_discard_granularity' flag that if set will cause DM core to split discard bios relative to 'max_discard_sectors'. This treats 'discard_granularity' as a "min_discard_granularity" and 'max_discard_sectors' as a "max_discard_granularity".
Requested-by: Joe Thornber <[email protected]> Signed-off-by: Mike Snitzer <[email protected]>
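For dm-thin the opt-in is one line in the constructor (a sketch of the pattern, not the full thin_ctr()):

    /* split discards at max_discard_sectors boundaries */
    ti->max_discard_granularity = true;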
|
|
Revision tags: v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6 |
|
| #
a4a82ce3 |
| 26-Jan-2023 |
Heinz Mauelshagen <[email protected]> |
dm: correct block comments format.
Signed-off-by: Heinz Mauelshagen <[email protected]> Signed-off-by: Mike Snitzer <[email protected]>
|
| #
44bc08ed |
| 01-Feb-2023 |
Heinz Mauelshagen <[email protected]> |
dm: enclose complex macros into parentheses where possible
Signed-off-by: Heinz Mauelshagen <[email protected]> Signed-off-by: Mike Snitzer <[email protected]>
|
| #
86a3238c |
| 25-Jan-2023 |
Heinz Mauelshagen <[email protected]> |
dm: change "unsigned" to "unsigned int"
Signed-off-by: Heinz Mauelshagen <[email protected]> Signed-off-by: Mike Snitzer <[email protected]>
|
| #
3bd94003 |
| 25-Jan-2023 |
Heinz Mauelshagen <[email protected]> |
dm: add missing SPDX-License-Identifiers
'GPL-2.0-only' is used instead of 'GPL-2.0' because SPDX has deprecated its use.
Suggested-by: John Wiele <[email protected]> Signed-off-by: Heinz Mauelshagen <[email protected]> Signed-off-by: Mike Snitzer <[email protected]>
|
|
Revision tags: v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8 |
|
| #
9dd1cd32 |
| 20-Jul-2022 |
Mike Snitzer <[email protected]> |
dm: fix dm-raid crash if md_handle_request() splits bio
Commit ca522482e3eaf ("dm: pass NULL bdev to bio_alloc_clone") introduced the optimization to _not_ perform bio_associate_blkg()'s relatively costly work when DM core clones its bio. But in doing so it exposed the possibility for DM's cloned bio to alter DM target behavior (e.g. crash) if a target were to issue IO without first calling bio_set_dev().
The DM raid target can trigger an MD crash due to its need to split the DM bio that is passed to md_handle_request(). The split will recurse to submit_bio_noacct() using a bio with an uninitialized ->bi_blkg. This NULL bio->bi_blkg causes blk_throtl_bio() to dereference a NULL blkg_to_tg(bio->bi_blkg).
Fix this in DM core by adding a new 'needs_bio_set_dev' target flag that will make alloc_tio() call bio_set_dev() on behalf of the target. dm-raid is the only target that requires this flag. bio_set_dev() initializes the DM cloned bio's ->bi_blkg, using bio_associate_blkg, before passing the bio to md_handle_request().
A long-term fix would be to audit and refactor the MD code to rely on DM to split its bio, using dm_accept_partial_bio(), but there are MD raid personalities (e.g. raid1 and raid10) whose implementations are tightly coupled to handling the bio splitting inline.
Fixes: ca522482e3eaf ("dm: pass NULL bdev to bio_alloc_clone") Cc: [email protected] Signed-off-by: Mike Snitzer <[email protected]>
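The two halves of the fix, simplified (a sketch; the alloc_tio() hunk is paraphrased):

    /* dm-raid constructor: ask DM core to bio_set_dev() each clone */
    ti->needs_bio_set_dev = true;

    /* in alloc_tio(): initialize the clone's ->bi_blkg via bio_set_dev() */
    if (unlikely(ti->needs_bio_set_dev))
            bio_set_dev(clone, md->disk->part0);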
|
|
Revision tags: v5.19-rc7, v5.19-rc6 |
|
| #
2aec377a |
| 05-Jul-2022 |
Mike Snitzer <[email protected]> |
dm table: remove dm_table_get_num_targets() wrapper
More efficient and readable to just access table->num_targets directly.
Suggested-by: Linus Torvalds <[email protected]> Signed-off-by: Mike Snitzer <[email protected]>
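The before/after at call sites (illustrative):

    /* before */
    for (i = 0; i < dm_table_get_num_targets(t); i++)
            ti = dm_table_get_target(t, i);

    /* after */
    for (i = 0; i < t->num_targets; i++)
            ti = dm_table_get_target(t, i);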
|
|
Revision tags: v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4 |
|
| #
047218ec |
| 22-Apr-2022 |
Jane Chu <[email protected]> |
dax: add .recovery_write dax_operation
Introduce the dax_recovery_write() operation. The function is used to recover a dax range that contains poison. The typical use case is when a user process receives a SIGBUS with si_code BUS_MCEERR_AR indicating poison(s) in a dax range; in response, the user process issues a pwrite() to the page-aligned dax range, thus clearing the poison and putting valid data in the range.
Reviewed-by: Christoph Hellwig <[email protected]> Signed-off-by: Jane Chu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Dan Williams <[email protected]>
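The new hook slots into struct dax_operations alongside the existing ones; a sketch of the signature as described (see include/linux/dax.h for the authoritative form):

    struct dax_operations {
            /* ... */
            size_t (*recovery_write)(struct dax_device *dax_dev, pgoff_t pgoff,
                                     void *addr, size_t bytes,
                                     struct iov_iter *iter);
    };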
|
| #
e511c4a3 |
| 13-May-2022 |
Jane Chu <[email protected]> |
dax: introduce DAX_RECOVERY_WRITE dax access mode
Up till now, dax_direct_access() has been used implicitly for normal access, but for the purpose of recovery write, a dax range with poison may be requested. To make the interface explicit, introduce enum dax_access_mode { DAX_ACCESS, DAX_RECOVERY_WRITE }, where DAX_ACCESS is used for normal dax access and DAX_RECOVERY_WRITE is used for dax recovery write.
Suggested-by: Dan Williams <[email protected]> Signed-off-by: Jane Chu <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Cc: Mike Snitzer <[email protected]> Reviewed-by: Vivek Goyal <[email protected]> Link: https://lore.kernel.org/r/165247982851.52965.11024212198889762949.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <[email protected]>
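With the mode argument, callers state their intent explicitly (a sketch; dax_dev, pgoff and nr_pages come from the surrounding context):

    void *kaddr;
    long avail;

    /* normal dax access */
    avail = dax_direct_access(dax_dev, pgoff, nr_pages, DAX_ACCESS,
                              &kaddr, NULL);

    /* map a poisoned range in preparation for a recovery write */
    avail = dax_direct_access(dax_dev, pgoff, nr_pages, DAX_RECOVERY_WRITE,
                              &kaddr, NULL);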
|
|
Revision tags: v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8 |
|
| #
b7f8dff0 |
| 10-Mar-2022 |
Mike Snitzer <[email protected]> |
dm: simplify dm_submit_bio_remap interface
Remove the from_wq argument from dm_submit_bio_remap(). This eliminates the need for dm_submit_bio_remap() callers to know whether they are calling from a workqueue or from the original dm_submit_bio().
Add map_task to the dm_io struct, record the map_task in alloc_io and clear it after all target ->map() calls have completed. Update dm_submit_bio_remap to check whether 'current' matches io->map_task rather than rely on the passed 'from_wq' argument.
This change really simplifies the chore of porting each DM target to using dm_submit_bio_remap() because there is no longer a risk of programming error from not fully knowing all the different contexts in which a particular method that calls dm_submit_bio_remap() might run.
Signed-off-by: Mike Snitzer <[email protected]>
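The bookkeeping reduces to recording and comparing the mapping task (a simplified sketch of the described logic):

    /* in alloc_io(): remember which task runs the ->map() calls */
    io->map_task = current;

    /* in dm_submit_bio_remap(): infer the context instead of a from_wq flag */
    if (current == io->map_task) {
            /* still inside the original map() call chain */
    }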
|