History log of /linux-6.15/fs/bcachefs/debug.c (Results 1 – 25 of 86)
Revision    Date    Author    Comments
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1
# dcffc3b1 30-Mar-2025 Kent Overstreet <[email protected]>

bcachefs: Split up bch_dev.io_ref

We now have separate per device io_refs for read and write access.

This fixes a device removal bug where the discard workers were still
running while we're removing alloc info for that device.

It's also a bit of hardening; we no longer allow writes to devices that
are read-only.

Signed-off-by: Kent Overstreet <[email protected]>
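
A minimal userspace sketch of the split described above, under stated assumptions: one refcount per access direction, where a killed refcount refuses new gets, so going read-only kills only the write ref. All names and the -1 "killed" convention are illustrative, not the kernel's implementation.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

enum dev_rw { DEV_READ, DEV_WRITE };

struct dev_ref {
	atomic_int count;	/* active users; -1 means killed */
};

struct dev {
	struct dev_ref io_ref[2];	/* indexed by enum dev_rw */
};

static void dev_init(struct dev *d)
{
	atomic_store(&d->io_ref[DEV_READ].count, 0);
	atomic_store(&d->io_ref[DEV_WRITE].count, 0);
}

/* Try to take a ref for the given direction; fails once killed. */
static bool dev_get_ioref(struct dev *d, enum dev_rw rw)
{
	int v = atomic_load(&d->io_ref[rw].count);

	while (v >= 0)
		if (atomic_compare_exchange_weak(&d->io_ref[rw].count,
						 &v, v + 1))
			return true;
	return false;
}

static void dev_put_ioref(struct dev *d, enum dev_rw rw)
{
	atomic_fetch_sub(&d->io_ref[rw].count, 1);
}

/* Going read-only: kill only the write ref; readers are unaffected.
 * Real code would first wait for outstanding writers to drain. */
static void dev_set_ro(struct dev *d)
{
	int zero = 0;

	atomic_compare_exchange_strong(&d->io_ref[DEV_WRITE].count,
				       &zero, -1);
}
```

With a single shared io_ref, a read-only device could still hand out refs to writers such as the discard workers; splitting the count makes that impossible by construction.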


Revision tags: v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5
# ca24130e 29-Dec-2024 Kent Overstreet <[email protected]>

bcachefs: bch2_bkey_pick_read_device() can now specify a device

To be used for scrub, where we want the read to come from a specific
device.

Signed-off-by: Kent Overstreet <[email protected]>


# 78c9c6f6 22-Jan-2025 Kent Overstreet <[email protected]>

bcachefs: Move write_points to debugfs

this was hitting the sysfs 4k limit

Signed-off-by: Kent Overstreet <[email protected]>


# 35f51970 26-Jan-2025 Kent Overstreet <[email protected]>

bcachefs: Improve journal pin flushing

Running the preempt tiering tests with a lower than normal journal
reclaim delay turned up a shutdown hang - a lost wakeup, caused because
flushing a journal pin (e.g. key cache/write buffer) can generate a new
journal pin.

The "simple" fix of adding the correct wakeup didn't work because of
ordering issues; if we flush btree node pins too aggressively before
other pins have completed, we end up spinning where each flush iteration
generates new work.

So to fix this correctly:
- The list of flushed journal pins is now broken out by type, so that
we can wait for key cache/write buffer pin flushing to complete
before flushing dirty btree nodes

- A new closure_waitlist is added for bch2_journal_flush_pins; this one
is only used while holding or taking the journal lock, so it's
pretty cheap to add rigorously correct wakeups to journal_pin_set()
and journal_pin_drop().

Additionally, bch2_journal_seq_pins_to_text() is moved to
journal_reclaim.c, where it belongs, along with a bit of other small
renaming and refactoring.

Besides fixing the hang, the better ordering between key cache/write
buffer flushing and btree node flushing should help or fix the "unmount
taking excessively long" issue a few users have been noticing.

Signed-off-by: Kent Overstreet <[email protected]>


Revision tags: v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3
# db514cf6 10-Oct-2024 Kent Overstreet <[email protected]>

bcachefs: Avoid bch2_btree_id_str()

Prefer bch2_btree_id_to_text() - it prints out the integer ID when
unknown.

Signed-off-by: Kent Overstreet <[email protected]>


Revision tags: v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3
# 968feb85 07-Aug-2024 Kent Overstreet <[email protected]>

bcachefs: Convert for_each_btree_node() to lockrestart_do()

for_each_btree_node() now works similarly to for_each_btree_key(), where
the loop body is passed as an argument to be passed to lockrestart_do().

This now calls trans_begin() on every loop iteration - which fixes an
SRCU warning in backpointers fsck.

Signed-off-by: Kent Overstreet <[email protected]>
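
The shape of that conversion can be sketched in userspace C: the loop body becomes an expression handed to a macro that calls trans_begin() on every iteration and retries on a transaction-restart error. The names and the statement-expression form mimic, but are not, the kernel's actual definitions.

```c
#include <assert.h>

#define ERR_TRANSACTION_RESTART	(-1)

struct trans { int restarts; };

/* stand-in for bch2_trans_begin(): reset per-iteration state */
static void trans_begin(struct trans *t) { (void)t; }

/* retry the body until it stops asking for a restart */
#define lockrestart_do(trans, expr)			\
({							\
	int _ret;					\
	do {						\
		trans_begin(trans);			\
		_ret = (expr);				\
	} while (_ret == ERR_TRANSACTION_RESTART);	\
	_ret;						\
})

/* a body that needs two restarts before it succeeds */
static int visit_node(struct trans *t)
{
	if (t->restarts < 2) {
		t->restarts++;
		return ERR_TRANSACTION_RESTART;
	}
	return 0;
}
```

Because trans_begin() runs on every pass, long-running iteration no longer holds an SRCU read lock across the whole walk - which is the warning the commit fixes.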


Revision tags: v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6
# 67c56411 29-Jun-2024 Kent Overstreet <[email protected]>

bcachefs: Fix loop restart in bch2_btree_transactions_read()

Accidental infinite loop; also fix btree_deadlock_to_text()

Signed-off-by: Kent Overstreet <[email protected]>


Revision tags: v6.10-rc5
# 1aaf5cb4 23-Jun-2024 Kent Overstreet <[email protected]>

bcachefs: Fix btree_trans list ordering

The debug code relies on btree_trans_list being ordered so that it can
resume on subsequent calls or lock restarts.

However, it was using trans->locking_wait.task.pid, which is incorrect
since btree_trans objects are cached and reused - typically by different
tasks.

Fix this by switching to pointer order, and also sort them lazily when
required - speeding up the btree_trans_get() fastpath.

Signed-off-by: Kent Overstreet <[email protected]>
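
A toy model of the ordering scheme: the debug reader resumes by remembering the last trans it printed, so the list needs a stable total order. A pid isn't stable (cached trans objects get reused by other tasks), but the object's own address is - and sorting is deferred until a reader needs it, keeping the add fastpath cheap. Every name here is illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

struct btree_trans {
	struct btree_trans *next;
};

struct trans_list {
	struct btree_trans *head;
	bool sorted;
};

/* fastpath: push at head, just mark the list unsorted */
static void trans_list_add(struct trans_list *l, struct btree_trans *t)
{
	t->next = l->head;
	l->head = t;
	l->sorted = false;
}

/* insertion sort by pointer value, run lazily by the debug reader */
static void trans_list_sort(struct trans_list *l)
{
	struct btree_trans *sorted = NULL;

	if (l->sorted)
		return;

	while (l->head) {
		struct btree_trans *t = l->head, **p = &sorted;

		l->head = t->next;
		while (*p && *p < t)
			p = &(*p)->next;
		t->next = *p;
		*p = t;
	}
	l->head = sorted;
	l->sorted = true;
}

/* resume: first trans strictly after the last one we printed */
static struct btree_trans *trans_list_next(struct trans_list *l,
					   struct btree_trans *prev)
{
	struct btree_trans *t;

	trans_list_sort(l);
	for (t = l->head; t; t = t->next)
		if (t > prev)
			return t;
	return NULL;
}
```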


# de611ab6 23-Jun-2024 Kent Overstreet <[email protected]>

bcachefs: Fix race between trans_put() and btree_transactions_read()

debug.c was using closure_get() on a different thread's closure, where
we don't know if the object being refcounted is alive.

We keep btree_trans objects on a list so they can be printed by debug
code, and because it is cost prohibitive to touch the btree_trans list
every time we allocate and free btree_trans objects, cached objects are
also on this list.

However, we do not want the debug code to see cached but not in use
btree_trans objects - critically because the btree_paths array will have
been freed (if it was reallocated).

closure_get() is also incorrect to use when that get may race with it
hitting zero, i.e. we must already have a ref on the object or know the
ref can't currently hit 0 for other reasons (as used in the cycle
detector).

To fix this, use the previously introduced closure_get_not_zero(),
closure_return_sync(), and closure_init_stack_release(); the debug code
now can only take a ref on a trans object if it's alive and in use.

Signed-off-by: Kent Overstreet <[email protected]>
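
The refcounting rule at issue can be modeled in a few lines of userspace C, simplified from the kernel's closure API (names here are illustrative): a plain get is only legal when the caller already holds a ref, so the count can't be zero; debug code peeking at a foreign thread's object must instead use a conditional get that fails once the count has reached zero.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct ref {
	atomic_int count;
};

/* caller already owns a ref, so the count can't currently be zero */
static void ref_get(struct ref *r)
{
	int old = atomic_fetch_add(&r->count, 1);

	assert(old > 0);
}

static void ref_put(struct ref *r)
{
	atomic_fetch_sub(&r->count, 1);
}

/* safe from a foreign thread: only succeeds while the object is alive */
static bool ref_get_not_zero(struct ref *r)
{
	int v = atomic_load(&r->count);

	while (v > 0)
		if (atomic_compare_exchange_weak(&r->count, &v, v + 1))
			return true;
	return false;
}
```

The compare-and-exchange loop is what makes the get race-free: it can never resurrect a count that has already hit zero, which is exactly the bug the plain closure_get() had in debug.c.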


# 18e92841 23-Jun-2024 Kent Overstreet <[email protected]>

bcachefs: Make btree_deadlock_to_text() clearer

btree_deadlock_to_text() searches the list of btree transactions to find
a deadlock - when it finds one it's done; unlike other *_read()
functions, it isn't printing every object.

Factor out btree_deadlock_to_text() to make this clearer.

Signed-off-by: Kent Overstreet <[email protected]>


# f44cc269 23-Jun-2024 Kent Overstreet <[email protected]>

bcachefs: fix seqmutex_relock()

We were grabbing the sequence number before unlock incremented it - fix
this by moving the increment to seqmutex_lock() (so the seqmutex_relock()
failure path skips the mutex_trylock()), and returning the sequence
number from unlock(), to make the API simpler and safer.

Signed-off-by: Kent Overstreet <[email protected]>
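
A sketch of the fixed API, modeled on (not copied from) bcachefs's seqmutex using a pthread mutex: the sequence number is bumped in lock(), unlock() returns the sequence it was locked at, and relock() succeeds only if nobody locked in between - so a stale caller skips even the trylock.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

struct seqmutex {
	pthread_mutex_t lock;
	unsigned seq;
};

static void seqmutex_lock(struct seqmutex *s)
{
	pthread_mutex_lock(&s->lock);
	s->seq++;
}

/* returns the sequence number to validate a later relock against */
static unsigned seqmutex_unlock(struct seqmutex *s)
{
	unsigned seq = s->seq;

	pthread_mutex_unlock(&s->lock);
	return seq;
}

static bool seqmutex_relock(struct seqmutex *s, unsigned seq)
{
	/* cheap check first: a changed seq skips the trylock entirely */
	if (s->seq != seq || pthread_mutex_trylock(&s->lock))
		return false;
	if (s->seq != seq) {	/* re-check now that we hold the lock */
		pthread_mutex_unlock(&s->lock);
		return false;
	}
	return true;
}
```

Returning the sequence from unlock() is what closes the original race: the caller can no longer read the counter before unlock has incremented it, because the increment now happens in lock() and unlock() hands back the value atomically with the release.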


Revision tags: v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7
# 2c91ab72 30-Apr-2024 Kent Overstreet <[email protected]>

bcachefs: bch2_dev_get_ioref() checks for device not present

Signed-off-by: Kent Overstreet <[email protected]>


# 91ffdecf 03-May-2024 Kent Overstreet <[email protected]>

bcachefs: bch2_dev_get_ioref2(); debug.c

Signed-off-by: Kent Overstreet <[email protected]>


Revision tags: v6.9-rc6, v6.9-rc5, v6.9-rc4
# 2f724563 12-Apr-2024 Kent Overstreet <[email protected]>

bcachefs: member helper cleanups

Some renaming for better consistency

bch2_member_exists -> bch2_member_alive
bch2_dev_exists -> bch2_member_exists
bch2_dev_exsits2 -> bch2_dev_exists
bch_dev_locked -> bch2_dev_locked
bch_dev_bkey_exists -> bch2_dev_bkey_exists

new helper - bch2_dev_safe

Signed-off-by: Kent Overstreet <[email protected]>


# 5dd8c60e 07-Apr-2024 Kent Overstreet <[email protected]>

bcachefs: iter/update/trigger/str_hash flag cleanup

Combine iter/update/trigger/str_hash flags into a single enum, and
x-macroize them for a to_text() function later.

These flags are all for a specific iter/key/update context, so it makes
sense to group them together - iter/update/trigger flags were already
given distinct bits, this cleans up and unifies that handling.

Signed-off-by: Kent Overstreet <[email protected]>
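
The x-macro shape the commit describes looks roughly like this: a single list defines the flags, and both the enum and the to_text() name table are generated from it, so the two can never drift apart. The flag names and helper below are loosely modeled on bcachefs but are illustrative, not the actual definitions.

```c
#include <assert.h>
#include <string.h>

#define BTREE_UPDATE_FLAGS()		\
	x(internal_snapshot_node)	\
	x(nopreserve)			\
	x(key_cache_reclaim)

/* expansion 1: the enum of bit numbers */
enum btree_update_flags {
#define x(n)	BTREE_UPDATE_##n,
	BTREE_UPDATE_FLAGS()
#undef x
	BTREE_UPDATE_NR
};

/* expansion 2: the name table for to_text() */
static const char * const btree_update_flag_strs[] = {
#define x(n)	#n,
	BTREE_UPDATE_FLAGS()
#undef x
};

/* to_text(): append the names of the set bits */
static void flags_to_text(char *buf, size_t len, unsigned flags)
{
	unsigned i;

	buf[0] = '\0';
	for (i = 0; i < BTREE_UPDATE_NR; i++)
		if (flags & (1U << i)) {
			strncat(buf, btree_update_flag_strs[i],
				len - strlen(buf) - 1);
			strncat(buf, " ", len - strlen(buf) - 1);
		}
}
```

Adding a flag is now a one-line change to the list; the enum value and its printable name stay in sync by construction.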


# 7423330e 10-Apr-2024 Kent Overstreet <[email protected]>

bcachefs: prt_printf() now respects \r\n\t

Signed-off-by: Kent Overstreet <[email protected]>


Revision tags: v6.9-rc3
# 9fb3036f 03-Apr-2024 Kent Overstreet <[email protected]>

bcachefs: Move btree_updates to debugfs

sysfs is limited to PAGE_SIZE, and when we're debugging strange
deadlocks/priority inversions we need to see the full list.

Signed-off-by: Kent Overstreet <[email protected]>


Revision tags: v6.9-rc2, v6.9-rc1
# e60aa472 14-Mar-2024 Thomas Bertschinger <[email protected]>

bcachefs: create debugfs dir for each btree

This creates a subdirectory for each individual btree under the btrees/
debugfs directory.

Directory structure, before:

/sys/kernel/debug/bcachefs/$FS_ID/btrees/
├── alloc
├── alloc-bfloat-failed
├── alloc-formats
├── backpointers
├── backpointers-bfloat-failed
├── backpointers-formats
...

Directory structure, after:

/sys/kernel/debug/bcachefs/$FS_ID/btrees/
├── alloc
│   ├── bfloat-failed
│   ├── formats
│   └── keys
├── backpointers
│   ├── bfloat-failed
│   ├── formats
│   └── keys
...

Signed-off-by: Thomas Bertschinger <[email protected]>
Signed-off-by: Kent Overstreet <[email protected]>


# 3ed94062 18-Mar-2024 Kent Overstreet <[email protected]>

bcachefs: Improve bch2_fatal_error()

error messages should always include __func__

Signed-off-by: Kent Overstreet <[email protected]>


Revision tags: v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3
# cb6fc943 01-Feb-2024 Kent Overstreet <[email protected]>

bcachefs: kill kvpmalloc()

Signed-off-by: Kent Overstreet <[email protected]>


Revision tags: v6.8-rc2
# 612e1110 22-Jan-2024 Kent Overstreet <[email protected]>

bcachefs: Add gfp flags param to bch2_prt_task_backtrace()

Fixes: e6a2566f7a00 ("bcachefs: Better journal tracepoints")
Signed-off-by: Kent Overstreet <[email protected]>
Reported-by: smatch


Revision tags: v6.8-rc1
# ec4edd7b 16-Jan-2024 Kent Overstreet <[email protected]>

bcachefs: Prep work for variable size btree node buffers

bcachefs btree nodes are big - typically 256k - and btree roots are
pinned in memory. As we're now up to 18 btrees, we now have significant
memory overhead in mostly empty btree roots.

And in the future we're going to start enforcing that certain btree node
boundaries exist, to solve lock contention issues - analogous to XFS's
AGIs.

Thus, we need to start allocating smaller btree node buffers when we
can. This patch changes code that refers to the filesystem constant
c->opts.btree_node_size to refer to the btree node buffer size -
btree_buf_bytes() - where appropriate.

Signed-off-by: Kent Overstreet <[email protected]>


Revision tags: v6.7
# c13fbb7d 04-Jan-2024 Kent Overstreet <[email protected]>

bcachefs: Improve would_deadlock trace event

We now include backtraces for every thread involved in the cycle.

Signed-off-by: Kent Overstreet <[email protected]>


# 8a0dda6f 03-Jan-2024 Kent Overstreet <[email protected]>

bcachefs: kill useless return ret

Signed-off-by: Kent Overstreet <[email protected]>


Revision tags: v6.7-rc8
# 89056f24 24-Dec-2023 Kent Overstreet <[email protected]>

bcachefs: track transaction durations

Signed-off-by: Kent Overstreet <[email protected]>

