|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14 |
|
| #
ae0a457f |
| 18-Mar-2025 |
Emil Tsalapatis <[email protected]> |
bpf: Make perf_event_read_output accessible in all program types.
The perf_event_read_event_output helper is currently only available to tracing programs, but is useful for other BPF programs like sched_ext schedulers. When the helper is available, provide its bpf_func_proto directly from the bpf base_proto.

Signed-off-by: Emil Tsalapatis (Meta) <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
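As a rough illustration of what this enables, here is a minimal sketch of a non-tracing BPF program reading a perf counter, assuming the helper exposed through the base proto is bpf_perf_event_read_value() and that the perf event array map is populated from userspace (the map name, function, and context are placeholders, not taken from the patch):

  /* sketch: perf counter read from a non-tracing BPF program */
  struct {
          __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, int);
  } counters SEC(".maps");

  static int read_counter(void)
  {
          struct bpf_perf_event_value val;

          /* BPF_F_CURRENT_CPU selects the event slot for this CPU */
          if (bpf_perf_event_read_value(&counters, BPF_F_CURRENT_CPU,
                                        &val, sizeof(val)))
                  return 0;
          return val.counter != 0;
  }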
|
|
Revision tags: v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1 |
|
| #
8c6eb7ca |
| 29-Jan-2025 |
Sebastian Andrzej Siewior <[email protected]> |
bpf: Use RCU in all users of __module_text_address().
__module_address() can be invoked within an RCU section; there is no requirement to have preemption disabled.

Replace the preempt_disable() section around __module_address() with RCU.

Cc: Alexei Starovoitov <[email protected]>
Cc: Andrii Nakryiko <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Eduard Zingerman <[email protected]>
Cc: Hao Luo <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: John Fastabend <[email protected]>
Cc: KP Singh <[email protected]>
Cc: Martin KaFai Lau <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Matt Bobrowski <[email protected]>
Cc: Song Liu <[email protected]>
Cc: Stanislav Fomichev <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Yonghong Song <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Petr Pavlu <[email protected]>
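The shape of the conversion, as a sketch (the surrounding function body is illustrative, not lifted from the patch):

  /* before: preemption disabled only to pin the module while we look at it */
  preempt_disable();
  mod = __module_text_address(addr);
  /* ... inspect mod ... */
  preempt_enable();

  /* after: an RCU read-side section gives the same existence guarantee */
  rcu_read_lock();
  mod = __module_text_address(addr);
  /* ... inspect mod ... */
  rcu_read_unlock();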
|
| #
4580f4e0 |
| 24-Feb-2025 |
Alexei Starovoitov <[email protected]> |
bpf: Fix deadlock between rcu_tasks_trace and event_mutex.
Fix the following deadlock:

CPU A:
  _free_event()
    perf_kprobe_destroy()
      mutex_lock(&event_mutex)
      perf_trace_event_unreg()
        synchronize_rcu_tasks_trace()

There are several paths where _free_event() grabs event_mutex and calls sync_rcu_tasks_trace. Above is one such case.

CPU B:
  bpf_prog_test_run_syscall()
    rcu_read_lock_trace()
    bpf_prog_run_pin_on_cpu()
      bpf_prog_load()
        bpf_tracing_func_proto()
          trace_set_clr_event()
            mutex_lock(&event_mutex)

Delegate trace_set_clr_event() to workqueue to avoid such lock dependency.

Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
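A sketch of the workqueue delegation; the work item name is hypothetical, and the trace_set_clr_event() arguments shown are the ones used to enable the bpf_trace_printk event:

  /* sketch: run the event_mutex-taking call from process context, so
   * the program-load path never waits on event_mutex itself */
  static void do_bpf_trace_printk_enable(struct work_struct *work)
  {
          trace_set_clr_event("bpf_trace", "bpf_trace_printk", 1);
  }
  static DECLARE_WORK(bpf_trace_printk_work, do_bpf_trace_printk_enable);

  /* instead of calling trace_set_clr_event() directly: */
  schedule_work(&bpf_trace_printk_work);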
|
| #
b4a8b5bb |
| 20-Feb-2025 |
Hou Tao <[email protected]> |
bpf: Use preempt_count() directly in bpf_send_signal_common()
bpf_send_signal_common() uses preemptible() to check whether or not the current context is preemptible. If it is not preemptible, it will use irq_work to send the signal asynchronously instead of trying to hold a spin-lock, because spin-lock is sleepable under PREEMPT_RT.

However, preemptible() depends on CONFIG_PREEMPT_COUNT. When CONFIG_PREEMPT_COUNT is turned off (e.g., CONFIG_PREEMPT_VOLUNTARY=y), !preemptible() will be evaluated as 1 and bpf_send_signal_common() will use irq_work unconditionally.

Fix it by unfolding "!preemptible()" and using "preempt_count() != 0 || irqs_disabled()" instead.

Fixes: 87c544108b61 ("bpf: Send signals asynchronously if !preemptible")
Signed-off-by: Hou Tao <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
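Sketched, the change to the check (the surrounding code is illustrative):

  /* before: with CONFIG_PREEMPT_COUNT=n, preemptible() is hardwired to 0,
   * so this branch is always taken */
  if (!preemptible())
          irq_work_queue(&work->irq_work);

  /* after: explicit checks that don't depend on CONFIG_PREEMPT_COUNT */
  if (preempt_count() != 0 || irqs_disabled())
          irq_work_queue(&work->irq_work);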
|
| #
72e213a7 |
| 07-Feb-2025 |
Peter Zijlstra <[email protected]> |
x86/ibt: Clean up is_endbr()
Pretty much every caller of is_endbr() actually wants to test something at an address and ends up doing get_kernel_nofault(). Fold the lot into a more convenient helper.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Sami Tolvanen <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Acked-by: "Masami Hiramatsu (Google)" <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
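Roughly, the folded helper would look like this (a sketch; the value-based predicate __is_endbr() is an assumption, not taken from the patch):

  /* is_endbr() now takes an address and does the safe read itself */
  bool is_endbr(u32 *addr)
  {
          u32 val;

          if (get_kernel_nofault(val, addr))
                  return false;
          return __is_endbr(val);
  }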
|
|
Revision tags: v6.13 |
|
| #
87c54410 |
| 15-Jan-2025 |
Puranjay Mohan <[email protected]> |
bpf: Send signals asynchronously if !preemptible
BPF programs can execute in all kinds of contexts and when a program running in a non-preemptible context uses the bpf_send_signal() kfunc, it will cause issues because this kfunc can sleep. Change `irqs_disabled()` to `!preemptible()`.
Reported-by: [email protected] Closes: https://lore.kernel.org/all/[email protected]/ Fixes: 1bc7896e9ef4 ("bpf: Fix deadlock with rq_lock in bpf_send_signal()") Signed-off-by: Puranjay Mohan <[email protected]> Acked-by: Yonghong Song <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Alexei Starovoitov <[email protected]>
show more ...
|
|
Revision tags: v6.13-rc7, v6.13-rc6 |
|
| #
4f7caaa2 |
| 31-Dec-2024 |
Masami Hiramatsu (Google) <[email protected]> |
bpf: Use ftrace_get_symaddr() for kprobe_multi probes
Add ftrace_get_entry_ip() which is only for ftrace based probes, and use it for kprobe multi probes because they are based on fprobe which uses ftrace instead of kprobes.
Cc: Alexei Starovoitov <[email protected]>
Cc: Florent Revest <[email protected]>
Cc: Martin KaFai Lau <[email protected]>
Cc: bpf <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Alan Maguire <[email protected]>
Cc: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/173566081414.878879.10631096557346094362.stgit@devnote2
Signed-off-by: Masami Hiramatsu (Google) <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
| #
2ebadb60 |
| 06-Jan-2025 |
Jiri Olsa <[email protected]> |
bpf: Return error for missed kprobe multi bpf program execution
When a kprobe multi bpf program can't be executed due to the recursion check, we currently return 0 (success) to the fprobe layer, where it's ignored for standard kprobe multi probes.

For a kprobe session, the success return value will make the fprobe layer install the return probe and try to execute it as well.

But the session return probe should not get executed, because the entry part did not run. FWIW the return probe bpf program most likely won't get executed anyway, because its recursion check will likely fail as well, but we don't need to run it in the first place; not doing so also makes the behavior clear and obvious.

It also affects missed counts for kprobe session program execution, which are now doubled (an extra count for the not-executed return probe).

Signed-off-by: Jiri Olsa <[email protected]>
Acked-by: Masami Hiramatsu (Google) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
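A sketch of the entry-handler change; the exact error value and the surrounding code are assumptions, not lifted from the patch:

  static int kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
                                        unsigned long entry_ip,
                                        struct pt_regs *regs)
  {
          int err;

          if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
                  bpf_prog_inc_misses_counter(link->link.prog);
                  err = -EBUSY;  /* nonzero: fprobe won't arm the return probe */
                  goto out;
          }
          /* ... run the program, set err from its return value ... */
   out:
          __this_cpu_dec(bpf_prog_active);
          return err;
  }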
|
| #
ca3c4f64 |
| 04-Jan-2025 |
Pu Lehui <[email protected]> |
bpf: Move out synchronize_rcu_tasks_trace from mutex CS
Commit ef1b808e3b7c ("bpf: Fix UAF via mismatching bpf_prog/attachment RCU flavors") resolved a possible UAF issue in uprobes that attach a non-sleepable bpf prog by explicitly waiting for a tasks-trace-RCU grace period. But in the current implementation, synchronize_rcu_tasks_trace is included within the mutex critical section, which increases the length of the critical section and may affect performance. So let's move synchronize_rcu_tasks_trace out of the mutex CS.

Signed-off-by: Pu Lehui <[email protected]>
Reviewed-by: Jiri Olsa <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
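The reordering, sketched (the lock shown is illustrative; the point is where the grace-period wait sits):

  /* before: grace-period wait inside the critical section */
  mutex_lock(&event_mutex);
  /* ... detach the program from the event ... */
  if (!prog->sleepable)
          synchronize_rcu_tasks_trace();   /* can take a long time */
  mutex_unlock(&event_mutex);

  /* after: drop the lock first, then wait */
  mutex_lock(&event_mutex);
  /* ... detach the program from the event ... */
  mutex_unlock(&event_mutex);
  if (!prog->sleepable)
          synchronize_rcu_tasks_trace();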
|
|
Revision tags: v6.13-rc5 |
|
| #
8e2759da |
| 26-Dec-2024 |
Masami Hiramatsu (Google) <[email protected]> |
bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled
Enable the kprobe_multi feature if CONFIG_FPROBE is enabled. The pt_regs is converted from ftrace_regs by ftrace_partial_regs(), thus some registers may always return 0. But it should be enough for function entry (accessing arguments) and exit (accessing the return value).

Cc: Alexei Starovoitov <[email protected]>
Cc: Martin KaFai Lau <[email protected]>
Cc: bpf <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Alan Maguire <[email protected]>
Cc: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/173519000417.391279.14011193569589886419.stgit@devnote2
Signed-off-by: Masami Hiramatsu (Google) <[email protected]>
Acked-by: Florent Revest <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
| #
762abbc0 |
| 26-Dec-2024 |
Masami Hiramatsu (Google) <[email protected]> |
fprobe: Use ftrace_regs in fprobe exit handler
Change the fprobe exit handler to use the ftrace_regs structure instead of pt_regs. This also introduces HAVE_FTRACE_REGS_HAVING_PT_REGS, which means that ftrace_regs includes pt_regs, so ftrace_regs can provide pt_regs without memory allocation. Fprobe gains a new dependency on that.

Signed-off-by: Masami Hiramatsu (Google) <[email protected]>
Acked-by: Heiko Carstens <[email protected]> # s390
Cc: Huacai Chen <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Florent Revest <[email protected]>
Cc: bpf <[email protected]>
Cc: Alan Maguire <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: WANG Xuerui <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Alexander Gordeev <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Sven Schnelle <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: [email protected]
Cc: "H. Peter Anvin" <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Song Liu <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: KP Singh <[email protected]>
Cc: Matt Bobrowski <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Andrii Nakryiko <[email protected]>
Cc: Martin KaFai Lau <[email protected]>
Cc: Eduard Zingerman <[email protected]>
Cc: Yonghong Song <[email protected]>
Cc: John Fastabend <[email protected]>
Cc: Stanislav Fomichev <[email protected]>
Cc: Hao Luo <[email protected]>
Cc: Andrew Morton <[email protected]>
Link: https://lore.kernel.org/173518995092.391279.6765116450352977627.stgit@devnote2
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
| #
46bc0823 |
| 26-Dec-2024 |
Masami Hiramatsu (Google) <[email protected]> |
fprobe: Use ftrace_regs in fprobe entry handler
This allows fprobes to be available with CONFIG_DYNAMIC_FTRACE_WITH_ARGS instead of CONFIG_DYNAMIC_FTRACE_WITH_REGS, then we can enable fprobe on arm64.
Cc: Alexei Starovoitov <[email protected]>
Cc: Martin KaFai Lau <[email protected]>
Cc: bpf <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Alan Maguire <[email protected]>
Cc: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/173518994037.391279.2786805566359674586.stgit@devnote2
Signed-off-by: Masami Hiramatsu (Google) <[email protected]>
Acked-by: Florent Revest <[email protected]>
Signed-off-by: Steven Rostedt (Google) <[email protected]>
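Taken together with the exit-handler change above, the fprobe handler shapes after the conversion look roughly like this (signatures assumed from the description, not copied from the patch):

  int my_entry_handler(struct fprobe *fp, unsigned long entry_ip,
                       unsigned long ret_ip, struct ftrace_regs *fregs,
                       void *entry_data);

  void my_exit_handler(struct fprobe *fp, unsigned long entry_ip,
                       unsigned long ret_ip, struct ftrace_regs *fregs,
                       void *entry_data);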
|
|
Revision tags: v6.13-rc4, v6.13-rc3, v6.13-rc2 |
|
| #
978c4486 |
| 08-Dec-2024 |
Jiri Olsa <[email protected]> |
bpf,perf: Fix invalid prog_array access in perf_event_detach_bpf_prog
Syzbot reported [1] a crash that happens for the following tracing scenario:

- create a tracepoint perf event with attr.inherit=1, attach it to the process and set a bpf program to it
- the attached process forks -> the child creates an inherited event

  the new child event shares the parent's bpf program and tp_event (hence prog_array), which is global for the tracepoint

- exit both the process and its child -> release both events
- the first perf_event_detach_bpf_prog call will release tp_event->prog_array and the second perf_event_detach_bpf_prog will crash, because tp_event->prog_array is NULL

The fix makes sure that perf_event_detach_bpf_prog checks that prog_array is valid before it tries to remove the bpf program from it.

[1] https://lore.kernel.org/bpf/Z1MR6dCIKajNS6nU@krava/T/#m91dbf0688221ec7a7fc95e896a7ef9ff93b0b8ad

Fixes: 0ee288e69d03 ("bpf,perf: Fix perf_event_detach_bpf_prog error handling")
Reported-by: [email protected]
Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
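The fix, sketched (simplified; surrounding error handling and cleanup elided):

  void perf_event_detach_bpf_prog(struct perf_event *event)
  {
          struct bpf_prog_array *old_array;

          /* ... */
          old_array = bpf_event_rcu_dereference(event->tp_event->prog_array);
          if (!old_array)
                  goto put;   /* already released via the sibling event */
          /* ... remove event->prog from old_array ... */
  }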
|
| #
ef1b808e |
| 10-Dec-2024 |
Jann Horn <[email protected]> |
bpf: Fix UAF via mismatching bpf_prog/attachment RCU flavors
Uprobes always use bpf_prog_run_array_uprobe() under tasks-trace-RCU protection. But it is possible to attach a non-sleepable BPF program to a uprobe, and non-sleepable BPF programs are freed via normal RCU (see __bpf_prog_put_noref()). This leads to UAF of the bpf_prog because a normal RCU grace period does not imply a tasks-trace-RCU grace period.
Fix it by explicitly waiting for a tasks-trace-RCU grace period after removing the attachment of a bpf_prog to a perf_event.
Fixes: 8c7dcb84e3b7 ("bpf: implement sleepable uprobes by chaining gps")
Suggested-by: Andrii Nakryiko <[email protected]>
Suggested-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Jann Horn <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/bpf/[email protected]
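In sketch form, the mismatch and the fix (gating the wait on non-sleepable programs is implied by the description, since sleepable programs are already freed via tasks-trace RCU):

  /* dispatch side: the prog array is walked under tasks-trace RCU */
  rcu_read_lock_trace();
  ret = bpf_prog_run_array_uprobe(/* ... */);
  rcu_read_unlock_trace();

  /* detach side: a normal-RCU grace period does not cover the reader
   * above, so wait for tasks-trace RCU before the final put */
  if (!prog->sleepable)
          synchronize_rcu_tasks_trace();
  bpf_prog_put(prog);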
|
|
Revision tags: v6.13-rc1 |
|
| #
3bfb49d7 |
| 29-Nov-2024 |
Marco Elver <[email protected]> |
bpf: Refactor bpf_tracing_func_proto() and remove bpf_get_probe_write_proto()
With bpf_get_probe_write_proto() no longer printing a message, we can avoid it being a special case with its own permission check.
Refactor bpf_tracing_func_proto() similar to bpf_base_func_proto() to have a section conditional on bpf_token_capable(CAP_SYS_ADMIN), where the proto for bpf_probe_write_user() is returned. Finally, remove the unnecessary bpf_get_probe_write_proto().
This simplifies the code, and adding additional CAP_SYS_ADMIN-only helpers in the future avoids duplicating the same CAP_SYS_ADMIN check.

Suggested-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Marco Elver <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
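The refactored shape, sketched (case lists abbreviated; the token lookup is assumed to mirror bpf_base_func_proto()):

  static const struct bpf_func_proto *
  bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
  {
          switch (func_id) {
          /* ... helpers available to all tracing programs ... */
          default:
                  break;
          }

          if (!bpf_token_capable(prog->aux->token, CAP_SYS_ADMIN))
                  return NULL;

          /* CAP_SYS_ADMIN-only section; future privileged helpers go here */
          switch (func_id) {
          case BPF_FUNC_probe_write_user:
                  return &bpf_probe_write_user_proto;
          default:
                  return NULL;
          }
  }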
|
| #
b28573eb |
| 29-Nov-2024 |
Marco Elver <[email protected]> |
bpf: Remove bpf_probe_write_user() warning message
The warning message for bpf_probe_write_user() was introduced in 96ae52279594 ("bpf: Add bpf_probe_write_user BPF helper to be called in tracers"), with the following in the commit message:
Given this feature is meant for experiments, and it has a risk of crashing the system, and running programs, we print a warning on when a proglet that attempts to use this helper is installed, along with the pid and process name.
After 8 years since 96ae52279594, bpf_probe_write_user() has found successful applications beyond experiments [1, 2], with no other good alternatives. Despite its intended purpose for "experiments", that doesn't stop Hyrum's law, and there are likely many more users depending on this helper: "[..] it does not matter what you promise [..] all observable behaviors of your system will be depended on by somebody."
The ominous "helper that may corrupt user memory!" has offered no real benefit, and has been found to lead to confusion where the system administrator is loading programs with valid use cases.
As such, remove the warning message.
Link: https://lore.kernel.org/lkml/[email protected]/ [1]
Link: https://lore.kernel.org/r/lkml/CAAn3qOUMD81-vxLLfep0H6rRd74ho2VaekdL4HjKq+Y1t9KdXQ@mail.gmail.com/ [2]
Link: https://lore.kernel.org/all/CAEf4Bzb4D_=zuJrg3PawMOW3KqF8JvJm9SwF81_XHR2+u5hkUg@mail.gmail.com/
Signed-off-by: Marco Elver <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
Revision tags: v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1 |
|
| #
b9c44b91 |
| 15-May-2024 |
Yabin Cui <[email protected]> |
perf/core: Save raw sample data conditionally based on sample type
Currently, space for raw sample data is always allocated within sample records for both BPF output and tracepoint events. This leads to unused space in sample records when raw sample data is not requested.

This patch enforces checking the sample type of an event in perf_sample_save_raw_data(), so raw sample data will only be saved if explicitly requested, reducing overhead when it is not needed.

Fixes: 0a9081cf0a11 ("perf/core: Add perf_sample_save_raw_data() helper")
Signed-off-by: Yabin Cui <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Ian Rogers <[email protected]>
Acked-by: Namhyung Kim <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
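Sketch of the added gate (the new event parameter carries the sample type; the rest of the helper is elided):

  static inline void perf_sample_save_raw_data(struct perf_sample_data *data,
                                               struct perf_event *event,
                                               struct perf_raw_record *raw)
  {
          /* only spend record space on raw data when it was requested */
          if (!(event->attr.sample_type & PERF_SAMPLE_RAW))
                  return;
          /* ... save raw data into the sample record ... */
  }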
|
| #
99b403d2 |
| 08-Nov-2024 |
Jiri Olsa <[email protected]> |
bpf: Add support for uprobe multi session context
Placing the bpf_session_run_ctx layer in between bpf_run_ctx and bpf_uprobe_multi_run_ctx, so the session data can be retrieved from the uprobe_multi link.

Plus granting session kfuncs access to uprobe session programs.

Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
| #
d920179b |
| 08-Nov-2024 |
Jiri Olsa <[email protected]> |
bpf: Add support for uprobe multi session attach
Adding support to attach a BPF program for the entry and return probe of the same function. This is a common use case which at the moment requires creating two uprobe multi links.

Adding a new BPF_TRACE_UPROBE_SESSION attach type that instructs the kernel to attach a single link program to both the entry and exit probe.

It's possible to control execution of the BPF program on the return probe simply by returning zero or non-zero from the entry BPF program execution, to execute or skip the BPF program on the return probe, respectively.

Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
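A user-side sketch of a session program; the binary path, function name, and pid filter are placeholders, and bpf_session_is_return() is the session kfunc assumed here:

  const volatile int target_pid = 0;   /* set from userspace before load */

  extern bool bpf_session_is_return(void) __ksym;

  SEC("uprobe.session//usr/bin/app:compute")
  int BPF_UPROBE(handle_compute)
  {
          if (bpf_session_is_return())
                  return 0;   /* exit half: runs only if entry returned 0 */

          /* entry half: nonzero return skips the paired return probe */
          return (bpf_get_current_pid_tgid() >> 32) != target_pid;
  }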
|
| #
f505005b |
| 08-Nov-2024 |
Jiri Olsa <[email protected]> |
bpf: Force uprobe bpf program to always return 0
As suggested by Andrii, make uprobe multi bpf programs always return 0, so they can't force uprobe removal.

Keeping the int return type for uprobe_prog_run, because it will be used in the following session changes.

Fixes: 89ae89f53d20 ("bpf: Add multi uprobe link")
Suggested-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
| #
0ee288e6 |
| 23-Oct-2024 |
Jiri Olsa <[email protected]> |
bpf,perf: Fix perf_event_detach_bpf_prog error handling
Peter reported that perf_event_detach_bpf_prog might skip releasing the bpf program for an -ENOENT error from bpf_prog_array_copy.

This can't happen, because the bpf program is stored in the perf event and is detached and released only when the perf event is freed.

Let's drop the -ENOENT check and make sure the bpf program is released in any case.

Fixes: 170a7e3ea070 ("bpf: bpf_prog_array_copy() should return -ENOENT if exclude_prog not found")
Reported-by: Peter Zijlstra <[email protected]>
Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
Closes: https://lore.kernel.org/lkml/[email protected]/
|
| #
da09a9e0 |
| 18-Oct-2024 |
Jiri Olsa <[email protected]> |
uprobe: Add data pointer to consumer handlers
Adding a data pointer to both entry and exit consumer handlers and all their users. The functionality itself is coming in a following change.

Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Oleg Nesterov <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
| #
6fad274f |
| 21-Oct-2024 |
Daniel Borkmann <[email protected]> |
bpf: Add MEM_WRITE attribute
Add a MEM_WRITE attribute for BPF helper functions which can be used in bpf_func_proto to annotate an argument type in order to let the verifier know that the helper writes into the memory passed as an argument. In the past MEM_UNINIT has been (ab)used for this function, but the latter merely tells the verifier that the passed memory can be uninitialized.
There have been bugs with overloading the latter but aside from that there are also cases where the passed memory is read + written which currently cannot be expressed, see also 4b3786a6c539 ("bpf: Zero former ARG_PTR_TO_{LONG,INT} args in case of error").
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Kumar Kartikeya Dwivedi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
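An illustrative proto annotation; the helper itself is hypothetical, the point is the flag combination:

  /* arg2 points at memory the helper writes into; MEM_UNINIT says it may
   * be uninitialized on entry, MEM_WRITE says the helper writes it */
  static const struct bpf_func_proto bpf_fill_buf_proto = {
          .func      = bpf_fill_buf,
          .gpl_only  = false,
          .ret_type  = RET_INTEGER,
          .arg1_type = ARG_ANYTHING,
          .arg2_type = ARG_PTR_TO_MEM | MEM_UNINIT | MEM_WRITE,
          .arg3_type = ARG_CONST_SIZE,
  };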
|
| #
6280cf71 |
| 16-Oct-2024 |
Puranjay Mohan <[email protected]> |
bpf: Implement bpf_send_signal_task() kfunc
Implement bpf_send_signal_task kfunc that is similar to bpf_send_signal_thread and bpf_send_signal helpers but can be used to send signals to other threads and processes. It also supports sending a cookie with the signal similar to sigqueue().
If the receiving process establishes a handler for the signal using the SA_SIGINFO flag to sigaction(), then it can obtain this cookie via the si_value field of the siginfo_t structure passed as the second argument to the handler.
Signed-off-by: Puranjay Mohan <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
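A usage sketch from BPF C; the signal choice and cookie are placeholders, and bpf_task_from_pid()/bpf_task_release() are the usual task kfuncs assumed for acquiring the target:

  extern struct task_struct *bpf_task_from_pid(s32 pid) __ksym;
  extern void bpf_task_release(struct task_struct *p) __ksym;
  extern int bpf_send_signal_task(struct task_struct *task, int sig,
                                  enum pid_type type, u64 value) __ksym;

  static int notify(int pid, u64 cookie)
  {
          struct task_struct *task = bpf_task_from_pid(pid);

          if (!task)
                  return -1;
          /* the cookie shows up in the handler's siginfo si_value */
          bpf_send_signal_task(task, SIGUSR1, PIDTYPE_PID, cookie);
          bpf_task_release(task);
          return 0;
  }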
|
| #
ad6b5b6e |
| 11-Oct-2024 |
Tyrone Wu <[email protected]> |
bpf: Fix unpopulated path_size when uprobe_multi fields unset
Previously, when retrieving `bpf_link_info.uprobe_multi` with the `path` and `path_size` fields unset, the `path_size` field is not populated (it remains 0). This behavior was inconsistent with how other input/output string buffer fields work, as the field should be populated when:

- both buffer and length are set (currently works as expected)
- both buffer and length are unset (not working as expected)

This patch now fills the `path_size` field when `path` and `path_size` are unset.

Fixes: e56fdbfb06e2 ("bpf: Add link_info support for uprobe multi link")
Signed-off-by: Tyrone Wu <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
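From userspace this enables the usual two-call sizing pattern; a sketch using libbpf's bpf_link_get_info_by_fd() (error handling elided):

  struct bpf_link_info info = {};
  __u32 info_len = sizeof(info);

  /* first call with path unset: the kernel now reports path_size */
  bpf_link_get_info_by_fd(link_fd, &info, &info_len);

  /* second call with a buffer of the reported size */
  char *path = calloc(1, info.uprobe_multi.path_size);
  info.uprobe_multi.path = (__u64)(uintptr_t)path;
  bpf_link_get_info_by_fd(link_fd, &info, &info_len);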
|