| 8988c4b9 | 29-Apr-2025 |
James Clark <[email protected]> |
perf tools: Fix in-source libperf build
When libperf is built alone in-source, $(OUTPUT) isn't set. This causes the generated uapi path to resolve to '/../arch', which results in a permissions error:
  mkdir: cannot create directory '/../arch': Permission denied
Fix it by removing the preceding '/..', which means the header gets generated either in the tools/lib/perf part of the tree or in the OUTPUT folder. Some other rules that rely on OUTPUT refine this further depending on whether the build is in-source or out-of-source, but we don't need that extra complexity here. This rule is also slightly different from the others because the header is needed by both libperf and Perf. That is further complicated by the fact that Perf always passes O=... to libperf even for in-source builds, meaning OUTPUT isn't set consistently between the two projects.
Because we're no longer going one level up to try to generate the file in the tools/ folder, Perf's include rule needs to descend into libperf. Also fix the clean rule while we're here.
Reported-by: Thorsten Leemhuis <[email protected]>
Closes: https://lore.kernel.org/linux-perf-users/[email protected]/
Fixes: bfb713ea53c7 ("perf tools: Fix arm64 build by generating unistd_64.h")
Signed-off-by: James Clark <[email protected]>
Tested-by: Thorsten Leemhuis <[email protected]>
Link: https://lore.kernel.org/r/20250429-james-perf-fix-libperf-in-source-build-v1-1-a1a827ac15e5@linaro.org
Signed-off-by: Namhyung Kim <[email protected]>
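A minimal C sketch of why an empty $(OUTPUT) produces that failing path (illustrative only; the real logic lives in the libperf Makefile rules, not in C):

  /* Illustrative only: when $(OUTPUT) is the empty string, the old
   * rule's "$(OUTPUT)/../arch" collapses to "/../arch", i.e. "/arch",
   * which an unprivileged user cannot create. */
  #include <errno.h>
  #include <stdio.h>
  #include <sys/stat.h>

  int main(void)
  {
          const char *output = "";  /* $(OUTPUT) unset in an in-source build */
          char path[64];

          snprintf(path, sizeof(path), "%s/../arch", output);
          if (mkdir(path, 0755) && errno == EACCES)
                  printf("mkdir: cannot create directory '%s': Permission denied\n",
                         path);
          return 0;
  }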
|
| bfb94675 | 06-Dec-2024 |
Ian Rogers <[email protected]> |
libperf cpumap: Grow array of read CPUs in smaller increments
Instead of growing the array by 2048, grow by the larger of the current range or 16.
As ranges are typical for things like the online CPUs, this means a single allocation happens. Uncore CPU maps, meanwhile, will grow 16 entries at a time, a value that is generous except perhaps on large servers.
Reviewed-by: Leo Yan <[email protected]>
Signed-off-by: Ian Rogers <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Ben Gainey <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: James Clark <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Kyle Meyer <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
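A hedged sketch of the growth policy described above (hypothetical names such as grow_cpus; this is not libperf's actual code):

  /* Hypothetical sketch: grow the CPU array by the larger of the range
   * being added or 16, instead of a fixed 2048-entry step, so reading a
   * single range like "0-127" costs at most one allocation. */
  #include <stdlib.h>

  static int *grow_cpus(int *cpus, size_t nr, size_t *alloc, size_t range)
  {
          if (nr + range > *alloc) {
                  size_t want = nr + (range > 16 ? range : 16);
                  int *tmp = realloc(cpus, want * sizeof(*tmp));

                  if (!tmp)
                          return NULL;  /* old array still valid in caller */
                  *alloc = want;
                  cpus = tmp;
          }
          return cpus;
  }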
|
| 9a1e1065 | 06-Dec-2024 |
Kyle Meyer <[email protected]> |
perf: Increase MAX_NR_CPUS to 4096
Systems have surpassed 2048 CPUs. Increase MAX_NR_CPUS to 4096.
Bitmaps declared with MAX_NR_CPUS bits will increase from 256B to 512B, cpus_runtime will increase from 81960B to 163880B, and max_entries will increase from 8192B to 16384B.
Reviewed-by: Ian Rogers <[email protected]>
Reviewed-by: Leo Yan <[email protected]>
Signed-off-by: Kyle Meyer <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Ben Gainey <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: James Clark <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
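The bitmap figure is plain bits-to-bytes arithmetic; a small sketch (DECLARE_BITMAP reproduced in simplified form, not the kernel header) confirms it:

  /* Simplified form of the kernel's DECLARE_BITMAP, to check the math:
   * 4096 bits / 8 = 512 bytes (previously 2048 / 8 = 256 bytes). */
  #include <stdio.h>

  #define MAX_NR_CPUS   4096
  #define BITS_PER_LONG (8 * sizeof(unsigned long))
  #define DECLARE_BITMAP(name, bits) \
          unsigned long name[((bits) + BITS_PER_LONG - 1) / BITS_PER_LONG]

  static DECLARE_BITMAP(cpu_mask, MAX_NR_CPUS);

  int main(void)
  {
          printf("bitmap size: %zu bytes\n", sizeof(cpu_mask));  /* 512 */
          return 0;
  }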
|