If verbose is enabled and parse_event is called, typically by tests,
log failures.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
print_metric_only_json and print_metric_end in stat-display.c may
create a metric value of "none", which fails the isfloat
validation. Add a helper to properly validate metric numeric values.
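As a rough sketch of such a validator (not perf's actual helper; the
function name and the choice to accept the literal "none" are
assumptions based on the description above):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: accept anything that fully parses as a float,
 * plus the literal "none" that print_metric_only_json/print_metric_end
 * may emit. Illustrative only, not the real perf code. */
static int is_metric_value(const char *s)
{
        char *end;

        if (!strcmp(s, "none"))
                return 1;       /* explicitly allow the non-numeric value */
        strtod(s, &end);
        return end != s && *end == '\0';  /* entire string was numeric */
}
```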
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The evsel_script() function is unused since the linked commit. Fix the
build by removing it.
Fixes the following compilation error:
builtin-script.c:347:36: error: unused function 'evsel_script' [-Werror,-Wunused-function]
static inline struct evsel_script *evsel_script(struct evsel *evsel)
                                   ^
Fixes: 3622990efa ("perf script: Change metric format to use json metrics")
Signed-off-by: James Clark <james.clark@linaro.org>
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
errno.h isn't used in auxtrace.h so remove it and fix build failures
caused by transitive dependencies through auxtrace.h on errno.h.
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The NO_AUXTRACE build option was used when the __get_cpuid feature
test failed or when it was given on the command line. The option no
longer avoids a dependency on a library, so having it just adds
complexity to the code base. Remove CONFIG_AUXTRACE from the Build
files and remove HAVE_AUXTRACE_SUPPORT by assuming it is always
defined.
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
This feature test is no longer used so remove it.
The function tested by the feature test is used in:
tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c
however, the Makefile just assumes the presence of the function and
doesn't perform a build feature test for it.
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The intel-pt code that depended on __get_cpuid is no longer present, so
remove the feature test from Makefile.config.
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Rather than having a feature test and include of <cpuid.h> for the
__get_cpuid function, use the cpuid function provided by
tools/perf/arch/x86/util/cpuid.h.
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
This adds test cases to verify the precise ip fallback logic:
- If the system supports precise ip, then for an event requested with
the maximum precision level, perf should be able to decrease
precise_ip to find a supported level.
- The same fallback behavior should also work in more complex scenarios,
such as event groups or when PEBS is involved.
Additional fallback tests, such as those covering missing feature cases,
can be added in the future.
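The fallback logic under test can be sketched in plain C with a mocked
open call; the helper names and the "hardware supports at most level 2"
assumption are illustrative, with fake_event_open() standing in for
perf_event_open():

```c
/* Sketch of the precise_ip fallback: start at the maximum precision
 * level and decrease until the event opens. Illustrative only. */
#define MAX_PRECISE_IP 3

static int fake_event_open(int precise_ip)
{
        return precise_ip <= 2 ? 0 : -1;  /* pretend level 3 is unsupported */
}

static int open_with_fallback(int *precise_ip)
{
        *precise_ip = MAX_PRECISE_IP;
        while (fake_event_open(*precise_ip) < 0) {
                if (*precise_ip == 0)
                        return -1;        /* no supported precision level */
                (*precise_ip)--;
        }
        return 0;
}
```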
Suggested-by: Ian Rogers <irogers@google.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
One of my concerns in the perf stat output was the alignment of the
metrics and shadow stats. I think it calculated the basic output
length using COUNTS_LEN and EVNAME_LEN but missed adding the unit
length like "msec" and the surrounding 2 spaces. I'm not sure why it's
not printed below though.
But anyway, now it shows correctly aligned metric output.
$ perf stat true
Performance counter stats for 'true':
859,772 task-clock # 0.395 CPUs utilized
0 context-switches # 0.000 /sec
0 cpu-migrations # 0.000 /sec
56 page-faults # 65.134 K/sec
1,075,022 instructions # 0.86 insn per cycle
1,255,911 cycles # 1.461 GHz
220,573 branches # 256.548 M/sec
7,381 branch-misses # 3.35% of all branches
TopdownL1 # 19.2 % tma_retiring
# 28.6 % tma_backend_bound
# 9.5 % tma_bad_speculation
# 42.6 % tma_frontend_bound
0.002174871 seconds time elapsed        ^
                                        |
0.002154000 seconds user                |
0.000000000 seconds sys               here
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
For the sake of better documentation, add core_wide and target_cpu to
the tool.json. When the values of system_wide and
user_requested_cpu_list are unknown, use the values from the global
stat_config.
Example output showing how '-a' modifies the values in `perf stat`:
```
$ perf stat -e core_wide,target_cpu true
Performance counter stats for 'true':
0 core_wide
0 target_cpu
0.000993787 seconds time elapsed
0.001128000 seconds user
0.000000000 seconds sys
$ perf stat -e core_wide,target_cpu -a true
Performance counter stats for 'system wide':
1 core_wide
1 target_cpu
0.002271723 seconds time elapsed
$ perf list
...
tool:
core_wide
[1 if not SMT; if SMT, 1 if events are being gathered on all SMT threads, otherwise 0. Unit: tool]
...
target_cpu
[1 if CPUs are being analyzed, 0 if threads/processes. Unit: tool]
...
```
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Explicitly use a metric rather than implicitly expecting '-e
instructions,cycles' to produce a metric. Use a metric with software
events to make it more compatible.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
test_stat_record_report and test_stat_record_script used the default
output, which triggers a bug when sending metrics. As this isn't
relevant to the test, switch to using named software events.
Update the match in test_hybrid as the cycles event is now cpu-cycles,
to work around potential ARM issues.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Previously '-e cycles,instructions' would implicitly create an IPC
metric. This now has to be explicit with '-M insn_per_cycle'.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Default metrics may use unsupported events and be ignored. These
metrics shouldn't cause metric testing to fail.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Make the expectations match json metrics rather than the previous hard
coded ones.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The Default[234] metric groups may contain unsupported legacy
events. Allow those metric groups to fail.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
When testing metric-only, pass a metric to perf rather than expecting
a hard coded metric value to be generated.
Remove keys that were really metric-only units; instead, don't expect
metric-only output to have a matching json key, as it encodes metrics
as {"metric_name": "metric_value"}.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Remove code that tested the "unit" as in KB/sec for certain hard coded
metric values and did workarounds.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The logic to skip output of a default metric line was firing on
Alderlake and not displaying 'TopdownL1 (cpu_atom)'. Remove the
need_full_name check as it is equivalent to the different-PMU test in
the cases we care about, merge the 'if's, and flip the evsel of the
PMU test. The 'if' now basically says: if the output matches the last
printed output then skip it.
Before:
```
TopdownL1 (cpu_core) # 11.3 % tma_bad_speculation
# 24.3 % tma_frontend_bound
TopdownL1 (cpu_core) # 33.9 % tma_backend_bound
# 30.6 % tma_retiring
# 42.2 % tma_backend_bound
# 25.0 % tma_frontend_bound (49.81%)
# 12.8 % tma_bad_speculation
# 20.0 % tma_retiring (59.46%)
```
After:
```
TopdownL1 (cpu_core) # 8.3 % tma_bad_speculation
# 43.7 % tma_frontend_bound
# 30.7 % tma_backend_bound
# 17.2 % tma_retiring
TopdownL1 (cpu_atom) # 31.9 % tma_backend_bound
# 37.6 % tma_frontend_bound (49.66%)
# 18.0 % tma_bad_speculation
# 12.6 % tma_retiring (59.58%)
```
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Now that the metrics are encoded in common json the hard coded
printing means the metrics are shown twice. Remove the hard coded
version.
This means that when specifying events, and those events correspond to
a hard coded metric, the metric will no longer be displayed. The
metric will be displayed if it is explicitly requested. The ad hoc
printing in the previous approach was often found frustrating; the new
approach avoids this.
The default perf stat output on an alderlake now looks like:
```
$ perf stat -a -- sleep 1
Performance counter stats for 'system wide':
19,697 context-switches # nan cs/sec cs_per_second
TopdownL1 (cpu_core) # 10.7 % tma_bad_speculation
# 24.9 % tma_frontend_bound
TopdownL1 (cpu_core) # 34.3 % tma_backend_bound
# 30.1 % tma_retiring
6,593 page-faults # nan faults/sec page_faults_per_second
729,065,658 cpu_atom/cpu-cycles/ # nan GHz cycles_frequency (49.79%)
1,605,131,101 cpu_core/cpu-cycles/ # nan GHz cycles_frequency
# 19.7 % tma_bad_speculation
# 14.2 % tma_retiring (50.14%)
# 37.3 % tma_frontend_bound (50.31%)
87,302,268 cpu_atom/branches/ # nan M/sec branch_frequency (60.27%)
512,046,956 cpu_core/branches/ # nan M/sec branch_frequency
1,111 cpu-migrations # nan migrations/sec migrations_per_second
# 28.8 % tma_backend_bound (60.26%)
0.00 msec cpu-clock # 0.0 CPUs CPUs_utilized
392,509,323 cpu_atom/instructions/ # 0.6 instructions insn_per_cycle (60.19%)
2,990,369,310 cpu_core/instructions/ # 1.9 instructions insn_per_cycle
3,493,478 cpu_atom/branch-misses/ # 5.9 % branch_miss_rate (49.69%)
7,297,531 cpu_core/branch-misses/ # 1.4 % branch_miss_rate
1.006621701 seconds time elapsed
```
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The metric format option isn't properly supported. This change
improves that by making the sample events update the counts of an
evsel, where the shadow metric code expects to read the values. To
support printing metrics, metrics need to be found. This is done on
the first attempt to print a metric. Every metric is parsed and then
the evsels in the metric's evlist are compared to those in perf script
using the perf_event_attr type and config. If the metric matches then
it is added for printing. As an event in perf script's evlist may have
more than one metric id, or a different leader for aggregation, the
first matched metric will be displayed in those cases.
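The matching step described above could be sketched like this, with
simplified stand-in structures rather than perf's actual types:

```c
#include <stddef.h>

/* Illustrative sketch: a metric matches perf script's evlist only if
 * every one of the metric's events is present, compared by the
 * perf_event_attr (type, config) pair. Simplified types. */
struct attr { int type; unsigned long long config; };

static int evlist_has(const struct attr *evlist, size_t n, struct attr want)
{
        for (size_t i = 0; i < n; i++)
                if (evlist[i].type == want.type &&
                    evlist[i].config == want.config)
                        return 1;
        return 0;
}

static int metric_matches(const struct attr *evlist, size_t n,
                          const struct attr *metric_evs, size_t m)
{
        for (size_t i = 0; i < m; i++)
                if (!evlist_has(evlist, n, metric_evs[i]))
                        return 0;       /* a required event is missing */
        return 1;
}
```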
An example use is:
```
$ perf record -e '{instructions,cpu-cycles}:S' -a -- sleep 1
$ perf script -F period,metric
...
867817
metric: 0.30 insn per cycle
125394
metric: 0.04 insn per cycle
313516
metric: 0.11 insn per cycle
metric: 1.00 insn per cycle
```
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Some Default group metrics require their events to be shown for
consistency with perf's previous behavior. Add a flag to indicate when
this is the case and use it in stat-display.
As the events now come from Default metrics, remove the default
hardware and software events from perf stat.
Following this change the default perf stat output on an alderlake looks like:
```
$ perf stat -a -- sleep 1
Performance counter stats for 'system wide':
20,550 context-switches # nan cs/sec cs_per_second
TopdownL1 (cpu_core) # 9.0 % tma_bad_speculation
# 28.1 % tma_frontend_bound
TopdownL1 (cpu_core) # 29.2 % tma_backend_bound
# 33.7 % tma_retiring
6,685 page-faults # nan faults/sec page_faults_per_second
790,091,064 cpu_atom/cpu-cycles/
# nan GHz cycles_frequency (49.83%)
2,563,918,366 cpu_core/cpu-cycles/
# nan GHz cycles_frequency
# 12.3 % tma_bad_speculation
# 14.5 % tma_retiring (50.20%)
# 33.8 % tma_frontend_bound (50.24%)
76,390,322 cpu_atom/branches/ # nan M/sec branch_frequency (60.20%)
1,015,173,047 cpu_core/branches/ # nan M/sec branch_frequency
1,325 cpu-migrations # nan migrations/sec migrations_per_second
# 39.3 % tma_backend_bound (60.17%)
0.00 msec cpu-clock # 0.000 CPUs utilized
# 0.0 CPUs CPUs_utilized
554,347,072 cpu_atom/instructions/ # 0.64 insn per cycle
# 0.6 instructions insn_per_cycle (60.14%)
5,228,931,991 cpu_core/instructions/ # 2.04 insn per cycle
# 2.0 instructions insn_per_cycle
4,308,874 cpu_atom/branch-misses/ # 5.65% of all branches
# 5.6 % branch_miss_rate (49.76%)
9,890,606 cpu_core/branch-misses/ # 0.97% of all branches
# 1.0 % branch_miss_rate
1.005477803 seconds time elapsed
```
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
For CPU nanoseconds a lot of the stat-shadow metrics use either
task-clock or cpu-clock, the latter being used when
target__has_cpu. Add a #target_cpu literal so that json metrics can
perform the same test.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Rather than using the first evsel in the matched events, try to find
the least shared non-tool evsel. The aim is to pick the first evsel
that typifies the metric within the list of metrics.
This addresses an issue where Default metric group metrics may lose
their counter value due to how the stat displaying hides counters for
default event/metric output.
For a metricgroup like TopdownL1 on an Intel Alderlake the change is,
before there are 4 events with metrics:
```
$ perf stat -M topdownL1 -a sleep 1
Performance counter stats for 'system wide':
7,782,334,296 cpu_core/TOPDOWN.SLOTS/ # 10.4 % tma_bad_speculation
# 19.7 % tma_frontend_bound
2,668,927,977 cpu_core/topdown-retiring/ # 35.7 % tma_backend_bound
# 34.1 % tma_retiring
803,623,987 cpu_core/topdown-bad-spec/
167,514,386 cpu_core/topdown-heavy-ops/
1,555,265,776 cpu_core/topdown-fe-bound/
2,792,733,013 cpu_core/topdown-be-bound/
279,769,310 cpu_atom/TOPDOWN_RETIRING.ALL/ # 12.2 % tma_retiring
# 15.1 % tma_bad_speculation
457,917,232 cpu_atom/CPU_CLK_UNHALTED.CORE/ # 38.4 % tma_backend_bound
# 34.2 % tma_frontend_bound
783,519,226 cpu_atom/TOPDOWN_FE_BOUND.ALL/
10,790,192 cpu_core/INT_MISC.UOP_DROPPING/
879,845,633 cpu_atom/TOPDOWN_BE_BOUND.ALL/
```
After there are 6 events with metrics:
```
$ perf stat -M topdownL1 -a sleep 1
Performance counter stats for 'system wide':
2,377,551,258 cpu_core/TOPDOWN.SLOTS/ # 7.9 % tma_bad_speculation
# 36.4 % tma_frontend_bound
480,791,142 cpu_core/topdown-retiring/ # 35.5 % tma_backend_bound
186,323,991 cpu_core/topdown-bad-spec/
65,070,590 cpu_core/topdown-heavy-ops/ # 20.1 % tma_retiring
871,733,444 cpu_core/topdown-fe-bound/
848,286,598 cpu_core/topdown-be-bound/
260,936,456 cpu_atom/TOPDOWN_RETIRING.ALL/ # 12.4 % tma_retiring
# 17.6 % tma_bad_speculation
419,576,513 cpu_atom/CPU_CLK_UNHALTED.CORE/
797,132,597 cpu_atom/TOPDOWN_FE_BOUND.ALL/ # 38.0 % tma_frontend_bound
3,055,447 cpu_core/INT_MISC.UOP_DROPPING/
671,014,164 cpu_atom/TOPDOWN_BE_BOUND.ALL/ # 32.0 % tma_backend_bound
```
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
It should also have PERF_SAMPLE_TID to enable inherit and PERF_SAMPLE_READ
on recent kernels. Not having _TID makes the feature check wrongly detect
the inherit and _READ support.
It was reported that the following command failed due to the error in
the missing feature check on Intel SPR machines.
$ perf record -e '{cpu/mem-loads-aux/S,cpu/mem-loads,ldlat=3/PS}' -- ls
Error:
Failure to open event 'cpu/mem-loads,ldlat=3/PS' on PMU 'cpu' which will be removed.
Invalid event (cpu/mem-loads,ldlat=3/PS) in per-thread mode, enable system wide with '-a'.
Reviewed-by: Ian Rogers <irogers@google.com>
Fixes: 3b193a57ba ("perf tools: Detect missing kernel features properly")
Reported-and-tested-by: Chen, Zide <zide.chen@intel.com>
Closes: https://lore.kernel.org/lkml/20251022220802.1335131-1-zide.chen@intel.com/
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Remove unnecessary semicolons reported by Coccinelle/coccicheck and the
semantic patch at scripts/coccinelle/misc/semicolon.cocci.
Signed-off-by: Chen Ni <nichen@iscas.ac.cn>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The behavior of weak terms is subtle; add a test that they aren't
accidentally broken. The test finds an event with a weak 'period' and
then overrides it. If no such event is present, the test skips.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Add the ability to compose perf_tools by having one perform an action
and then call a delegate. Currently the perf_tools have if-then-elses
setting the callback and then if-then-elses within the callback.
Understanding the behavior is complex as it is split across two
places, and the logic for numerous operations, within things like perf
inject, is interwoven. By chaining perf_tools together based on
command line options this kind of code can be avoided.
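A minimal sketch of the chaining idea, using illustrative names rather
than perf's actual struct perf_tool:

```c
#include <stddef.h>

/* Hypothetical sketch: each tool performs its own action on an event
 * and then forwards it to a delegate, replacing nested if-then-else
 * dispatch. Field and function names are illustrative. */
struct tool {
        int (*sample)(struct tool *tool, int event);
        struct tool *delegate;
        int seen;
};

static int counting_sample(struct tool *tool, int event)
{
        tool->seen++;                   /* this tool's own action */
        if (tool->delegate)             /* then hand off down the chain */
                return tool->delegate->sample(tool->delegate, event);
        return 0;
}
```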
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Getting context for what a tool is doing, such as the perf_inject
instance, by using container_of on the tool is a common pattern in the
code. This isn't possible for the event_op2, event_op3 and event_op4
callbacks as the tool isn't passed. Add the argument and then fix the
function signatures to match. As tools may be reading a tool from
somewhere else, change that code to use the passed-in tool.
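The container_of pattern that the extra tool argument enables can be
sketched as follows (simplified types, not the actual perf_inject
code):

```c
#include <stddef.h>

/* Once the callback receives the tool pointer, the embedding context
 * can be recovered with container_of. Names are illustrative. */
#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct tool {
        int (*event_op2)(struct tool *tool, int event);
};

struct perf_inject {
        struct tool tool;       /* embedded, so container_of can find it */
        int injected;
};

static int inject_op2(struct tool *tool, int event)
{
        struct perf_inject *inject =
                container_of(tool, struct perf_inject, tool);

        (void)event;
        inject->injected++;     /* context-specific state is now reachable */
        return 0;
}
```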
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
This changes the output of the event like below. In fact, that's the
output it used to have before the JSON conversion.
Before:
$ perf stat -e task-clock true
Performance counter stats for 'true':
313,848 task-clock # 0.290 CPUs utilized
0.001081223 seconds time elapsed
0.001122000 seconds user
0.000000000 seconds sys
After:
$ perf stat -e task-clock true
Performance counter stats for 'true':
0.36 msec task-clock # 0.297 CPUs utilized
0.001225435 seconds time elapsed
0.001268000 seconds user
0.000000000 seconds sys
Reviewed-by: Ian Rogers <irogers@google.com>
Fixes: 9957d8c801 ("perf jevents: Add common software event json")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The recent change enabling --buildid-mmap by default brought an issue
with build-ID handling. With build-IDs in MMAP2 records, we don't need
to save the build-ID table in the header of a perf data file.
But the actual file contents still need to be cached in the debug
directory for annotation etc. Split the build-ID header processing
from the caching and make sure perf record saves hit DSOs in the
build-ID cache by moving perf_session__cache_build_ids() to the end of
record__finish_output().
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The tables created by jevents.py are only used within the pmu-events.c
file. Change the declarations of those global variables to be static
to encapsulate this.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
When copying metrics into a group, also copy the default information
from the original metrics.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
If an out-of-memory error occurs, the expr also needs freeing.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Update comment as the stat_config no longer holds all metrics.
Signed-off-by: Ian Rogers <irogers@google.com>
Fixes: faebee18d7 ("perf stat: Move metric list from config to evlist")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The metric_events exist in the metric_expr list and so this variable
has been unused for a while.
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
The syscalls_sys_{enter,exit} map in augmented_raw_syscalls.bpf.c has
max entries of 512. Usually syscall numbers are smaller than this but
x86 has x32 ABI where syscalls start from 512.
That makes trace__init_syscalls_bpf_prog_array_maps() fail in the middle
of the loop when it accesses those keys. As the loop iteration is not
ordered by syscall numbers anymore, the failure can affect non-x32
syscalls.
Let's increase the map size to 1024 so that it can handle those ABIs
too. While most systems won't need this, increasing the size will be
safer for potential future changes.
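A plain-C sketch of the sizing argument (not the BPF map definition
itself; the macro names are illustrative):

```c
/* x32-specific syscalls are numbered from 512, so a 512-entry table
 * indexed by syscall number cannot hold them, while 1024 can. */
#define OLD_MAX_ENTRIES   512
#define NEW_MAX_ENTRIES   1024
#define X32_FIRST_SYSCALL 512   /* first x32 syscall number */

static int fits(int nr, int max_entries)
{
        return nr >= 0 && nr < max_entries;
}
```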
Reviewed-by: Howard Chu <howardchu95@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
When using perf record with the `--overwrite` option, a segmentation fault
occurs if an event fails to open. For example:
perf record -e cycles-ct -F 1000 -a --overwrite
Error:
cycles-ct:H: PMU Hardware doesn't support sampling/overflow-interrupts. Try 'perf stat'
perf: Segmentation fault
#0 0x6466b6 in dump_stack debug.c:366
#1 0x646729 in sighandler_dump_stack debug.c:378
#2 0x453fd1 in sigsegv_handler builtin-record.c:722
#3 0x7f8454e65090 in __restore_rt libc-2.32.so[54090]
#4 0x6c5671 in __perf_event__synthesize_id_index synthetic-events.c:1862
#5 0x6c5ac0 in perf_event__synthesize_id_index synthetic-events.c:1943
#6 0x458090 in record__synthesize builtin-record.c:2075
#7 0x45a85a in __cmd_record builtin-record.c:2888
#8 0x45deb6 in cmd_record builtin-record.c:4374
#9 0x4e5e33 in run_builtin perf.c:349
#10 0x4e60bf in handle_internal_command perf.c:401
#11 0x4e6215 in run_argv perf.c:448
#12 0x4e653a in main perf.c:555
#13 0x7f8454e4fa72 in __libc_start_main libc-2.32.so[3ea72]
#14 0x43a3ee in _start ??:0
The --overwrite option implies --tail-synthesize, which collects non-sample
events reflecting the system status when recording finishes. However, when
evsel opening fails (e.g., unsupported event 'cycles-ct'), session->evlist
is not initialized and remains NULL. The code unconditionally calls
record__synthesize() in the error path, which iterates through the NULL
evlist pointer and causes a segfault.
To fix it, move the record__synthesize() call inside the error check block, so
it's only called when there was no error during recording, ensuring that evlist
is properly initialized.
Fixes: 4ea648aec0 ("perf record: Add --tail-synthesize option")
Signed-off-by: Shuai Xue <xueshuai@linux.alibaba.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
On some machines, it caused trouble when it tried to find kernel
symbols. I think it's because kernel modules and kallsyms get messed
up during load and split.
Basically we want to make sure the kernel map is loaded and the code has
it in the lock_contention_read(). But recently we added more lookups in
the lock_contention_prepare() which is called before _read().
Also the kernel map (kallsyms) may not be the first one in the group
like on ARM. Let's use machine__kernel_map() rather than just loading
the first map.
Reviewed-by: Ian Rogers <irogers@google.com>
Fixes: 688d2e8de2 ("perf lock contention: Add -l/--lock-addr option")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>