Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
User visible changes:
* Add a visual cue for toggling zeroing of samples in 'perf top' (Taeung Song)
* Fix for double free in 'perf stat' when using some specific invalid
command line combo (Yasser Shalabi)
Infrastructure changes:
* Add option to copy events when queuing them for sorting across cpu buffers
and enable it for 'perf kvm stat live', to avoid having queued events that
still point into the ring buffer overwritten in high volume sessions
(Alexander Yarygin, improving work done by David Ahern)
* Document sysfs events/ interfaces (Cody P Schafer)
* Add support for the new style format of kernel PMU events. (Kan Liang)
* Fix typos in perf/Documentation (Masanari Iida)
* Improve callchains when using libunwind (Namhyung Kim)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
When processing events, the session code has an ordered samples queue
which is used to time-sort events coming in across multiple mmaps. At a
later point in time, samples on the queue are flushed up to some
timestamp, at which point the events are actually processed.
When analyzing events live (i.e., record/analysis path in the same
command) there is a race that leads to corrupted events and parse errors
which cause perf to terminate. The problem is that when the event is
placed in the ordered samples queue, only a reference to the event is
stored, while the event itself still sits in the mmap buffer. Even
though the event is queued for later processing, the mmap tail pointer
is updated, which indicates to the kernel that the event has been
processed. The race is flushing the event from the queue before it gets
overwritten by some other event. For commands that process events live
(versus just writing them to a file) at a high event rate, this leads to
parse failures and perf terminates.
Examples hitting this problem are 'perf kvm stat live', especially with
nested VMs which generate 100,000+ traces per second, and a command
processing scheduling events with a high rate of context switching --
e.g., running 'perf bench sched pipe'.
This patch offers live commands an option to copy the event when it is
placed in the ordered samples queue.
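The gist of the change, as a minimal sketch (the structure and function
names below are illustrative, not the actual perf session/ordered-events
code): when the copy option is set, the record is duplicated before it is
queued, so a later flush can no longer read data that the kernel has
already overwritten in the ring buffer.

#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-ins for the real perf structures. */
struct sample_event {
        unsigned short size;            /* total record size, from its header */
        char           data[];
};

struct queued_event {
        struct queued_event *next;
        unsigned long long   timestamp;
        struct sample_event *event;     /* ring-buffer pointer or private copy */
};

/*
 * Queue one event for time-sorting.  With copy_on_queue set, duplicate the
 * record so the mmap slot can be reused by the kernel before the queue is
 * flushed; otherwise keep the old behaviour of storing a bare pointer.
 */
static int queue_event(struct queued_event **head, struct sample_event *event,
                       unsigned long long timestamp, int copy_on_queue)
{
        struct queued_event *qe = malloc(sizeof(*qe));

        if (!qe)
                return -ENOMEM;

        if (copy_on_queue) {
                qe->event = malloc(event->size);
                if (!qe->event) {
                        free(qe);
                        return -ENOMEM;
                }
                memcpy(qe->event, event, event->size);
        } else {
                qe->event = event;      /* still points into the ring buffer */
        }

        qe->timestamp = timestamp;
        qe->next = *head;               /* timestamp ordering omitted for brevity */
        *head = qe;
        return 0;
}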
Based on a patch from David Ahern <dsahern@gmail.com>
Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1412347212-28237-2-git-send-email-yarygin@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The unw_addr_space_t in libunwind represents an address space to be used
for stack unwinding. It doesn't need to be created/destroyed every time
a callchain is unwound (as in get_entries) and can have the same
lifetime as the thread (unless exec is called).
So move the address space construction/destruction logic to the thread
lifetime handling functions. This is a preparation for enabling caching
in the unwind library.
Note that it saves the unw_addr_space_t object using thread__set_priv().
That seems to be used currently only by the perf trace and perf kvm stat
commands, which don't use callchains.
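A rough sketch of the intended lifetime, assuming the libunwind-ptrace
_UPT_accessors for illustration (perf defines its own remote accessors)
and hypothetical per-thread prepare/finish hooks:

#include <stdio.h>
#include <libunwind.h>
#include <libunwind-ptrace.h>

struct thread_unwind_priv {
        unw_addr_space_t addr_space;    /* reused for every unwind of this thread */
};

/* Called once when the thread object is created (hypothetical hook). */
static int thread_unwind_prepare(struct thread_unwind_priv *priv)
{
        priv->addr_space = unw_create_addr_space(&_UPT_accessors, 0);
        if (!priv->addr_space) {
                fprintf(stderr, "failed to create unwind address space\n");
                return -1;
        }
        /* A long-lived address space is what later allows libunwind caching. */
        return 0;
}

/* Called once when the thread object is destroyed (hypothetical hook). */
static void thread_unwind_finish(struct thread_unwind_priv *priv)
{
        if (priv->addr_space)
                unw_destroy_addr_space(priv->addr_space);
}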
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Arun Sharma <asharma@fb.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jean Pihet <jean.pihet@linaro.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1412556363-26229-3-git-send-email-namhyung@kernel.org
[ Fixup unwind-libunwind.c missing CALLCHAIN_DWARF definition, added
missing __maybe_unused on unused parameters in stubs at util/unwind.h ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add a test case to the automated tests suite. It checks not only the two
types of pmu event style formats, "pmu_event_name" and
"cpu/pmu_event_name/", but also mixtures of different formats, which are
more likely to trigger parse issues.
The patch set including this one has been tested with the perf automated
test:
./perf test parse -v
on Haswell, Ivybridge and Romley platforms.
The patch set has also been tested on Haswell with the following script.
Note: please make sure that your test system supports TSX and
L1-dcache-loads events. Otherwise, you may want to change the events to
other pmu events.
[lk@localhost ~]$ cat perf_style_test.sh
# hardware events + kernel pmu event with different style
perf stat -x, -e cycles,mem-stores,tx-start sleep 2
perf stat -x, -e cpu-cycles,cycles-ct,cycles-t sleep 2
perf stat -x, -e cycles,cpu/cycles-ct/,cpu/cycles-t/ sleep 2
perf stat -x, -e instructions,cpu/tx-start/ sleep 2
perf stat -x, -e '{cycles,tx-start}' sleep 2
perf stat -x, -e '{cycles,cpu/tx-start/}' sleep 2
# HW Cache event + kernel pmu event with different style
perf stat -x, -e L1-dcache-loads,cpu/mem-stores/,tx-start sleep 2
perf stat -x, -e L1-dcache-loads,mem-stores,cpu/tx-start/ sleep 2
perf stat -x, -e '{L1-dcache-loads,mem-stores}' sleep 2
perf stat -x, -e '{L1-dcache-loads,cpu/tx-start/}' sleep 2
# Raw event + kernel pmu event with different style:
perf stat -x, -e cpu/event=0xc0,umask=0x00/,mem-loads,cpu/mem-stores/ sleep 2
perf stat -x, -e cpu/event=0xc0,umask=0x00/,tx-start,cpu/el-start/ sleep 2
perf stat -x, -e '{cpu/event=0xc0,umask=0x00/,tx-start}' sleep 2
Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1412694532-23391-5-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add new rules for kernel PMU events.
Currently, the patch only intends to handle PMU event names of the form
"a-b" and "a".
event_pmu:
PE_KERNEL_PMU_EVENT sep_dc
|
PE_PMU_EVENT_PRE '-' PE_PMU_EVENT_SUF sep_dc
The PE_KERNEL_PMU_EVENT token is for
cycles-ct/cycles-t/mem-loads/mem-stores.
The prefix "cycles" clashes with cpu-cycles, and "loads"/"stores" clash
with cache events, so these names have to be hardcoded in the lexer.
The PE_PMU_EVENT_PRE and PE_PMU_EVENT_SUF tokens are for other PMU
events. The lexer looks a generic identifier up in the table and returns
the matching token. If there is no match, the generic PE_NAME token is
returned.
With these rules, kernel PMU events can use the new style format without
//, so you can use:
perf record -e mem-loads ...
instead of:
perf record -e cpu/mem-loads/
Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1412694532-23391-4-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
There are two types of event formats for PMU events, e.g. el-abort OR
cpu/el-abort/. However, the lexer mistakenly recognizes the simple style
format as two events.
The parse_events_pmu_check function uses bsearch to look the name up in
the known pmu event list. It can tell the lexer whether the name is a
plain PE_NAME, a PMU event name prefix or a PMU event name suffix. All
this information is used to accurately parse kernel PMU events.
The pmu event list is read from sysfs at runtime.
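As a rough illustration of this lookup (the table layout, names and the
prefix/suffix handling here are assumptions for the sketch, not the actual
parse-events code), a sorted array of known PMU event names can be
bsearch'ed and the result mapped to a token kind:

#include <stdlib.h>
#include <string.h>

enum pmu_event_token {
        TOKEN_NAME,             /* no match: stays a plain PE_NAME        */
        TOKEN_PMU_EVENT,        /* full match, e.g. "mem-loads"           */
        TOKEN_PMU_PREFIX,       /* matches the part before '-', "mem"     */
        TOKEN_PMU_SUFFIX,       /* matches the part after '-', "loads"    */
};

/* Sorted list; the real implementation reads these names from sysfs. */
static const char *pmu_events[] = { "el-abort", "mem-loads", "mem-stores", "tx-start" };
static const size_t nr_pmu_events = sizeof(pmu_events) / sizeof(pmu_events[0]);

static int cmp_name(const void *key, const void *elem)
{
        return strcmp((const char *)key, *(const char * const *)elem);
}

static enum pmu_event_token classify(const char *name)
{
        size_t i;

        if (bsearch(name, pmu_events, nr_pmu_events, sizeof(pmu_events[0]), cmp_name))
                return TOKEN_PMU_EVENT;

        /* Is the identifier the prefix or the suffix of a known "a-b" event? */
        for (i = 0; i < nr_pmu_events; i++) {
                const char *dash = strchr(pmu_events[i], '-');
                size_t prefix_len;

                if (!dash)
                        continue;
                prefix_len = (size_t)(dash - pmu_events[i]);
                if (strlen(name) == prefix_len && !strncmp(name, pmu_events[i], prefix_len))
                        return TOKEN_PMU_PREFIX;
                if (!strcmp(name, dash + 1))
                        return TOKEN_PMU_SUFFIX;
        }
        return TOKEN_NAME;
}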
Note: Currently, the patch only intends to handle PMU event names of the
form "a-b" and "a". The only exceptions, "stalled-cycles-frontend" and
"stalled-cycles-backend", are already hardcoded in the lexer.
Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1412694532-23391-3-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This reverts commit 50e200f079 ("perf tools: Default to cpu// for
events v5")
The fixup cannot handle the case where the
new style format (the one without //) is mixed with
other formats.
For example,
group events with new style format: {mem-stores,mem-loads}
some hardware event + new style event: cycles,mem-loads
Cache event + new style event: LLC-loads,mem-loads
Raw event + new style event:
cpu/event=0xc8,umask=0x08/,mem-loads
old style and new style event mixture: mem-stores,cpu/mem-loads/
Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1412694532-23391-2-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When 'perf top' is run, one can't easily tell the difference
between the -z option and the normal output.
So I added a visual cue to show whether zeroing is enabled or not.
Output is as below.
Before:
$ perf top
Samples: 61K of event 'cycles', Event count (approx.): 3908136933
Overhead Shared Object Symbol
1.42% firefox [.] 0x0000000000011e76
1.32% libpthread-2.17.so [.] pthread_mutex_lock
If you press the 'z' key or run with the zero option, e.g. '$ perf top --zero', the output is as below.
After:
Samples: 61K of event 'cycles', Event count (approx.): 3908136933 [z]
Overhead Shared Object Symbol
1.42% firefox [.] 0x0000000000011e76
1.32% libpthread-2.17.so [.] pthread_mutex_lock
Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1412665995-26359-1-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
Infrastructure fixes and changes:
* Fix off-by-one bugs in map->end handling (Stephane Eranian)
* Fix off-by-one bug in maps__find(), also related to map->end handling (Namhyung Kim)
* Make struct symbol->end be the first addr after the symbol range, to make it
match the convention used for struct map->end. (Arnaldo Carvalho de Melo)
* Fix perf_evlist__add_pollfd() error handling in 'perf kvm stat live' (Jiri Olsa)
* Fix python test build by moving callchain_param to an object linked into the
python binding (Jiri Olsa)
* Do not include a struct hists per perf_evsel, untangling the histogram code
from perf_evsel, to pave the way for exporting a minimalistic
tools/lib/api/perf/ library usable by tools/perf and initially by the rasd
daemon being developed by Borislav Petkov, Robert Richter and Jean Pihet.
(Arnaldo Carvalho de Melo)
* Make perf_evlist__open(evlist, NULL, NULL), i.e. without cpu and thread
maps mean syswide monitoring, reducing the boilerplate for tools that
only want system wide mode. (Arnaldo Carvalho de Melo)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch fixes off-by-one errors in the management of maps.
A map is defined by start address and length as implemented by
map__new():
map__init(map, type, start, start + len, pgoff, dso);
map->start = addr;
map->end = end;
Consequently, the actual address range is [start; end[, i.e. map->end is
the first byte outside the range.
This patch fixes two bugs where the upper bound check was off-by-one.
In V2, we fix map_groups__fixup_overlappings() some more, where
map->start was off-by-one, as reported by Jiri.
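A minimal sketch of the convention and the kind of check being fixed
(illustrative types and helper names, not the actual map code): with a
half-open [start; end[ range, containment tests must use a strict '<' on
the upper bound, and the last mapped address is end - 1:

#include <stdbool.h>
#include <stdint.h>

struct map_range {
        uint64_t start;
        uint64_t end;   /* first address after the mapping, i.e. start + len */
};

/* Correct containment test for a half-open [start; end[ range. */
static bool map_range__contains(const struct map_range *map, uint64_t addr)
{
        return addr >= map->start && addr < map->end;   /* '<=' would be the off-by-one */
}

/* Last address actually covered by the map, e.g. when clamping overlaps. */
static uint64_t map_range__last_addr(const struct map_range *map)
{
        return map->end - 1;
}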
Signed-off-by: Stephane Eranian <eranian@google.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20141006083532.GA4850@quad
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
A segfault happens on 'perf test hists_link' because we end up using a
struct machines on the stack, and then machines__init() was not
initializing the newly introduced rb_root, just the existing list_head.
When we introduced struct dsos, to group the two ways to store dsos,
i.e. the linked list and the rbtree, we didn't turn the initialization
done in:
machines__init(machines->host) ->
machine__init() ->
INIT_LIST_HEAD
into a dsos__init() that keeps on initializing the list_head but _also_
initializes the rb_root, oops.
All worked because outside 'perf test' we probably zalloc the whole
thing, which ends up initializing it to NULL.
So the problem looks contained to 'perf test', which uses it on the
stack, etc.
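A minimal sketch of the missing helper, assuming the tools/include copies
of the kernel list and rbtree headers (field names are illustrative):

#include <linux/list.h>         /* INIT_LIST_HEAD(), struct list_head */
#include <linux/rbtree.h>       /* RB_ROOT, struct rb_root */

/* Trimmed-down, illustrative version of the structure involved. */
struct dsos {
        struct list_head head;  /* linked-list view of the dsos    */
        struct rb_root   root;  /* rbtree view, introduced later   */
};

/*
 * Initialize *both* containers.  Relying on zalloc() only papers over a
 * missing RB_ROOT for heap-allocated instances; on-stack ones, as used
 * by 'perf test', need an explicit init like this.
 */
static void dsos__init(struct dsos *dsos)
{
        INIT_LIST_HEAD(&dsos->head);
        dsos->root = RB_ROOT;
}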
Reported-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Waiman Long <Waiman.Long@hp.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Waiman Long <Waiman.Long@hp.com>
Link: http://lkml.kernel.org/r/20141014180353.GF3198@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Pull x86 ras, uv and vdso fixlets from Ingo Molnar:
"ras: tone down a kernel message to only occur during initial bootup,
not during suspend/resume cycles.
uv: a cleanup commit
vdso: a fix to error checking"
* 'x86-ras-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/mce: Avoid showing repetitive message from intel_init_thermal()
* 'x86-uv-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/apic/uv: Remove unnecessary #ifdef
* 'x86-vdso-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/vdso: Fix vdso2c's special_pages[] error checking
Pull x86 fixes from Ingo Molnar:
"Misc smaller fixes that missed the v3.17 cycle"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/build: Add arch/x86/purgatory/ make generated files to gitignore
x86: Fix section conflict for numachip
x86: Reject x32 executables if x32 ABI not supported
x86_64, entry: Filter RFLAGS.NT on entry from userspace
x86, boot, kaslr: Fix nuisance warning on 32-bit builds
Pull x86 seccomp changes from Ingo Molnar:
"This tree includes x86 seccomp filter speedups and related preparatory
work, which touches core seccomp facilities as well.
The main idea is to split seccomp into two phases, to be able to enter
a simple fast path for syscalls with ptrace side effects.
There are no substantial user-visible (and ABI) effects expected from
this, except a change in how we emit a better audit record for
SECCOMP_RET_TRACE events"
* 'x86-seccomp-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86_64, entry: Use split-phase syscall_trace_enter for 64-bit syscalls
x86_64, entry: Treat regs->ax the same in fastpath and slowpath syscalls
x86: Split syscall_trace_enter into two phases
x86, entry: Only call user_exit if TIF_NOHZ
x86, x32, audit: Fix x32's AUDIT_ARCH wrt audit
seccomp: Document two-phase seccomp and arch-provided seccomp_data
seccomp: Allow arch code to provide seccomp_data
seccomp: Refactor the filter callback and the API
seccomp,x86,arm,mips,s390: Remove nr parameter from secure_computing
Pull x86 platform updates from Ingo Molnar:
"The main changes in this tree are:
- fix and update Intel Quark [Galileo] SoC platform support
- update IOSF chipset side band interface and make it available via
debugfs
- enable HPETs on Soekris net6501 and other e6xx based systems"
* 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86: Add cpu_detect_cache_sizes to init_intel() add Quark legacy_cache()
x86: Quark: Comment setup_arch() to document TLB/PGE bug
x86/intel/quark: Switch off CR4.PGE so TLB flush uses CR3 instead
x86/platform/intel/iosf: Add debugfs config option for IOSF
x86/platform/intel/iosf: Add better description of IOSF driver in config
x86/platform/intel/iosf: Add Braswell PCI ID
x86/platform/pmc_atom: Fix warning when CONFIG_DEBUG_FS=n
x86: HPET force enable for e6xx based systems
x86/iosf: Add debugfs support
x86/iosf: Add Kconfig prompt for IOSF_MBI selection
Pull x86 mm updates from Ingo Molnar:
"This tree includes the following changes:
- fix memory hotplug
- fix hibernation bootup memory layout assumptions
- fix hyperv numa guest kernel messages
- remove dead code
- update documentation"
* 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/mm: Update memory map description to list hypervisor-reserved area
x86/mm, hibernate: Do not assume the first e820 area to be RAM
x86/mm/numa: Drop dead code and rename setup_node_data() to setup_alloc_data()
x86/mm/hotplug: Modify PGD entry when removing memory
x86/mm/hotplug: Pass sync_global_pgds() a correct argument in remove_pagetable()
x86: Remove set_pmd_pfn
Pull x86 FPU updates from Ingo Molnar:
"x86 FPU handling fixes, cleanups and enhancements from Oleg.
The signal handling race fix and the __restore_xstate_sig() preemption
fix for eager-mode is marked for -stable as well"
* 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86: copy_thread: Don't nullify ->ptrace_bps twice
x86, fpu: Shift "fpu_counter = 0" from copy_thread() to arch_dup_task_struct()
x86, fpu: copy_process: Sanitize fpu->last_cpu initialization
x86, fpu: copy_process: Avoid fpu_alloc/copy if !used_math()
x86, fpu: Change __thread_fpu_begin() to use use_eager_fpu()
x86, fpu: __restore_xstate_sig()->math_state_restore() needs preempt_disable()
x86, fpu: shift drop_init_fpu() from save_xstate_sig() to handle_signal()
Pull x86 cpufeature updates from Ingo Molnar:
"This tree includes the following changes:
- Introduce DISABLED_MASK to list disabled CPU features, to simplify
CPU feature handling and avoid excessive #ifdefs
- Remove the lightly used cpu_has_pae() primitive"
* 'x86-cpufeature-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86: Add more disabled features
x86: Introduce disabled-features
x86: Axe the lightly-used cpu_has_pae
Pull x86 cpu offlining patch from Ingo Molnar:
"This tree includes a single commit that speeds up x86 suspend/resume
by replacing a naive 100msec sleep based polling loop with proper
completion notification.
This gives some real suspend/resume benefit on servers with larger
core counts"
* 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/smpboot: Speed up suspend/resume by avoiding 100ms sleep for CPU offline during S3
Pull x86 cleanups from Ingo Molnar:
"Three small cleanups"
* 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/tty/serial/8250: Clean up the asm/serial.h include file a bit
x86/tty/serial/8250: Resolve missing-field-initializers warnings
x86: Remove obsolete comment in uapi/e820.h
Pull x86 build update from Ingo Molnar:
"A single commit that simplifies the no-FPU-ops build options"
* 'x86-build-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/kbuild: Eliminate duplicate command line options
Pull x86 bootup updates from Ingo Molnar:
"The changes in this cycle were:
- Fix rare SMP-boot hang (mostly in virtual environments)
- Fix build warning with certain (rare) toolchains"
* 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/relocs: Make per_cpu_load_addr static
x86/smpboot: Initialize secondary CPU only if master CPU will wait for it
Pull x86 asm updates from Ingo Molnar:
"The changes in this cycle were:
- Speed up the x86 __preempt_schedule() implementation
- Fix/improve low level asm code debug info annotations"
* 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86: Unwind-annotate thunk_32.S
x86: Improve cmpxchg8b_emu.S
x86: Improve cmpxchg16b_emu.S
x86/lib/Makefile: Remove the unnecessary "+= thunk_64.o"
x86: Speed up ___preempt_schedule*() by using THUNK helpers