Table of Contents
This manual applies to OProfile version 0.9.9. OProfile is a set of performance monitoring tools for Linux 2.6 and higher systems, available on a number of architectures. OProfile provides the following features:
OProfile is capable of monitoring native hardware events occurring in all parts of a running system, from the kernel (including modules and interrupt handlers) to shared libraries to binaries. OProfile can collect event information for the whole system in the background with very little overhead. These features make it ideal for monitoring entire systems to determine bottlenecks in real-world systems.
Many CPUs provide "performance counters", hardware registers that can count "events"; for example, cache misses, or CPU cycles. OProfile can collect profiles of code based on the number of these occurring events: repeatedly, every time a certain (configurable) number of events has occurred, the PC value is recorded. This information is aggregated into profiles for each binary image. Alternatively, OProfile's event counting tool can collect simple raw event counts.
Some hardware setups do not allow OProfile to use performance counters: in these cases, no events are available so OProfile operates in timer mode, as described in later chapters. Timer mode is only available in "legacy profiling mode" (see Section 1, “OProfile legacy profiling mode”).
Settings given to opcontrol are cached in /root/.oprofile/daemonrc. Subsequent runs of opcontrol --start will continue to use these cached values until you override them with new values.
If the kernel's Performance Events Subsystem does not support your processor type, OProfile reports: "Your kernel's Performance Events Subsystem does not support your processor type".
OProfile is useful in a number of situations. You might want to use OProfile when you :
need low overhead
cannot use highly intrusive profiling methods
need to profile interrupt handlers
need to profile an application and its shared libraries
need to profile dynamically compiled code of supported virtual machines (see Section 4.1, “Support for dynamically compiled (JIT) code”)
need to capture the performance behaviour of the entire system
want to examine hardware effects such as cache misses
want detailed source annotation
want instruction-level profiles
want call-graph profiles
OProfile is not a panacea. OProfile might not be a complete solution when you :
require call graph profiles on platforms other than x86, ARM, and PowerPC
require 100% instruction-accurate profiles
need function call counts or an interstitial profiling API
cannot tolerate any disturbance to the system whatsoever
need to profile interpreted or dynamically compiled code of non-supported virtual machines
Older versions of OProfile were not capable of attributing samples to symbols from dynamically compiled code, i.e. "just-in-time (JIT) code". Typical JIT compilers load the JIT code into anonymous memory regions. OProfile reported the samples from such code, but the attribution provided was simply:
anon: <tgid><address range>
Due to this limitation, it wasn't possible to profile applications executed by virtual machines (VMs) like the Java Virtual Machine. OProfile now contains an infrastructure to support JITed code. A development library is provided to allow developers to add support for any VM that produces dynamically compiled code (see the OProfile JIT agent developer guide). In addition, built-in support is included for the following:
For information on how to use OProfile's JIT support, see Section 3, “Setting up the JIT profiling feature”.
OProfile currently does not support event-based profiling (i.e., using hardware events like cache misses or branch mispredicts) on virtual machine guests running under systems such as VMware. The list of supported events displayed by ophelp or 'opcontrol --list-events' is based on CPU type and does not take into account whether the running system is a guest system or a real system. To use OProfile on such guest systems, you can use timer mode (see Section 5.2, “OProfile in timer interrupt mode”).
To use OProfile's JIT support, a kernel version 2.6.13 or later is required. In earlier kernel versions, the anonymous memory regions are not reported to OProfile, which results in profiling reports without any samples in these regions.
Profiling the Cell Broadband Engine PowerPC Processing Element (PPE) requires a kernel version of 2.6.18 or more recent.
Profiling the Cell Broadband Engine Synergistic Processing Element (SPE) requires a kernel version of 2.6.22 or more recent. Additionally, full support of SPE profiling requires a BFD library from binutils code dated January 2007 or later. To ensure the proper BFD support exists, run the configure utility with --with-target=cell-be.
Profiling the Cell Broadband Engine using SPU events requires a kernel version of 2.6.29-rc1 or more recent.
Instruction-Based Sampling (IBS) profiling on AMD Family 10h processors requires kernel version 2.6.28-rc2 or later.
For Intel IA32, processors as old as P6 generation or Pentium 4 core are supported. The AMD Athlon, Opteron, Phenom, and Turion CPUs are also supported. Older IA32 CPU types can be used with the timer mode of OProfile; please see later in this manual for details. OProfile also supports most processor types of the following architectures: Alpha, MIPS, ARM, x86-64, sparc64, PowerPC, AVR32, and, in timer mode, PA-RISC and s390.
SMP machines are fully supported.
These libraries are required: popt, bfd, liberty (Debian users: libiberty is provided in the binutils-dev package), dl, plus the standard C++ libraries.
In order to build the perf_events-enabled operf program, you need to either install the kernel-headers package for your system or use the --with-kernel configure option.
For secure processing of sample data from JIT virtual machines (e.g., Java), the special user account "oprofile" must exist on the system. The 'configure' and 'make install' operations will print warning messages if this account is not found. If you intend to profile JITed code, you must create a group account named 'oprofile' and then create the 'oprofile' user account, setting the default group to 'oprofile'. A runtime error message is printed to the oprofile log when processing JIT samples if this special user account cannot be found.
The use of the GUI to start the profiler requires the Qt library. Either Qt 3 or Qt 4 should work.
Probably not too strenuous a requirement, but older A.OUT binaries/libraries are not supported.
OK, so it's not really a requirement, but I wish it was...
There is a web page (which you may be reading now) at http://oprofile.sf.net/.
You can download a source tarball or check out code from the code repository at the sourceforge page, http://sf.net/projects/oprofile/.
There is a low-traffic OProfile-specific mailing list, details at http://sf.net/mail/?group_id=16191.
There is a bug tracker for OProfile at SourceForge, http://sf.net/tracker/?group_id=16191&atid=116191.
Several OProfile developers and users sometimes hang out on channel #oprofile on the OFTC network.
First you need to build OProfile and install it. ./configure, make, make install is often all you need, but note these arguments to ./configure :
--with-java
Use this option if you need to profile Java applications. Also, see Section 5, “System requirements”, "Required user account". This option is used to specify the location of the Java Development Kit (JDK) source tree you wish to use. This is necessary to get the interface description of the JVMPI (or JVMTI) interface to compile the JIT support code successfully.
The Java Runtime Environment (JRE) does not include the development files that are required to compile the JIT support code, so the full JDK must be installed in order to use this option.
By default, the OProfile JIT support libraries will be installed in <oprof_install_dir>/lib/oprofile. To build and install OProfile and the JIT support libraries as 64-bit, you can do something like the following:
# CFLAGS="-m64" CXXFLAGS="-m64" ./configure \
    --with-java={my_jdk_installdir} \
    --libdir=/usr/local/lib64
If you encounter errors building 64-bit, you should install libtool 1.5.26 or later since that release of libtool fixes known problems for certain platforms. If you install libtool into a non-standard location, you'll need to edit the invocation of 'aclocal' in OProfile's autogen.sh as follows (assume an install location of /usr/local):
aclocal -I m4 -I /usr/local/share/aclocal
--with-qt-dir/includes/libraries
Specify the location of Qt headers and libraries. It defaults to searching in $QTDIR if these are not specified.
--disable-werror
Development versions of OProfile build by default with -Werror. This option turns -Werror off.
--disable-optimization
Disable the -O2 compiler flag (useful if you discover an OProfile bug and want to give a useful back-trace etc.)
--with-kernel
This option is used to specify the location of the kernel headers include directory needed to build the perf_events-enabled operf program. By default, the OProfile build system expects to find this directory under /usr. Use this option if your kernel headers are in a non-standard location, if building in a cross-compile environment, or in a situation where the host system does not support perf_events but you wish to build binaries for a target system that does support perf_events.
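For example, a cross-build configuration might look like the following sketch; the header and install paths are placeholders for your own locations:
./configure --with-kernel=/opt/cross/ppc64/kernel-headers --prefix=/opt/cross/ppc64/oprofile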
It is recommended that if you have a uniprocessor machine, you enable the local APIC / IO_APIC support for your kernel (this is automatically enabled for SMP kernels). With many BIOS (kernel >= 2.6.9 and UP kernel) it's not sufficient to enable the local APIC -- you must also turn it on explicitly at boot time by providing the "lapic" option to the kernel. If you use the NMI watchdog, be aware that the watchdog is disabled when profiling starts and not re-enabled until the profiling is stopped.
Please note that you must save or have available the vmlinux file generated during a kernel compile, as OProfile needs it (you can use --no-vmlinux, but this will prevent kernel profiling).
Table of Contents
Profiling with operf is the recommended profiling mode with OProfile. Using this mode not only allows you to target your profiling more precisely (i.e., single process or system-wide), it also allows OProfile to co-exist better with other tools on your system that may also be using the perf_events kernel subsystem.
With operf, there is no initial setup needed -- simply invoke operf with the options you need; then run the OProfile post-processing tool(s). The operf syntax is as follows:
operf [ options ] [ --system-wide | --pid=<PID> | [ command [ args ] ] ]
A typical usage might look like this:
operf ./my_test_program my_arg
When ./my_test_program completes (or when you press Ctrl-C), profiling stops and you're ready to use opreport or other OProfile post-processing tools. By default, operf stores the sample data in <cur_dir>/oprofile_data/samples/current, and opreport and other post-processing tools will look in that location first for profile data, unless you pass the --session-dir option.
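For example, after the operf run shown above, you could summarize the results from the same directory (opreport finds the oprofile_data directory automatically):
opreport
opreport --symbols ./my_test_program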
Before you can use OProfile's legacy profiling mode, you must set it up. The minimum setup required for this is to tell OProfile where the vmlinux file corresponding to the running kernel is, for example:
opcontrol --vmlinux=/boot/vmlinux-`uname -r`
If you don't want to profile the kernel itself, you can tell OProfile you don't have a vmlinux file:
opcontrol --no-vmlinux
Now we are ready to start the daemon (oprofiled) which collects the profile data :
opcontrol --start
When you want to stop profiling, you can do so with :
opcontrol --shutdown
Note that unlike gprof, no instrumentation (-pg and -a options to gcc) is necessary.
Periodically (or on opcontrol --shutdown or opcontrol --dump) the profile data is written out into the $SESSION_DIR/samples directory (by default at /var/lib/oprofile/samples).
These profile files cover shared libraries, applications, the kernel (vmlinux), and kernel modules.
You can clear the profile data (at any time) with opcontrol --reset.
To place these sample database files in a specific directory instead of the default location (/var/lib/oprofile), use the --session-dir=dir option. You must also specify the --session-dir option to tell the tools to continue using this directory:
opcontrol --no-vmlinux --session-dir=/home/me/tmpsession
opcontrol --start --session-dir=/home/me/tmpsession
You can get summaries of this data in a number of ways at any time. To get a summary of data across the entire system for all of these profiles, you can do :
opreport [--session-dir=dir]
Or to get a more detailed summary, for a particular image, you can do something like :
opreport -l /boot/vmlinux-`uname -r`
There are also a number of other ways of presenting the data, as described later in this manual. Note that OProfile will choose a default profiling setup for you. However, there are a number of options you can pass to opcontrol if you need to change something, also detailed later.
ocount is an OProfile tool that can be used to count native hardware events occurring in either a specific application, a set of processes or threads, a set of active system processors, or the entire system. The data collected during a counting session is displayed to stdout by default, but may also be saved to a file. The ocount syntax is as follows:
ocount [ options ] [ --system-wide | --process-list <pids> | --thread-list <tids> | --cpu-list <cpus> [ command [ args ] ] ]
A typical usage might look like this:
ocount --events=CPU_CLK_UNHALTED,INST_RETIRED /home/user1/my_test_program my_arg
When my_test_program completes (or when you press Ctrl-C), counting stops and the results are displayed to the screen (as shown below).

Events were actively counted for 2.8 seconds.
Event counts (actual) for /home/user1/my_test_program my_arg:

Event                     Count                    % time counted
CPU_CLK_UNHALTED          9,408,018,070            100.00
INST_RETIRED              16,719,918,108           100.00
Both methods of profiling (operf and opcontrol) -- as well as event counting with ocount -- allow you to give one or more event specifications to provide details of how each hardware performance counter should be set up. With operf and ocount, you can provide a comma-separated list of event specifications using the --events option. With opcontrol, you use the --event option for each desired event specification.
For profiling, the event specification is a colon-separated string of the form name:count:unitmask:kernel:user as described in the table below. For ocount, the specification is of the form name:unitmask:kernel:user.
Note the presence of the count field for profiling. The count field tells the profiler how many events should occur between the taking of each profile snapshot (usually referred to as a "sample"). Since ocount does not do sampling, the count field is not needed.
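As an illustration, the following sketches show an operf profiling specification and the corresponding ocount counting specification (the 500000 reset count is arbitrary, and CPU_CLK_UNHALTED is assumed to be available on your processor):
operf --events=CPU_CLK_UNHALTED:500000:0:1:1 ./my_test_program
ocount --events=CPU_CLK_UNHALTED:0:1:1 ./my_test_program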
If no event specs are passed to operf, ocount, or opcontrol, the default event will be used. With opcontrol, if you have previously specified some non-default event but want to revert to the default event, use --event=default. Use of this option overrides all previous event selections that have been cached.
Field | Description
name | The symbolic event name, e.g. CPU_CLK_UNHALTED
count | The counter reset value, e.g. 100000; used only for profiling
unitmask | The unit mask, as given in the events list, e.g. 0x0f; or a symbolic name if a name=<um_name> field is present
kernel | Whether to profile kernel code
user | Whether to profile userspace code
The last three values are optional; if you omit them (e.g. operf --events=DATA_MEM_REFS:30000), they will be set to the default values (the default unit mask value for the given event, and profiling (or counting) both kernel and userspace code). Note that some events require a unit mask.
You can specify unit mask values using either a numerical value (hex values must begin with "0x") or a symbolic name (if the name=<um_name> field is shown in the ophelp output). For some named unit masks, the hex value is not unique; thus, OProfile tools enforce specifying such unit masks by name.
When using legacy mode opcontrol on IBM PowerPC platforms, all events specified must be in the same group; i.e., the group number appended to the event name (e.g. <some-event-name>_GRP9) must be the same.
When using operf or ocount on IBM PowerPC platforms, the above restriction regarding the same group number does not apply, and events may be specified with or without the group number suffix. If no group number suffix is given, one will be automatically assigned; thus, OProfile post-processing tools will always show real event names that include the group number suffix.
If OProfile is using timer-interrupt mode, there is no event configuration possible.
The table below lists the default profiling event for various processor types. The same events can be used for ocount, minus the count field.
Processor | cpu_type | Default event |
Alpha EV4 | alpha/ev4 | CYCLES:100000:0:1:1 |
Alpha EV5 | alpha/ev5 | CYCLES:100000:0:1:1 |
Alpha PCA56 | alpha/pca56 | CYCLES:100000:0:1:1 |
Alpha EV6 | alpha/ev6 | CYCLES:100000:0:1:1 |
Alpha EV67 | alpha/ev67 | CYCLES:100000:0:1:1 |
ARM/XScale PMU1 | arm/xscale1 | CPU_CYCLES:100000:0:1:1 |
ARM/XScale PMU2 | arm/xscale2 | CPU_CYCLES:100000:0:1:1 |
ARM/MPCore | arm/mpcore | CPU_CYCLES:100000:0:1:1 |
AVR32 | avr32 | CPU_CYCLES:100000:0:1:1 |
Athlon | i386/athlon | CPU_CLK_UNHALTED:100000:0:1:1 |
Pentium Pro | i386/ppro | CPU_CLK_UNHALTED:100000:0:1:1 |
Pentium II | i386/pii | CPU_CLK_UNHALTED:100000:0:1:1 |
Pentium III | i386/piii | CPU_CLK_UNHALTED:100000:0:1:1 |
Pentium M (P6 core) | i386/p6_mobile | CPU_CLK_UNHALTED:100000:0:1:1 |
Pentium 4 (non-HT) | i386/p4 | GLOBAL_POWER_EVENTS:100000:1:1:1 |
Pentium 4 (HT) | i386/p4-ht | GLOBAL_POWER_EVENTS:100000:1:1:1 |
Hammer | x86-64/hammer | CPU_CLK_UNHALTED:100000:0:1:1 |
Family10h | x86-64/family10 | CPU_CLK_UNHALTED:100000:0:1:1 |
Family11h | x86-64/family11h | CPU_CLK_UNHALTED:100000:0:1:1 |
Itanium | ia64/itanium | CPU_CYCLES:100000:0:1:1 |
Itanium 2 | ia64/itanium2 | CPU_CYCLES:100000:0:1:1 |
TIMER_INT | timer | None selectable |
IBM pseries | PowerPC 4/5/6/7/8/970/Cell | CYCLES:100000:0:1:1 |
IBM s390 | timer | None selectable |
IBM s390x | timer | None selectable |
This section gives a brief description of the available OProfile utilities and their purpose.
ophelp
This utility lists the available events and short descriptions.
operf
This is the recommended program for collecting profile data, discussed in Section 1, “Using operf”.
opcontrol
Used for controlling OProfile data collection in legacy mode, discussed in Section 2, “Using opcontrol”.
agent libraries
Used by virtual machines (like the Java VM) to record information about JITed code being profiled. See Section 3, “Setting up the JIT profiling feature”.
opreport
This is the main tool for retrieving useful profile data, described in Section 2, “Image summaries and symbol summaries (opreport)”.
opannotate
This utility can be used to produce annotated source, assembly or mixed source/assembly. Source level annotation is available only if the application was compiled with debugging symbols. See Section 3, “Outputting annotated source (opannotate)”.
opgprof
This utility can output gprof-style data files for a binary, for use with gprof -p. See Section 5, “gprof-compatible output (opgprof)”.
oparchive
This utility can be used to collect executables, debuginfo, and sample files and copy the files into an archive. The archive is self-contained and can be moved to another machine for further analysis. See Section 6, “Analyzing profile data on another system (oparchive)”.
opimport
This utility converts sample database files from a foreign binary format (abi) to the native format. This is useful only when moving sample files between hosts, for analysis on platforms other than the one used for collection. See Section 7, “Converting sample database files (opimport)”.
Table of Contents
This section describes in detail how operf is used to control profiling. Unless otherwise directed, operf will profile using the default event for your system. For most systems, the default event is some cycles-based event, assuming your processor type supports hardware performance counters. If your hardware does support performance counters, you can specify something other than the default hardware event on which to profile. The performance monitor counters can be programmed to count various hardware events, such as cache misses or MMX operations. The event chosen for each counter is reflected in the profile data collected by OProfile: functions and binaries at the top of the profiles reflect that most of the chosen events happened within that code.
Additionally, each counter is programmed with a "count" value, which corresponds to how detailed the profile is. The lower the value, the more frequently profile samples are taken. You can choose to sample only kernel code, user-space code, or both (both is the default). Finally, some events have a "unit mask" -- this is a value that further restricts the types of event that are counted. You can see the event types and unit masks for your CPU using ophelp. More information on event specification can be found at Section 4, “Specifying performance counter events”.
The operf command syntax is:
operf [ options ] [ --system-wide | --pid=<PID> | [ command [ args ] ] ]
When profiling an application using either the command or --pid option of operf, forks and execs of the profiled process will also be profiled. The samples from an exec'ed process will be attributed to the executable binary run by that process. See Section 7, “Using operf to profile fork/execs”.
Following is a description of the operf options.
command [args]
The command or application to be profiled. The [args] are the input arguments that the command or application requires. One of command, --pid, or --system-wide is required, but these cannot be used simultaneously.
--pid / -p [PID]
This option enables operf to profile a running application. PID should be the process ID of the process you wish to profile. When finished profiling (e.g., when the profiled process ends), press Ctrl-C to stop operf.
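A minimal sketch, assuming the process you want to profile has PID 1234 (a placeholder value):
operf --pid=1234
Press Ctrl-C once you have profiled long enough, then run opreport to view the results.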
--system-wide / -s
This option is for performing a system-wide profile. You must have root authority to run operf in this mode. When finished profiling, press Ctrl-C to stop operf. If you run operf --system-wide as a background job (i.e., with the &), you must stop it in a controlled manner in order to process the profile data it has collected. Use kill -SIGINT <operf-PID> for this purpose. It is recommended that when running operf with this option, your current working directory should be /root or a subdirectory of /root to avoid storing sample data files in locations accessible by regular users.
--vmlinux / -k [vmlinux_path]
A vmlinux file that matches the running kernel that has symbol and/or debuginfo. Kernel samples will be attributed to this binary, allowing post-processing tools (like opreport) to attribute samples to the appropriate kernel symbols. If this option is not specified, all kernel samples will be attributed to a pseudo binary named "no-vmlinux".
--callgraph / -g
This option enables the callgraph to be saved during profiling. NOTE: The full callchain is recorded, so there is no depth limit.
--append / -a
By default, operf moves old profile data from <session_dir>/samples/current to <session_dir>/samples/previous. If a 'previous' profile already existed, it will be replaced. If the --append option is passed, old profile data in 'current' is left in place and new profile data will be added to it, and the 'previous' profile (if one existed) will remain untouched. To access the 'previous' profile, simply add a session specification to the normal invocation of oprofile post-processing tools; for example:
opreport session:previous
--events / -e [event1[,event2[,...]]]
This option is for passing a comma-separated list of event specifications for profiling. Each event spec is of the form:
name:count[:unitmask[:kernel[:user]]]
When no event specification is given, the default event for the running processor type will be used for profiling. Use ophelp to list the available events for your processor type.
--separate-thread / -t
This option categorizes samples by thread group ID (tgid) and thread ID (tid). The --separate-thread option is useful for seeing per-thread samples in multi-threaded applications. When used in conjunction with the --system-wide option, --separate-thread is also useful for seeing per-process (i.e., per-thread group) samples for the case where multiple processes are executing the same program during a profiling run.
--separate-cpu / -c
This option categorizes samples by cpu.
--session-dir / -d [path]
This option specifies the session directory to hold the sample data. If not specified, the data is saved in the oprofile_data directory on the current path.
--lazy-conversion / -l
Use this option to reduce the overhead of operf during profiling. Normally, profile data received from the kernel is converted to OProfile format during profiling time. This is typically not an issue when profiling a single application. But when using the --system-wide option, this on-the-fly conversion process can cause noticeable overhead, particularly on busy multi-processor systems. The --lazy-conversion option directs operf to wait until profiling is completed to do the conversion of profile data.
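For instance, a system-wide run with deferred conversion might look like this sketch (run as root):
operf --system-wide --lazy-conversion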
--verbose / -V [level]
A comma-separated list of debugging control values used to increase the verbosity of the output. Valid values are: debug, record, convert, misc, sfile, arcs, and the special value, 'all'.
--version / -v
Show operf version.
--help / -h
Show a help message.
In this section we describe the configuration and control of the profiling system with opcontrol in more depth. See Section 1, “Using operf” for a description of the preferred profiling method.
The opcontrol script has a default setup, but you can alter this with the options given below. In particular, you can select specific hardware events on which to base your profile. See Section 1, “Using operf” for an introduction to hardware events and performance counter configuration. The event types and unit masks for your CPU are listed by opcontrol --list-events or ophelp.
The opcontrol script provides the following actions :
--init
Loads the OProfile module if required and makes the OProfile driver interface available.
--setup
Followed by a list of arguments for profiling setup. The list of arguments is saved in /root/.oprofile/daemonrc. Giving this option is not necessary; you can just directly pass one of the setup options, e.g. opcontrol --no-vmlinux.
--status
Show configuration information.
--start-daemon
Start the oprofile daemon without starting actual profiling. The profiling can then be started using --start. This is useful for avoiding measuring the cost of daemon startup, as --start is a simple write to a file in oprofilefs.
--start
Start data collection with either arguments provided by --setup or information saved in /root/.oprofile/daemonrc. Specifying the additional --verbose option makes the daemon generate lots of debug data whilst it is running.
--dump
Force a flush of the collected profiling data to the daemon.
--stop
Stop data collection.
--shutdown
Stop data collection and kill the daemon.
--reset
Clears out data from current session, but leaves saved sessions.
--save=session_name
Save data from current session to session_name.
--deinit
Shut down the daemon, and unload the OProfile module and oprofilefs.
--list-events
List event types and unit masks.
--help
Generate usage messages.
There are a number of possible settings, of which only --vmlinux (or --no-vmlinux) is required. These settings are stored in ~/.oprofile/daemonrc.
--buffer-size=num
Number of samples in the kernel buffer. The buffer watershed needs to be tweaked when changing this value.
--buffer-watershed=num
Set the kernel buffer watershed to num samples. When only buffer-size minus buffer-watershed free entries remain in the kernel buffer, data will be flushed to the daemon. The most useful values are in the range [0.25 - 0.5] * buffer-size.
--cpu-buffer-size=num
Number of samples in the kernel per-cpu buffer. If you profile at a high rate, it can help to increase this if the log file shows an excessive count of samples lost due to cpu buffer overflow.
--event=[eventspec]
Use the given performance counter event to profile. See Section 4, “Specifying performance counter events” below.
--session-dir=dir_path
Create/use the sample database in directory dir_path instead of the default location (/var/lib/oprofile).
--separate=[none,lib,kernel,thread,cpu,all]
By default, every profile is stored in a single file. Thus, for example, samples in the C library are all accredited to the /lib/libc.so profile. However, you can choose to create separate sample files by specifying one of the options below.
Value | Description
none | No profile separation (default)
lib | Create per-application profiles for libraries
kernel | Create per-application profiles for the kernel and kernel modules
thread | Create profiles for each thread and each task
cpu | Create profiles for each CPU
all | All of the above options
Note that --separate=kernel also turns on --separate=lib. When using --separate=kernel, samples in hardware interrupts, soft-irqs, or other asynchronous kernel contexts are credited to the task currently running. This means you will see seemingly nonsense profiles such as /bin/bash showing samples for the PPP modules, etc.
Using --separate=thread creates a lot of sample files if you leave OProfile running for a while; it's most useful when used for short sessions, or when using image filtering.
--callgraph=#depth
Enable call-graph sample collection with a maximum depth (an example invocation follows the platform list below). Use 0 to disable callgraph profiling. NOTE: Callgraph support is available on a limited number of platforms at this time; for example:
x86 with 2.6 or higher kernel
ARM with 2.6 or higher kernel
PowerPC with 2.6.17 or higher kernel
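For example, to collect call graphs with a maximum depth of 6 (the depth value is arbitrary):
opcontrol --callgraph=6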
--image=image,[images]|"all"
Image filtering. If you specify one or more absolute paths to binaries, OProfile will only produce profile results for those binary images. This is useful for restricting the sometimes voluminous output you may get otherwise, especially with --separate=thread. Note that if you are using --separate=lib or --separate=kernel, then if you specify an application binary, the shared libraries and kernel code are included. Specify the value "all" to profile everything (the default).
--vmlinux=file
vmlinux kernel image.
--no-vmlinux
Use this when you don't have a kernel vmlinux file, and you don't want to profile the kernel. This still counts the total number of kernel samples, but can't give symbol-based results for the kernel or any modules.
Here, we have a Pentium III running at 800MHz, and we want to look at where data memory references are happening most, and also get results for CPU time.
# opcontrol --event=CPU_CLK_UNHALTED:400000 --event=DATA_MEM_REFS:10000
# opcontrol --vmlinux=/boot/2.6.0/vmlinux
# opcontrol --start
Use --start-daemon to avoid the profiler startup affecting results.
# opcontrol --vmlinux=/boot/2.6.0/vmlinux
# opcontrol --start-daemon
# my_favourite_benchmark --init
# opcontrol --start ; my_favourite_benchmark --run ; opcontrol --stop
Here, we want to see a profile of the OProfile daemon itself, including when it was running inside the kernel driver, and its use of shared libraries.
# opcontrol --separate=kernel --vmlinux=/boot/2.6.0/vmlinux
# opcontrol --start
# my_favourite_stress_test --run
# opreport -l -p /lib/modules/2.6.0/kernel /usr/local/bin/oprofiled
It can often be useful to split up profiling data into several different time periods. For example, you may want to collect data on an application's startup separately from the normal runtime data. You can use the simple command opcontrol --save to do this. For example :
# opcontrol --save=blah
will create a sub-directory in $SESSION_DIR/samples containing the samples up to that point (the current session's sample files are moved into this directory). You can then pass this session name as a parameter to the post-profiling analysis tools, to only get data up to the point you named the session. If you do not want to save a session, you can do rm -rf $SESSION_DIR/samples/sessionname or, for the current session, opcontrol --reset.
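For example, to report only on the samples captured in the "blah" session saved above:
opreport -l session:blah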
To gather information about JITed code from a virtual machine, it needs to be instrumented with an agent library. We use the agent libraries for Java in the following example. To use the Java profiling feature, you must build OProfile with the "--with-java" option (Section 7, “Installation”).
Add this to the startup parameters of the JVM (for JVMTI):
-agentpath:<oprof_install_dir>/lib/oprofile/libjvmti_oprofile.so[=<options>]
or
-agentlib:jvmti_oprofile[=<options>]
The JVMPI agent implementation is enabled with the command line option
-Xrunjvmpi_oprofile[:<options>]
Currently, there is just one option available -- debug. For JVMPI, the convention for specifying an option is option_name=[yes|no]. For JVMTI, the option specification is simply the option name, implying "yes"; no option specified implies "no".
The agent library (installed in <oprof_install_dir>/lib/oprofile) needs to be in the library search path (e.g. add the library directory to LD_LIBRARY_PATH). If the command line of the JVM is not accessible, it may be buried within shell scripts or a launcher program. It may also be possible to set an environment variable to add the instrumentation. For Sun JVMs this is JAVA_TOOL_OPTIONS. Please check your JVM documentation for further information on the agent startup options.
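As an illustrative sketch, assuming the JVMTI agent was installed as libjvmti_oprofile.so under <oprof_install_dir>/lib/oprofile and that your JVM honours JAVA_TOOL_OPTIONS (MyApplication is a placeholder):
export LD_LIBRARY_PATH=<oprof_install_dir>/lib/oprofile:$LD_LIBRARY_PATH
export JAVA_TOOL_OPTIONS="-agentlib:jvmti_oprofile"
java MyApplication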
The oprof_start application provides a convenient way to start the profiler. Note that oprof_start is just a wrapper around the opcontrol script, so it does not provide more services than the script itself.
After oprof_start is started you can select the event type for each counter; the sampling rate and other related parameters are explained in Section 2, “Using opcontrol”. The "Configuration" section allows you to set general parameters such as the buffer size, kernel filename etc. The counter setup interface should be self-explanatory; Section 5.1, “Hardware performance counters” and related links contain information on using unit masks.
A status line shows the current status of the profiler: how long it has been running, and the average number of interrupts received per second and the total, over all processors. Note that quitting oprof_start does not stop the profiler.
Your configuration is saved in the same file as opcontrol uses; that is, ~/.oprofile/daemonrc.
Most processor models include performance monitor units that can be configured to monitor (count) various types of hardware events. This section is where you can find architecture-specific information to help you use these events for profiling. You do not really need to read this section unless you are interested in using events other than the default event chosen by OProfile.
Your CPU type may not include the requisite support for hardware performance counters, in which case you must use OProfile in timer mode (see Section 5.2, “OProfile in timer interrupt mode”).
The Intel hardware performance counters are detailed in the Intel IA-32 Architecture Manual, Volume 3, available from http://developer.intel.com/. The AMD Athlon/Opteron/Phenom/Turion implementation is detailed in http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/22007.pdf. For IBM PowerPC processors, documentation is available at https://www.power.org/. For example, https://www.power.org/events/Power7 contains specific information on the performance monitor unit for the IBM POWER7.
These processors are capable of delivering an interrupt when a counter overflows. This is the basic mechanism on which OProfile is based. The delivery mode is NMI, so blocking interrupts in the kernel does not prevent profiling. When the interrupt handler is called, the current PC value and the current task are recorded into the profiling structure. This allows the overflow event to be attached to a specific assembly instruction in a binary image. OProfile receives this data from the kernel and writes it to the sample files.
If we use an event such as CPU_CLK_UNHALTED or INST_RETIRED (GLOBAL_POWER_EVENTS or INSTR_RETIRED, respectively, on the Pentium 4), we can use the overflow counts as an estimate of actual time spent in each part of code. Alternatively, we can profile interesting data such as the cache behaviour of routines with the other available counters.
However, there are several caveats. First, there are those issues listed in the Intel manual. There is a delay between the counter overflow and the interrupt delivery that can skew results on a small scale - this means you cannot rely on the profiles at the instruction level as being perfectly accurate. If you are using an "event-mode" counter such as the cache counters, a count registered against an instruction doesn't mean that instruction is responsible for that event. However, it implies that the counter overflowed in the dynamic vicinity of that instruction, to within a few instructions. Further details on this problem can be found in Chapter 5, Interpreting profiling results and also in the Digital paper "ProfileMe: A Hardware Performance Counter".
Each counter has several configuration parameters. First, there is the unit mask: this simply further specifies what to count. Second, there is the counter value, discussed below. Third, there are parameters that specify whether to increment counts whilst in kernel space and/or user space. You can configure these separately for each counter.
After each overflow event, the counter will be re-initialized such that another overflow will occur after this many events have been counted. Thus, higher values mean less-detailed profiling, and lower values mean more detail, but higher overhead. Picking a good value for this parameter is, unfortunately, somewhat of a black art. It is of course dependent on the event you have chosen. Specifying too large a value will mean not enough interrupts are generated to give a realistic profile (though this problem can be ameliorated by profiling for longer). Specifying too small a value can lead to higher performance overhead.
Some CPU types do not provide the needed hardware support to use the hardware performance counters. This includes some laptops, classic Pentiums, and other CPU types not yet supported by OProfile (such as Cyrix). On these machines, OProfile falls back to using the timer interrupt to collect samples. In timer mode, OProfile is not able to profile code that has interrupts disabled.
You can force use of the timer interrupt by using the timer=1 module parameter (or oprofile.timer=1 on the boot command line if OProfile is built-in). If OProfile was built as a kernel module, then you must pass the 'timer=1' parameter with the modprobe command. Do this before executing 'opcontrol --init', or edit the opcontrol command's invocation of modprobe to pass the 'timer=1' parameter.
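For example, a sketch of forcing timer mode when OProfile is built as a module:
# modprobe oprofile timer=1
# opcontrol --init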
The Pentium 4 / Xeon performance counters are organized around 3 types of model specific registers (MSRs): 45 event selection control registers (ESCRs), 18 counter configuration control registers (CCCRs) and 18 counters. ESCRs describe a particular set of events which are to be recorded, and CCCRs bind ESCRs to counters and configure their operation. Unfortunately the relationship between these registers is quite complex; they cannot all be used with one another at any time. There is, however, a subset of 8 counters, 8 ESCRs, and 8 CCCRs which can be used independently of one another, so OProfile only accesses those registers, treating them as a bank of 8 "normal" counters, similar to those in the P6 or Athlon/Opteron/Phenom/Turion families of CPU.
There is currently no support for Precision Event-Based Sampling (PEBS), nor any advanced uses of the Debug Store (DS). Current support is limited to the conservative extension of OProfile's existing interrupt-based model described above.
The Itanium 2 performance monitoring unit (PMU) organizes the counters as four pairs of performance event monitoring registers. Each pair is composed of a Performance Monitoring Configuration (PMC) register and Performance Monitoring Data (PMD) register. The PMC selects the performance event being monitored and the PMD determines the sampling interval. The IA64 Performance Monitoring Unit (PMU) triggers sampling with maskable interrupts. Thus, samples will not occur in sections of the IA64 kernel where interrupts are disabled.
None of the advance features of the Itanium 2 performance monitoring unit such as opcode matching, address range matching, or precise event sampling are supported by this version of OProfile. The Itanium 2 support only maps OProfile's existing interrupt-based model to the PMU hardware.
The performance monitoring unit (PMU) for the IBM PowerPC 64-bit processors consists of between 4 and 8 counters (depending on the model), plus three special purpose registers used for programming the counters -- MMCR0, MMCR1, and MMCRA. Advanced features such as instruction matching and thresholding are not supported by this version of OProfile.
The Cell Broadband Engine (CBE) processor core consists of a PowerPC Processing Element (PPE) and 8 Synergistic Processing Elements (SPE). PPEs and SPEs each consist of a processing unit (PPU and SPU, respectively) and other hardware components, such as memory controllers.
A PPU has two hardware threads (aka "virtual CPUs"). The performance monitor unit of the CBE collects event information on one hardware thread at a time. Therefore, when profiling PPE events, OProfile collects the profile based on the selected events by time slicing the performance counter hardware between the two threads. The user must ensure the collection interval is long enough so that the time spent collecting data for each PPU is sufficient to obtain a good profile.
To profile an SPU application, the user should specify the SPU_CYCLES event. When starting OProfile with SPU_CYCLES, the opcontrol script enforces certain separation parameters (separate=cpu,lib) to ensure that sufficient information is collected in the sample data in order to generate a complete report. The --merge=cpu option can be used to obtain a more readable report if analyzing the performance of each separate SPU is not necessary.
Profiling with an SPU event (events 4100 through 4163) is not compatible with any other event. Furthermore, only one SPU event can be specified at a time. The hardware only supports profiling on one SPU per node at a time. The OProfile kernel code time slices between the eight SPUs to collect data on all SPUs.
SPU profile reports have some unique characteristics compared to reports for standard architectures:
Instruction-Based Sampling (IBS) is a new performance measurement technique available on AMD Family 10h processors. Traditional performance counter sampling is not precise enough to isolate performance issues to individual instructions. IBS, however, precisely identifies instructions which are not making the best use of the processor pipeline and memory hierarchy. For more information, please refer to the paper "Instruction-Based Sampling: A New Performance Analysis Technique for AMD Family 10h Processors" (http://developer.amd.com/assets/AMD_IBS_paper_EN.pdf). There are two types of IBS profiles, described in the following sections.
IBS fetch sampling is a statistical sampling method which counts completed fetch operations. When the number of completed fetch operations reaches the maximum fetch count (the sampling period), IBS tags the fetch operation and monitors that operation until it either completes or aborts. When a tagged fetch completes or aborts, a sampling interrupt is generated and an IBS fetch sample is taken. An IBS fetch sample contains a timestamp, the identifier of the interrupted process, the virtual fetch address, and several event flags and values that describe what happened during the fetch operation.
IBS op sampling selects, tags, and monitors macro-ops as issued from AMD64 instructions. Two options are available for selecting ops for sampling:
In both cases, an IBS sample is generated only if the tagged op retires. Thus, IBS op event information does not measure speculative execution activity. The execution stages of the pipeline monitor the tagged macro-op. When the tagged macro-op retires, a sampling interrupt is generated and an IBS op sample is taken. An IBS op sample contains a timestamp, the identifier of the interrupted process, the virtual address of the AMD64 instruction from which the op was issued, and several event flags and values that describe what happened when the macro-op executed.
Enabling IBS profiling is done simply by specifying IBS performance events through the "--event=" options. These events are listed in the opcontrol --list-events output.
opcontrol --event=IBS_FETCH_XXX:<count>:<um>:<kernel>:<user>
opcontrol --event=IBS_OP_XXX:<count>:<um>:<kernel>:<user>

Note: All IBS fetch events must have the same event count and unit mask, as must those for IBS op.
IBM System z provides a facility which does instruction sampling as part of the CPU. This has great advantages over the timer-based sampling approach, such as better sampling resolution with less overhead and the possibility to get samples within code sections where interrupts are disabled (useful especially for Linux kernel code).
A public description of the System z CPU-Measurement Facilities can be found here: The Load-Program-Parameter and CPU-Measurement Facilities
System z hardware sampling can be used for Linux instances in LPAR mode. The hardware sampling support used by OProfile was introduced for System z10 in October 2008.
To enable hardware sampling for an LPAR you must activate the LPAR with authorization for basic sampling control. See the "Support Element Operations Guide" for your mainframe system for more information.
The hardware sampling facility can be enabled and disabled using the event interface. A `virtual' counter 0 has been defined that only supports a single event, HWSAMPLING. By default the HWSAMPLING event is enabled on machines providing the facility. For both events only the `count', `kernel' and `user' options are evaluated by the kernel module.
The `count' value is the sampling rate as it is passed to the CPU measurement facility. A sample will be taken by the hardware every `count' cycles. Using low values here will quickly fill up the sampling buffers and will generate CPU load on the OProfile daemon and the kernel module being busy flushing the hardware buffers. This might considerably impact the workload to be profiled.
The unit mask `um' is required to be zero.
The opcontrol tool provides a new option specific to System z hardware sampling:
A special counter /dev/oprofile/timer is provided by the kernel module, allowing you to switch back to timer mode sampling dynamically. The TIMER event is limited to being used only with this counter. The TIMER event can be specified using the --event= option as with every other event:
opcontrol --event=TIMER:1
On z10 or later machines the default event is set to TIMER in case the hardware sampling facility is not available.
Although required, the 'count' parameter of the TIMER event is ignored. The value may eventually be used for timer based sampling with a configurable sampling frequency, but this is currently not supported.
OProfile is a low-level profiler which allows continuous profiling with a low-overhead cost.
When using OProfile legacy mode profiling, it may be possible to configure such a low counter reset value (i.e., high sampling rate) that the system becomes overloaded with counter interrupts and your system's responsiveness is severely impacted. Whilst some validation is done on the count values you pass to opcontrol with your event specification, it is not foolproof.
This can happen as follows: When the profiler count reaches zero, an NMI handler is called which stores the sample values in an internal buffer, then resets the counter to its original value. If the reset count you specified is very low, a pending NMI can be sent before the NMI handler has completed. Due to the priority of the NMI, the pending interrupt is delivered immediately after completion of the previous interrupt handler, and control never returns to other parts of the system. If all processors are stuck in this mode, the system will appear to be frozen.
If this happens, it will be impossible to bring the system back to a workable state.
There is no way to provide real security against this happening, other than making sure to use a reasonable value for the counter reset. For example, setting the CPU_CLK_UNHALTED event type with a ridiculously low reset count (e.g. 500) is likely to freeze the system.
In short: don't try a foolish sample count value. Unfortunately, the definition of a foolish value is really dependent on the event type. If ever in doubt, post a message to the OProfile mailing list.
Table of Contents
OK, so the profiler has been running, but it's not much use unless we can get some data out. Sometimes, OProfile does a little too good a job of keeping overhead low, and no data reaches the profiler. This can happen on lightly-loaded machines. If you're using OProfile legacy mode, you can force a dump at any time with:
opcontrol --dump
This ensures that any profile data collected by the oprofiled daemon has been flushed to disk. Remember to do a dump, stop, shutdown, or deinit before complaining there is no profiling data!
Now that we've got some data, it has to be processed. That's the job of opreport, opannotate, or opgprof.
All of the analysis tools take a profile specification. This is a set of definitions that describe which actual profiles should be examined. The simplest profile specification is empty: this will match all the available profile files for the current session (this is what happens when you do opreport).
Specification parameters are of the form name:value[,value]. For example, if I wanted to get a combined symbol summary for /bin/myprog and /bin/myprog2, I could do opreport -l image:/bin/myprog,/bin/myprog2.
As a special case, you don't actually need to specify the image: part here: anything left on the command line is assumed to be an image: name. Similarly, if no session: is specified, then session:current is assumed ("current" is a special name of the current / last profiling session).
In addition to the comma-separated list shown above, some of the specification parameters can take glob-style values. For example, if I want to see image summaries for all binaries profiled in /usr/bin/, I could do opreport image:/usr/bin/\*. Note the necessity to escape the special character from the shell.
For opreport, profile specifications can be used to define two profiles, giving differential output. This is done by enclosing each of the two specifications within curly braces, as shown in the examples below. Any specifications outside of curly braces are shared across both.
Image summaries for all profiles with DATA_MEM_REFS samples in the saved session called "stresstest":
# opreport session:stresstest event:DATA_MEM_REFS
Symbol summary for the application called "test_sym53c8xx,9xx". Note the escaping is necessary as image: takes a comma-separated list.
# opreport -l ./test/test_sym53c8xx\,9xx
Image summaries for all binaries in the test directory, excepting boring-test:
# opreport image:./test/\* image-exclude:./test/boring-test
Differential profile of a binary stored in two archives :
# opreport -l /bin/bash { archive:./orig } { archive:./new }
Differential profile of an archived binary with the current session :
# opreport -l /bin/bash { archive:./orig } { }
archive: archivepath
A path to an archive made with oparchive. Absence of this tag, unlike others, means "the current system", equivalent to specifying "archive:".
session: sessionlist
A comma-separated list of session names to resolve in. Absence of this tag, unlike others, means "the current session", equivalent to specifying "session:current".
session-exclude: sessionlist
A comma-separated list of sessions to exclude.
image: imagelist
A comma-separated list of image names to resolve. Each entry may be a relative path, glob-style name, or full path, e.g.
opreport 'image:/usr/bin/oprofiled,*op*,./opreport'
image-exclude: imagelist
Same as image:, but the matching images are excluded.
lib-image: imagelist
Same as image:, but only for images that are for a particular primary binary image (namely, an application). This only makes sense to use if you're using --separate. This includes kernel modules and the kernel when using --separate=kernel.
lib-image-exclude: imagelist
Same as lib-image:, but the matching images are excluded.
event: eventlist
The symbolic event name to match on, e.g. event:DATA_MEM_REFS. You can pass a list of events for side-by-side comparison with opreport. When using the timer interrupt, the event is always "TIMER".
count: eventcountlist
The event count to match on, e.g. event:DATA_MEM_REFS count:30000. Note that this value refers to the count value in the event spec you passed to opcontrol or operf when setting up to do a profile run. It has nothing to do with the sample counts in the profile data itself. You can pass a list of events for side-by-side comparison with opreport. When using the timer interrupt, the count is always 0 (indicating it cannot be set).
unit-mask: masklist
The unit mask value of the event to match on, e.g. unit-mask:1. You can pass a list of events for side-by-side comparison with opreport.
cpu: cpulist
Only consider profiles for the given numbered CPU (starting from zero). This is only useful when using CPU profile separation.
tgid: pidlist
Only consider profiles for the given task groups. Unless some program is using threads, the task group ID of a process is the same as its process ID. This option corresponds to the POSIX notion of a thread group. This is only useful when using per-process profile separation.
tid: tidlist
Only consider profiles for the given threads. When using recent thread libraries, all threads in a process share the same task group ID, but have different thread IDs. You can use this option in combination with tgid: to restrict the results to particular threads within a process. This is only useful when using per-process profile separation.
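For example, a sketch restricting a report to one thread of a process (the IDs are illustrative, and per-process profile separation is assumed):
opreport -l tgid:3433 tid:3434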
Each session's sample files can be found in the $SESSION_DIR/samples/ directory (default when using legacy mode: /var/lib/oprofile/samples/; default when using operf: <cur_dir>/oprofile_data/samples/). These are used, along with the binary image files, to produce human-readable data.
In some circumstances (e.g., kernel modules), OProfile will not be able to find the binary images. All the tools have an --image-path option to which you can pass a comma-separated list of alternate paths to search. For example, I can let OProfile find my 2.6 modules by using --image-path /lib/modules/2.6.0/kernel/. It is your responsibility to ensure that the correct images are found when using this option.
Note that if a binary image changes after the sample file was created, you won't be able to get useful symbol-based data out. This situation is detected for you. If you replace a binary, you should make sure to save the old binary if you need to do comparative profiles.
When attempting to get output, you may see the error :
error: no sample files found: profile specification too strict ?
What this is saying is that the profile specification you passed in, when matched against the available sample files, resulted in no matches. There are a number of reasons this might happen:
You specified a binary name, but spelt it wrongly. Check your spelling !
Make very sure that OProfile was actually up and running when you ran the application you wish to profile.
Remember OProfile is a statistical profiler - you're not guaranteed to get samples for short-running programs. You can help this by using a lower count for the performance counter, so there are a lot more samples taken per second.
Similarly, if the application spends little time in the main binary image itself, with most of it spent in shared libraries it uses, you might not see any samples for the binary image (i.e., executable) itself. If you're using OProfile legacy mode profiling, then we recommend using opcontrol --separate=lib before the profiling session so that opreport and friends show the library profiles on a per-application basis. This is done automatically when profiling with operf, so no special setup is necessary.
For example, you specified something like tgid:3433, but no task with that group ID ever ran the code.
If you're using a particular event counter, for example counting MMX operations, the code might simply have not generated any events in the first place. Verify the code you're profiling does what you expect it to.
If you're trying to get reports for a kernel
module, make sure to use the -p
option, and specify the
module name with the .ko
extension. Check if the module is one loaded from initrd.
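For example (a sketch; ext4.ko is just an illustrative module name), a kernel-module report can be requested like this:
$ opreport --symbols --image-path /lib/modules/`uname -r`/kernel ext4.ko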
The opreport utility is the primary utility you will use for getting formatted data out of OProfile. It produces two types of data: image summaries and symbol summaries. An image summary lists the number of samples for individual binary images such as libraries or applications. Symbol summaries provide per-symbol profile data. In the following example, we're getting an image summary for the whole system:
$ opreport --long-filenames CPU: PIII, speed 863.195 MHz (estimated) Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 23150 905898 59.7415 /usr/lib/gcc-lib/i386-redhat-linux/3.2/cc1plus 214320 14.1338 /boot/2.6.0/vmlinux 103450 6.8222 /lib/i686/libc-2.3.2.so 60160 3.9674 /usr/local/bin/madplay 31769 2.0951 /usr/local/oprofile-pp/bin/oprofiled 26550 1.7509 /usr/lib/libartsflow.so.1.0.0 23906 1.5765 /usr/bin/as 18770 1.2378 /oprofile 15528 1.0240 /usr/lib/qt-3.0.5/lib/libqt-mt.so.3.0.5 11979 0.7900 /usr/X11R6/bin/XFree86 11328 0.7471 /bin/bash ... |
If we had specified --symbols
in the previous command, we would have
gotten a symbol summary of all the images across the entire system. We can restrict this to only
part of the system profile; for example,
below is a symbol summary of the OProfile daemon. Note that as we used
opcontrol --separate=lib,kernel, symbols from images that oprofiled
has used are also shown.
$ opreport -l -p /lib/modules/`uname -r` `which oprofiled` 2>/dev/null | more CPU: Core 2, speed 2.534e+06 MHz (estimated) Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (Unhalted core cycles) count 100000 samples % image name symbol name 1353 24.9447 vmlinux sidtab_context_to_sid 500 9.2183 vmlinux avtab_hash_eval 154 2.8392 vmlinux __link_path_walk 152 2.8024 vmlinux d_prune_aliases 120 2.2124 vmlinux avtab_search_node 104 1.9174 vmlinux find_next_bit 85 1.5671 vmlinux selinux_file_fcntl 82 1.5118 vmlinux avtab_write 81 1.4934 oprofiled odb_update_node_with_offset 73 1.3459 oprofiled opd_process_samples 72 1.3274 vmlinux avc_has_perm_noaudit 61 1.1246 libc-2.12.so _IO_vfscanf 59 1.0878 ext4.ko ext4_mark_iloc_dirty ... |
These are the two basic reports you are most likely to use regularly, but opreport can do a lot more than that, as described below.
If you have used one of the --separate[*] options whilst profiling, there can be several separate profiles for a single binary image within a session. Normally the output will keep these images separated. So, for example, if you profiled with separation on a per-CPU basis (opcontrol --separate=cpu or operf --separate-cpu), you would see separate columns in the output of opreport for each CPU where samples were recorded. But it can be useful to merge these results back together to make the report more readable. The --merge option allows you to do that.
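As an illustration (a sketch; the ./myapp binary name is hypothetical and assumes samples were collected with per-CPU separation):
$ opreport --symbols ./myapp               # one column of results per CPU
$ opreport --symbols --merge=cpu ./myapp   # per-CPU results merged into a single column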
If multiple events were used when profiling, opreport shows the sample counts for each event side by side. You can restrict which events are reported by appropriate use of event: profile specifications, etc.
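For example (a sketch; the event name and the ./myapp binary are illustrative, and assume that event was among those collected), the report can be restricted to a single event:
$ opreport --symbols event:CPU_CLK_UNHALTED ./myapp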
This section provides details on how to use the OProfile callgraph feature.
When using the --callgraph
option, you can see what
functions are calling other functions in the output. Consider the
following program:
#include <string.h>
#include <stdlib.h>
#include <stdio.h>

#define SIZE 500000

static int compare(const void *s1, const void *s2)
{
        return strcmp(s1, s2);
}

static void repeat(void)
{
        int i;
        char *strings[SIZE];
        char str[] = "abcdefghijklmnopqrstuvwxyz";

        for (i = 0; i < SIZE; ++i) {
                strings[i] = strdup(str);
                strfry(strings[i]);
        }

        qsort(strings, SIZE, sizeof(char *), compare);
}

int main()
{
        while (1)
                repeat();
}
When running with the call-graph option, OProfile will record the function stack every time it takes a sample. opreport --callgraph outputs an entry for each function, where each entry looks similar to:
samples  %        image name               symbol name
  197      0.1548  cg                       main
  127036  99.8452  cg                       repeat
84590     42.5084  libc-2.3.2.so            strfry
  84590   66.4838  libc-2.3.2.so            strfry [self]
  39169   30.7850  libc-2.3.2.so            random_r
  3475     2.7312  libc-2.3.2.so            __i686.get_pc_thunk.bx
-------------------------------------------------------------------------------
Here the non-indented line is the function we're focussing upon
(strfry()
). This
line is the same as you'd get from a normal opreport
output.
Above the non-indented line we find the functions that called this
function (for example, repeat()
calls
strfry()
). The samples and percentage values here
refer to the number of times we took a sample where this call was found
in the stack; the percentage is relative to all other callers of the
function we're focussing on. Note that these values are
not call counts; they only reflect the call stack
every time a sample is taken; that is, if a call is found in the stack
at the time of a sample, it is recorded in this count.
Below the line are functions that are called by
strfry()
(called callees).
It's clear here that strfry()
calls
random_r()
. We also see a special entry with a
"[self]" marker. This records the normal samples for the function, but
the percentage becomes relative to all callees. This allows you to
compare time spent in the function itself compared to functions it
calls. Note that if a function calls itself, then it will appear in the
list of callees of itself, but without the "[self]" marker; so recursive
calls are still clearly separable.
You may have noticed that the output lists main()
as calling strfry()
, but it's clear from the source
that this doesn't actually happen. See Section 3, “Interpreting call-graph profiles” for an explanation.
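As a sketch of how such a profile might be collected with operf (the ./cg binary refers to the example program above; with legacy profiling you would use opcontrol --callgraph=<depth> instead):
$ operf --callgraph ./cg     # record call-graph samples while ./cg runs
$ opreport --callgraph       # produce a report like the one shown above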
Callgraph output where anonymously mapped code is in the callstack can sometimes be misleading.
For all such code, the samples for the anonymously mapped code are stored in a samples subdirectory
named {anon:anon}/<tgid>.<begin_addr>.<end_addr>
.
As stated earlier, if this anonymously mapped code is JITed code from a supported VM like Java,
OProfile creates an ELF file to provide a (somewhat) permanent backing file for the code.
However, when viewing callgraph output, any anonymously mapped code in the callstack
will be attributed to anon (tgid:<tgid> range:<begin_addr>-<end_addr>), even if a .jo ELF file had been created for it. See the example below.
------------------------------------------------------------------------------- 1 2.2727 libj9ute23.so java.bin traceV 2 4.5455 libj9ute23.so java.bin utsTraceV 4 9.0909 libj9trc23.so java.bin fillInUTInterfaces 37 84.0909 libj9trc23.so java.bin twGetSequenceCounter 8 0.0154 libj9prt23.so java.bin j9time_hires_clock 27 61.3636 anon (tgid:10014 range:0x100000-0x103000) java.bin (no symbols) 9 20.4545 libc-2.4.so java.bin gettimeofday 8 18.1818 libj9prt23.so java.bin j9time_hires_clock [self] ------------------------------------------------------------------------------- |
The output shows that "anon (tgid:10014 range:0x100000-0x103000)" was a callee of
j9time_hires_clock
, even though the ELF file 10014.jo
was
created for this profile run. Unfortunately, there is currently no way to correlate
that anonymous callgraph entry with its corresponding .jo
file.
Often, we'd like to be able to compare two profiles. For example, when analysing the performance of an application, we'd like to make code changes and examine the effect of the change. This is supported in opreport by giving a profile specification that identifies two different profiles. The general form is:
$ opreport <shared-spec> { <first-profile> } { <second-profile> } |
We lost our Dragon book down the back of the sofa, so you have to be careful to have spaces around those braces, or things will get hopelessly confused. We can only apologise.
For each of the profiles, the shared section is prefixed, and then the specification is analysed. The usual parameters work both within the shared section, and in the sub-specification within the curly braces.
A typical way to use this feature is with archives created with oparchive. Let's look at an example:
$ ./a $ oparchive -o orig ./a $ opcontrol --reset # edit and recompile a $ ./a # now compare the current profile of a with the archived profile $ opreport -xl ./a { archive:./orig } { } CPU: PIII, speed 863.233 MHz (estimated) Counted CPU_CLK_UNHALTED events (clocks processor is not halted) with a unit mask of 0x00 (No unit mask) count 100000 samples % diff % symbol name 92435 48.5366 +0.4999 a 54226 --- --- c 49222 25.8459 +++ d 48787 25.6175 -2.2e-01 b |
Note that we specified an empty second profile in the curly braces, as we wanted to use the current session; alternatively, we could have specified another archive, or a tgid etc. We specified the binary a in the shared section, so we matched that in both the profiles we're diffing.
As in the normal output, the results are sorted by the number of samples, and the percentage field represents the relative percentage of the symbol's samples in the second profile.
Notice the new column in the output. This value represents the
percentage change of the relative percent between the first and the
second profile: roughly, "how much more important this symbol is".
Looking at the symbol a()
, we can see that it took
roughly the same amount of the total profile in both the first and the
second profile. The function c()
was not in the new
profile, so has been marked with ---
. Note that the
sample value is the number of samples in the first profile; since we're
displaying results for the second profile, we don't list a percentage
value for it, as it would be meaningless. d()
is
new in the second profile, and consequently marked with
+++
.
When comparing profiles between different binaries, it should be clear that functions can change in terms of VMA and size. To avoid this problem, opreport considers a symbol to be the same if the symbol name, image name, and owning application name all match; any other factors are ignored. Note that the check for application name means that trying to compare library profiles between two different applications will not work as you might expect: each symbol will be considered different.
Many applications, typically ones involving dynamic compilation into machine code (just-in-time, or "JIT", compilation), have executable mappings that are not backed by an ELF file. opreport has basic support for showing the samples taken in these regions; for example:
$ opreport /usr/bin/mono -l CPU: ppc64 POWER5, speed 1654.34 MHz (estimated) Counted CYCLES events (Processor Cycles using continuous sampling) with a unit mask of 0x00 (No unit mask) count 100000 samples % image name symbol name 47 58.7500 mono (no symbols) 14 17.5000 anon (tgid:3189 range:0xf72aa000-0xf72fa000) (no symbols) 9 11.2500 anon (tgid:3189 range:0xf6cca000-0xf6dd9000) (no symbols) . . . . |
Note that, since such mappings are dependent upon individual invocations of
a binary, these mappings are always listed as a dependent image,
even when using the legacy mode opcontrol --separate=none
command.
Equally, the results are not affected by the --merge
option.
As shown in the opreport output above, OProfile is unable to attribute the samples to any symbol(s) because there is no ELF file for this code. Enhanced support for JITed code is now available for some virtual machines; e.g., the Java Virtual Machine. For details about OProfile output for JITed code, see Section 4, “OProfile results with JIT samples”.
For more information about JIT support in OProfile, see Section 4.1, “Support for dynamically compiled (JIT) code”.
The --xml option can be used to generate XML instead of the usual text format. This allows opreport to eliminate some of the constraints dictated by the two-dimensional text format. For example, it is possible to separate the sample data across multiple events, CPUs and threads. The XML schema implemented by opreport is found in doc/opreport.xsd. It contains more detailed comments about the structure of the XML generated by opreport.
Since XML is consumed by a client program rather than a user, its structure is fairly static. In particular, the --sort option is incompatible with the --xml option. Percentages are not displayed in the XML, so the options related to percentages will have no effect. Full pathnames are always displayed in the XML, so --long-filenames is not necessary. The --details option will cause all of the individual sample data to be included in the XML, as well as the instruction byte stream for each symbol (for doing disassembly), and can result in very large XML files.
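For example (a sketch; the binary and output file names are hypothetical):
$ opreport --xml -o myapp-profile.xml ./myapp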
--accumulated / -a
Accumulate sample and percentage counts in the symbol list.
--callgraph / -c
Show callgraph information.
--debug-info / -g
Show source file and line for each symbol.
--demangle / -D none|normal|smart
none: no demangling. normal: use the default demangler (default). smart: use pattern-matching to make C++ symbol demangling more readable.
--details / -d
Show per-instruction details for all selected symbols. Note that, for binaries without symbol information, the VMA values shown are raw file offsets for the image binary.
--exclude-dependent / -x
Do not include application-specific images for libraries, kernel modules and the kernel. This option only makes sense if the profile session used --separate.
--exclude-symbols / -e [symbols]
Exclude all the symbols in the given comma-separated list.
--global-percent / -%
Make all percentages relative to the whole profile.
--help / -? / --usage
Show help message.
--image-path / -p [paths]
Comma-separated list of additional paths to search for binaries. This is needed to find kernel modules.
--root / -R [path]
A path to a filesystem to search for additional binaries.
--include-symbols / -i [symbols]
Only include symbols in the given comma-separated list.
--long-filenames / -f
Output full paths instead of basenames.
--merge / -m [lib,cpu,tid,tgid,unitmask,all]
Merge any profiles separated in a --separate session.
--no-header
Don't output a header detailing profiling parameters.
--output-file / -o [file]
Output to the given file instead of stdout.
--reverse-sort / -r
Reverse the sort from the default.
--session-dir=
dir_path
Use sample database out of directory dir_path
instead of the default location (/var/lib/oprofile).
--show-address / -w
Show the VMA address of each symbol (off by default).
--sort / -s [vma,sample,symbol,debug,image]
Sort the list of symbols by, respectively, symbol address, number of samples, symbol name, debug filename and line number, binary image filename.
--symbols / -l
List per-symbol information instead of a binary image summary.
--threshold / -t [percentage]
Only output data for symbols that have more than the given percentage of total samples.
--verbose / -V [options]
Give verbose debugging output.
--version / -v
Show version.
--xml / -X
Generate XML output.
The opannotate utility generates annotated source files or assembly listings, optionally
mixed with source.
If you want to see the source file, the profiled application needs to have debug information, and the source
must be available through this debug information. For GCC, you must use the -g
option
when you are compiling.
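For example (a sketch; the file names and optimisation level are arbitrary), you can keep optimisation enabled while still generating the debug information opannotate needs:
$ gcc -g -O2 -o myprog myprog.c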
If the binary doesn't contain sufficient debug information, you can still
use opannotate --assembly
to get annotated assembly
as long as the binary has (at least) symbol information.
Note that for the reason explained in Section 5.1, “Hardware performance counters” the results can be inaccurate. The debug information itself can add other problems; for example, the line number for a symbol can be incorrect. Assembly instructions can be re-ordered and moved by the compiler, and this can lead to crediting source lines with samples not really "owned" by this line. Also see Chapter 5, Interpreting profiling results.
You can output the annotation to one single file, containing all the source found, by using the --source option. You can use this in conjunction with --assembly to get combined source/assembly output.
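A minimal sketch (the binary name is hypothetical; the combined output is written to stdout, so it is redirected to a file here):
$ opannotate --source --assembly ./myprog > myprog-annotated.txt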
You can also output a directory of annotated source files that maintains the structure of the original sources. Each line in the annotated source is prepended with the samples for that line. Additionally, each symbol is annotated giving details for the symbol as a whole. An example:
$ opannotate --source --output-dir=annotated /usr/local/oprofile-pp/bin/oprofiled $ ls annotated/home/moz/src/oprofile-pp/daemon/ opd_cookie.h opd_image.c opd_kernel.c opd_sample_files.c oprofiled.c |
Line numbers are maintained in the source files, but each file has a footer appended describing the profiling details. The actual annotation looks something like this :
...
               :static uint64_t pop_buffer_value(struct transient * trans)
 11510  1.9661 :{ /* pop_buffer_value total: 89901 15.3566 */
               :        uint64_t val;
               :
 10227  1.7469 :        if (!trans->remaining) {
               :                fprintf(stderr, "BUG: popping empty buffer !\n");
               :                exit(EXIT_FAILURE);
               :        }
               :
               :        val = get_buffer_value(trans->buffer, 0);
  2281  0.3896 :        trans->remaining--;
  2296  0.3922 :        trans->buffer += kernel_pointer_size;
               :        return val;
 10454  1.7857 :}
...
The first number on each line is the number of samples, whilst the second is the relative percentage of total samples.
Of course, opannotate needs to be able to locate the source files
for the binary image(s) in order to produce output. Some binary images have debug
information where the given source file paths are relative, not absolute. You can
specify search paths to look for these files (similar to gdb's
dir
command) with the --search-dirs
option.
Sometimes you may have a binary image which gives absolute paths for the source files,
but you have the actual sources elsewhere (commonly, you've installed an SRPM for
a binary on your system and you want annotation from an existing profile). You can
use the --base-dirs
option to redirect OProfile to look somewhere
else for source files. For example, imagine we have a binary generated from a source
file that is given in the debug information as /tmp/build/libfoo/foo.c
,
and you have the source tree matching that binary installed in /home/user/libfoo/
.
You can redirect OProfile to find foo.c
correctly like this :
$ opannotate --source --base-dirs=/tmp/build/libfoo/ --search-dirs=/home/user/libfoo/ --output-dir=annotated/ /lib/libfoo.so |
You can specify multiple (comma-separated) paths to both options.
--assembly / -a
Output annotated assembly. If this is combined with --source, then mixed source / assembly annotations are output.
--base-dirs / -b [paths]
Comma-separated list of path prefixes. This can be used to point OProfile to a different location for source files when the debug information specifies an absolute path on your system for the source that does not exist. The prefix is stripped from the debug source file paths; the remainder is then searched for in the directories given by --search-dirs.
--demangle / -D none|normal|smart
none: no demangling. normal: use the default demangler (default). smart: use pattern-matching to make C++ symbol demangling more readable.
--exclude-dependent / -x
Do not include application-specific images for libraries, kernel modules and the kernel. This option only makes sense if the profile session used --separate.
--exclude-file [files]
Exclude all files in the given comma-separated list of glob patterns.
--exclude-symbols / -e [symbols]
Exclude all the symbols in the given comma-separated list.
--help / -? / --usage
Show help message.
--image-path / -p [paths]
Comma-separated list of additional paths to search for binaries. This is needed to find kernel modules.
--root / -R [path]
A path to a filesystem to search for additional binaries.
--include-file [files]
Only include files in the given comma-separated list of glob patterns.
--include-symbols / -i [symbols]
Only include symbols in the given comma-separated list.
--objdump-params [params]
Pass the given parameters as extra values when calling objdump. If more than one option is to be passed to objdump, the parameters must be enclosed in a quoted string.
An example of where this option is useful is when your toolchain does not automatically recognize instructions that are specific to your processor. For example, on IBM POWER7/RHEL 6, objdump must be told that a binary file may have POWER7-specific instructions. The opannotate option to show the POWER7-specific instructions is:
--objdump-params=-Mpower7 |
The opannotate option to show the POWER7-specific instructions, the source code (--source) and the line numbers (-l) would be:
--objdump-params="-Mpower7 -l --source" |
--output-dir / -o [dir]
Output directory. This makes opannotate output one annotated file for each source file. This option can't be used in conjunction with --assembly.
--search-dirs / -d [paths]
Comma-separated list of paths to search for source files. This is useful to find source files when the debug information only contains relative paths.
--source / -s
Output annotated source. This requires debugging information to be available for the binaries.
--threshold / -t [percentage]
Only output data for symbols that have more than the given percentage of total samples.
--verbose / -V [options]
Give verbose debugging output.
--version / -v
Show version.
After profiling a Java (or other supported VM) application, the
OProfile JIT support creates ELF binaries from the
intermediate files that were written by the agent library.
The ELF binaries are named <tgid>.jo
.
With the symbol information stored in these ELF files, it is
possible to map samples to the appropriate symbols.
The usual analysis tools (opreport and/or opannotate) can now be used to get symbols and assembly code for the instrumented VM processes.
Below is an example of a profile report of a Java application that has been instrumented with the provided agent library.
$ opreport -l /usr/lib/jvm/jre-1.5.0-ibm/bin/java CPU: Core Solo / Duo, speed 2167 MHz (estimated) Counted CPU_CLK_UNHALTED events (Unhalted clock cycles) with a unit mask of 0x00 (Unhalted core cycles) count 100000 samples % image name symbol name 186020 50.0523 no-vmlinux no-vmlinux (no symbols) 34333 9.2380 7635.jo java void test.f1() 19022 5.1182 libc-2.5.so libc-2.5.so _IO_file_xsputn@@GLIBC_2.1 18762 5.0483 libc-2.5.so libc-2.5.so vfprintf 16408 4.4149 7635.jo java void test$HelloThread.run() 16250 4.3724 7635.jo java void test$test_1.f2(int) 15303 4.1176 7635.jo java void test.f2(int, int) 13252 3.5657 7635.jo java void test.f2(int) 5165 1.3897 7635.jo java void test.f4() 955 0.2570 7635.jo java void test$HelloThread.run()~ |
Depending on the JVM that is used, certain options of opreport and opannotate do NOT work since they rely on debug information (e.g. source code line number) that is not always available. The Sun JVM does provide the necessary debug information via the JVMTI[PI] interface, but other JVMs do not.
As you can see in the opreport output, the JIT support agent for Java
generates symbols to include the class and method signature.
A symbol with the suffix ~<n> (e.g. void test$HelloThread.run()~1) means that this is the <n>th occurrence of the identical name. This happens if a method is re-JITed.
A symbol with the suffix %<n> means that the address space of this symbol was reused during the sampling session (see Section 6, “Overlapping symbols in JITed code”). The value <n> is the percentage of time that this symbol/code was present in relation to the total lifetime of all other overlapping symbols. A symbol of the form <return_val> <class_name>$<method_sig> denotes an inner class.
If you're familiar with the output produced by GNU gprof,
you may find opgprof useful. It takes a single binary
as an argument, and produces a gmon.out
file for use
with gprof -p. If call-graph profiling is enabled,
then this is also included.
$ opgprof `which oprofiled` # generates gmon.out file $ gprof -p `which oprofiled` | head Flat profile: Each sample counts as 1 samples. % cumulative self self total time samples samples calls T1/call T1/call name 33.13 206237.00 206237.00 odb_insert 22.67 347386.00 141149.00 pop_buffer_value 9.56 406881.00 59495.00 opd_put_sample 7.34 452599.00 45718.00 opd_find_image 7.19 497327.00 44728.00 opd_process_samples |
--help / -? / --usage
Show help message.
--image-path / -p [paths]
Comma-separated list of additional paths to search for binaries. This is needed to find kernel modules.
--root / -R [path]
A path to a filesystem to search for additional binaries.
--output-filename / -o [file]
Output to the given file instead of the default, gmon.out
--threshold / -t [percentage]
Only output data for symbols that have more than the given percentage of total samples.
--verbose / -V [options]
Give verbose debugging output.
--version / -v
Show version.
The oparchive utility generates a directory populated with executable, debug, and oprofile sample files. This directory can be copied to another (host) machine and analyzed offline, with no further need to access the data collection machine (target).
The following command, executed on the target system, will collect the
sample files, the executables associated with the sample files, and the
debuginfo files associated with the executables and copy them into
/tmp/current_data
:
# oparchive -o /tmp/current_data |
When transferring archived profile data to a host machine for offline analysis, you need to determine if the oprofile ABI format is compatible between the target system and the host system; if it isn't, you must run the opimport command to convert the target's sample data files to the format of your host system. See Section 7, “Converting sample database files (opimport)” for more details.
After your profile data is transferred to the host system and (if necessary) you have run the opimport command to convert the file format, you can now run the opreport and opannotate commands. However, you must provide an "archive specification" to let these post-processing tools know where to find the profile data (sample files, executables, etc.); for example:
# opreport archive:/home/user1/my_oprofile_archive --symbols |
Furthermore, if your profile was collected on your target system into a session-dir
other than /var/lib/oprofile
, the oparchive
command will display a message similar to the following:
# NOTE: The sample data in this archive is located at /home/user1/test-stuff/oprofile_data instead of the standard location of /var/lib/oprofile. Hence, when using opreport and other post-processing tools on this archive, you must pass the following option: --session-dir=/home/user1/test-stuff/oprofile_data |
Then the above opreport example would have to include that
--session-dir
option.
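Putting the two together, the report command would then look something like the following (the paths are those used in the examples above and are only illustrative):
# opreport archive:/home/user1/my_oprofile_archive \
      --session-dir=/home/user1/test-stuff/oprofile_data --symbols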
Alternatively, instead of using an archive specification, you can pass the --session-dir option pointing at the $SESSION_DIR tree located within the output directory you specified in your oparchive command. Then, when running the opreport or opannotate commands on your host system, pass the --root option to point to the location of your target's executables, etc.
--help / -? / --usage
Show help message.
--exclude-dependent / -x
Do not include application-specific images for libraries, kernel modules and the kernel. This option only makes sense if the profile session used --separate.
--image-path / -p [paths]
Comma-separated list of additional paths to search for binaries. This is needed to find kernel modules.
--root / -R [path]
A path to a filesystem to search for additional binaries.
--output-directory / -o [directory]
Output to the given directory. There is no default. This must be specified.
--list-files / -l
Only list the files that would be archived, don't copy them.
--verbose / -V [options]
Give verbose debugging output.
--version / -v
Show version.
This utility converts sample database files from a foreign binary format (abi) to the native format. This is required when moving sample files to a (host) system other than the one used for collection (the target system), when the host and target systems are of different
architectures. The abi format of the sample files to be imported is described in a
text file located in $SESSION_DIR/abi
. If you are unsure if
your target and host systems have compatible architectures (in regard to the OProfile
ABI), simply diff a $SESSION_DIR/abi
file from the target system
with one from the host system. If any differences show up at all, you must run the
opimport command.
The oparchive command should be used on the machine where the profile was taken (target) in order to collect sample files and all other necessary information. The archive directory that is the output from oparchive should be copied to the system where you wish to perform your performance analysis (host).
The following command converts an input sample file to the specified output sample file, using the given abi file as a binary description of the input file and the current platform abi as a binary description of the output file. (NOTE: The ellipses are used to make the example more compact and cannot be used in an actual command line.)
# opimport -a /tmp/foreign-abi -o /tmp/imported/.../GLOBAL_POWER_EVENTS.200000.1.all.all.all /tmp/archived/var/lib/.../mprime/GLOBAL_POWER_EVENTS.200000.1.all.all.all |
Since opimport converts just one file at a time, an example shell script is provided below that will perform an import/conversion of all sample files in a samples directory collected from the target system.
#!/bin/bash
# Usage: my-import.sh <input-abi-pathname>
# NOTE: Start from the "samples" directory containing the "current" directory
# to be imported
mkdir current-imported
cd current-imported; (cd ../current; find . -type d ! -name .) | xargs mkdir
cd ../current
mv stats ../StatsSave
find . -type f | while read line; do
        opimport -a $1 -o ../current-imported/$line $line
done
mv ../StatsSave stats
Example usage: Assume that on the target system, a profile was collected using a session-dir of
/var/lib/oprofile
, and then oparchive -o profile1 was run.
Then the profile1
directory is copied to the host system for analysis. To import
the sample data in profile1
, you would perform the following steps:
$ cd profile1/var/lib/oprofile/samples
$ my-import.sh `pwd`/../abi
--help / -? / --usage
Show help message.
--abi / -a [filename]
Input abi file description location.
--force / -f
Force conversion even if the input and output abi are identical.
--output / -o [filename]
Specify the output filename. If the output file already exists, the file is not overwritten; instead, data are accumulated into it. Sample filenames are meaningful to the post-profiling tools and must be kept identical; in other words, the pathname from the first path component containing a '{' onwards must be kept as-is in the output filename.
--verbose / -V
Give verbose debugging output.
--version / -v
Show version.
Table of Contents
The standard caveats of profiling apply in interpreting the results from OProfile: profile realistic situations, profile different scenarios, profile for as long a time as possible, avoid system-specific artifacts, and don't trust the profile data too much. Also bear in mind the comments on the performance counters above - you cannot rely on totally accurate instruction-level profiling. However, for almost all circumstances the data can be useful. Ideally a utility such as Intel's VTune would be available to allow careful instruction-level analysis; go hassle Intel for this, not me ;)
This is an example of how the latency of delivery of profiling interrupts can impact the reliability of the profiling data. This is pretty much a worst-case-scenario example: these problems are fairly rare.
double fun(double a, double b, double c)
{
        double result = 0;
        for (int i = 0 ; i < 10000; ++i) {
                result += a;
                result *= b;
                result /= c;
        }
        return result;
}
Here the last instruction of the loop is very costly, and you would expect the result reflecting that - but (cutting the instructions inside the loop):
$ opannotate -a -t 10 ./a.out

 88 15.38% : 8048337:   fadd   %st(3),%st
 48 8.391% : 8048339:   fmul   %st(2),%st
 68 11.88% : 804833b:   fdiv   %st(1),%st
368 64.33% : 804833d:   inc    %eax
           : 804833e:   cmp    $0x270f,%eax
           : 8048343:   jle    8048337
The problem comes from the x86 hardware: when the counter overflows, the IRQ is asserted, but the hardware has features that can delay the NMI interrupt. x86 hardware is synchronous (i.e. it cannot interrupt in the middle of an instruction); there is also a latency when the IRQ is asserted; and the multiple execution units and the out-of-order model of modern x86 CPUs also cause problems. This is the same function, with source annotation:
$ opannotate -s -t 10 ./a.out

           :double fun(double a, double b, double c)
           :{ /* _Z3funddd total: 572 100.0% */
           :        double result = 0;
368 64.33% :        for (int i = 0 ; i < 10000; ++i) {
 88 15.38% :                result += a;
 48 8.391% :                result *= b;
 68 11.88% :                result /= c;
           :        }
           :        return result;
           :}
The conclusion: don't trust samples attributed to the end of a loop, particularly if the last instruction generated by the compiler is costly. This case can also occur for branches. Always bear in mind that samples can be delayed by a few cycles from their real position. That's a hardware problem and OProfile can do nothing about it.
OProfile uses non-maskable interrupts (NMI) on the P6 generation, Pentium 4, Athlon, Opteron, Phenom, and Turion processors. These interrupts can occur even in sections of the kernel where interrupts are disabled, allowing collection of samples in virtually all executable code. The timer interrupt mode and Itanium 2 collection mechanisms use maskable interrupts; therefore, these profiling mechanisms have "sample shadows", or blind spots: regions where no samples will be collected. Typically, the samples will be attributed to the code immediately after the interrupts are re-enabled.
Your kernel is likely to support halting the processor when a CPU is idle. As
the typical hardware events like CPU_CLK_UNHALTED
do not
count when the CPU is halted, the kernel profile will not reflect the actual
amount of time spent idle. You can change this behaviour by booting with
the idle=poll
option, which uses a different idle routine. This
will appear as poll_idle()
in your kernel profile.
OProfile profiles kernel modules by default. However, there are a couple of problems
you may have when trying to get results. First, you may have booted via an initrd;
this means that the actual path for the module binaries cannot be determined automatically.
To get around this, you can use the -p
option to the profiling tools
to specify where to look for the kernel modules.
In kernel version 2.6, the information on where kernel module binaries are located was removed.
This means OProfile needs guiding with the -p
option to find your
modules. Normally, you can just use your standard module top-level directory for this.
Note that due to this problem, OProfile cannot check that the modification times match;
it is your responsibility to make sure you do not modify a binary after a profile
has been created.
If you have run insmod or modprobe to insert a module from a particular directory, it is important that you specify this directory with the -p option first, so that it overrides an older module binary that might exist in other directories you've specified with -p. It is up to you to make sure that these values are correct: the kernel simply does not provide enough information for OProfile to work this out automatically.
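For example (a sketch; /home/user/modules and mymod.ko are hypothetical), a freshly built module in your own directory can be made to take precedence over the copy under /lib/modules:
$ opreport --symbols -p /home/user/modules,/lib/modules/`uname -r`/kernel mymod.ko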
Sometimes the results from call-graph profiles may be different to what you expect to see. The first thing to check is whether the target binaries were compiled with frame pointers enabled (if the binary was compiled using gcc's -fomit-frame-pointer option, you will not get meaningful results). Note that as of this writing, the GCC developers plan to disable frame pointers by default. The Linux kernel is built without frame pointers by default; there is a configuration option you can use to turn them on, under the "Kernel Hacking" menu.
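If in doubt, rebuild the target with frame pointers explicitly enabled (a sketch; the file names and the -O2 level are arbitrary):
$ gcc -g -O2 -fno-omit-frame-pointer -o myapp myapp.c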
Often you may see a caller of a function that does not actually directly
call the function you're looking at (e.g. if a()
calls b()
, which in turn calls
c()
, you may see an entry for
a()->c()
). What's actually occurring is that we
are taking samples at the very start (or the very end) of
c()
; at these few instructions, we haven't yet
created the new function's frame, so it appears as if
a()
is calling directly into
c()
. Be careful not to be misled by these
entries.
Like the rest of OProfile, call-graph profiling uses a statistical approach; this means that sometimes a backtrace sample is truncated, or even partially wrong. Bear this in mind when examining results.
The compiler can introduce some pitfalls in the annotated source output. The optimizer can move pieces of code in such a manner that two lines of code are interleaved (instruction scheduling). Also, the debug info generated by the compiler can show strange behaviour. This is especially true for complex expressions, e.g. inside an if statement:
if (a && .. b && .. c &&) |
Here the problem comes from the position of the line numbers. The available debug info does not give enough detail for the if condition, so all samples are accumulated at the position of the right brace of the expression. Using opannotate -a can help to show the real samples at the assembly level.
The compiler generally needs to generate "glue" code across function calls, dependent on the particular function call conventions used. Additionally other things need to happen, like stack pointer adjustment for the local variables; this code is known as the function prologue. Similar code is needed at function return, and is known as the function epilogue. This will show up in annotations as samples at the very start and end of a function, where there is no apparent executable code in the source.
You may see that a function is credited with a certain number of samples, but the listing does not add up to the correct total. To pick a real example :
:internal_sk_buff_alloc_security(struct sk_buff *skb) 353 2.342% :{ /* internal_sk_buff_alloc_security total: 1882 12.48% */ : : sk_buff_security_t *sksec; 15 0.0995% : int rc = 0; : 10 0.06633% : sksec = skb->lsm_security; 468 3.104% : if (sksec && sksec->magic == DSI_MAGIC) { : goto out; : } : : sksec = (sk_buff_security_t *) get_sk_buff_memory(skb); 3 0.0199% : if (!sksec) { 38 0.2521% : rc = -ENOMEM; : goto out; 10 0.06633% : } : memset(sksec, 0, sizeof (sk_buff_security_t)); 44 0.2919% : sksec->magic = DSI_MAGIC; 32 0.2123% : sksec->skb = skb; 45 0.2985% : sksec->sid = DSI_SID_NORMAL; 31 0.2056% : skb->lsm_security = sksec; : : out: : 146 0.9685% : return rc; : 98 0.6501% :} |
Here, the function is credited with 1,882 samples, but the annotations
below do not account for this. This is usually because of inline functions -
the compiler marks such code with debug entries for the inline function
definition, and this is where opannotate annotates
such samples. In the case above, memset
is the most
likely candidate for this problem. Examining the mixed source/assembly
output can help identify such results.
This problem is more visible when there is no source file available. In the following example, it is plainly visible that the sum of the symbols' samples is less than the number of samples for the file as a whole; the difference must be attributed to inline functions.
/*
 * Total samples for file : "arch/i386/kernel/process.c"
 *
 *    109  2.4616
 */

/* default_idle total:      84  1.8970 */
/* cpu_idle total:          21  0.4743 */
/* flush_thread total:       1  0.0226 */
/* prepare_to_copy total:    1  0.0226 */
/* __switch_to total:       18  0.4065 */
The missing samples are not lost; they are credited to the source location where the inlined function is defined. Samples for an inlined function are merged from all of its call sites into that one place in the annotated source file, so there is no way to tell which call site a given sample for an inlined function came from.
When running opannotate, you may get a warning "some functions compiled without debug information may have incorrect source line attributions". In some rare cases, OProfile is not able to verify that the derived source line is correct (when some parts of the binary image are compiled without debugging information). Be wary of results if this warning appears.
Furthermore, for some languages the compiler can implicitly generate functions, such as default copy constructors. Such functions are labelled by the compiler as having a line number of 0, which means the source annotation can be confusing.
Depending on your compiler, you can run into the following problem:
struct big_object { int a[500]; };

int main()
{
        big_object a, b;
        for (int i = 0 ; i != 1000 * 1000; ++i)
                b = a;
        return 0;
}
Compiled with gcc 3.0.4 the annotated source is clearly inaccurate:
          :int main()
          :{ /* main total: 7871 100% */
          :        big_object a, b;
          :        for (int i = 0 ; i != 1000 * 1000; ++i)
          :                b = a;
7871 100% :        return 0;
          :}
The problem here is distinct from the IRQ latency problem: the debug line-number information is simply not precise enough; again, looking at the output of opannotate -as can help.
:int main() :{ : big_object a, b; : for (int i = 0 ; i != 1000 * 1000; ++i) : 80484c0: push %ebp : 80484c1: mov %esp,%ebp : 80484c3: sub $0xfac,%esp : 80484c9: push %edi : 80484ca: push %esi : 80484cb: push %ebx : b = a; : 80484cc: lea 0xfffff060(%ebp),%edx : 80484d2: lea 0xfffff830(%ebp),%eax : 80484d8: mov $0xf423f,%ebx : 80484dd: lea 0x0(%esi),%esi : return 0; 3 0.03811% : 80484e0: mov %edx,%edi : 80484e2: mov %eax,%esi 1 0.0127% : 80484e4: cld 8 0.1016% : 80484e5: mov $0x1f4,%ecx 7850 99.73% : 80484ea: repz movsl %ds:(%esi),%es:(%edi) 9 0.1143% : 80484ec: dec %ebx : 80484ed: jns 80484e0 : 80484ef: xor %eax,%eax : 80484f1: pop %ebx : 80484f2: pop %esi : 80484f3: pop %edi : 80484f4: leave : 80484f5: ret |
So here it's clear that the copy is correctly credited with all of the samples, but the line number information is misplaced. objdump -dS exposes the same problem. Note that maintaining accurate debug information for compilers when optimizing is difficult, so this problem is not surprising. The accuracy of the debug information also depends on the binutils version used; some BFD library versions contain a work-around for known problems of gcc, some others do not. This is unfortunate but we must live with that, since profiling is pointless when you disable optimisation (which would give better debugging entries).
Often the assembler cannot generate debug information automatically. This means that you cannot get a source report unless you manually define the necessary debug information; read your assembler documentation for how you might do that. The only debugging information currently needed by OProfile is the line-number/filename-to-VMA association. When profiling assembly without debugging information you can still get a report for symbols, and optionally for VMAs, through opreport -l or opreport -d, but this works only for symbols with the right attributes. For gas you can get this by
.globl foo
        .type foo,@function
whilst for nasm you must use
GLOBAL foo:function ; [1] |
Note that OProfile does not need the global attribute, only the function attribute.
Some virtual machines (e.g., Java) may re-JIT a method, resulting in previously allocated space for a piece of compiled code to be reused. This means that, at one distinct code address, multiple symbols/methods may be present during the run time of the application.
Since OProfile samples are buffered and don't have timing information, there is no way to correlate samples with the (possibly) varying address ranges in which the code for a symbol may reside. An alternative would be flushing the OProfile sampling buffer when we get an unload event, but this could result in high overhead.
To moderate the problem of overlapping symbols, OProfile tries to select the symbol that was present at this address range most of the time. Additionally, other overlapping symbols are truncated in the overlapping area. This gives reasonable results, because in reality, address reuse typically takes place during phase changes of the application -- in particular, during application startup. Thus, for optimum profiling results, start the sampling session after application startup and burn in.
When profiling an application that forks one or more new processes, operf will
record samples for both the parent process and forked processes. This is also true even if the
forked process performs an exec of some sort (e.g., execvp
). If the
process does not perform an exec, you will see that opreport
will attribute samples for the forked process to the main application executable. On the other
hand, if the forked process does perform an exec, then opreport
will attribute samples to the executable being exec'ed.
To demonstrate this, consider the following examples.
When using operf to profile a single application (either with the --pid option or by specifying a command to run), the normal opreport summary output (i.e., invoking opreport with no options) looks something like the following:
CPU_CLK_UNHALT...| samples| %| ------------------ 112342 100.000 sprintft CPU_CLK_UNHALT...| samples| %| ------------------ 104209 92.7605 libc-2.12.so 7273 6.4740 sprintft 858 0.7637 no-vmlinux 2 0.0018 ld-2.12.so |
But if you profile an application that does a fork/exec, the opreport summary output
will show samples for both the main application you profiled, as well as the exec'ed program.
An example is shown below where s-m-fork
is the main application being profiled, which
in turn forks a process that does an execvp
of the memcpyt
program.
CPU_CLK_UNHALT...| samples| %| ------------------ 133382 70.5031 memcpyt CPU_CLK_UNHALT...| samples| %| ------------------ 123852 92.8551 libc-2.12.so 8522 6.3892 memcpyt 1007 0.7550 no-vmlinux 1 7.5e-04 ld-2.12.so 55804 29.4969 s-m-fork CPU_CLK_UNHALT...| samples| %| ------------------ 51801 92.8267 libc-2.12.so 3589 6.4314 s-m-fork 414 0.7419 no-vmlinux |
Another cause of apparent problems is the hidden cost of instructions. A very common example is two memory reads: one from L1 cache and the other from memory: the second memory read is likely to have more samples. There are many other causes of hidden cost of instructions. A non-exhaustive list: mis-predicted branch, TLB cache miss, partial register stall, partial register dependencies, memory mismatch stall, re-executed µops. If you want to write programs at the assembly level, be sure to take a look at the Intel and AMD documentation at http://developer.intel.com/ and http://developer.amd.com/devguides.jsp.
Table of Contents
This section describes in detail how ocount is used.
Unless the --events
option is specified, ocount will use
the default event for your system. For most systems, the default event is some
cycles-based event, assuming your processor type supports hardware performance
counters. The event specification used for ocount is slightly
different from that required for profiling -- a count value
is not needed. You can see the event information for your CPU using ophelp.
More information on event specification can be found at Section 4, “Specifying performance counter events”.
The ocount command syntax is:
ocount [ options ] [ --system-wide | --process-list <pids> | --thread-list <tids> | --cpu-list <cpus> [ command [ args ] ] ] |
ocount has five run modes: system-wide, process-list, thread-list, cpu-list, and command [args]. One and only one of these run modes must be specified when you run ocount.
If you run ocount using a run mode other than command [args], press Ctrl-c to stop it when finished counting (e.g., when the monitored process ends). If you background ocount (i.e., with '&') while using one of these run modes, you must stop it in a controlled manner so that the data collection process can be shut down cleanly and final results can be displayed. Use kill -SIGINT <ocount-PID> for this purpose.
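For example (a sketch; the ls command and directory are arbitrary), counting in command mode stops by itself when the command exits, while a backgrounded system-wide count is stopped with SIGINT:
$ ocount ls -R /usr/include       # command mode: counting stops when ls exits
# ocount --system-wide &          # system-wide mode (requires root)
# kill -SIGINT <ocount-PID>       # stop the backgrounded count cleanly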
Following is a description of the ocount options.
command [args]
The command or application to be profiled. The [args] are the input arguments that the command or application requires. The command and its arguments must be positioned at the end of the command line, after all other ocount options.
--process-list / -p [PIDs]
Use this option to count events for one or more already-running applications, specified via a comma-separated list (PIDs). Event counts will be collected for all children of the passed process(es) as well.
--thread-list / -r [TIDs]
Use this option to count events for one or more already-running threads, specified via a comma-separated list (TIDs). Event counts will not be collected for any children of the passed thread(s).
--system-wide / -s
This option is for counting events for all processes running on your system. You must have root authority to run ocount in this mode.
--cpu-list / -C [CPUs]
This option is for counting events on a subset of processors on your system. You must have root authority to run ocount in this mode. This is a comma-separated list, where each element in the list may be either a single processor number or a range of processor numbers; for example: ’-C 2,3,4-11,15’.
--events / -e [event1[,event2[,...]]]
This option is for passing a comma-separated list of event specifications for counting. Each event spec is of the form:
name[:unitmask[:kernel[:user]]] |
When no event specification is given, the default event for the running processor type will be used for counting. Use ophelp to list the available events for your processor type.
--separate-thread / -t
This option can be used in conjunction with either the --process-list
or
--thread-list
option to display event counts on a per-thread (per-process) basis.
Without this option, all counts are aggregated.
--separate-cpu / -c
This option can be used in conjunction with either the --system-wide
or
--cpu-list
option to display event counts on a per-cpu basis. Without this option,
all counts are aggregated.
--time-interval / -i num_seconds[:num_intervals]
Results collected for each time interval are printed every num_seconds instead of the
default of one dump of cumulative event counts at the end of the run. If num_intervals
is specified, ocount exits after the specified number of intervals occur.
--brief-format / -b
Use this option to print results in the following brief format:
[optional cpu or thread,]<event_name>,<count>,<percent_time_enabled> [ <int> ,]< string >,< u64 >,< double > |
If --time-interval is specified, a separate line formatted as
timestamp,<num_seconds_since_epoch> |
is printed ahead of each dump of event counts.
--output-file / -f outfile_name
Results are written to outfile_name instead of interactively to the terminal.
--verbose / -V
Use this option to increase the verbosity of the output.
--version / -v
Show ocount version.
--help / -h
Show a help message.
Thanks to (in no particular order) : Arjan van de Ven, Rik van Riel, Juan Quintela, Philippe Elie, Phillipp Rumpf, Tigran Aivazian, Alex Brown, Alisdair Rawsthorne, Bob Montgomery, Ray Bryant, H.J. Lu, Jeff Esper, Will Cohen, Graydon Hoare, Cliff Woolley, Alex Tsariounov, Al Stone, Jason Yeh, Randolph Chung, Anton Blanchard, Richard Henderson, Andries Brouwer, Bryan Rittmeyer, Maynard P. Johnson, Richard Reich (rreich@rdrtech.com), Zwane Mwaikambo, Dave Jones, Charles Filtness; and finally Pulp, for "Intro".