Canonical Voices

Posts tagged with 'tracing'

Colin Ian King

The BPF Compiler Collection (BCC) is a toolkit for building kernel tracing tools that leverage the functionality provided by the Linux extended Berkeley Packet Filter (eBPF).

BCC allows one to write BPF programs with front-ends in Python or Lua, with the kernel instrumentation written in C. The instrumentation code is compiled into sandboxed eBPF byte code and executed in the kernel.
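
To give a flavour of what such a front-end looks like, here is a minimal Python sketch along the lines of the BCC "hello world" examples (purely illustrative and not part of the snap; the hello function, the message and the choice of the clone() syscall probe are just placeholders). Run it as root:

from bcc import BPF

# The instrumentation below is plain C; BCC compiles it to sandboxed eBPF
# byte code and attaches it to a kprobe on the clone() syscall, and the
# Python front-end then streams the resulting trace output.
prog = """
int hello(void *ctx) {
    bpf_trace_printk("clone() called\\n");
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()  # prints a line each time a process calls clone()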

The BCC GitHub project README file provides an excellent overview and description of BCC and the various available BCC tools. Building BCC from scratch can be a bit time-consuming; the good news is that the BCC tools are now available as a snap, so BCC can be quickly and easily installed just using:

 sudo snap install --devmode bcc  

There are currently over 50 BCC tools in the snap, so let's have a quick look at a few:

cachetop allows one to view the top page cache hit/miss statistics. To run this use:

 sudo bcc.cachetop  



The funccount tool allows one to count the number of times specific functions get called.  For example, to count how often kernel functions with names starting with "do_" get called each second, one can use:

 sudo bcc.funccount "do_*" -i 1  


To see how to use all the options in this tool, use the -h option:

 sudo bcc.funccount -h  

I've found the funccount tool to be especially useful to check on kernel activity by checking on hits on specific function names.

The slabratetop tool is useful to see the active kernel SLAB/SLUB memory allocation rates:

 sudo bcc.slabratetop  


If you want to see which process is opening specific files, one can snoop on open system calls using the opensnoop tool:

 sudo bcc.opensnoop -T


Hopefully this will give you a taste of the useful tools that are available in BCC (I have barely scratched the surface in this article).  I recommend installing the snap and giving it a try.

As it stands, BCC provides a useful mechanism to develop BPF tracing tools, and I look forward to regularly updating the BCC snap as more tools are added to BCC. Kudos to Brendan Gregg for BCC!

Colin Ian King

Kernel tracing using LTTng

LTTng (Linux Trace Toolkit - next generation) is a highly efficient system tracer that allows tracing of the kernel and userspace. It also provides tools to view and analyse the gathered trace data.  So let's see how to install and use LTTng kernel tracing in Ubuntu. First, one has to install the LTTng userspace tools:

 sudo apt-get update  
 sudo apt-get install lttng-tools babeltrace  
LTTng has recently been added to the Ubuntu 13.10 Saucy kernel; however, with earlier releases one needs to install the LTTng kernel driver using lttng-modules-dkms as follows:

 sudo apt-get install lttng-modules-dkms  
It is a good idea to sanity-check that the tools and driver are installed correctly, so first list the available kernel events on your machine:

 sudo lttng list -k  
And you should get a list similar to the following:
 Kernel events:  
-------------
mm_vmscan_kswapd_sleep (loglevel: TRACE_EMERG (0)) (type: tracepoint)
mm_vmscan_kswapd_wake (loglevel: TRACE_EMERG (0)) (type: tracepoint)
mm_vmscan_wakeup_kswapd (loglevel: TRACE_EMERG (0)) (type: tracepoint)
mm_vmscan_direct_reclaim_begin (loglevel: TRACE_EMERG (0)) (type: tracepoint)
mm_vmscan_memcg_reclaim_begin (loglevel: TRACE_EMERG (0)) (type: tracepoint)
..
Next, we need to create a tracing session:
 sudo lttng create examplesession  
..and enable events to be traced using:
 sudo lttng enable-event sched_process_exec -k  
One can also specify multiple events as a comma-separated list. Next, start the tracing using:
 sudo lttng start  
and to stop and complete the tracing use:
 sudo lttng stop  
sudo lttng destroy
and the trace data will be saved in the directory ~/lttng-traces/examplesession-[date]-[time]/.  One can examine the trace data using the babeltrace tool, for example:
 sudo babeltrace ~/lttng-traces/examplesession-20130517-125533  
And you should get a list similar to the following:
 [12:56:04.490960303] (+?.?????????) x220i sched_process_exec: { cpu_id = 2 }, { filename = "/usr/bin/firefox", tid = 4892, old_tid = 4892 }  
[12:56:04.493116594] (+0.002156291) x220i sched_process_exec: { cpu_id = 0 }, { filename = "/usr/bin/which", tid = 4895, old_tid = 4895 }
[12:56:04.496291224] (+0.003174630) x220i sched_process_exec: { cpu_id = 2 }, { filename = "/usr/lib/firefox/firefox", tid = 4892, old_tid = 4892 }
[12:56:05.472770438] (+0.976479214) x220i sched_process_exec: { cpu_id = 2 }, { filename = "/usr/lib/libunity-webapps/unity-webapps-service", tid = 4910, old_tid = 4910 }
[12:56:05.478117340] (+0.005346902) x220i sched_process_exec: { cpu_id = 2 }, { filename = "/usr/bin/ubuntu-webapps-update-index", tid = 4912, old_tid = 4912 }
[12:56:10.834043409] (+5.355926069) x220i sched_process_exec: { cpu_id = 3 }, { filename = "/usr/bin/top", tid = 4937, old_tid = 4937 }
[12:56:13.668306764] (+2.834263355) x220i sched_process_exec: { cpu_id = 3 }, { filename = "/bin/ps", tid = 4938, old_tid = 4938 }
[12:56:16.047191671] (+2.378884907) x220i sched_process_exec: { cpu_id = 3 }, { filename = "/usr/bin/sudo", tid = 4939, old_tid = 4939 }
[12:56:16.059363974] (+0.012172303) x220i sched_process_exec: { cpu_id = 3 }, { filename = "/usr/bin/lttng", tid = 4940, old_tid = 4940 }
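
The plain text output is also easy to post-process. Purely as an illustrative sketch (assuming Python 3 and the example session path above; run it as root if needed so that babeltrace can read the trace), one could count how often each executable was exec'd during the session:

import collections
import os
import re
import subprocess

# Run babeltrace over the example session and tally the
# sched_process_exec events per executable filename.
trace = os.path.expanduser("~/lttng-traces/examplesession-20130517-125533")
output = subprocess.check_output(["babeltrace", trace], text=True)

counts = collections.Counter()
for line in output.splitlines():
    m = re.search(r'sched_process_exec: .*filename = "([^"]+)"', line)
    if m:
        counts[m.group(1)] += 1

for filename, n in counts.most_common():
    print(f"{n:6d}  {filename}")
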
The LTTng wiki contains many useful worked examples and is well worth exploring.

As it stands, LTTng is relatively lightweight.  Research by Romik Guha Anjoy and Soumya Kanti Chakraborty shows that the CPU overhead of LTTng is ~1.6% on an Intel® Core™ 2 Quad with four 64-bit Q9550 cores.  Measurements I've made with oprofile on a Nexus 4 with a 1.5 GHz quad-core Snapdragon S4 Pro processor show a CPU overhead of < 1% for kernel tracing.  In flight recorder mode, one can generate a lot of trace data. For example, with all tracing enabled while running multiple stress tests I was able to generate ~850K of trace data per second, so this will obviously impact disk I/O.

pitti

Part of our efforts to reduce power consumption is to identify processes which keep waking up the disk even when the computer is idle. This has already resulted in a few bug reports (and some fixes, too), but we have only just begun with this.

Unfortunately there is no really good tool to trace file access events system-wide. powertop claims to, but its output is both very incomplete and also wrong (e. g. it claims that read accesses are writes). strace gives you everything you do and don’t want to know about what’s going on, but is per-process, and attaching strace to all running and new processes is cumbersome. blktrace is system-wide, but operates at way too low a level for this task: its output has nothing to do with files or even inodes any more, just raw block numbers which are impossible to convert back to an inode and file path.

So I created a little tool called fatrace (“file access trace”, not “fat race” :-) ) which uses fanotify, a couple of /proc lookups and some glue to provide this. By default it monitors the whole system, i. e. all mounts (except the virtual ones like /proc, tmpfs, etc.), but you can also tell it to just consider the mount of the current directory. You can write the log into a file (stdout by default), and run it for a specified number of seconds. Optional time stamps and PID filters are also provided.

$ sudo fatrace
rsyslogd(967): W /var/log/auth.log
notify-osd(2264): O /usr/share/pixmaps/weechat.xpm
compiz(2001): R device 8:2 inode 658203
[...]

It shows the process name and pid, the event type (Read, Write, Open, or Close), and the path. Sometimes it’s not possible to determine a path (usually because it’s a temporary file which already got deleted, and I suspect mmaps as well); in that case it shows the device and inode number, and such programs then need closer inspection with strace.

If you run this in gnome-terminal, there is an annoying feedback loop, as gnome-terminal causes a disk access with each output line, which then causes another output line, ad infinitum. To fix this, you can either redirect output to a file (-o /tmp/trace) or ignore the PID of gnome-terminal (-p `pidof gnome-terminal`).

So to investigate which programs are keeping your disk spinning, run something like

  $ sudo fatrace -o /tmp/trace -s 60

and then do nothing until it finishes.

My next task will be to write an integration program which calls fatrace and powertop, and creates a nice little report out of that raw data, sorted by number of accesses and process name, and all that. But it might already help some folks as it is right now.
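
Purely as a rough sketch of that idea (and certainly not the planned integration tool), something like the following Python 3 snippet could already summarise a short fatrace run by the number of accesses per process; it needs to be run as root so that fatrace can use fanotify:

import collections
import re
import subprocess

# Run fatrace for 60 seconds and count the file accesses per process,
# most active processes first.
output = subprocess.check_output(["fatrace", "-s", "60"], text=True)

counts = collections.Counter()
for line in output.splitlines():
    m = re.match(r"(.+)\((\d+)\): (\S+) (.+)", line)
    if m:
        counts[m.group(1)] += 1

for name, n in counts.most_common():
    print(f"{n:6d}  {name}")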

The code lives in the bzr branch lp:fatrace (web view); you can just run make and sudo ./fatrace. I also uploaded a package to Ubuntu Precise, but it still needs to go through the NEW queue. I also made a 0.1 release, so you can just grab the release tarball if you prefer. Have a look at the manpage and --help; it should be pretty self-explanatory.
