Canonical Voices

Posts tagged with 'tools'

Colin Ian King

An on-going background project of mine is to add various interesting system stress tests to stress-ng.  Over the past several months I've been looking at ways to exercise various lesser-used or obscure system calls, just to add more kernel coverage to the tool.

  • rlimit - generate tens of thousands of SIGXFSZ and many SIGXCPU signals
  • itimer - exercise ITIMER_PROF and generate SIGPROF signals
  • mlock - lock and unlock pages with mlock()/munlock()
  • timerfd - exercise rapid CLOCK_REALTIME events by select() and read() on a timerfd.
  • memfd - exercise anonymous populated page memory mapping and unmapping using memfd (see the sketch after this list).
  • more aggressive affinity stressor changes to force more CPU IPIs
  • hdd - add readv/writev I/O option
  • tee - tee data between a writer and reader process using tee()
  • crypt - encrypt data with MD5, SHA-256 and SHA-512 using libcrypt
  • mmapmany - perform tens of thousands of memory maps/unmaps to exhaust the per-process mapping limit.
  • zombie - fill up process table with tens of thousands of zombie processes
  • str - heavily exercise a range of glibc string functions
  • xattr - exercise file extended attributes
  • readahead - random reads with readaheads
  • vm - add a rowhammer memory stressor
..as well as extra per-stressor configuration settings and a lot of code clean up and bug fixing.
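
For anyone curious what the memfd stressor boils down to, here is a minimal sketch of the memfd_create(2) map/unmap pattern. This is not stress-ng's exact code; it assumes a kernel and headers recent enough to provide the memfd_create system call and invokes it via syscall() so it builds with older glibc versions:

/* Minimal memfd map/unmap sketch; not stress-ng's exact code.
 * Requires Linux 3.17+ for memfd_create. */
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    size_t size = 4 * 1024 * 1024;

    /* memfd_create via syscall() so older glibc versions still build */
    int fd = syscall(SYS_memfd_create, "stress-memfd", 0);
    if (fd < 0 || ftruncate(fd, size) < 0)
        return 1;

    /* Map, touch every page to populate it, then unmap */
    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED)
        return 1;
    memset(ptr, 0xaa, size);
    munmap(ptr, size);
    close(fd);
    return 0;
}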

I've recently been using stress-ng to exercise various kernels on a range of hardware and it has been useful in forcing bugs, especially with the memory specific stressors that seem to trip low memory corner cases.

stress-ng 0.04.01 will soon be available in Ubuntu 15.10 Wily Werewolf.  Visit the stress-ng project page for more details.

Read more
Colin Ian King

powerstat improvements with RAPL

The Linux Running Average Power Limit (RAPL) interface was introduced about 2 years ago in the Linux kernel and allows userspace to read the power consumption of various x86 System-on-a-Chip (SoC) power domains.  The power domains cover the SoC package, CPU core, DRAM controller and graphics power plane.

It appears that the Intel energy status MSRs can be read very rapidly and the resolution is exceptionally good; however, reading the MSRs too frequently via the RAPL interface will itself consume a small amount of power.
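
To give a feel for how simple a userspace reading can be, the following sketch samples a RAPL energy counter through the powercap sysfs interface and derives an average power figure. The sysfs path is an assumption for a typical Intel system, and powerstat itself may gather the data differently:

/* Sample a RAPL energy counter twice and derive average power.
 * The sysfs path below is typical for the intel_rapl powercap driver
 * but is an assumption; adjust for your system. */
#include <stdio.h>
#include <unistd.h>

static long long read_energy_uj(void)
{
    long long uj = -1;
    FILE *fp = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");

    if (fp) {
        if (fscanf(fp, "%lld", &uj) != 1)
            uj = -1;
        fclose(fp);
    }
    return uj;
}

int main(void)
{
    long long e1 = read_energy_uj();
    sleep(1);
    long long e2 = read_energy_uj();

    if (e1 < 0 || e2 < 0)
        return 1;
    /* energy_uj is in microjoules; a 1 second delta gives microwatts.
     * The counter can wrap; wrap handling is omitted for brevity. */
    printf("package power: %.3f W\n", (double)(e2 - e1) / 1e6);
    return 0;
}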

I've improved powerstat to now use the RAPL interface with a new -R option (to measure just the total package power consumption).  A new -D option will show all the RAPL domain measurements available.  RAPL measurements are very responsive and one can easily correlate power spikes with bursts of system activity.

Finally, I have added a basic histogram output with the new -H option. This will plot histograms of the power measurements and CPU load from the stats gathered during the powerstat run.

Powerstat 0.01.37 is available in Ubuntu 15.10 Wily Werewolf and the source is available from the git repository.

Read more
Colin Ian King

Over the past year or more I was focused on identifying power consuming processes on various mobile devices.  One of many strategies for reducing power is to remove unnecessary file system activity, such as extraneous logging, repeated file writes, unnecessary file re-reads and excessive metadata updates.

Fnotifystat is a utility I wrote to help identify such file system activity. My desire was to make the tool as small as possible for small embedded devices and reasonably flexible, without needing perf, just in case the target device did not have perf support built into the kernel by default.

By default, fnotifystat will dump out any file system open/close/read/write operations across all mounted file systems every second; however, one can specify the delay in seconds and the number of times to dump out statistics.  fnotifystat uses the fanotify(7) interface to gather file activity across the system, hence it needs to be run with the CAP_SYS_ADMIN capability.
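
The core of a fanotify based watcher is surprisingly small; the following is a minimal sketch (not fnotifystat's actual implementation) that reports the PID and event mask for file activity on the mount containing "/":

/* Minimal fanotify sketch: log PID and event mask for activity on the
 * "/" mount. Needs CAP_SYS_ADMIN; not fnotifystat's actual code. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/fanotify.h>
#include <unistd.h>

int main(void)
{
    int fan_fd = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY | O_LARGEFILE);
    if (fan_fd < 0)
        return 1;

    /* Watch open/access/modify/close events across the whole mount */
    if (fanotify_mark(fan_fd, FAN_MARK_ADD | FAN_MARK_MOUNT,
                      FAN_OPEN | FAN_ACCESS | FAN_MODIFY |
                      FAN_CLOSE_WRITE | FAN_CLOSE_NOWRITE,
                      AT_FDCWD, "/") < 0)
        return 1;

    char buf[4096];
    for (;;) {
        ssize_t len = read(fan_fd, buf, sizeof(buf));
        if (len <= 0)
            break;

        struct fanotify_event_metadata *md =
            (struct fanotify_event_metadata *)buf;

        while (FAN_EVENT_OK(md, len)) {
            printf("pid %d, mask 0x%llx\n", md->pid,
                   (unsigned long long)md->mask);
            if (md->fd >= 0)
                close(md->fd);    /* must close each event's fd */
            md = FAN_EVENT_NEXT(md, len);
        }
    }
    return 0;
}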

An open(2), read(2)/write(2) and close(2) sequence by a process can produce multiple events, so fnotifystat has a -m option to merge events and hence reduce the amount of output.  A verbose -v option will output all file events if one desires to see the full system activity.

If one desires to just monitor a specific collection of processes, one can specify a list of the process ID(s) or process names using the -p option, for example:

sudo fnotifystat -p firefox,thunderbird

fnotifystat catches events on all mounted file systems, but one can restrict that by specifying just the path(s) one is interested in using the -i (include) option, for example:

sudo fnotifystat -i /proc

..and one can exclude paths using the -x option.

More information and examples can be found on the fnotifystat project page, and the manual contains further details and some examples too.

Fnotifystat 0.01.10 is available in Ubuntu Vivid Vervet 15.04 and can also be installed for older releases from my power management tools PPA.

Read more
Colin Ian King

Finding kernel bugs with cppcheck

For the past year I have been running the cppcheck static analyzer against the Linux kernel sources to see if it can detect any bugs introduced by new commits. Most of the bugs being found are minor thinkos, null pointer de-references, uninitialized variables, memory leaks and mistakes in error handling paths.

A useful feature of cppcheck is the --force option that will check against all the configurations in the source (and the kernel does have many!).  This allows us to check for code that may not be exercised much (because it is normally not built in with most config options) or even find dead code.

The downside of using the --force option is that each source file may need to be checked multiple times, once for each configuration.  For ~20800 source files this can take a 24-processor server several hours to process.  Errors and warnings are then compared to previous runs (a delta), making it relatively easy to spot new issues on each run.

We also use the latest sources from the cppcheck git repository.  The upside of this is that new static analysis features are used early and this can result in finding existing bugs that previous versions of cppcheck missed.

A typical cppcheck run against the Linux kernel source finds about 600 potential errors and 1700 warnings; however, a lot of these are false positives.  These need to be individually eyeballed to sort the wheat from the chaff.

Finally, the data is passed through a gnuplot script to generate a trend graph so I can see how errors (red) and warnings (green) are progressing over time:


..note that the large changes in the graph are mostly with features being enabled (or fixed) in cppcheck.

I have been running the same experiment with smatch too; however, I am finding that cppcheck seems to have better code coverage because of the --force option and seems to produce fewer false positives.   As it stands, I am finding that the most productive time for finding issues is around the -rc1 and -rc2 merge windows (obviously when most of the major changes land in the kernel).  The outcome of this work has been a bunch of small fixes landing in the kernel to address bugs that cppcheck has found.

Anyhow, cppcheck is an excellent open source static analyzer for C and C++ that I'd heartily recommend as it does seem to catch useful bugs.

Read more
Colin Ian King

During idle moments in the final few weeks of 2014 I have been adding some more stressors and features to stress-ng as well as tidying up the code and fixing some small bugs that have crept in during the last development spin.   Stress-ng aims to stress a machine with various simple torture tests to trip overheating and kernel race conditions.

The mmap stressor now has the '--mmap-file' option to use synchronous file backed memory mapping instead of the default anonymous mapping, and the '--mmap-async' option enables asynchronous file mapping if desired.
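
In essence, a synchronous file backed mapping dirties pages that must be written back to the backing file; a rough sketch of that pattern (the temporary file name is hypothetical and this is not the stressor's exact code) is:

/* Rough sketch of a synchronous file backed mapping; the temporary
 * file name is hypothetical and this is not stress-ng's exact code. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t size = 1024 * 1024;
    int fd = open("/tmp/mmap-stress.tmp", O_CREAT | O_RDWR, 0600);

    if (fd < 0 || ftruncate(fd, size) < 0)
        return 1;

    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED)
        return 1;

    memset(ptr, 0xff, size);       /* dirty every page                 */
    msync(ptr, size, MS_SYNC);     /* force synchronous write back     */
    munmap(ptr, size);
    close(fd);
    unlink("/tmp/mmap-stress.tmp");
    return 0;
}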

For socket stressing, the '--sock-domain unix' option now allows AF_UNIX (aka AF_LOCAL) sockets to be used. This complements the existing AF_INET and AF_INET6 (IPv4 and IPv6) domains available with this stress test.

The CPU stressor now includes mixed integer and floating point stressors, covering 32 and 64 bit integer mixes with the float, double and long double floating point types. The generated object code contains a nice mix of operations which should exercise various functional units in the CPU.  For example, when running on a hyper-threaded CPU one notices a performance hit because these cpu stressor methods heavily contend on the CPU math functional blocks.

File based locking has been extended with the new lockf stressor; this stresses repeated locking and unlocking of portions of a small file, and the default blocking mode can be turned into a CPU consuming rapid polling retry with the '--lockf-nonblock' option.
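
The following is a simplified single-process sketch of the lockf idea, locking and unlocking small regions of a file; it is not the stressor itself, and using F_TLOCK instead of F_LOCK gives the non-blocking behaviour behind '--lockf-nonblock':

/* Simplified lockf illustration: lock and unlock 64 byte regions of a
 * small file. Not stress-ng's actual lockf stressor. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/lockf-test.tmp", O_CREAT | O_RDWR, 0600);

    if (fd < 0 || ftruncate(fd, 65536) < 0)
        return 1;

    for (int i = 0; i < 10000; i++) {
        off_t offset = (i * 64) % 65536;

        /* lockf() acts from the current file position */
        lseek(fd, offset, SEEK_SET);
        if (lockf(fd, F_LOCK, 64) == 0)   /* F_TLOCK would poll instead */
            lockf(fd, F_ULOCK, 64);
    }
    close(fd);
    unlink("/tmp/lockf-test.tmp");
    return 0;
}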

The dup(2) system call is also now stressed with the new dup stressor. This just repeatedly dup's a file opened on /dev/zero until all the free file slots are full, and then closes these. It is very much like the open stressors.

The fcntl(2) F_SETLEASE command is stress tested with the new lease stressor. This has a parent process that rapidly locks and unlocks a file based lease, while one or more child processes try to open this file and cause lease breaking signal notifications to the parent.

For x86 CPUs, the cache stressor includes two new cache specific options. The '--cache-fence' option forces write serialization on each store operation, while the '--cache-flush' option forces flush cache on each store operation. The code has been specifically written to not incur any overhead if these options are not enabled or are not available on non-x86 CPUs.
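
On x86 these options boil down to issuing a fence or a clflush after each store; a sketch of such a store loop using compiler intrinsics (an illustration only, not stress-ng's actual cache stressor) looks like:

/* Sketch of flushing/fencing each store on x86 using GCC/Clang
 * intrinsics; an illustration only, not stress-ng's cache stressor. */
#include <stddef.h>
#include <stdint.h>

#if defined(__x86_64__) || defined(__i386__)
#include <x86intrin.h>
#endif

static uint8_t buffer[64 * 1024];

void cache_stress_pass(int flush, int fence)
{
    for (size_t i = 0; i < sizeof(buffer); i++) {
        buffer[i] = (uint8_t)i;
#if defined(__x86_64__) || defined(__i386__)
        if (flush)
            _mm_clflush(&buffer[i]);   /* evict the line after the store */
        if (fence)
            _mm_sfence();              /* serialize the store            */
#else
        (void)flush;                   /* no-op on non-x86 targets */
        (void)fence;
#endif
    }
}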

This release also includes the stress-ng project mascot: a humble CPU being tortured and stressed by a rather angry flame.

For more information, visit the stress-ng project page, or check out the stress-ng manual.

Read more
Colin Ian King

Controlling data flow using sluice

Earlier this year I was instrumenting wifi power consumption and needed a way to produce and also consume data at a specific rate to pipe over netcat for my measurements.  I had some older crufty code around to do this but it needed some polishing up, so I eventually got around to turning this into a more usable tool called "sluice".  (A sluice gate controls the flow of water; the sluice tool controls the rate of data through a pipe).

The sluice package is currently available in PPA:colin-king/white and is built for Ubuntu Trusty, Utopic and Vivid, but the git repository and tarballs are also available if one wants to build this from source.

The following starts a netcat 'server' to send largefile at a rate of 1MB a second using 1K buffers and reports the transfer stats to stderr with the -v verbose mode enabled:

cat largefile | sluice -r 1MB -i 1K -v | nc -l 127.0.0.1 1234 

Sluice also allows one to adjust the read/write buffer sizes dynamically to try to avoid buffer underflow or overflows while trying to match the specified transfer rate.

Sluice can be used as a data sink on the receiving side and also has an option to throw the data away if one just wants to stream data and test the data link rates, e.g., get data from somehost.com on port 1234 and throw it away at 2MB a second:

nc somehost.com 1234 | sluice -d -r 2MB -i 8K

And finally, sluice has a "tee" mode, where data is copied to stdout and to a specified output file using the -t option.

For more details, refer to the sluice project page.

Read more
Colin Ian King

Recently I was playing around with CPU loading and was trying to estimate the number of compute operations being executed on my machine.  In particular, I was interested to see how many instructions per cycle and stall cycles I was hitting on the more demanding instructions.   Fortunately, perf stat allows one to get detailed processor statistics to measure this.

In my first test, I wanted to see how the Intel rdrand instruction performed with 2 CPUs loaded (each with a hyper-thread):


$ perf stat stress-ng --rdrand 4 -t 60 --times
stress-ng: info: [7762] dispatching hogs: 4 rdrand
stress-ng: info: [7762] successful run completed in 60.00s
stress-ng: info: [7762] for a 60.00s run time:
stress-ng: info: [7762] 240.01s available CPU time
stress-ng: info: [7762] 231.05s user time ( 96.27%)
stress-ng: info: [7762] 0.11s system time ( 0.05%)
stress-ng: info: [7762] 231.16s total time ( 96.31%)

Performance counter stats for 'stress-ng --rdrand 4 -t 60 --times':

231161.945062 task-clock (msec) # 3.852 CPUs utilized
18,450 context-switches # 0.080 K/sec
92 cpu-migrations # 0.000 K/sec
821 page-faults # 0.004 K/sec
667,745,260,420 cycles # 2.889 GHz
646,960,295,083 stalled-cycles-frontend # 96.89% frontend cycles idle
<not supported> stalled-cycles-backend
13,702,533,103 instructions # 0.02 insns per cycle
# 47.21 stalled cycles per insn
6,549,840,185 branches # 28.334 M/sec
2,352,175 branch-misses # 0.04% of all branches

60.006455711 seconds time elapsed

stress-ng's rdrand test just performs a 64 bit rdrand read and loops on this until the data is ready, and performs this 32 times in an unrolled loop.  Perf stat shows that each rdrand + loop sequence on average consumes about 47 stall cycles showing that rdrand is probably just waiting for the PRNG block to produce random data.
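
The retry pattern is essentially the loop below, sketched here with the GCC/Clang intrinsic (build with -mrdrnd) rather than stress-ng's hand unrolled assembler:

/* Sketch of the rdrand retry pattern using the GCC/Clang intrinsic;
 * build with -mrdrnd on an Ivy Bridge or later CPU. Not stress-ng's
 * exact unrolled assembler. */
#include <immintrin.h>
#include <stdint.h>

uint64_t rdrand64(void)
{
    unsigned long long value;

    /* rdrand signals failure (carry flag clear) when the PRNG is not
     * ready, so loop until the intrinsic reports success */
    while (!_rdrand64_step(&value))
        ;
    return (uint64_t)value;
}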

My next experiment was to run the stress-ng ackermann stressor; this performs a lot of recursion, hence one should see a predominantly large amount of branching.
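
For reference, the Ackermann function is a tiny doubly recursive definition; a sketch is shown below (the m, n values stress-ng actually passes in are not shown here):

/* The classic doubly recursive Ackermann function; the m, n values
 * used by stress-ng are not shown here. */
#include <stdint.h>

uint32_t ackermann(uint32_t m, uint32_t n)
{
    if (m == 0)
        return n + 1;
    if (n == 0)
        return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}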

$ perf stat stress-ng --cpu 4 --cpu-method ackermann -t 60 --times
stress-ng: info: [7796] dispatching hogs: 4 cpu
stress-ng: info: [7796] successful run completed in 60.03s
stress-ng: info: [7796] for a 60.03s run time:
stress-ng: info: [7796] 240.12s available CPU time
stress-ng: info: [7796] 226.69s user time ( 94.41%)
stress-ng: info: [7796] 0.26s system time ( 0.11%)
stress-ng: info: [7796] 226.95s total time ( 94.52%)

Performance counter stats for 'stress-ng --cpu 4 --cpu-method ackermann -t 60 --times':

226928.278602 task-clock (msec) # 3.780 CPUs utilized
21,752 context-switches # 0.096 K/sec
127 cpu-migrations # 0.001 K/sec
927 page-faults # 0.004 K/sec
594,117,596,619 cycles # 2.618 GHz
298,809,437,018 stalled-cycles-frontend # 50.29% frontend cycles idle
<not supported> stalled-cycles-backend
845,746,011,976 instructions # 1.42 insns per cycle
# 0.35 stalled cycles per insn
298,414,546,095 branches # 1315.017 M/sec
95,739,331 branch-misses # 0.03% of all branches

60.032115099 seconds time elapsed

..so about 35% of the instructions are branches and we're getting about 1.42 instructions per cycle with not many stall cycles, so the code is most probably executing inside the instruction cache, which isn't surprising because the test is rather small.

My final experiment was to measure the stall cycles when performing complex long double floating point math operations, again with stress-ng.

$ perf stat stress-ng --cpu 4 --cpu-method clongdouble -t 60 --times
stress-ng: info: [7854] dispatching hogs: 4 cpu
stress-ng: info: [7854] successful run completed in 60.00s
stress-ng: info: [7854] for a 60.00s run time:
stress-ng: info: [7854] 240.00s available CPU time
stress-ng: info: [7854] 225.15s user time ( 93.81%)
stress-ng: info: [7854] 0.44s system time ( 0.18%)
stress-ng: info: [7854] 225.59s total time ( 93.99%)

Performance counter stats for 'stress-ng --cpu 4 --cpu-method clongdouble -t 60 --times':

225578.329426 task-clock (msec) # 3.757 CPUs utilized
38,443 context-switches # 0.170 K/sec
96 cpu-migrations # 0.000 K/sec
845 page-faults # 0.004 K/sec
651,620,307,394 cycles # 2.889 GHz
521,346,311,902 stalled-cycles-frontend # 80.01% frontend cycles idle
<not supported> stalled-cycles-backend
17,079,721,567 instructions # 0.03 insns per cycle
# 30.52 stalled cycles per insn
2,903,757,437 branches # 12.873 M/sec
52,844,177 branch-misses # 1.82% of all branches

60.048819970 seconds time elapsed

The complex math operations take some time to complete, stalling on average over 35 cycles per op.  Instead of using 4 concurrent processes, I re-ran this using just the two CPUs and eliminating 2 of the hyperthreads.  This resulted in 25.4 stall cycles per instruction showing that hyperthreaded processes are stalling because of contention on the floating point units.

Perf stat is an incredibly useful tool for examining performance issues at a very low level.   It is simple to use and yet provides excellent stats to allow one to identify issues and fine tune performance critical code.  Well worth using.

Read more
Colin Ian King

even more stress in stress-ng

Over the past few weeks in spare moments I've been adding more stress methods to stress-ng  ready for Ubuntu 15.04 Vivid Vervet.   My intention is to produce a rich set of stress methods that can stress and exercise many facets of a system to force out bugs, catch thermal over-runs and generally torture a kernel in a controlled repeatable manner.

I've also re-structured the tool in several ways to enhance the features and make it easier to maintain.  The cpu stress method has been re-worked to include nearly 40 different ways to stress a processor, covering:

  • Bit manipulation: bitops, crc16, hamming
  • Integer operations: int8, int16, int32, int64, rand
  • Floating point:  long double, double,  float, ln2, hyperbolic, trig
  • Recursion: ackermann, hanoi
  • Computation: correlate, euler, explog, fibonacci, gcd, gray, idct, matrixprod, nsqrt, omega, phi, prime, psi, rgb, sieve, sqrt, zeta
  • Hashing: jenkin, pjw
  • Control flow: jmp, loop
..the intention was to have an eclectic mix of CPU exercising tests that covers a wide range of typical operations found in computationally intensive software.   Use the new --cpu-method option to select the specific CPU stressor, or --cpu-method all to exercise all of them sequentially.

I've also added more generic system stress methods too:
  • bigheap - re-allocs to force OOM killing
  • rename - rename files rapidly
  • utime - update file modification times to create lots of dirty file metadata
  • fstat - rapid fstat'ing of large quantities of files
  • qsort - sorting of large quantities of random data
  • msg - System V message sending/receiving
  • nice - rapid re-nicing processes
  • sigfpe - catch rapid division by zero errors using SIGFPE (see the sketch after this list)
  • rdrand - rapid reading of Intel random number generator using the rdrand instruction (Ivybridge and later CPUs only)
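
A minimal illustration of the sigfpe idea mentioned above (not the stressor's exact code) is to install a SIGFPE handler, divide by zero and jump back out of the handler so the fault can be repeated:

/* Minimal SIGFPE catch-and-repeat sketch; not stress-ng's exact code. */
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf env;

static void sigfpe_handler(int sig)
{
    (void)sig;
    siglongjmp(env, 1);        /* jump back past the faulting division */
}

int main(void)
{
    volatile int zero = 0;
    volatile int i, faults = 0;

    signal(SIGFPE, sigfpe_handler);
    for (i = 0; i < 1000; i++) {
        if (sigsetjmp(env, 1) == 0) {
            volatile int x = 1 / zero;   /* raises SIGFPE */
            (void)x;
        } else {
            faults++;
        }
    }
    printf("caught %d SIGFPE signals\n", faults);
    return 0;
}
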
Other new options:
  • --metrics-brief - this dumps out only the bogo-op metrics that are relevant for just the tests that were run.
  • --verify - this will sanity check the stress results per iteration to ensure memory operations and CPU computations are working as expected. Hopefully this will catch any errors on a hot machine where the hardware is misbehaving.
  • --sequential - this will run all the stress methods one by one (for a default of 60 seconds each) rather than all in parallel.   Use this with the --timeout option to run all the stress methods sequentially, each for a specified amount of time.
  • Specifying 0 instances of any stress method will run an instance of the stress method on all online CPUs. 
The tool also builds and runs on Debian kFreeBSD and GNU Hurd kernels, although some stress methods or stress options are not included due to lack of support on these other kernels.
The stress-ng man page gives far more explanation of each stress method and more detailed examples of how to use the tool.

For more details, visit here or read the manual.

Read more
Colin Ian King

a final few more features in stress-ng

While hoping to get a feature complete stress-ng sooner rather than later, I found a few more ways to fiendishly stress a system.

Stress-ng 0.01.22 will be landing soon in Ubuntu 14.10 with three more stress mechanisms:

  • CPU affinity stressing; this rapidly changes the CPU affinity of the stress processes just to keep the scheduler busy wasting effort.
  • Timer stressing using the real-time clock; this allows one to generate a large amount of timer interrupts, so it is a useful interrupt saturation test.
  • Directory entry thrashing; this creates and deletes a selectable number of zero length files and hence populates and destroys directory entries.
I have also removed the need to use rand() for random number generation for some of the stress tests, re-used the faster MWC "random" number generator, and added in some well known and very simple math operations for CPU stressing.

Stress-ng now has 15 different simple stress mechanisms that exercise CPU, cache, memory, file system, I/O and CPU schedulers.  I could add more tests, but I think this is a large enough set to allow one to thrash a machine and see how well it performs under pressure.

Read more
Colin Ian King

more stress with stress-ng

Since my last article about stress-ng I have been adding a few more stress mechanisms to stress-ng:

  • file locking - exercise file locking with one or more processes (the more processes the better).
  • fallocate - this allocates a 4MB file, sync's, truncates to zero size and syncs repeatedly
  • yield - this loops on sched_yield() to repeatedly relinquish the CPU forcing a high context switch rate when run with multiple yielding processes.
Also, I have added some new features to tweak scheduling, I/O characteristics and memory allocations of the running stress processes:
  • --sched and --sched-prio options to specify the scheduler type and priority
  • --ionice-class and --ionice-level options to tweak I/O niceness
  • --vm-populate option to populate (pre-fault) page tables for a mapping for the --vm stress test.
If I think of other mechanisms to stress the kernel I will add them, but for now, stress-ng is becoming almost feature complete.

Read more
Colin Ian King

Linaro's idlestat is another useful tool in the arsenal of CPU monitoring utilities.  Idlestat monitors and captures CPU C-state and P-state transitions using the kernel Ftrace tracer and outputs statistics based on entering/exiting each state for each CPU.  Idlestat also captures IRQ activity, as well as which IRQs caused a CPU to exit an idle state - knowing why a processor came out of a deep C state is always a very useful way to help diagnose power consumption issues.

Using idlestat is easy; to capture 20 seconds of activity into a log file called example.log, run:

 sudo idlestat --trace -f example.log -t 20    
..and this will display the per CPU C-state and P-state and IRQ statistics for that run.

One can also take the saved log file and re-parse it to recalculate the statistics using:
 idlestat --import -f example.log  

One can get the source from here and I've packaged version 0.3 (plus a bunch of minor fixes that will land in 0.4) for Ubuntu 14.10 Utopic Unicorn.

Read more
Colin Ian King

stress-ng: an updated system stress test tool

Recently added to Ubuntu 14.10 is stress-ng, a simple tool designed to stress various components of a Linux system.   stress-ng is a re-implementation of the original stress tool written by Amos  Waterland and adds various new ways to exercise a computer as well as a very simple "bogo-operation" set of metrics for each stress method.

stress-ng currently contains the following methods to exercise the machine:

  • CPU compute - just lots of sqrt() operations on pseudo-random values. One can also specify the % loading of the CPUs
  • Cache thrashing, a naive cache read/write exerciser
  • Drive stress by writing and removing many temporary files
  • Process creation and termination, just lots of fork() + exit() calls
  • I/O syncs, just forcing lots of sync() calls
  • VM stress via mmap(), memory write and munmap()
  • Pipe I/O, large pipe writes and reads that exercise pipe, copying and context switching
  • Socket stressing, much like the pipe I/O test but using sockets
  • Context switching between a pair of producer and consumer processes
Many of the above stress methods have additional configuration options.  Each stress method can be run by one or more child processes.

The --metrics option dumps the number of operations performed by each stress method, aka "bogo ops", bogos because they are a rough and unscientific metric.  One can specify how long to run a test either by test duration in seconds or by the number of bogo ops.

I've tried to make stress-ng compatible with the older stress tool, but note that it is not guaranteed to produce identical results as the common test methods between the two tools have been implemented differently.

Stress-ng has been useful for helping me measure different power consuming loads.  It has also been useful when trying out various thermald optimisation tweaks on one of my older machines.

For more information, consult the stress-ng manual page.  Be warned, this tool can make your system get seriously busy and warm!

Read more
Colin Ian King

Back in April I wrote smemstat to dump out the per process shared memory usage based on the memory mapping data from /proc/$pid/smaps.   I tried to make smemstat as compact as possible, with minimal changes in heap size, to reduce its impact on the total system shared memory statistics.

smemstat reports the memory utilised per process, taking into consideration pages that are shared with other processes. So, if two processes share 64K of memory, each process will report that shared memory as 32K.
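
This proportional accounting is essentially what the kernel exposes as the Pss fields in /proc/$pid/smaps; the small sketch below totals that figure for a process (an illustration only, not smemstat's code):

/* Total up the proportional set size (Pss) of a process from
 * /proc/$pid/smaps; an illustration only, not smemstat's code. */
#include <stdio.h>

int main(int argc, char **argv)
{
    char path[64], line[256];
    long long total_kb = 0, kb;

    snprintf(path, sizeof(path), "/proc/%s/smaps",
             argc > 1 ? argv[1] : "self");
    FILE *fp = fopen(path, "r");
    if (!fp)
        return 1;

    /* Each mapping has a "Pss:  <n> kB" line; sum them all */
    while (fgets(line, sizeof(line), fp)) {
        if (sscanf(line, "Pss: %lld kB", &kb) == 1)
            total_kb += kb;
    }
    fclose(fp);

    printf("Pss total: %lld kB\n", total_kb);
    return 0;
}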

smemstat has two modes of operation: "dump all current memory stats" and "dump periodic changes in memory stats".  The former is useful for a single snapshot view of memory utilisation, whereas the latter is useful for observing memory size changes over time. Exiting or newly created processes that share memory with other processes will change the reported amount of memory used by the remaining processes, because this memory is then shared amongst a different number of processes.

There are various options to smemstat, so consult the man page for more details.  One useful option is -o that collects the smemstat statistics into a JSON formatted output file. This can be useful for memory based tests as the structured JSON data can be easily parsed and analysed.

smemstat  has landed in Ubuntu Utopic 14.10 and is also available in Debian. Examples of smemstat can be found here. I hope it proves to be a useful utility.

Read more
Colin Ian King

One of my on-going projects is to reduce system activity where possible to shave off wasted power consumption.   One of the more interesting problems is when very short lived processes are spawned off and die, and traditional tools such as ps and top sometimes don't catch that activity.   Over the last weekend I wrote the bulk of the forkstat tool to track down these processes.

Forkstat uses the kernel proc connector interface to detect process activity.  Proc connector allows forkstat to receive notifications of process events such as fork, exec, exit, core dump and changing the process name in the comm field over a socket connection.
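
For the curious, a bare-bones proc connector listener looks something like the sketch below: subscribe to the CN_IDX_PROC multicast group over a netlink socket and then decode fork/exec/exit events as they arrive. This is an illustration of the mechanism only, not forkstat's implementation, and it needs appropriate privileges (root) to run:

/* Bare-bones proc connector listener sketch; not forkstat's code. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/connector.h>
#include <linux/cn_proc.h>

int main(void)
{
    int sock = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
    struct sockaddr_nl addr;

    memset(&addr, 0, sizeof(addr));
    addr.nl_family = AF_NETLINK;
    addr.nl_groups = CN_IDX_PROC;
    addr.nl_pid = getpid();
    if (sock < 0 || bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    /* Ask the kernel to start multicasting process events to us */
    char buf[NLMSG_SPACE(sizeof(struct cn_msg) +
                         sizeof(enum proc_cn_mcast_op))];
    struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
    struct cn_msg *cn = (struct cn_msg *)NLMSG_DATA(nlh);
    enum proc_cn_mcast_op *op = (enum proc_cn_mcast_op *)cn->data;

    memset(buf, 0, sizeof(buf));
    nlh->nlmsg_len = sizeof(buf);
    nlh->nlmsg_type = NLMSG_DONE;
    nlh->nlmsg_pid = getpid();
    cn->id.idx = CN_IDX_PROC;
    cn->id.val = CN_VAL_PROC;
    cn->len = sizeof(*op);
    *op = PROC_CN_MCAST_LISTEN;
    if (send(sock, buf, sizeof(buf), 0) < 0)
        return 1;

    for (;;) {
        char rbuf[4096];
        ssize_t len = recv(sock, rbuf, sizeof(rbuf), 0);
        if (len <= 0)
            break;

        struct nlmsghdr *rnlh = (struct nlmsghdr *)rbuf;
        struct cn_msg *rcn = (struct cn_msg *)NLMSG_DATA(rnlh);
        struct proc_event *ev = (struct proc_event *)rcn->data;

        switch (ev->what) {
        case PROC_EVENT_FORK:
            printf("fork: parent %d -> child %d\n",
                   ev->event_data.fork.parent_pid,
                   ev->event_data.fork.child_pid);
            break;
        case PROC_EVENT_EXEC:
            printf("exec: pid %d\n", ev->event_data.exec.process_pid);
            break;
        case PROC_EVENT_EXIT:
            printf("exit: pid %d\n", ev->event_data.exit.process_pid);
            break;
        default:
            break;
        }
    }
    return 0;
}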

By default, forkstat will just log fork, exec and exit events, but the -e option allows one to specify one or more of the fork, exec, exit, core dump or comm events.  When a fork event occurs, forkstat will log the PID and process name of the parent and child, allowing one to easily identify where processes are originating.    Where possible, forkstat attempts to track the lifetime of a process and will log the duration of a process when it exits (note: this is not an estimate of the CPU time used).

The -S option to forkstat will dump out a statistical summary of activity.  This is useful for identifying the frequency of process activity and hence the top offenders.

Forkstat is now available in Ubuntu 14.04 Trusty Tahr LTS.  To install forkstat use:

 sudo apt-get install forkstat  

For more information on the tool and examples of the forkstat output, visit the forkstat quick start page.

Read more
Colin Ian King

Keeping cool with thermald

The push for higher performance desktops and laptops has inevitably led to higher power dissipation.  Laptops have also shrunk in size, leading to increasing problems with removing excess heat and thermal overrun on heavily loaded high end machines.

Intel's thermald prevents machines from overheating and has been recently introduced in the Ubuntu Trusty 14.04 LTS release.  Thermald actively monitors thermal sensors and will attempt to keep the hardware cool by modifying a variety of cooling controls:
 

* Active or passive cooling devices as presented in sysfs (see the sketch after this list)
* The Running Average Power Limit (RAPL) driver (Sandybridge upwards)
* The Intel P-state CPU frequency driver (Sandybridge upwards)
* The Intel PowerClamp driver
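
As a taste of what the sysfs thermal interface exposes, the sketch below reads a thermal zone temperature and a cooling device state; the zone and device numbers are assumptions and thermald itself does considerably more than this:

/* Read a thermal zone temperature and a cooling device state from
 * sysfs. The zone/device numbers are assumptions; thermald does far
 * more than this. */
#include <stdio.h>

static long read_sysfs_long(const char *path)
{
    long val = -1;
    FILE *fp = fopen(path, "r");

    if (fp) {
        if (fscanf(fp, "%ld", &val) != 1)
            val = -1;
        fclose(fp);
    }
    return val;
}

int main(void)
{
    long temp = read_sysfs_long("/sys/class/thermal/thermal_zone0/temp");
    long state = read_sysfs_long("/sys/class/thermal/cooling_device0/cur_state");

    /* temp is reported in millidegrees Celsius */
    printf("zone0: %.1f C, cooling_device0 state: %ld\n",
           temp / 1000.0, state);
    return 0;
}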

Thermald has been found to be especially useful when using the Intel P-state CPU frequency scaling driver since this can push the CPU harder than other CPU frequency scaling drivers.

Over the past several weeks I've been working with Intel to shake out some final bugs and get thermald included into Ubuntu 14.04 LTS, so kudos to Srinivas Pandruvada for handling my patches and also providing a lot of timely fixes too.

By default, thermald works without any need for configuration; however, if one has incorrect thermal trip settings or other firmware related thermal zone bugs, one can write one's own thermald configuration.

For further details, consult the Ubuntu thermald wiki page.

Read more
Colin Ian King

Over the past months I have been using static code analysis tools such as smatch and Coverity Scan on various open source projects that I am involved with.  These, combined with gcc's -Wall -Wextra options, have proved useful in tracking down and eliminating various bugs.

Recently I stumbled on cppcheck and gave it a spin on several larger projects.  One of the cppcheck project's aims is to find errors that the compiler won't spot, while also keeping the number of false positives to a minimum.

cppcheck is very easy to use; the default settings just work out of the box. However, for extra checking I enabled the --force option to check all configurations and the --enable=all option to be totally thorough and pedantic.

The --enable option is especially useful. It allows one to select different types of checking, for example, coding style, execution performance, portability, unused functions and missing include files.

Even though my code has been through smatch and Coverity Scan, cppcheck still managed to find a few issues using --enable=all:

1. unused functions
2. a potential memory leak with realloc(), for example:

buf = realloc(buf, new_size);
if (!buf)
     return NULL;

If realloc() fails, buf can be leaked.  A potential fix is:

tmp = realloc(buf, new_size);
if (!tmp) {
     free(buf);
     return NULL;
} else
     buf = tmp;

3. some potential sscanf buffer overflows
4. some coding style improvements, for example, local auto variables could be moved to a deeper scope

So cppcheck worked well for me.  I recommend referring to the cppcheck project wiki to check out the features and then subjecting your code to it and seeing if it can find any bugs.

Read more
Colin Ian King

fwts 13.12.00 released

Version 13.12.00 of the Firmware Test Suite has been released today.  This latest release includes some of the following new features:

  • ACPI method test, add _IPC, _IFT, _SRV tests
  • Update to version 20131115 of ACPICA
  • Check for thermal overrun messages in klog test
  • Test for CPU frequency maximum limits on cpufreq test
  • Add Ivybridge and Haswell CPU specific MSR checks
  • UEFI variable dump:
    • add more subtype support on acpi device path type  
    • handle the End of Hardware Device Path sub-type 0x01
    • add Fibre Channel Ex subtype-21 support on messaging device path type
    • add SATA subtype-18 support on messaging device path type
    • add USB WWID subtype-16 support on messaging device path type
    • add VLAN subtype-20 support on messaging device path type
    • add Device Logical Unit subtype-17 support on messaging device path type
    • add SAS Ex subtype-22 support on messaging device path type
    • add the iSCSI subtype-19 support on messaging device path type
    • add the NVM Express namespace subtype-23 support on messaging device path type
..and also the usual bug fixes. 

For more details, please consult the release notes
 
The source tarball is available at:  http://fwts.ubuntu.com/release/fwts-V13.12.00.tar.gz and can be cloned from the repository: git://kernel.ubuntu.com/hwe/fwts.git

The fwts 13.12.00 .debs are available in the firmware testing PPA and will soon appear in Ubuntu Trusty 14.04

And thanks to Alex Hung, Ivan Hu and Keng-Yu Lin for their contributions and keen eye in reviewing the numerous patches, and also to Robert Moore for the on-going work on ACPICA.

Read more
Colin Ian King

Finding spelling mistakes in source code

There are quite a few useful open source utilities around for finding spelling mistakes in source code.  Most recently I've been using codespell which works well for me.

Codespell is regularly being updated and comes with a dictionary originally derived from Wikipedia. I normally pull the latest updates from the repository before running it against my source code.

Fetching it and using it is relatively simple:

 git clone git://git.profusion.mobi/users/lucas/codespell  
cd your-project-dir
/path-to-codespell/codespell.py
..and it will find common spelling mistakes in the entire project directory. Easy!

Read more
Colin Ian King

health-check revisited

Earlier this month I wrote about health-check, a tool to sanity check application resource usage.  Since then I've re-worked and simplified the system call tracing and added some new features.

Originally, health-check was designed to attach itself to running processes. To make the tool even easier to use it can now start applications and follow any subsequent new processes spawned off from fork() or clone(). For example:

sudo health-check -u joeuser -f thunderbird
..will start thunderbird (running as user "joeuser") and follow any new processes it creates.

Sometimes applications will force flushing of data or metadata to disk by excessive use of fsync(), fdatasync() and sync().  Health-check will now keep track of these system calls and provide feedback on their use.

Health-check already has some memory checking techniques - however, it has now been extended to check explicitly for heap changes by examining the brk() system call and also keeping track of mmap() and munmap() mappings.  This allows better tracking of potential memory leaks.

Source code can be found at: git://kernel.ubuntu.com/cking/health-check

Packages can be found in my White PPA (ppa:colin-king/white), so to install on Ubuntu systems use:
 sudo add-apt-repository ppa:colin-king/white  
sudo apt-get update
sudo apt-get install health-check

Read more
Colin Ian King

health-check: a tool to diagnose resource usage

Recently I have been focused on ways to reduce power consumption on Ubuntu phones. To identify resource hungry issues one can use tools such as top, strace and gdb, or inspect per process properties in /proc/$pid, however this can be time consuming and not practical with many processes in a system. Instead, I decided it would be profitable to write a tool to inspect and monitor a running program and report on areas where it appeared to be sub-optimal.  And so health-check was born.

One provides health-check with a list of one or more processes and it will monitor all the associated threads (and child processes) and then report back on the resources used. Health-check will report on:

  • CPU utilisation
  • Wakeup events
  • Context Switches
  • File I/O operations (Open/Read/Write/Close using fnotify)
  • System calls (using ptrace)
  • Analysis of polling system calls
  • Memory utilisation (including memory growth)
  • Network connections
  • Wakelock activity
CPU utilisation, wakeup events and context switches are useful just to see how much CPU related activity is occurring.   For example, a program with multiple threads that frequently ping-pong between them will have a high context switch rate, which may indicate that it is busy passing data or messages between threads.  A process that is frequently polling on short timer delays may show up as generating a high level of wakeup events.

Some applications may be sub-optimally writing out data frequently, causing dirty pages and meta data that needs to be written back to the file system.  Health-check will capture file I/O activity and report on the names of the files being opened, read, written and closed.

To help identify excessive or heavy system call usage, health-check uses ptrace to trap and monitor all the system calls that the program makes.  For example, it has been observed that some applications excessively call poll() and nanosleep() with poorly chosen timeouts, causing excessive CPU utilisation.  For system calls such as these, which can wait until an event or a timeout occurs, health-check has some deeper monitoring.  It inspects the given timeout delay and checks to see if the call timed out; for example, health-check can identify CPU sucking repeated polling where zero timeouts are being used, or excessive nanosleeps with zero or negative delays.
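
As a contrived illustration (not taken from any real application), the kind of wasteful pattern this catches is a zero timeout that turns poll() into a busy spin:

/* Contrived example of the zero-timeout poll() busy spin that
 * health-check can flag; not taken from any real application. */
#include <poll.h>

void wait_for_input_badly(int fd)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    for (;;) {
        /* a timeout of 0 returns immediately, so this loop burns CPU */
        if (poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLIN))
            break;
    }
}

void wait_for_input_better(int fd)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    /* block until data arrives (or pick a sane timeout in ms) */
    poll(&pfd, 1, -1);
}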

The ptrace ability of health-check also allows it to monitor per-process wakelock writes. Abuse of wakelocks can keep a kernel from suspending into deep sleep, so it is useful to keep track of wakelock activity for some processes. This is not enabled by default, as monitoring this via ptrace is an expensive operation and some kernels may not have wakelocks, so one has to use the -W option to enable this.

Health-check also inspects /proc/$pid/smaps and will determine if memory utilisation has grown or shrunk.  Unusually high heap growth over time may indicate that an application has a memory leak.

Finally, health-check will inspect /proc/$pid/fd and from this determine any open sockets and then try and resolve the host names of the IP addresses.  For example, it is entirely possible for an application to be making spurious or unwanted connections to various machines, so it is helpful to check up on this kind of activity.

Health-check is still very early alpha quality, so beware of possible bugs.   However, it has been helpful in identifying some misbehaving applications, so it is already proving to be rather useful.

Source code can be found at: git://kernel.ubuntu.com/cking/health-check

Packages can be found in my White PPA (ppa:colin-king/white), so to install on Ubuntu systems use:

 sudo add-apt-repository ppa:colin-king/white  
sudo apt-get update
sudo apt-get install health-check
..and go and track down some resource sucking apps..

Read more