Canonical Voices

Colin Ian King

Firmware Related Blogs

More often than not, I'm looking at ACPI and UEFI related issues, so I was glad to see that Vincent Zimmer has collected various useful links to firmware-related blogs. Very useful, thanks Vincent!

Read more
Colin Ian King

Static code analysis on kernel source

Since 2014 I have been running static code analysis tools such as cppcheck and smatch against the Linux kernel source on a regular basis to catch bugs that creep into the kernel. After each cppcheck run I diff the logs to get a list of deltas in the error and warning messages, periodically review these to filter out false positives, and end up with a list of bugs that need some attention.

Bugs such as allocations returning NULL pointers without checks, memory leaks, duplicate memory frees and uninitialized variables are easy to find with static analyzers and generally require just one or two line fixes.
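
To give a flavour of what such a fix looks like, here is a contrived userspace example (not taken from any kernel commit) of an unchecked allocation and the one or two lines that fix it:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: the kind of defect a static analyzer flags and its tiny fix */
static int copy_message(const char *src, size_t len)
{
	char *buf = malloc(len + 1);

	if (!buf)		/* the "one line fix": check the allocation */
		return -1;
	memcpy(buf, src, len);
	buf[len] = '\0';
	printf("%s\n", buf);
	free(buf);		/* ..and free it, avoiding the leak that would also be reported */
	return 0;
}

int main(void)
{
	return copy_message("hello", 5) ? 1 : 0;
}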

So what are the overall trends like?

Warnings and error messages from cppcheck have been dropping over time, while "portability warnings" have been steadily increasing. These portability warnings mainly come from arithmetic on void * pointers, which GCC treats as byte-sized but which is not legal standard C. Note that there is some variation in the results as I use the latest versions of cppcheck; occasionally a new version finds a lot of false positives which then get fixed in later versions of cppcheck.
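
To illustrate what these portability warnings refer to, here is a tiny standalone example (not kernel code):

#include <stdio.h>

int main(void)
{
	char data[8] = "abcdefg";
	void *p = data;

	/*
	 * Arithmetic on a void * such as "p + 1" is a GCC extension that
	 * treats the pointer as byte-sized; it is not legal standard C and
	 * is what cppcheck reports as a portability warning.
	 */
	char *q = (char *)p + 1;	/* the portable equivalent */

	printf("%c\n", *q);		/* prints 'b' */
	return 0;
}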

Compared to the growth in kernel size, the overall downward trend in warning and error messages from cppcheck isn't bad considering the kernel has grown by nearly 11% over the time I have been running the static analysis.

Kernel source growth over time
Since each warning or error reported has to be carefully scrutinized to determine whether it is a false positive (and this takes a lot of effort and time), I've not yet been able to determine the exact false positive rates for these stats. Compared to the actual lines of code, cppcheck is finding roughly one error per 15K lines of source.

It would be interesting to run this analysis with commercial static analyzers such as Coverity and see how the stats compare. As it stands, cppcheck is doing its bit in detecting errors and helping engineers improve code quality.

Read more
Colin Ian King

Powerstat and thermal zones

Last night I was mulling over an overheating laptop issue reported by a user, which turned out to be fluff and dust clogging up the fan rather than the intel_pstate driver being broken.

While it is a relief that the kernel driver is not at fault, it still bothered me that this kind of issue should have been very simple to diagnose and yet I overlooked the obvious. When solving these issues it is very easy to suspect that the complex part of a system (e.g. a kernel driver) is misbehaving rather than the simpler part (e.g. a fan not working efficiently). Normally I try to apply Occam's Razor, which in its simplest form can be phrased as:

"when you have two competing theories that make exactly the same predictions, the simpler one is the better."

..e.g. in this case, the fan is clogged up.

Fortunately, laptops invariably provide Thermal Zone information that can be monitored and hence one can correlate CPU activity with the temperature of various components of a laptop.  So last night I added Thermal Zone sampling to powerstat 0.02.00 which is enabled with the new -t option.

 
powerstat -tfR 0.5
Running for 60.0 seconds (120 samples at 0.5 second intervals).
Power measurements will start in 0 seconds time.

Time User Nice Sys Idle IO Run Ctxt/s IRQ/s Watts x86_pk acpitz CPU Freq
11:13:15 5.1 0.0 2.1 92.8 0.0 1 7902 1152 7.97 62.00 63.00 1.93 GHz
11:13:16 3.9 0.0 2.5 93.1 0.5 1 7168 960 7.64 63.00 63.00 2.73 GHz
11:13:16 1.0 0.0 2.0 96.9 0.0 1 7014 950 7.20 63.00 63.00 2.61 GHz
11:13:17 2.0 0.0 3.0 94.5 0.5 1 6950 960 6.76 64.00 63.00 2.60 GHz
11:13:17 3.0 0.0 3.0 93.9 0.0 1 6738 994 6.21 63.00 63.00 1.68 GHz
11:13:18 3.5 0.0 2.5 93.6 0.5 1 6976 948 7.08 64.00 63.00 2.29 GHz
.... 

..the -t option now shows x86_pk (x86 CPU package temperature) and acpitz (ACPI thermal zone) temperature readings in degrees Celsius.
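
These readings come from standard kernel interfaces, so they are easy to sample directly too; a minimal sketch that reads one thermal zone from sysfs (the zone number is machine dependent, and this is not powerstat's actual code):

#include <stdio.h>

int main(void)
{
	/* thermal_zone0 is an assumption; zones and their types vary per machine */
	const char *path = "/sys/class/thermal/thermal_zone0/temp";
	FILE *fp = fopen(path, "r");
	long millideg;

	if (!fp) {
		perror(path);
		return 1;
	}
	if (fscanf(fp, "%ld", &millideg) == 1)
		printf("%.2f degrees C\n", millideg / 1000.0);	/* value is in millidegrees */
	fclose(fp);
	return 0;
}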

Now this is where the fun begins. I ran powerstat for 60 seconds at 2 samples per second and then imported the data into LibreOffice. To show the correlations between CPU load, power consumption, temperature and CPU frequency more clearly, I normalized the data so that the lowest values were 0.0 and the highest were 1.0, and produced the following graph:

One can see that the CPU frequency (green) scales with the CPU load (blue), and so does the CPU power (orange). CPU temperature (yellow) jumps up quickly when the CPU is loaded and then steadily increases. Meanwhile, the ACPI thermal zone (purple) trails the CPU load because it takes time for the machine to warm up and then cool down (it takes time for the fan to pump the heat out of the machine).

So, next time a laptop runs hot, running powerstat will capture the activity and correlating temperature with CPU activity should allow one to see if the overheating is related to a real CPU frequency scaling issue or a clogged up fan (or broken heat pipe!).

Read more
Colin Ian King

Snooping on I/O using iosnoop

A while ago I blogged about Brendan Gregg's excellent book for tracking down performance issues, titled "Systems Performance: Enterprise and the Cloud". Brendan has also produced a useful I/O diagnostic bash script, iosnoop, that uses ftrace to gather block device I/O events in real time.

The following example snoops on I/O for 1 second:

$ sudo iosnoop 1
Tracing block I/O for 1 seconds (buffered)...
COMM PID TYPE DEV BLOCK BYTES LATms
kworker/u16:2 650 W 8,0 441077032 28672 1.46
kworker/u16:2 650 W 8,0 441077024 4096 1.45
kworker/u16:2 650 W 8,0 364810624 462848 1.35
kworker/u16:2 650 W 8,0 364810240 69632 1.34

And the next example snoops and shows start and end time stamps:
$ sudo iosnoop -ts  
Tracing block I/O. Ctrl-C to end.
STARTs ENDs COMM PID TYPE DEV BLOCK BYTES LATms
35253.062020 35253.063148 jbd2/sda1-211 211 WS 8,0 29737200 53248 1.13
35253.063210 35253.063261 jbd2/sda1-211 211 FWS 8,0 18446744073709551615 0 0.05
35253.063282 35253.063616 <idle> 0 WS 8,0 29737304 4096 0.33
35253.063650 35253.063688 gawk 551 FWS 8,0 18446744073709551615 0 0.04
35253.766711 35253.767158 kworker/u16:0 305 W 8,0 433580264 4096 0.45
35253.766778 35253.767258 kworker/0:1H 321 FWS 8,0 18446744073709551615 0 0.48
35253.767289 35253.767635 <idle> 0 WS 8,0 273358464 4096 0.35
35253.767309 35253.767654 <idle> 0 W 8,0 118371312 4096 0.35
35253.767648 35253.767741 <idle> 0 FWS 8,0 18446744073709551615 0 0.09
^C
Ending tracing...
One needs to run the tool as root as it uses ftrace. There is a selection of filtering options, such as showing I/O from a specific device, I/O of a specific type, or selecting I/O from a specific PID or process name. iosnoop can also display the I/O completion times, start times and queue insertion start times. On Ubuntu, iosnoop can be installed using:
sudo apt-get install perf-tools-unstable
A useful I/O analysis tool indeed. For more details, install the tool and read the iosnoop man page.

Read more
Colin Ian King

During some spare moments I've added a couple of minor CPU related enhancements to powerstat.    The new -c option gathers CPU C-state activity over the run and shows a summary at the end, for example:

 C-State  Resident   Count Latency
C7-IVB 75.239% 102315 87
C6-IVB 0.004% 60 80
C3-IVB 0.138% 2892 59
C1E-IVB 1.150% 7599 10
C1-IVB 0.948% 4611 1
POLL 0.000% 3 0
C0 22.521%
The above example shows that my Ivybridge i5-3210M spent ~75% of the time in the deepest C7 sleep state and ~22.5% of the time in the fully operating C0 state.

A new -f option gathers CPU frequency statistics across all the on-line CPUs and displays the running average. This provides an "instantaneous" view of the current CPU frequencies rather than an average over the previous sample interval, so beware that just gathering statistics using powerstat can cause CPU activity, which of course can change the CPU frequency.
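
The underlying idea is just to sample the cpufreq sysfs files of all the online CPUs and average them; a rough sketch of the concept (not powerstat's actual implementation):

#include <stdio.h>

int main(void)
{
	double total_khz = 0.0;
	char path[128];
	int cpu, n = 0;

	for (cpu = 0; cpu < 4096; cpu++) {
		long khz;
		FILE *fp;

		snprintf(path, sizeof(path),
			"/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq", cpu);
		fp = fopen(path, "r");
		if (!fp)
			break;		/* assumes CPUs are numbered contiguously */
		if (fscanf(fp, "%ld", &khz) == 1) {
			total_khz += khz;	/* scaling_cur_freq is in kHz */
			n++;
		}
		fclose(fp);
	}
	if (n)
		printf("average over %d CPUs: %.2f GHz\n", n, total_khz / n / 1e6);
	return 0;
}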

For a simple test, I ran powerstat for a short 250 second run and normalised the CPU Core Power, CPU Load and CPU Frequency stats so that the data ranges are 0..1 so I can plot all three stats and easily compare them:

One can easily see the correlation between CPU Frequency, CPU Load and CPU core power consumed just from the powerstat data.

Powerstat tries to be as lightweight and as small as possible to minimize the impact on system behaviour.  My hope is that adding these extra CPU instrumentation features adds more useful functionality without adding a larger system impact.  I've instrumented powerstat with perf and I believe that the overhead is sufficiently small to justify these changes.

These two new features will be landing in powerstat 0.01.40 in Ubuntu Wily.

Read more
Colin Ian King

Over the last few weeks I have been toying with the idea of adding more performance monitoring to stress-ng so one can see how much a stress test impacts on the CPU. The obvious choice to get such low level data is via Linux perf events using perf_event_open(2).

The man page for perf_event_open() provides plenty of information to get perf working from userspace; however, I was a bit stumped when I used several hardware perf events and ran out of hardware Performance Monitoring Unit (PMU) counters, resulting in some strange event counter readings. I discovered that when one runs out of counters, perf multiplexes the event counting, and so each raw count needs to be scaled by multiplying it by the time-enabled value and dividing by the time-running value (read back using the PERF_FORMAT_TOTAL_TIME_ENABLED and PERF_FORMAT_TOTAL_TIME_RUNNING read formats).
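
A minimal sketch of how that scaling works with perf_event_open(2); this is illustrative only and not stress-ng's code, and the event and workload are arbitrary:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
	struct perf_event_attr attr;
	struct {
		uint64_t value;		/* raw count */
		uint64_t time_enabled;	/* ns the event was enabled */
		uint64_t time_running;	/* ns the event was actually on a counter */
	} data;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	attr.disabled = 1;
	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
			   PERF_FORMAT_TOTAL_TIME_RUNNING;

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	sleep(1);	/* stand-in for the real workload */

	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	if (read(fd, &data, sizeof(data)) == sizeof(data)) {
		uint64_t count = data.value;

		/* if the event was multiplexed, scale the raw count up */
		if (data.time_running && data.time_running < data.time_enabled)
			count = (uint64_t)((double)count *
				data.time_enabled / data.time_running);
		printf("instructions: %llu\n", (unsigned long long)count);
	}
	close(fd);
	return 0;
}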

Once I had figured this out, it was relatively plain sailing to get perf working in stress-ng. So stress-ng V0.04.04 now supports the --perf option that enables perf monitoring on each stress test being run; it is as simple as that. For multiple instances of a stress test, stress-ng will sum the perf counters of each process running the stress test to provide an overall total.

The following example runs the stress-ng cache stress test. The first run enables cache flushing, so fetches of data will cause cache misses. The second run has no cache flushing and hence a far lower cache miss rate.


Note how the cache-flushing not only causes a far higher cache miss rate, but also reduces the effective number of instructions per cycle being executed and hence reduces the throughput (as one would expect).  With cache-flushing enabled I was seeing only 17.53 bogo ops per second compared to the 35.97 bogo ops per second with cache-flushing disabled.

The perf stats are enlightening. I still find it incredible that my laptop has so much computing power. Some of the more compute bound stressors (such as the stress-ng bitops cpu stressor) are hitting over 20 billion instructions per second on my machine, which is rather impressive. It seems that gcc optimization and the x86 superscalar micro-op execution are working efficiently with some of these stress tests.

My hope is that the integrated perf monitoring in stress-ng will be instructive when comparing results on different processor architectures across the range of stress-ng stress tests.

Read more
Colin Ian King

As a simple experiment, I thought it would be interesting to compare stress-ng compiled with GCC 4.9.1 and GCC 5.1.1 in terms of computational improvement and power consumption on various CPU stress methods. The stress-ng CPU stress test contains various mixes of integer, floating point, bit and logic operations that can be used for processor loading, so it makes a useful test of how well the code gets optimized with GCC.

Stress-ng provides a "bogo-ops" mechanism to measure a "unit of operation", normally this is just a count of the number of operations performed in a unit of time, hence allowing us to compare the relative performance of each stress method when compiled with different versions of GCC.  Running each stress method for a relatively long time (a few minutes) on an idle machine allows us to get a fairly stable and accurate measurement of bogo-ops per second.  Tests were run on a Lenovo x230 with an i5-3210M CPU.

The first chart below shows the relative improvement in bogo-ops per second between the two versions of GCC.  A value of n indicates GCC 5.1.1 is n times faster  in terms of bogo-ops per second than GCC 4.9.1, hence values less than 1.0 show that GCC 5.1.1 has regressed in performance.


It appears that int64, int32, int16, int8 and rand show some remarkable improvements with GCC 5.1.1; these all perform various integer operations (add, subtract, multiply, divide, xor, and, or, shift).

In contrast, hamming, hanoi, parity and sieve show degraded performance with GCC 5.1.1.  Hanoi just exercises recursion of a function with a few arguments and some memory load/stores.  Hamming, parity and sieve exercise bit twiddling operations and memory load/stores.

Going further than just measuring computation, I used the Intel RAPL CPU package power measurements (via powerstat) to measure the power consumed, and then computed bogo-ops per Watt for stress-ng built with GCC 4.9.1 and 5.1.1. I then compared the relative improvement of 5.1.1 over 4.9.1:
The chart above shows the same kind of characteristics as the first chart, but in terms of computational improvement per Watt.  Note that there are even better improvements in relative terms for the integer and rand CPU stress methods.  For example, the rand stress method shows a 1.6 x improvement in terms of computation per second and a 2.1 x improvement in terms of computation per Watt comparing GCC 4.9.1 with 5.1.1.

It seems that benchmarking performance in terms of just compute improvements really should take power consumption into consideration too, to get a better idea of what compiler optimization improvements actually deliver. Compute-per-Watt rather than compute-per-second should perhaps be the preferred benchmark in modern high-density compute farms.

Of course, these comparisons are just with one specific x86 micro-architecture, so one would expect different results for different x86 CPUs. I guess that is for another weekend, if I get the time.

Read more
Colin Ian King

comparing cpuburn and stress-ng

The cpuburn package contains several hand crafted assembler "burn" programs to load x86 processors and maximize heat production to stress a system. This is also the intention of the stress-ng "cpu" stress test, which contains a variety of methods to stress CPUs with a wide range of instruction mixes. Stress-ng is written in C and relies on the compiler to generate efficient code to hopefully load the CPU. So how does stress-ng compare to the hand crafted cpuburn suite of programs on modern processors?

Since there is a correlation between power consumed and heat generated, I took the liberty of using CPU package power consumption, measured via the Intel RAPL interface, as one way of comparing cpuburn and stress-ng. Recent versions of powerstat support RAPL, so I ran each stressor for 120 seconds and took CPU package power measurements every 4 seconds over this interval with powerstat.


So, the cpuburn "burn" programs do well; however, some of the stress-ng CPU stress methods do better. The best stress-ng CPU methods are: ackermann, callfunc, hanoi, decimal128, dither, int128decimal128, trig and zeta. It appears that ackermann, callfunc and hanoi do well because these are very localised, deeply recursive function calls, so I expect register save/restores and some stack activity to be the main power consumers. The rest exercise the integer and floating point units and memory loads/stores.

As it stands, a handful of stress-ng CPU stressors aren't as good as cpuburn. What is noticeable is that burnBX on an i3-3120M seems to do rather well in terms of loading the CPU.

One conclusion to draw from this is that a modern C compiler such as gcc (in this case, gcc 4.9.2), with a suitably chosen mix of stores, loads and integer/floating point operations, can outperform hand written assembler in terms of loading the full CPU package. When I have a little more time, I will try to repeat this experiment with clang and gcc 5.

Read more
Colin Ian King

An on-going background project of mine is to add various interesting system stress tests to stress-ng.  Over the past several months I've been looking at the ways to exercise various less used or obscure system calls just to add more kernel coverage to the tool.

  • rlimit - generate tens of thousands of SIGXFSZ and many SIGXCPU signals
  • itimer - exercise ITIMER_PROF and generate SIGPROF signals
  • mlock - lock and unlock pages with mlock()/munlock()
  • timerfd - exercise rapid CLOCK_REALTIME events by select() and read() on a timerfd (see the sketch after this list).
  • memfd - exercise anonymous populated page memory mapping and unmapping using memfd.
  • more aggressive affinity stressor changes to force more CPU IPIs
  • hdd - add readv/writev I/O option
  • tee - tee data between a writer and reader process using tee()
  • crypt - encrypt data with MD5, SHA-256 and SHA-512 using libcrypt
  • mmapmany - perform tens of thousands of memory maps/unmaps to exhaust the per-process mapping limit.
  • zombie - fill up process table with tens of thousands of zombie processes
  • str - heavily exercise a range of glibc string functions
  • xattr - exercise file extended attributes
  • readahead - random reads with readaheads
  • vm - add a rowhammer memory stressor
..as well as extra per-stressor configuration settings and a lot of code clean up and bug fixing.
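
To give an idea of the kind of exercising involved, here is a minimal standalone sketch along the lines of the timerfd stressor mentioned above; stress-ng's real stressor loops much harder and does far more error checking:

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/timerfd.h>

int main(void)
{
	struct itimerspec its = {
		.it_interval = { .tv_sec = 0, .tv_nsec = 100000 },	/* 100us period */
		.it_value    = { .tv_sec = 0, .tv_nsec = 100000 },
	};
	int i, fd = timerfd_create(CLOCK_REALTIME, 0);

	if (fd < 0 || timerfd_settime(fd, 0, &its, NULL) < 0) {
		perror("timerfd");
		return 1;
	}
	for (i = 0; i < 10000; i++) {
		uint64_t expirations;
		fd_set rfds;

		FD_ZERO(&rfds);
		FD_SET(fd, &rfds);
		/* wait for the next timer expiry, then consume the expiration count */
		if (select(fd + 1, &rfds, NULL, NULL, NULL) > 0)
			(void)read(fd, &expirations, sizeof(expirations));
	}
	close(fd);
	return 0;
}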

I've recently been using stress-ng to exercise various kernels on a range of hardware and it has been useful in forcing bugs, especially with the memory specific stressors that seem to trip low memory corner cases.

stress-ng 0.04.01 will be soon available in Ubuntu 15.10 Wily Werewolf.  Visit the stress-ng project page for more details.

Read more
Colin Ian King

powerstat improvements with RAPL

The Linux Running Average Power Limit (RAPL) interface was introduced about two years ago in the Linux kernel and allows userspace to read the power consumption of various x86 System-on-a-Chip (SoC) power domains. The power domains cover the SoC package, CPU cores, DRAM controller and graphics power plane.

It appears that the Intel energy status MSRs can be read very rapidly and the resolution is exceptionally good; however, reading the MSR too frequently will consume some power when using the RAPL interface.

I've improved powerstat to now use the RAPL interface with a new -R option (to measure just the total package power consumption).  A new -D option will show all the RAPL domain measurements available.  RAPL measurements are very responsive and one can easily correlate power spikes with bursts of system activity.
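
For reference, package energy can also be sampled from userspace via the powercap sysfs interface; a minimal sketch (just one of several ways of reading RAPL, not necessarily how powerstat does it, and the domain path is machine dependent):

#include <stdio.h>
#include <unistd.h>

/* read the cumulative package energy counter, in microjoules */
static long long read_energy_uj(void)
{
	/* intel-rapl:0 is assumed to be the package domain; may require root */
	FILE *fp = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
	long long uj = -1;

	if (fp) {
		if (fscanf(fp, "%lld", &uj) != 1)
			uj = -1;
		fclose(fp);
	}
	return uj;
}

int main(void)
{
	long long e1, e2;

	e1 = read_energy_uj();
	sleep(1);
	e2 = read_energy_uj();

	if (e1 >= 0 && e2 >= e1)
		/* the energy delta over 1 second is the average power in Watts */
		printf("package power: %.2f W\n", (e2 - e1) / 1e6);
	return 0;
}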

Finally, I have added a basic histogram output with the new -H option. This will plot histograms of the power measurements and CPU load from the stats gathered during the powerstat run.

Powerstat 0.01.37 is available in Ubuntu 15.10 Wily Werewolf and the source is available from the git repository.

Read more
Colin Ian King

Over the past year or more I have been focused on identifying power consuming processes on various mobile devices. One of many strategies for reducing power is to remove unnecessary file system activity, such as extraneous logging, repeated file writes and unnecessary file re-reads, and to reduce metadata updates.

Fnotifystat is a utility I wrote to help identify such file system activity. My aim was to make the tool as small as possible for small embedded devices and reasonably flexible without needing perf, just in case the target device did not have perf support built into its kernel by default.

By default, fnotifystat will dump out every second any file system open/close/read/write operations across all mounted file systems; however, one can specify the delay in seconds and the number of times to dump out statistics. fnotifystat uses the fanotify(7) interface to get file activity across the system, hence it needs to be run with the CAP_SYS_ADMIN capability.
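
Underneath, the mechanism looks roughly like this minimal fanotify sketch (fnotifystat itself does far more event decoding and accounting; run it as root since fanotify_init() requires CAP_SYS_ADMIN):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/fanotify.h>

int main(void)
{
	char buf[4096];
	ssize_t len;
	/* notification-only listener; events carry read-only file descriptors */
	int fd = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY);

	if (fd < 0) {
		perror("fanotify_init (needs CAP_SYS_ADMIN)");
		return 1;
	}
	/* watch open/close/read/write events on the whole mount containing / */
	if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_MOUNT,
			  FAN_OPEN | FAN_CLOSE | FAN_ACCESS | FAN_MODIFY,
			  AT_FDCWD, "/") < 0) {
		perror("fanotify_mark");
		return 1;
	}
	while ((len = read(fd, buf, sizeof(buf))) > 0) {
		struct fanotify_event_metadata *md = (void *)buf;

		for (; FAN_EVENT_OK(md, len); md = FAN_EVENT_NEXT(md, len)) {
			printf("pid %d mask 0x%llx\n",
				(int)md->pid, (unsigned long long)md->mask);
			if (md->fd >= 0)
				close(md->fd);	/* each event carries an open fd */
		}
	}
	return 0;
}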

An open(2), read(2)/write(2) and close(2) sequence by a process can produce multiple events, so fnotifystat has a -m option to merge events and hence reduce the amount of output.  A verbose -v option will output all file events if one desires to see the full system activity.

If one desires to just monitor a specific collection of processes, one can specify a list of the process ID(s) or process names using the -p option, for example:

sudo fnotifystat -p firefox,thunderbird

fnotifystat catches events on all mounted file systems, but one can restrict that to just the path(s) of interest using the -i (include) option, for example:

sudo fnotifystat -i /proc

..and one can exclude paths using the -x option.

More information and examples can be found on the fnotifystat project page, and the manual contains further details and examples too.

Fnotifystat 0.01.10 is available in Ubuntu Vivid Vervet 15.04 and can also be installed for older releases from my power management tools PPA.

Read more
Colin Ian King

Finding kernel bugs with cppcheck

For the past year I have been running the cppcheck static analyzer against the Linux kernel sources to see if it can detect any bugs introduced by new commits. Most of the bugs being found are minor thinkos, null pointer dereferences, uninitialized variables, memory leaks and mistakes in error handling paths.

A useful feature of cppcheck is the --force option that will check against all the configurations in the source (and the kernel does have many!).  This allows us to check for code that may not be exercised much (because it is normally not built in with most config options) or even find dead code.

The downside of using the --force option is that each source file may need to be checked multiple times, once for each configuration. For ~20800 source files this can take a 24 processor server several hours to process. Errors and warnings are then compared to previous runs (a delta), making it relatively easy to spot new issues on each run.

We also use the latest sources from the cppcheck git repository.  The upside of this is that new static analysis features are used early and this can result in finding existing bugs that previous versions of cppcheck missed.

A typical cppcheck run against the linux kernel source finds about 600 potential errors and 1700 warnings; however a lot of these are false positives.  These need to be individually eyeballed to sort the wheat from the chaff.

Finally, the data is passed through a gnuplot script to generate a trend graph so I can see how errors (red) and warnings (green) are progressing over time:


..note that the large changes in the graph are mostly with features being enabled (or fixed) in cppcheck.

I have been running the same experiment with smatch too; however, I am finding that cppcheck seems to have better code coverage because of the --force option and seems to produce fewer false positives. As it stands, I am finding that the most productive time for finding issues is around the -rc1 and -rc2 merge windows (obviously when most of the major changes land in the kernel). The outcome of this work has been a bunch of small fixes landing in the kernel to address bugs that cppcheck has found.

Anyhow, cppcheck is an excellent open source static analyzer for C and C++ that I'd heartily recommend as it does seem to catch useful bugs.

Read more
Colin Ian King

During idle moments in the final few weeks of 2014 I have been adding some more stressors and features to stress-ng as well as tidying up the code and fixing some small bugs that have crept in during the last development spin.   Stress-ng aims to stress a machine with various simple torture tests to trip overheating and kernel race conditions.

The mmap stressor now has the '--mmap-file' option to use synchronous file backed memory mapping instead of the default anonymous mapping, and the '--mmap-async' option enables asynchronous file mapping if desired.

For socket stressing, the '--sock-domain unix' option now allows AF_UNIX (aka AF_LOCAL) sockets to be used. This complements the existing AF_INET and AF_INET6 IPv4 and IPv6 protocols available with this stress test.

The CPU stressor now includes mixed integer and floating point stressors, covering 32 and 64 bit integer mixes with the float, double and long double floating point types. The generated object code contains a nice mix of operations which should exercise various functional units in the CPU.  For example, when running on a hyper-threaded CPU one notices a performance hit because these cpu stressor methods heavily contend on the CPU math functional blocks.

File based locking has been extended with the new lockf stressor; this stresses multiple lockings and unlockings on portions of a small file, and the default blocking mode can be turned into a CPU consuming rapid polling retry with the '--lockf-nonblock' option.

The dup(2) system call is also now stressed with the new dup stressor. This just repeatedly dups a file opened on /dev/zero until all the free file slots are full, and then closes them all. It is very much like the open stressor.

The fcntl(2) F_SETLEASE command is stress tested with the new lease stressor. This has a parent process that rapidly locks and unlocks a file based lease and 1 or more child processes try to open this file and cause lease breaking signal notifications to the parent.

For x86 CPUs, the cache stressor includes two new cache specific options. The '--cache-fence' option forces write serialization on each store operation, while the '--cache-flush' option forces a cache flush on each store operation. The code has been specifically written not to incur any overhead if these options are not enabled or are not available (as on non-x86 CPUs).
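
Roughly speaking, these two options boil down to issuing a store fence or a cache line flush after each store; a hedged sketch of the idea using GCC x86 intrinsics (not stress-ng's actual code):

#include <stdio.h>
#include <x86intrin.h>	/* GCC/clang x86 intrinsics: _mm_sfence(), _mm_clflush() */

#define BUF_SIZE	(64 * 1024)

static char buffer[BUF_SIZE];

int main(void)
{
	int i;

	for (i = 0; i < BUF_SIZE; i++) {
		buffer[i] = (char)i;

		/* --cache-fence style: serialize the store before continuing */
		_mm_sfence();

		/* --cache-flush style: evict the line just written, so the
		 * next access to it must come back from memory */
		_mm_clflush(&buffer[i]);
	}
	printf("done\n");
	return 0;
}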

This release also includes the stress-ng project mascot: a humble CPU being tortured and stressed by a rather angry flame.

For more information, visit the stress-ng project page, or check out the stress-ng manual.

Read more
Colin Ian King

Controlling data flow using sluice

Earlier this year I was instrumenting wifi power consumption and needed a way to produce and also consume data at a specific rate to pipe over netcat for my measurements. I had some older crufty code around to do this, but it needed some polishing up, so I eventually got around to turning it into a more usable tool called "sluice". (A sluice gate controls the flow of water; the sluice tool controls the rate of data through a pipe.)

The sluice package is currently available in PPA:colin-king/white and is built for Ubuntu Trusty, Utopic and Vivid, but the git repository and tarballs are available too if one wants to build it from source.

The following starts a netcat 'server' to send largefile at a rate of 1MB a second using 1K buffers and reports the transfer stats to stderr with the -v verbose mode enabled:

cat largefile | sluice -r 1MB -i 1K -v | nc -l 127.0.0.1 1234 

Sluice also allows one to adjust the read/write buffer sizes dynamically to try to avoid buffer underflow or overflows while trying to match the specified transfer rate.
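
The core idea behind the rate control is simple: copy fixed-size chunks and sleep just long enough between them to hold the requested rate. A much-simplified sketch of the concept (sluice itself also adapts the buffer size and reports statistics):

#include <time.h>
#include <unistd.h>

int main(void)
{
	char buf[1024];				/* "-i 1K" style buffer */
	const double rate = 1024.0 * 1024.0;	/* "-r 1MB" per second target */
	const double delay = sizeof(buf) / rate; /* seconds to spend per buffer */
	struct timespec ts = {
		.tv_sec  = 0,
		.tv_nsec = (long)(delay * 1e9),
	};
	ssize_t n;

	while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0) {
		if (write(STDOUT_FILENO, buf, (size_t)n) != n)
			return 1;
		nanosleep(&ts, NULL);	/* crude pacing; ignores the cost of the copy itself */
	}
	return 0;
}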

Sluice can be used as a data sink on the receiving side and also has an option to throw the data away if one just wants to stream data and test the data link rates, e.g., get data from somehost.com on port 1234 and throw it away at 2MB a second:

nc somehost.com 1234 | sluice -d -r 2MB -i 8K

And finally, sluice has a "tee" mode, where data is copied to stdout and to a specified output file using the -t option.

For more details, refer to the sluice project page.

Read more
Colin Ian King

Smackerel of Opinion

It is approaching the Christmas Holiday season, so it's that time again to write some slightly obfuscated C in a seasonal way.  This year I thought I would try some coloured ASCII art for the output for a little variety.


#define r(s) s[e%(sizeof s-1)]
#include <stdio.h> /* */
#define S "%s"/* Have */
#define u printf(/* a */
#define c )J/* Merry */
#define W "H"/* Christmas */
#define e rand()/* and */
#define U(i) v[i]/* a */
#define C(q) q[]=/* Happy */
#define J ;;/* New Year */
#define O [v]/* Colin.I.King */


typedef a
; a m, v[6] ,
H;a main(
){char C(
o){033,91,0},C(D)
"*Oo", C(t
)"^~#",Q[ ]=
"13747516",C(s
)".x+*";u S"2"
"J%s0;0H%s0"
";37;40"
"m",o,o, o
c while(U(!!m)
<22)u S"%dm%39s\n" ,
o,0 O++>19?42:',',""
c while(0 O++<'~') u S
"%d;%dH%s44m%c",o,e%21,e
%39,o,r(s)c for(J){1 O=1
-U(1),srand(v),u S"0;0"W S
"0;2;%dm",o,o,' 'c for(m=0
;m>>4<1;++m){u S"%d;%d"W,o
, m+2,20-m c;for(H=0;H<1+(m
<<1);H++){4 O=!H|H==m<<1 ,
2 O=!(e&05),U(3)=H>m*5/
3,5 O=r(D)J if(4 O|U(
2)){u S"%d;%d;3%cm"
"%c",o,U(4)?','
:'*',3 O?2:1+(U(1)^(1&e)),r(Q),U(5)c}else u S"42;32\
;%dm%c",o,1+3 O,r(t)c u S"0m",o c} }while(m<19)u S"\
%d;19"W S"33;2;7m #\n",o,1+ ++m,o c sleep(m>=-H c}}

The source can be downloaded from here and compiled and run as follows:

gcc snowman.c -o snowman
./snowman

and press control-C to exit when you have seen enough.

Read more
Colin Ian King

Recently I was playing around with CPU loading and was trying to estimate the number of compute operations being executed on my machine.  In particular, I was interested to see how many instructions per cycle and stall cycles I was hitting on the more demanding instructions.   Fortunately, perf stat allows one to get detailed processor statistics to measure this.

In my first test, I wanted to see how the Intel rdrand instruction performed with 2 CPUs loaded (each with a hyper-thread):


$ perf stat stress-ng --rdrand 4 -t 60 --times
stress-ng: info: [7762] dispatching hogs: 4 rdrand
stress-ng: info: [7762] successful run completed in 60.00s
stress-ng: info: [7762] for a 60.00s run time:
stress-ng: info: [7762] 240.01s available CPU time
stress-ng: info: [7762] 231.05s user time ( 96.27%)
stress-ng: info: [7762] 0.11s system time ( 0.05%)
stress-ng: info: [7762] 231.16s total time ( 96.31%)

Performance counter stats for 'stress-ng --rdrand 4 -t 60 --times':

231161.945062 task-clock (msec) # 3.852 CPUs utilized
18,450 context-switches # 0.080 K/sec
92 cpu-migrations # 0.000 K/sec
821 page-faults # 0.004 K/sec
667,745,260,420 cycles # 2.889 GHz
646,960,295,083 stalled-cycles-frontend # 96.89% frontend cycles idle
<not supported> stalled-cycles-backend
13,702,533,103 instructions # 0.02 insns per cycle
# 47.21 stalled cycles per insn
6,549,840,185 branches # 28.334 M/sec
2,352,175 branch-misses # 0.04% of all branches

60.006455711 seconds time elapsed

stress-ng's rdrand test just performs a 64 bit rdrand read and loops on this until the data is ready, and performs this 32 times in an unrolled loop.  Perf stat shows that each rdrand + loop sequence on average consumes about 47 stall cycles showing that rdrand is probably just waiting for the PRNG block to produce random data.
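
For illustration, a rough standalone equivalent of that kind of loop using the GCC rdrand intrinsic (not stress-ng's exact code; build with gcc -mrdrnd on an Ivy Bridge or later CPU):

#include <stdio.h>
#include <immintrin.h>	/* _rdrand64_step() */

int main(void)
{
	unsigned long long v = 0, sum = 0;
	long i;

	for (i = 0; i < (1L << 20); i++) {
		/* _rdrand64_step() returns 0 if the random data was not yet
		 * ready, so retry until a valid 64 bit value is produced */
		while (!_rdrand64_step(&v))
			;
		sum += v;
	}
	printf("sum of random values: %llu\n", sum);
	return 0;
}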

My next experiment was to run the stress-ng ackermann stressor; this performs a lot of recursion, hence one should see a predominantly large amount of branching.

$ perf stat stress-ng --cpu 4 --cpu-method ackermann -t 60 --times
stress-ng: info: [7796] dispatching hogs: 4 cpu
stress-ng: info: [7796] successful run completed in 60.03s
stress-ng: info: [7796] for a 60.03s run time:
stress-ng: info: [7796] 240.12s available CPU time
stress-ng: info: [7796] 226.69s user time ( 94.41%)
stress-ng: info: [7796] 0.26s system time ( 0.11%)
stress-ng: info: [7796] 226.95s total time ( 94.52%)

Performance counter stats for 'stress-ng --cpu 4 --cpu-method ackermann -t 60 --times':

226928.278602 task-clock (msec) # 3.780 CPUs utilized
21,752 context-switches # 0.096 K/sec
127 cpu-migrations # 0.001 K/sec
927 page-faults # 0.004 K/sec
594,117,596,619 cycles # 2.618 GHz
298,809,437,018 stalled-cycles-frontend # 50.29% frontend cycles idle
<not supported> stalled-cycles-backend
845,746,011,976 instructions # 1.42 insns per cycle
# 0.35 stalled cycles per insn
298,414,546,095 branches # 1315.017 M/sec
95,739,331 branch-misses # 0.03% of all branches

60.032115099 seconds time elapsed

..so about 35% of the instructions executed are branches, and we're getting about 1.42 instructions per cycle and not many stall cycles, so the code is most probably executing inside the instruction cache, which isn't surprising because the test is rather small.

My final experiment was to measure the stall cycles when performing complex long double floating point math operations, again with stress-ng.

$ perf stat stress-ng --cpu 4 --cpu-method clongdouble -t 60 --times
stress-ng: info: [7854] dispatching hogs: 4 cpu
stress-ng: info: [7854] successful run completed in 60.00s
stress-ng: info: [7854] for a 60.00s run time:
stress-ng: info: [7854] 240.00s available CPU time
stress-ng: info: [7854] 225.15s user time ( 93.81%)
stress-ng: info: [7854] 0.44s system time ( 0.18%)
stress-ng: info: [7854] 225.59s total time ( 93.99%)

Performance counter stats for 'stress-ng --cpu 4 --cpu-method clongdouble -t 60 --times':

225578.329426 task-clock (msec) # 3.757 CPUs utilized
38,443 context-switches # 0.170 K/sec
96 cpu-migrations # 0.000 K/sec
845 page-faults # 0.004 K/sec
651,620,307,394 cycles # 2.889 GHz
521,346,311,902 stalled-cycles-frontend # 80.01% frontend cycles idle
<not supported> stalled-cycles-backend
17,079,721,567 instructions # 0.03 insns per cycle
# 30.52 stalled cycles per insn
2,903,757,437 branches # 12.873 M/sec
52,844,177 branch-misses # 1.82% of all branches

60.048819970 seconds time elapsed

The complex math operations take some time to complete, stalling on average over 35 cycles per op.  Instead of using 4 concurrent processes, I re-ran this using just the two CPUs and eliminating 2 of the hyperthreads.  This resulted in 25.4 stall cycles per instruction showing that hyperthreaded processes are stalling because of contention on the floating point units.

Perf stat is an incredibly useful tool for examining performance issues at a very low level.   It is simple to use and yet provides excellent stats to allow one to identify issues and fine tune performance critical code.  Well worth using.

Read more
Colin Ian King

Before I started some analysis on benchmarking various popular file systems on Linux, I was recommended to read "Systems Performance: Enterprise and the Cloud" by Brendan Gregg.

In today's modern server and cloud based systems, the multi-layered complexity can make it hard to pinpoint performance issues and bottlenecks. This book is packed full of useful analysis techniques covering tracing, kernel internals, tools and benchmarking.

Critical to getting a well balanced and tuned system are all the different components, and the book has chapters covering CPU optimisation (cores, threading, caching and interconnects), memory optimisation (virtual memory, paging, swapping, allocators, busses), file system I/O, storage, networking (protocols, sockets, physical connections) and typical issues facing cloud computing.

The book is full of very useful examples and practical instructions on how to drill down and discover performance issues in a system and also includes some real-world case studies too.

It has helped me become even more focused on how to analyse performance issues and on how to do deep system instrumentation to be able to understand where and why performance regressions occur.

All-in-all, a most systematic and well written book that I'd recommend to anyone running large complex servers and cloud computing environments.

Read more
Colin Ian King

even more stress in stress-ng

Over the past few weeks in spare moments I've been adding more stress methods to stress-ng  ready for Ubuntu 15.04 Vivid Vervet.   My intention is to produce a rich set of stress methods that can stress and exercise many facets of a system to force out bugs, catch thermal over-runs and generally torture a kernel in a controlled repeatable manner.

I've also re-structured the tool in several ways to enhance the features and make it easier to maintain.  The cpu stress method has been re-worked to include nearly 40 different ways to stress a processor, covering:

  • Bit manipulation: bitops, crc16, hamming
  • Integer operations: int8, int16, int32, int64, rand
  • Floating point:  long double, double,  float, ln2, hyperbolic, trig
  • Recursion: ackermann, hanoi
  • Computation: correlate, euler, explog, fibonacci, gcd, gray, idct, matrixprod, nsqrt, omega, phi, prime, psi, rgb, sieve, sqrt, zeta
  • Hashing: jenkin, pjw
  • Control flow: jmp, loop
..the intention was to have an eclectic enough mix of CPU exercising tests to cover a wide range of typical operations found in computationally intense software. Use the new --cpu-method option to select a specific CPU stressor, or --cpu-method all to exercise all of them sequentially.

I've also added more generic system stress methods too:
  • bigheap - re-allocs to force OOM killing
  • rename - rename files rapidly
  • utime - update file modification times to create lots of dirty file metadata
  • fstat - rapid fstat'ing of large quantities of files
  • qsort - sorting of large quantities of random data
  • msg - System V message sending/receiving
  • nice - rapid re-nicing processes
  • sigfpe - catch rapid division by zero errors using SIGFPE
  • rdrand - rapid reading of Intel random number generator using the rdrand instruction (Ivybridge and later CPUs only)
Other new options:
  • metrics-brief - this dumps out only the bogo-op metrics that are relevant for just the tests that were run.
  • verify - this will sanity check the stress results per iteration to ensure memory operations and CPU computations are working as expected. Hopefully this will catch errors on a hot machine that has faulty hardware. 
  • sequential - this will run all the stress methods one by one (for a default of 60 seconds each) rather than all in parallel.   Use this with the --timeout option to run all the stress methods sequentially each for a specified amount of time. 
  • Specifying 0 instances of any stress method will run an instance of the stress method on all online CPUs. 
The tool also builds and runs on Debian kFreeBSD and GNU HURD kernels although some stress methods or stress options are not included due to lack of support on these other kernels.
The stress-ng man page gives far more explanation of each stress method and more detailed examples of how to use the tool.

For more details, visit here or read the manual.

Read more
Colin Ian King

a final few more features in stress-ng

While hoping to get a feature complete stress-ng sooner than later, I found a few more ways to fiendishly stress a system.

Stress-ng 0.01.22 will be landing soon in Ubuntu 14.10 with three more stress mechanisms:

  • CPU affinity stressing; this rapidly changes the CPU affinity of the stress processes just to keep the scheduler busy with wasted re-scheduling effort.
  • Timer stressing using the real-time clock; this allows one to generate a large amount of timer interrupts, so it is a useful interrupt saturation test.
  • Directory entry thrashing; this creates and deletes a selectable number of zero length files and hence populates and destroys directory entries.
I have also removed the need to use rand() for random number generation for some of the stress tests and re-used the faster MWC "random" number generator to add in some well known and very simple math operations for CPU stressing.
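
For reference, the classic Marsaglia multiply-with-carry generator looks like this; it is a sketch of the general technique, and stress-ng's own MWC implementation and seeding may well differ:

#include <stdio.h>
#include <stdint.h>

/*
 * George Marsaglia's multiply-with-carry generator: two 16 bit MWC
 * streams combined into a 32 bit value; much cheaper than calling rand().
 */
static uint32_t mwc_z = 362436069;
static uint32_t mwc_w = 521288629;

static uint32_t mwc(void)
{
	mwc_z = 36969 * (mwc_z & 65535) + (mwc_z >> 16);
	mwc_w = 18000 * (mwc_w & 65535) + (mwc_w >> 16);
	return (mwc_z << 16) + mwc_w;
}

int main(void)
{
	int i;

	for (i = 0; i < 8; i++)
		printf("%u\n", mwc());
	return 0;
}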

Stress-ng now has 15 different simple stress mechanisms that exercise CPU, cache, memory, file system, I/O and CPU schedulers.  I could add more tests, but I think this is a large enough set to allow one to thrash a machine and see how well it performs under pressure.

Read more
Colin Ian King

more stress with stress-ng

Since my last article about stress-ng I have been adding a few more stress mechanisms to stress-ng:

  • file locking - exercise file locking with one or more processes (the more processes the better).
  • fallocate - this allocates a 4MB file, sync's, truncates to zero size and syncs repeatedly
  • yield - this loops on sched_yield() to repeatedly relinquish the CPU forcing a high context switch rate when run with multiple yielding processes.
Also, I have added some new features to tweak scheduling, I/O characteristics and memory allocations of the running stress processes:
  • --sched and --sched-prio options to specify the scheduler type and priority
  • --ionice-class and --ionice-level options to tweak I/O niceness
  • --vm-populate option to populate (pre-fault) page tables for a mapping for the --vm stress test.
If I think of other mechanisms to stress the kernel I will add them, but for now, stress-ng is becoming almost feature complete.

Read more