Canonical Voices

Colin Ian King

Recently I was playing around with CPU loading and was trying to estimate the number of compute operations being executed on my machine.  In particular, I was interested to see how many instructions per cycle and stall cycles I was hitting on the more demanding instructions.   Fortunately, perf stat allows one to get detailed processor statistics to measure this.

In my first test, I wanted to see how the Intel rdrand instruction performed with 2 CPUs loaded (each with a hyper-thread):


$ perf stat stress-ng --rdrand 4 -t 60 --times
stress-ng: info: [7762] dispatching hogs: 4 rdrand
stress-ng: info: [7762] successful run completed in 60.00s
stress-ng: info: [7762] for a 60.00s run time:
stress-ng: info: [7762] 240.01s available CPU time
stress-ng: info: [7762] 231.05s user time ( 96.27%)
stress-ng: info: [7762] 0.11s system time ( 0.05%)
stress-ng: info: [7762] 231.16s total time ( 96.31%)

Performance counter stats for 'stress-ng --rdrand 4 -t 60 --times':

231161.945062 task-clock (msec) # 3.852 CPUs utilized
18,450 context-switches # 0.080 K/sec
92 cpu-migrations # 0.000 K/sec
821 page-faults # 0.004 K/sec
667,745,260,420 cycles # 2.889 GHz
646,960,295,083 stalled-cycles-frontend # 96.89% frontend cycles idle
<not supported> stalled-cycles-backend
13,702,533,103 instructions # 0.02 insns per cycle
# 47.21 stalled cycles per insn
6,549,840,185 branches # 28.334 M/sec
2,352,175 branch-misses # 0.04% of all branches

60.006455711 seconds time elapsed

stress-ng's rdrand test performs a 64 bit rdrand read, looping until the data is ready, and does this 32 times in an unrolled loop.  Perf stat shows that each rdrand + loop sequence on average consumes about 47 stall cycles, suggesting that rdrand is probably just waiting for the PRNG block to produce random data.
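
The core of that loop is essentially the following (my sketch using GCC inline assembly, not stress-ng's exact code); rdrand clears the carry flag when no random data is available, so the read is retried until the flag is set:

#include <stdint.h>

/* read one 64 bit random value, retrying until the
   carry flag indicates the data is valid */
static inline uint64_t rdrand64(void)
{
    uint64_t r;

    __asm__ __volatile__(
        "1: rdrand %0\n"
        "   jnc 1b\n"
        : "=r" (r)
        :
        : "cc");
    return r;
}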

My next experiment was to run the stress-ng ackermann stressor; this performs a lot of recursion, so one should expect to see a large amount of branching.
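
The classic two-argument Ackermann function at the heart of this stressor is tiny but explosively recursive; a minimal sketch (stress-ng's exact implementation may differ):

#include <stdint.h>

/* Ackermann function: even small m, n values generate
   an enormous number of recursive calls */
static uint32_t ackermann(const uint32_t m, const uint32_t n)
{
    if (m == 0)
        return n + 1;
    if (n == 0)
        return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}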

$ perf stat stress-ng --cpu 4 --cpu-method ackermann -t 60 --times
stress-ng: info: [7796] dispatching hogs: 4 cpu
stress-ng: info: [7796] successful run completed in 60.03s
stress-ng: info: [7796] for a 60.03s run time:
stress-ng: info: [7796] 240.12s available CPU time
stress-ng: info: [7796] 226.69s user time ( 94.41%)
stress-ng: info: [7796] 0.26s system time ( 0.11%)
stress-ng: info: [7796] 226.95s total time ( 94.52%)

Performance counter stats for 'stress-ng --cpu 4 --cpu-method ackermann -t 60 --times':

226928.278602 task-clock (msec) # 3.780 CPUs utilized
21,752 context-switches # 0.096 K/sec
127 cpu-migrations # 0.001 K/sec
927 page-faults # 0.004 K/sec
594,117,596,619 cycles # 2.618 GHz
298,809,437,018 stalled-cycles-frontend # 50.29% frontend cycles idle
<not supported> stalled-cycles-backend
845,746,011,976 instructions # 1.42 insns per cycle
# 0.35 stalled cycles per insn
298,414,546,095 branches # 1315.017 M/sec
95,739,331 branch-misses # 0.03% of all branches

60.032115099 seconds time elapsed

..so about 35% of the instructions executed are branches and we're getting about 1.42 instructions per cycle with few stall cycles, so the code is most probably executing inside the instruction cache, which isn't surprising because the test is rather small.

My final experiment was to measure the stall cycles when performing complex long double floating point math operations, again with stress-ng.

$ perf stat stress-ng --cpu 4 --cpu-method clongdouble -t 60 --times
stress-ng: info: [7854] dispatching hogs: 4 cpu
stress-ng: info: [7854] successful run completed in 60.00s
stress-ng: info: [7854] for a 60.00s run time:
stress-ng: info: [7854] 240.00s available CPU time
stress-ng: info: [7854] 225.15s user time ( 93.81%)
stress-ng: info: [7854] 0.44s system time ( 0.18%)
stress-ng: info: [7854] 225.59s total time ( 93.99%)

Performance counter stats for 'stress-ng --cpu 4 --cpu-method clongdouble -t 60 --times':

225578.329426 task-clock (msec) # 3.757 CPUs utilized
38,443 context-switches # 0.170 K/sec
96 cpu-migrations # 0.000 K/sec
845 page-faults # 0.004 K/sec
651,620,307,394 cycles # 2.889 GHz
521,346,311,902 stalled-cycles-frontend # 80.01% frontend cycles idle
<not supported> stalled-cycles-backend
17,079,721,567 instructions # 0.03 insns per cycle
# 30.52 stalled cycles per insn
2,903,757,437 branches # 12.873 M/sec
52,844,177 branch-misses # 1.82% of all branches

60.048819970 seconds time elapsed

The complex math operations take some time to complete, stalling on average over 30 cycles per op.  Instead of using 4 concurrent processes, I re-ran this using just 2 CPUs, eliminating 2 of the hyper-threads.  This resulted in 25.4 stall cycles per instruction, showing that the hyper-threaded processes were stalling because of contention on the floating point units.
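
For illustration, the kind of arithmetic being exercised is along these lines (my sketch, not stress-ng's exact code); each operation expands into several long double multiplies, adds and divides that keep the floating point units busy:

#include <complex.h>

/* one "op": a complex long double multiply and divide */
static long double complex cld_op(
    const long double complex a,
    const long double complex b)
{
    return (a * b) / (a + b);
}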

Perf stat is an incredibly useful tool for examining performance issues at a very low level.   It is simple to use and yet provides excellent stats to allow one to identify issues and fine tune performance critical code.  Well worth using.

Read more
Colin Ian King

Before I started some analysis on benchmarking various popular file systems on Linux, I was recommended to read "Systems Performance: Enterprise and the Cloud" by Brendan Gregg.

In today's modern server and cloud based systems, the multi-layered complexity can make it hard to pinpoint performance issues and bottlenecks. This book is packed full of useful analysis techniques covering tracing, kernel internals, tools and benchmarking.

Getting a well balanced and tuned system depends on all the different components, and the book has chapters covering CPU optimisation (cores, threading, caching and interconnects), memory optimisation (virtual memory, paging, swapping, allocators, buses), file system I/O, storage, networking (protocols, sockets, physical connections) and typical issues facing cloud computing.

The book is full of very useful examples and practical instructions on how to drill down and discover performance issues in a system and also includes some real-world case studies too.

It has helped me become even more focused on how to analyse performance issues and consider how to do deep system instrumentation to be able to understand where and why performance regressions occur.

All-in-all, a most systematic and well written book that I'd recommend to anyone running large complex servers and cloud computing environments.

Read more
Colin Ian King

even more stress in stress-ng

Over the past few weeks in spare moments I've been adding more stress methods to stress-ng  ready for Ubuntu 15.04 Vivid Vervet.   My intention is to produce a rich set of stress methods that can stress and exercise many facets of a system to force out bugs, catch thermal over-runs and generally torture a kernel in a controlled repeatable manner.

I've also re-structured the tool in several ways to enhance the features and make it easier to maintain.  The cpu stress method has been re-worked to include nearly 40 different ways to stress a processor, covering:

  • Bit manipulation: bitops, crc16, hamming
  • Integer operations: int8, int16, int32, int64, rand
  • Floating point:  long double, double,  float, ln2, hyperbolic, trig
  • Recursion: ackermann, hanoi
  • Computation: correlate, euler, explog, fibonacci, gcd, gray, idct, matrixprod, nsqrt, omega, phi, prime, psi, rgb, sieve, sqrt, zeta
  • Hashing: jenkin, pjw
  • Control flow: jmp, loop
..the intention was to have an eclectic mix of CPU exercising tests that cover a wide range of typical operations found in computationally intense software.   Use the new --cpu-method option to select a specific CPU stressor, or --cpu-method all to exercise all of them sequentially.
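
For example, to run just the fibonacci stressor on 4 CPUs for 60 seconds (an illustrative invocation):

 stress-ng --cpu 4 --cpu-method fibonacci -t 60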

I've also added more generic system stress methods too:
  • bigheap - re-allocs to force OOM killing
  • rename - rename files rapidly
  • utime - update file modification times to create lots of dirty file metadata
  • fstat - rapid fstat'ing of large quantities of files
  • qsort - sorting of large quantities of random data
  • msg - System V message sending/receiving
  • nice - rapid re-nicing processes
  • sigfpe - catch rapid division by zero errors using SIGFPE
  • rdrand - rapid reading of Intel random number generator using the rdrand instruction (Ivybridge and later CPUs only)
Other new options:
  • metrics-brief - this dumps out only the bogo-op metrics that are relevant to the tests that were run.
  • verify - this will sanity check the stress results per iteration to ensure memory operations and CPU computations are working as expected. Hopefully this will catch errors on a hot machine where the hardware is failing.
  • sequential - this will run all the stress methods one by one (for a default of 60 seconds each) rather than all in parallel.   Use this with the --timeout option to run all the stress methods sequentially, each for a specified amount of time (see the example below).
  • Specifying 0 instances of any stress method will run an instance of the stress method on all online CPUs. 
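
Putting some of these together, a sequential, verified run with brief metrics might look like this (an illustrative invocation based on the options above):

 stress-ng --sequential 4 --timeout 30 --verify --metrics-brief
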
The tool also builds and runs on Debian kFreeBSD and GNU Hurd kernels, although some stress methods or stress options are not included due to lack of support on these kernels.
The stress-ng man page gives far more explanation of each stress method and more detailed examples of how to use the tool.

For more details, visit here or read the manual.

Read more
Colin Ian King

a final few more features in stress-ng

While hoping to get a feature complete stress-ng sooner rather than later, I found a few more ways to fiendishly stress a system.

Stress-ng 0.01.22 will be landing soon in Ubuntu 14.10 with three more stress mechanisms:

  • CPU affinity stressing; this rapidly changes the CPU affinity of the stress processes just to keep the scheduler busy with wasted effort.
  • Timer stressing using the real-time clock; this allows one to generate a large amount of timer interrupts, so it is a useful interrupt saturation test.
  • Directory entry thrashing; this creates and deletes a selectable number of zero length files and hence populates and destroys directory entries.
I have also removed the need to use rand() for random number generation for some of the stress tests and re-used the faster MWC "random" number generator to add in some well known and very simple math operations for CPU stressing.
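
The MWC generator is essentially Marsaglia's classic multiply-with-carry; a minimal sketch of the idea (stress-ng's implementation may differ in seeding and detail):

#include <stdint.h>

/* Marsaglia multiply-with-carry: two 16 bit MWC generators
   combined to produce a fast 32 bit pseudo-random value */
static uint32_t mwc_z = 362436069;
static uint32_t mwc_w = 521288629;

static uint32_t mwc(void)
{
    mwc_z = 36969 * (mwc_z & 65535) + (mwc_z >> 16);
    mwc_w = 18000 * (mwc_w & 65535) + (mwc_w >> 16);
    return (mwc_z << 16) + mwc_w;
}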

Stress-ng now has 15 different simple stress mechanisms that exercise CPU, cache, memory, file system, I/O and CPU schedulers.  I could add more tests, but I think this is a large enough set to allow one to thrash a machine and see how well it performs under pressure.

Read more
Colin Ian King

more stress with stress-ng

Since my last article about stress-ng I have been adding a few more stress mechanisms to stress-ng:

  • file locking - exercise file locking with one or more processes (the more processes the better).
  • fallocate - this allocates a 4MB file, syncs, truncates to zero size and syncs repeatedly
  • yield - this loops on sched_yield() to repeatedly relinquish the CPU forcing a high context switch rate when run with multiple yielding processes.
Also, I have added some new features to tweak scheduling, I/O characteristics and memory allocations of the running stress processes (see the example after this list):
  • --sched and --sched-prio options to specify the scheduler type and priority
  • --ionice-class and --ionice-level options to tweak I/O niceness
  • --vm-populate option to populate (pre-fault) page tables for a mapping for the --vm stress test.
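
For instance, to run the vm stressor with pre-faulted pages, a batch scheduling policy and idle I/O priority (an illustrative invocation):

 stress-ng --vm 2 --vm-populate --sched batch --ionice-class idle -t 60
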
If I think of other mechanisms to stress the kernel I will add them, but for now, stress-ng is becoming almost feature complete.

Read more
Colin Ian King

Linaro's idlestat is another useful tool in the arsenal of CPU monitoring utilities.  Idlestat monitors and captures CPU C-state and P-state transitions using the kernel Ftrace tracer and outputs statistics based on entering/exiting each state for each CPU.  Idlestat also captures IRQ activity and which IRQs caused a CPU to exit an idle state - knowing why a processor came out of a deep C state is always very useful when trying to diagnose power consumption issues.

Using idlestat is easy; to capture 20 seconds of activity into a log file called example.log, run:

 sudo idlestat --trace -f example.log -t 20    
..and this will display the per CPU C-state and P-state and IRQ statistics for that run.

One can also take the saved log file and parse it again to calculate the statistics using:
 idlestat --import -f example.log  

One can get the source from here and I've packaged version 0.3 (plus a bunch of minor fixes that will land in 0.4) for Ubuntu 14.10 Utopic Unicorn.

Read more
Colin Ian King

stress-ng: an updated system stress test tool

Recently added to Ubuntu 14.10 is stress-ng, a simple tool designed to stress various components of a Linux system.   stress-ng is a re-implementation of the original stress tool written by Amos  Waterland and adds various new ways to exercise a computer as well as a very simple "bogo-operation" set of metrics for each stress method.

stress-ng currently contains the following methods to exercise the machine:

  • CPU compute - just lots of sqrt() operations on pseudo-random values. One can also specify the % loading of the CPUs
  • Cache thrashing, a naive cache read/write exerciser
  • Drive stress by writing and removing many temporary files
  • Process creation and termination, just lots of fork() + exit() calls
  • I/O syncs, just forcing lots of sync() calls
  • VM stress via mmap(), memory write and munmap()
  • Pipe I/O, large pipe writes and reads that exercise pipe, copying and context switching
  • Socket stressing, much like the pipe I/O test but using sockets
  • Context switching between a pair of producer and consumer processes
Many of the above stress methods have additional configuration options.  Each stress method can be run by one or more child processes.

The --metrics option dumps the number of operations performed by each stress method, aka "bogo ops" - "bogo" because they are a rough and unscientific metric.  One can specify how long to run a test either by test duration in seconds or by the number of bogo ops.
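
For example, an illustrative run that reports the bogo op metrics on completion:

 stress-ng --cpu 4 --io 2 --timeout 60 --metrics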

I've tried to make stress-ng compatible with the older stress tool, but note that it is not guaranteed to produce identical results as the common test methods between the two tools have been implemented differently.

Stress-ng has been useful for helping me measure different power consuming loads.  It has also been useful when testing various thermald optimisation tweaks on one of my older machines.

For more information, consult the stress-ng manual page.  Be warned, this tool can make your system get seriously busy and warm!

Read more
Colin Ian King

Back in April I wrote smemstat to dump out the per process shared memory usage based on the memory mapping data from /proc/$pid/smaps.   I tried to make smemstat as compact as possible, with minimised changes in heap size to reduce the impact on the total system shared memory statistics.

smemstat reports the memory utilised per process, taking into consideration pages that are shared with other processes. So, if two processes share 64K of memory, each process will report that memory as 32K each.

smemstat has two modes of operation: "dump all current memory stats" and "dump periodic changes in memory stats".  The former is useful for a single snapshot view of memory utilisation, whereas the latter is useful for observing memory size changes over time. Processes that exit, or new processes that share memory with other processes, will change the reported amount of memory used by the processes, because this memory is shared amongst a changed number of processes.

There are various options to smemstat, so consult the man page for more details.  One useful option is -o that collects the smemstat statistics into a JSON formatted output file. This can be useful for memory based tests as the structured JSON data can be easily parsed and analysed.
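
For example, a hypothetical invocation that samples every 10 seconds, 5 times, writing the statistics to a JSON file (the delay and count arguments are my assumption; check the man page for the exact syntax):

 sudo smemstat -o memory-stats.json 10 5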

smemstat  has landed in Ubuntu Utopic 14.10 and is also available in Debian. Examples of smemstat can be found here. I hope it proves to be a useful utility.

Read more
Colin Ian King

In the last 6 months the Firmware Test Suite (fwts) has seen a lot of development and bug fixing activity in preparation for the Ubuntu 14.04 LTS Trusty release.  It is timely to give a brief overview of new features and improvements that have landed in this busy development cycle:

UEFI uefidump, additional support for:

  • KEK, KEKDefault, PK, PKDefault global variables scan
  • db, dbx, dbt variables scan
  • messaging device path: add Fibre Channel Ex subtype-21, SATA subtype-18, USB WWID subtype-16, VLAN subtype-20, Device Logical Unit subtype-17, SAS Ex subtype-22, iSCSI subtype-19, NVM Express namespace subtype-23, Media Protocol subtype-5, PIWG Firmware File subtype-6, PIWG Firmware Volume subtype-7 and extend the Messaging Device Path type Vendor subtype-10.
ACPI related:
  • Update to ACPICA version 20140325 
  • Add S3 hybrid suspend / resume support with new --s3-hybrid option 
  • Improved reporting of ACPI evaluation errors
  • New _PLD, _CRS and _PRS dump utilities
  • New General Purpose Events (GPE) dump utility
  • --disassemble-aml option accepts an output directory argument
  • Add DBG2, DBGP, SPCR and MCHI tables to acpidump utility 
  • Add -R, --rsdp option to specify the RSDP address
  • _IFT, _SRV, _PIC, _UDP, _UPP, _PMM, _MSG, _GAI, _CID, _CDM and _CBA checks added to method test
SMBIOS:
  • dmicheck: add more checks for invalid DMI fields
Architecture related:
  • Support for i386, amd64, armel, armhf, aarch64, ppc64 and ppc64el.  fwts support for aarch64 was a notable achievement
Kernel Log Scanning:
  • Sync klog scanning with 3.13 kernel error messages
Miscellaneous:
  • Remove unused LaunchPad bug tagging
  • Add Ivybridge and Haswell MSRs to msr test
  • Check CPU maximum frequencies
The fwts regression tests have been incorporated into the fwts repository and can be run with "make check". These tests are automatically run at build time to catch regressions.  fwts is now being regularly checked with static code analysis tools smatch, cppcheck and Coverity Scan and this has helped find memory leaks and numerous corner case bugs.  We also exercise fwts with a database of ACPI tables from real hardware and synthetically generated broken tables to check for regressions. 

Contributors to fwts in the current release cycle are (in alphabetical order):  Alex Hung, Colin King, Ivan Hu, Jeffrey Bastian, Keng-Yu Lin, Matt Fleming.  Also, thanks to Naresh Bhat for testing and feedback for the aarch64 port and to Robert Moore for the on-going work with ACPICA.

As ever, all contributions are welcome, including bug reports and feature requests.  Visit the fwts wiki page for more details.

Read more
Colin Ian King

One of my on-going projects is to try to reduce system activity where possible and shave off wasted power consumption.   One of the more interesting problems is when very short lived processes are spawned off and die; traditional tools such as ps and top sometimes don't catch that activity.   Over the last weekend I wrote the bulk of the forkstat tool to track down these processes.

Forkstat uses the kernel proc connector interface to detect process activity.  Proc connector allows forkstat to receive notifications of process events such as fork, exec, exit, core dump and changing the process name in the comm field over a socket connection.

By default, forkstat will just log fork, exec and exit events, but the -e option allows one to specify one or more of the fork, exec, exit, core dump or comm events.  When a fork event occurs, forkstat will log the PID and process name of the parent and child, allowing one to easily identify where processes are originating.    Where possible, forkstat attempts to track the lifetime of a process and will log the duration of a process when it exits (note: this is not an estimate of the CPU time used).

The -S option to forkstat will dump out a statistical summary of activity.  This is useful for identifying the frequency of process activity and hence the top offenders.
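
For example, to watch fork, exec and exit events and dump a summary on exit (an illustrative invocation):

 sudo forkstat -e fork,exec,exit -S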

Forkstat is now available in Ubuntu 14.04 Trusty Tahr LTS.  To install forkstat use:

 sudo apt-get install forkstat  

For more information on the tool and examples of the forkstat output, visit the forkstat quick start page.

Read more
Colin Ian King

Keeping cool with thermald

The push for higher performance desktops and laptops has inevitably led to higher power dissipation.  Laptops have also shrunk in size, leading to increasing problems with removing excess heat and thermal overrun on heavily loaded high end machines.

Intel's thermald prevents machines from overheating and has been recently introduced in the Ubuntu Trusty 14.04 LTS release.  Thermald actively monitors thermal sensors and will attempt to keep the hardware cool by modifying a variety of cooling controls:
 

  • Active or passive cooling devices as presented in sysfs
  • The Running Average Power Limit (RAPL) driver (Sandybridge upwards)
  • The Intel P-state CPU frequency driver (Sandybridge upwards)
  • The Intel PowerClamp driver

Thermald has been found to be especially useful when using the Intel P-state CPU frequency scaling driver since this can push the CPU harder than other CPU frequency scaling drivers.

Over the past several weeks I've been working with Intel to shake out some final bugs and get thermald included into Ubuntu 14.04 LTS, so kudos to Srinivas Pandruvada for handling my patches and also providing a lot of timely fixes too.

By default, thermald works without any need for configuration, however, if one has incorrect thermal trip settings or other firmware related thermal zone bugs one can write one's own thermald configuration. 

For further details, consult the Ubuntu thermald wiki page.

Read more
Colin Ian King

Finding small bugs

Over the past few months I've been using static code analysis tools such as cppcheck, Coverity Scan and also smatch on various open source projects.   I've generally found that most open source code is fairly well written, however, most projects suffer a common pattern of bugs on the error handling paths.  Typically, these bugs fail to free up memory or free memory incorrectly.  Other frequent bugs are uninitialised variables and overly complex code paths that introduce subtle bugs when certain rare conditions occur.  Most of these bugs are small and very rarely hit; some just silently do things wrong while others can potentially trigger segmentation faults.

The --force option in cppcheck to force the checking of every build configuration has been very useful in finding code paths that are rarely built, executed or tested and hence are likely to contain bugs.

I'm coming to the conclusion that whenever I have to look at some new code I should take 5 minutes or so throwing it at various static code analysis tools to see what pops out, and then be a good citizen by fixing these issues and sending the fixes upstream. It's not too much effort and helps reduce some of those more obscure bugs that rarely bite but do linger around in code.

Read more
Colin Ian King

Over the past months I have been using static code analysis tools such as smatch and Coverity Scan on various open source projects that I am involved with.  These, combined with gcc's -Wall -Wextra, have proved useful in tracking down and eliminating various bugs.

Recently I stumbled on cppcheck and gave it a spin on several larger projects.  One of the cppcheck project's aims is to find errors that the compiler won't spot while keeping the number of false positives to a minimum.

cppcheck is very easy to use; the default settings just work out of the box. However, for extra checking I enabled the --force option to check all configurations and --enable=all to be totally thorough and pedantic.
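
The resulting invocation is simply this, run from the top of the project's source tree:

 cppcheck --force --enable=all .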

The --enable option is especially useful. It allows one to select different types of checking, for example, coding style, execution performance, portability, unused functions and missing include files.

Even though my code has been through smatch and Coverity Scan, cppcheck still managed to find a few issues using --enable=all:

1. unused functions
2. a potential memory leak with realloc(), for example:

buf = realloc(buf, new_size);
if (!buf)
     return NULL;

if realloc() fails, buf can be leaked.  A potential fix is:

tmp = realloc(buf, new_size);
if (!tmp) {
     free(buf);
     return NULL;
}
buf = tmp;

3. some potential sscanf buffer overflows
4. some coding style improvements, for example, local auto variables could be moved to a deeper scope

So cppcheck worked well for me.  I recommend referring to the cppcheck project wiki to check out the features and then subjecting your code to it and seeing if it can find any bugs.

Read more
Colin Ian King

Infinite Snowflake

It's close to the Christmas Holiday season, so I thought I would spend some time writing something seasonal.   My son, who is rather mathematically minded, was asking me about finding recurring digits in reciprocals of primes, for example 1/7, 1/11, 1/13 etc., and somehow this got me looking at the binary digits of Pi and, after a little more browsing around Wikipedia, onto the Thue-Morse binary sequence.

The Thue-Morse binary sequence starts with zero and is built by successively appending the boolean complement of the sequence so far.   One interesting feature is that a turtle graphics program can be written where the turtle is controlled by feeding it the successive digits from the sequence so that:

  • A zero moves the cursor forward by a step
  • A one turns the cursor anti-clockwise by 60 degrees (or pi/3 radians)
 ..and a Koch snowflake is drawn.  So, I've implemented this in C and obfuscated it a little just for fun:

#include<stdio.h>
#include <math.h>
#define Q (1<<9)
#define M /**/for
#define E(b,a) {\
J q=1^(X+b)[3+Q*\
a];printf("%c%c%\
c",q,q,q);/*//*/}
int /*/*/ typedef
#define W double
#define z (Q<<5)

J;J X[3+(Q *Q)]
,t[z ] ;J main(
){ W o= Q >>1,V=
o/5, a=0; M (1 [X
]=1; X[1] <z;X
[1 ] *=2) for(
X[2] =0;X [2]<
X[1] ;X[2 ]++,
t[X[ 2]+X [1]]
=1^t [X[2 ]]);
for( 0[X] =0;X
[0]< 3;0[ X]++
)for (X[2] =0;X
[2]< z;X[ 2]++
)if( t[X[ 2]])
a-=( M_PI /3.0
);else{o+= cos(a ); V+=

sin(a);X[3+( int)o+Q*(int)V]|=1;}printf(
"P6 %d %d 1 ",Q,Q);M(1[X] =0;X[1]<Q;X[1]
++)M(2[X]=0;X[ 2]<Q;X[2]++)E(2[X],X[1]);
return 0;} /* Colin Ian King 2013 */

To build and run:
gcc koch-binary.c -lm -o koch-binary 
./koch-binary | ppmtojpeg > koch-binary.jpg
And the result is a koch-snowflake:


The original source code can be found here. It contains a 512 x 512 turtle graphics plotter, a Thue-Morse binary sequence generator and a PPM output backend. Anyway, have fun figuring out how it works (it is only mildly obfuscated) and have a great Christmas!

Read more
Colin Ian King

Detecting System Management Interrupts

System Management Mode (SMM) is a special operating mode on x86 processors that temporarily jumps from normal execution and executes specialised firmware code at a high privilege level before returning back.  SMM is entered via the System Management Interrupt (SMI) and is intended to work transparently to the operating system.

For example, SMM can be used to handle shutdown if the CPU temperature is too high, perform transparent fan control, handle special system events (e.g. chipset errors), emulate hardware (non-existent or buggy hardware) and a lot more besides.

SMM in theory cannot be disabled by the operating system and has been known to interfere with the operating system even though it is meant to be transparent.  SMIs steal CPU cycles from the system - CPU state has to be stored and restored and there are side effects because of the flushing of the write back cache.  This CPU cycle stealing can impact real time behaviour and in the past it has been hard to determine how frequently SMIs occur and hence how much potential disruption they bring to a system.

When the CPU enters SMM the output pin SMIACT# is asserted (and all further memory cycles are redirected to protected memory for SMM).  Hence one could use a logic analyser on SMIACT# to count SMIs.   An alternative is to have a non-interruptible thread on a CPU checking for time skips by constantly monitoring the Time Stamp Counter (TSC), but this is a CPU-expensive operation.

Fortunately, modern Intel CPUs (such as Ivybridge and Haswell) have a special SMI counter in a Model Specific Register. MSR_SMI_COUNT (0x00000034) is incremented when SMIs occur, allowing easy detection of SMIs.

As a quick test, I hacked up smistat that polls MSR_SMI_COUNT every second.  For example, pressing the backlight brightness keys on my Lenovo laptop bumps the counter and this is easy to see with smistat.   So this MSR provides some indication of the frequency of SMIs, however, it of course cannot  inform us how many CPU cycles are stolen in SMM.   Now that would be a very useful MSR to add to the next revision of the Intel silicon...
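
The gist of such a tool is just polling the MSR via the kernel msr driver; a minimal sketch (my illustration, not smistat's actual source; assumes root and that the msr module has been loaded with modprobe msr):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <fcntl.h>
#include <unistd.h>

#define MSR_SMI_COUNT   (0x34)  /* SMI counter MSR */

int main(void)
{
    uint64_t count;
    /* per-CPU MSR device exposed by the msr kernel module */
    int fd = open("/dev/cpu/0/msr", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/cpu/0/msr");
        return 1;
    }
    for (;;) {
        if (pread(fd, &count, sizeof(count), MSR_SMI_COUNT) !=
            (ssize_t)sizeof(count)) {
            perror("pread");
            break;
        }
        printf("SMIs: %" PRIu64 "\n", count);
        sleep(1);
    }
    (void)close(fd);
    return 0;
}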

Read more
Colin Ian King

fwts 13.12.00 released

Version 13.12.00 of the Firmware Test Suite has been released today.  This latest release includes some of the following new features:

  • ACPI method test, add _IPC, _IFT, _SRV tests
  • Update to version 20131115 of ACPICA
  • Check for thermal overrun messages in klog test
  • Test for CPU frequency maximum limits on cpufreq test
  • Add Ivybridge and Haswell CPU specific MSR checks
  • UEFI variable dump:
    • add more subtype support on acpi device path type  
    • handle the End of Hardware Device Path sub-type 0x01
    • add Fibre Channel Ex subtype-21 support on messaging device path type
    • add SATA subtype-18 support on messaging device path type
    • add USB WWID subtype-16 support on messaging device path type
    • add VLAN subtype-20 support on messaging device path type
    • add Device Logical Unit subtype-17 support on messaging device path type
    • add SAS Ex subtype-22 support on messaging device path type
    • add the iSCSI subtype-19 support on messaging device path type
    • add the NVM Express namespace subtype-23 support on messaging device path type
..and also the usual bug fixes. 

For more details, please consult the release notes.
 
The source tarball is available at:  http://fwts.ubuntu.com/release/fwts-V13.12.00.tar.gz and can be cloned from the repository: git://kernel.ubuntu.com/hwe/fwts.git

The fwts 13.12.00 .debs are available in the firmware testing PPA and will soon appear in Ubuntu Trusty 14.04.

And thanks to Alex Hung, Ivan Hu and Keng-Yu Lin for their contributions and keen eye for reviewing the numerous patches, and also to Robert Moore for the on-going work on ACPICA.

Read more
Colin Ian King

Digging into TRIM

Enabling the discard mount option to perform auto trimming on an SSD has some downsides.  The non-queued trim command can incur a large performance penalty on deletes; the driver needs to finish pending operations, then perform the trim command before it can resume normal operation again.  Queued TRIM support has been added to the 3.12 kernel, which will help improve this issue, however, one requires hardware that supports the Serial ATA revision 3.1 Queued Trim command.

Trimming can trigger internal garbage collection on some firmware, causing a performance impact when one least expects it, so enabling TRIM via the discard mount flag may be suboptimal.  It is therefore better, in terms of minimising the performance impact, to batch up discards and run fstrim periodically when a machine is idle (this is also more power efficient).
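
For example, a weekly trim of the root file system could be done with a small cron script (an illustrative sketch; adjust the mount point to suit):

#!/bin/sh
# illustrative /etc/cron.weekly/fstrim script: discard unused
# blocks on the root file system while the machine is quiet
/sbin/fstrim /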

For the curious, how much is being discarded when one runs fstrim? Fortunately the kernel has a tracepoint added to ext4_trim_extent and this enables one to observe the block device, allocation group, starting block of the free extent in the allocation group and number of blocks to TRIM.

With debugfs mounted one can easily observe this using:

 echo 1 | sudo tee /sys/kernel/debug/tracing/events/ext4/ext4_trim_extent/enable  
sudo cat /sys/kernel/debug/tracing/trace_pipe | grep ext4_trim_extent

then invoke fstrim from another terminal and one will observe:

fstrim-4909  [003] ....  6585.463841: ext4_trim_extent: dev 8,3 group 1378, start 2633, len 829
fstrim-4909  [003] ....  6585.474926: ext4_trim_extent: dev 8,3 group 1378, start 3463, len 187
fstrim-4909  [003] ....  6585.484014: ext4_trim_extent: dev 8,3 group 1378, start 3652, len 474
...

The deeper question is therefore how frequently one should run fstrim?  It is hard to say exactly, since it is use-case dependent.  However, anecdotal evidence seems to suggest running fstrim either daily or weekly.

Read more
Colin Ian King

Finding spelling mistakes in source code

There are quite a few useful open source utilities around for finding spelling mistakes in source code.  Most recently I've been using codespell which works well for me.

Codespell is regularly being updated and comes with a dictionary originally derived from Wikipedia. I normally pull the latest updates from the repository before running it against my source code.

Fetching it and using it is relatively simple:

 git clone git://git.profusion.mobi/users/lucas/codespell  
cd your-project-dir
/path-to-codespell/codespell.py
..and it will find common spelling mistakes in the entire project directory. Easy!

Read more
Colin Ian King

..and now something seasonal

The Christmas holidays are almost here and to celebrate I've mildly obfuscated some C that prints the lyrics to a traditional English carol. The source code can be found here.

#include <stdio.h>
#include <stdbool.h>
#include <malloc.h>
#define H malloc (1237) ;
#define c while
#define W >> /* fallen right tree */
#define w << /* fallen left tree */
#define B(wish,times) ((_ &wish) times 2)
#define L {o Q=q;p(3[Z]),P(false[Z],q),\
p(q>2?"th":N),p(Z[2]);c(true+Q)p(q|Q?Q?\
N:4[Z]:"a "),P(1[Z],Q),p(Q>1?", ":N),--\
Q;p(".\n\n"),++q;}
char typedef o;void typedef d

;
o*N
="";o
q;o*O(o
*p){o
*P,_;o*
f=P=H;c(_
=*p)_^=1,*f
++=B(1,w)|B(4
,W)|B(2,w)|
B(8,W)|(_&240
),++p,*f=false;
return P;}d p(o*f
){fputs(f,stdout);}
d P(o*s,o o){c( o
--){c(122-*s)s++; s
++;}c(122-*s)putchar(
*s++);} o main(void){o*
Z[]={O("hgy}p{}dmnj`{pcg"
"y`{hny{hgh{}gs{}dxdj{"
"dglc{jgj{pdj{dbdxdj{p|d"
"bh{"),O("qeypyg`ld!gj!e!q"
"dey!pydd{p|n!ptypbd!`nxd}{p"
"cydd!Hydjmc!Cdj}{hnty!mnbbw!i"
"gy`"
"}{h"
"gxd"
"!ln"
"b`d"
"j!ygjl}{}gs!ldd}"
"d&e&bewgjl{}dxdj"
"!}|ej}&e&}|gff"
"gjl{dglcp!feg`"
"}&e&fgbogjl{jgjd!be`gd}!`ejmgjl{pdj!bny`}&e&bdeqgjl{"
"dbdxdj!qgqdy}&qgqgjl{p|dbxd!`ytffdy}!`ytffgjl{") ,O (
"!`ew!nh!Mcyg}pfe}!fw!pytd!bnxd!lexd!pn!fd!"),O("Nj!p"
"cd!"),O("!ej`!e!")};L L L L L L L L L L L L /*Xmas*/}

Read more
Colin Ian King

Making software successful

What makes good software into excellent, usable software?  In my opinion, software that fails to be excellent generally lacks some key features:

Documentation

A README file in the source is not sufficient for the end-user.  Good quality documentation must explain how to use all the key features in a coherent manner.  For example, a command line tool should be at least shipped with a man page, and this must cover all the options in the tool and explain clearly how each option works.

Documentation must be kept up to date with the features in the software.  If a feature is added to a program and the documentation has not been updated then the developer has been careless, and this is a bug. Out of date documentation that provides wrong and useless information makes a user lose faith in the tool.

Try to include worked examples whenever possible to provide clarity. A user may look at the man page, see a gazillion options and get lost in how to use a tool for the simple use-case.  So provide an example on how to get a user up and running with the simple use-case and also expand on this for the more complex use cases.  Don't leave the user guessing.

If there are known bugs, document them. At least be honest with the user and let them know that there are issues that are being worked on rather than let them discover it by chance.

Expect quality when developing

Don't ship code that is fragile and easily breaks. That puts users off and they will never come back to it.  There really is no excuse for shipping poor quality code when there are so many useful tools that can be used to improve quality with little or no extra cost.

For example, develop new applications with pedantic options enabled, such as gcc's -Wall -Wextra options.   And check the code for memory leaks with tools such as valgrind and gcc's mudflap build flags.
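
A typical build-and-check cycle might look like this (illustrative commands; mytool is a stand-in for your own program):

 gcc -Wall -Wextra -g -o mytool mytool.c
 valgrind --leak-check=full ./mytool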

Use static code analysis with tools such as smatch or Coverity Scan.  Ensure code is reviewed frequently and pedantically.  Humans make mistakes, so the more eyes that can review code the better.   Good reviewers impart their wisdom and make the process a valuable learning process.

Be vigilant with all buffer allocations and inputs.  Use gcc's -fstack-protector to check for buffer overflows and tools such as ElectricFence.  Just set the bar high and don't cut corners.

Write tests.  Test early and often. Fix bugs before adding more functionality.  Fixing bugs can be harder work than adding new shiny features which is why one needs to resist this temptation and focus on putting fixes first over features.

Ensure any input data can't break the application.  In a previous role I analysed an MPEG2 decoder and manually examined every possible input pattern in the bitstream parsing to see if I could break the parser with bad data.  It took time, it required considerable effort, but it ended up being a solid piece of code.  Unsanitised input can break code, so be careful and meticulous.

Sane Defaults

Programs have many options, and the default modus operandi of any tool should be sane and reasonable.   If the user is having to specify lots of extra command line options to make it do the simplest of tasks then they will get frustrated and give up. 

I've used tools where one has to set a whole bunch of obtuse environment variables just to make them do the simplest thing.  Avoid this. If one has to do this, then please document it early on rather than burying it at the end of the documentation.

Don't break backward compatibility

Adding new features is a great way to continually improve a tool but don't break backward compatibility.  A command line option that changes meaning or disappears between releases is really not helpful.     Sometimes code can no longer be supported and a re-write is required.  However, avoid rolling out replacement code that drops useful and well used features that the user expects.

The list could go on, but I believe that if one strives for quality one will achieve it.  Let's strive to make software not just great, but excellent!

Read more