Canonical Voices

Posts tagged with 'kernel'

Colin Ian King

Static analysis on the Linux kernel

There are a wealth of powerful static analysis tools available nowadays for analyzing C source code. These tools help to find bugs in code by analyzing the source without actually having to execute it.  Over the past year or so I have been running the following static analysis tools on linux-next every weekday to find kernel bugs:

Typically each tool can take 10-25+ hours of compute time to analyze the kernel source; fortunately I have a large server at hand to do this.  The automated analysis creates an Ubuntu server VM, installs the required static analysis tools, clones linux-next and then runs the analysis.  The VMs are configured to minimize write activity to the host and run with 48 threads and plenty of memory to try to speed up the analysis process.

At the end of each run, the output from the previous run is diff'd against the new output to generate a list of new and fixed issues.  I then manually wade through these and try to fix some of the low-hanging fruit when I can find free time to do so.

I've been gathering statistics from the CoverityScan builds for the past 12 months tracking the number of defects found, outstanding issues and number of defects eliminated:

As one can see, there are a lot of defects getting fixed by the Linux developers and the overall trend of outstanding issues is downwards, which is good to see.  The defect rate in linux-next is currently 0.46 issues per 1000 lines (out of over 13 million lines that are being scanned). A typical defect rate for a project this size is 0.5 issues per 1000 lines.  Some of these issues are false positives or very minor / insignificant issues that will not cause any run time issues at all, so don't be too alarmed by the statistics.

Using a range of static analysis tools is useful because each one has its own strengths and weaknesses.  For example, smatch and sparse are designed for sanity checking the kernel source, so they have some smarts that detect kernel-specific semantic issues.  CoverityScan is a commercial product, but they allow open source projects the size of the Linux kernel to be built daily; the web-based bug tracking tool is very easy to use, and CoverityScan does manage to reliably find bugs that other tools can't reach.  Cppcheck is useful as it scans all the code paths by forcibly trying all the #ifdef'd variations of code - which is useful on the more obscure CONFIG mixes.

Finally, I use clang's scan-build and the latest version of gcc to try and find the more typical warnings found by the static analysis built into modern open source compilers.

The more typical issues being found by static analysis are ones that don't generally appear at run time, such as in corner cases like error handling code paths, resource leaks or resource failure conditions, uninitialized variables or dead code paths.

My intention is to continue this process of daily checking and I hope to report back next September to review the CoverityScan trends for another year.

Read more
Dustin Kirkland

Introducing the Canonical Livepatch Service

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.

I’ve tried to answer below some questions that you might have. If you have others, you’re welcome
to add them to the comments below or on Twitter with hashtag #Livepatch.

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.
  1. Go to and retrieve your livepatch token
  2. Install the canonical-livepatch snap
      $ sudo snap install canonical-livepatch
  3. Enable the service with your token
      $ sudo canonical-livepatch enable [TOKEN]
And you’re done! You can check the status at any time using:

    $ canonical-livepatch status --verbose

      Q: What are the system requirements?

      A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka, x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. The safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service. You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).

      Q: What about other architectures?

      A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture. IBM is working on support for POWER8 and s390x (LinuxOne mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not under upstream development at this time.

      Q: What about other flavors?

      A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

      Q: What about other releases of Ubuntu?

      A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime.  We will consider providing livepatches for the HWE kernels in 2017.

      Q: What about derivatives of Ubuntu?

      A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

      Q: How does Canonical test livepatches?

      A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines.  Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows.  And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service.  Systemic failures are automatically detected and raised for inspection by Canonical engineers.  Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

      Q: What kinds of updates will be provided by the Canonical Livepatch Service?

      A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

      Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

      A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.

      Q: Can I rollback a Canonical Livepatch?

      A: Currently rolling-back/removing an already inserted livepatch module is disabled in Linux 4.4. This is because we need a way to determine if we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other and even repatch functions over and over.

      Q: What about low and medium severity CVEs?

      A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team.  We'll livepatch other CVEs opportunistically.

      Q: Why are Canonical Livepatches provided as a subscription service?

      A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

      Q: But I don’t want to buy UA support!

      A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

      Q: But I don’t have an Ubuntu SSO account!

      A: An Ubuntu SSO account is free, and provides services similar to Google, Microsoft, and Apple for Android/Windows/Mac devices, respectively. You can create your Ubuntu SSO account here.

      Q: But I don’t want to log in to!

      A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, and receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

      Q: But I don't have Internet access to!

      A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox.  It's an Internet streaming service for security hotfixes for your kernel.  You have access to the stream of bits when you can connect to the service over the Internet.  On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

      Q: Where’s the source code?

      A: The source code of livepatch modules can be found here.  The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

      Q: What about Ubuntu Core?

      A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

      Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

      A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

      • Oracle Ksplice uses its own technology, which is not in upstream Linux.
      • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
      • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
      • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
      • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interests Group) through your TAM (Technical Account Manager), which requires Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year.  (I'm happy to be corrected and update this post)
      • SUSE Live Patching is available as an add-on to SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
      • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

      Q: What happens if I run into problems/bugs with Canonical Livepatches?

      A: Ubuntu Advantage customers will file a support request at where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best effort basis.

      Q: Why does canonical-livepatch client/server have a proprietary license?

      A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

      Q: How do I build my own livepatches?

      A: It’s certainly possible for you to build your own Linux kernel livepatches, but it requires considerable skill, time, and computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges blogged a howto for the curious a while back:

      Q: How do I get notifications of which CVEs are livepatched and which are not?

      A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

      Q: Isn't livepatching just a big ole rootkit?

      A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability. This is required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.

      Keep the uptime!

      Read more
      Colin Ian King

      The BPF Compiler Collection (BCC) is a toolkit for building kernel tracing tools that leverage the functionality provided by the Linux extended Berkeley Packet Filter (eBPF).

      BCC allows one to write BPF programs with Python or Lua front-ends and kernel instrumentation written in C.  The instrumentation code is built into sandboxed eBPF bytecode and is executed in the kernel.

      The BCC github project README file provides an excellent overview and description of BCC and the various available BCC tools.  Building BCC from scratch can be a bit time consuming; the good news, however, is that the BCC tools are now available as a snap, so BCC can be quickly and easily installed just using:

       sudo snap install --devmode bcc  

      There are currently over 50 BCC tools in the snap, so let's have a quick look at a few:

      cachetop allows one to view the top page cache hit/miss statistics. To run this use:

       sudo bcc.cachetop  

      The funccount tool allows one to count the number of times specific functions get called.  For example, to see how many kernel functions with the name starting with "do_" get called per second one can use:

       sudo bcc.funccount "do_*" -i 1  

      To see how to use all the options in this tool, use the -h option:

       sudo bcc.funccount -h  

      I've found the funccount tool to be especially useful to check on kernel activity by checking on hits on specific function names.

      The slabratetop tool is useful to see the active kernel SLAB/SLUB memory allocation rates:

       sudo bcc.slabratetop  

      If you want to see which process is opening specific files, you can snoop on open system calls using the opensnoop tool:

       sudo bcc.opensnoop -T

      Hopefully this will give you a taste of the useful tools that are available in BCC (I have barely scratched the surface in this article).  I recommend installing the snap and giving it a try.

      As it stands, BCC provides a useful mechanism to develop BPF tracing tools and I look forward to regularly updating the BCC snap as more tools are added to BCC. Kudos to Brendan Gregg for BCC!

      Read more
      Dustin Kirkland

      If you haven't heard about last week's Dirty COW vulnerability, I hope all of your Linux systems are automatically patching themselves...

      Why?  Because every single Linux-based phone, router, modem, tablet, desktop, PC, server, virtual machine, and absolutely everything in between -- including all versions of Ubuntu since 2007 -- was vulnerable to this face-palming critical security vulnerability.

      Any non-root local user of a vulnerable system can easily exploit the vulnerability and become the root user in a matter of a few seconds.  Watch...

      Coincidentally, just before the vulnerability was published, we released the Canonical Livepatch Service for Ubuntu 16.04 LTS.  The thousands of users who enabled canonical-livepatch on their Ubuntu 16.04 LTS systems within those first few hours received and applied the fix for Dirty COW, automatically, in the background, and without rebooting!

      If you haven't already enabled the Canonical Livepatch Service on your Ubuntu 16.04 LTS systems, you should really consider doing so, with 3 easy steps:
      1. Go to and retrieve your livepatch token
      2. Install the canonical-livepatch snap
        $ sudo snap install canonical-livepatch 
      3. Enable the service with your token
        $ sudo canonical-livepatch enable [TOKEN]
      And you’re done! You can check the status at any time using:

      $ canonical-livepatch status --verbose

      Let's retry that same vulnerability, on the same system, but this time, having been livepatched...

      Aha!  Thwarted!

      So that's the Ubuntu 16.04 LTS kernel space...  What about userspace?  Most of the other recent, branded vulnerabilities (Heartbleed, ShellShock, CRIME, BEAST) have been critical vulnerabilities in userspace packages.

      As of Ubuntu 16.04 LTS, the unattended-upgrades package is now part of the default package set, so you should already have it installed on your Ubuntu desktops and servers.  If you don't already have it installed, you can install it with:

      $ sudo apt install unattended-upgrades

      And moreover, as of Ubuntu 16.04 LTS, the unattended-upgrades package automatically downloads and installs important security updates once per day, automatically patching critical security vulnerabilities and keeping your Ubuntu systems safe by default.  Older versions of Ubuntu (or Ubuntu systems that upgraded to 16.04) might need to enable this behavior using:

      $ sudo dpkg-reconfigure unattended-upgrades
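      For reference, the daily run is controlled by APT's periodic settings; on a 16.04 system the file that enables it (typically /etc/apt/apt.conf.d/20auto-upgrades) contains:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

      Setting both values to "1" updates the package lists and applies unattended security upgrades once per day.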

      With that combination enabled -- (1) automatic livepatches to your kernel, plus (2) automatic application of security package updates -- Ubuntu 16.04 LTS is the most secure Linux distribution to date.  Period.


      Read more

      1, Install Ubuntu 14.04 or 15.04 on ONDA V116w.
      2, Plug in the ethernet cable
      3, sudo apt-get update && sudo apt-get dist-upgrade && sudo apt-get install build-essential git
      4.1, git clone
      4.2, cd rtl8723bu
      4.3, make
      4.4, sudo make install
      5.1, git clone
      5.2, cd rtl8723au_bt
      5.3, ‘git checkout new’ for kernel 3.18 and later
      5.4, make
      5.5, sudo make install
      6, add ‘8723bu’ and ‘btusb’ in /etc/modules

      Read more
      Colin Ian King

      Finding kernel bugs with cppcheck

      For the past year I have been running the cppcheck static analyzer against the Linux kernel sources to see if it can detect any bugs introduced by new commits. Most of the bugs being found are minor thinkos, null pointer de-references, uninitialized variables, memory leaks and mistakes in error handling paths.

      A useful feature of cppcheck is the --force option that will check against all the configurations in the source (and the kernel does have many!).  This allows us to check for code that may not be exercised much (because it is normally not built in with most config options) or even find dead code.

      The downside of using the --force option is that each source file may need to be checked multiple times for each configuration.  For ~20800 source files this can take a 24-processor server several hours to process.  Errors and warnings are then compared to previous runs (a delta), making it relatively easy to spot new issues on each run.

      We also use the latest sources from the cppcheck git repository.  The upside of this is that new static analysis features are used early and this can result in finding existing bugs that previous versions of cppcheck missed.

      A typical cppcheck run against the linux kernel source finds about 600 potential errors and 1700 warnings; however a lot of these are false positives.  These need to be individually eyeballed to sort the wheat from the chaff.

      Finally, the data is passed through a gnuplot script to generate a trend graph so I can see how errors (red) and warnings (green) are progressing over time:

      Note that the large changes in the graph are mostly due to features being enabled (or fixed) in cppcheck.

      I have been running the same experiment with smatch too, however I am finding that cppcheck seems to have better code coverage because of the --force option and produces fewer false positives.   As it stands, I am finding that the most productive time for finding issues is around the -rc1 and -rc2 merge times (obviously when most of the major changes land in the kernel).  The outcome of this work has been a bunch of small fixes landing in the kernel to address bugs that cppcheck has found.

      Anyhow, cppcheck is an excellent open source static analyzer for C and C++ that I'd heartily recommend as it does seem to catch useful bugs.

      Read more
      Colin Ian King

      During idle moments in the final few weeks of 2014 I have been adding some more stressors and features to stress-ng as well as tidying up the code and fixing some small bugs that have crept in during the last development spin.   Stress-ng aims to stress a machine with various simple torture tests to trip overheating and kernel race conditions.

      The mmap stressor now has a '--mmap-file' option to use synchronous file-backed memory mapping instead of the default anonymous mapping, and the '--mmap-async' option enables asynchronous file mapping if desired.

      For socket stressing, the '--sock-domain unix' option now allows AF_UNIX (aka AF_LOCAL) sockets to be used. This complements the existing AF_INET and AF_INET6 IPv4 and IPv6 protocols available with this stress test.

      The CPU stressor now includes mixed integer and floating point stressors, covering 32 and 64 bit integer mixes with the float, double and long double floating point types. The generated object code contains a nice mix of operations which should exercise various functional units in the CPU.  For example, when running on a hyper-threaded CPU one notices a performance hit because these cpu stressor methods heavily contend on the CPU math functional blocks.

      File based locking has been extended with the new lockf stressor, this stresses multiple locking and unlocking on portions of a small file and the default blocking mode can be turned into a CPU consuming rapid polling retry with the '--lockf-nonblock' option.

      The dup(2) system call is also now stressed with the new dup stressor. This just repeatedly dup's a file opened on /dev/zero until all the free file slots are full, and then closes these. It is very much like the open stressors.

      The fcntl(2) F_SETLEASE command is stress tested with the new lease stressor. This has a parent process that rapidly locks and unlocks a file based lease and 1 or more child processes try to open this file and cause lease breaking signal notifications to the parent.

      For x86 CPUs, the cache stressor includes two new cache specific options. The '--cache-fence' option forces write serialization on each store operation, while the '--cache-flush' option forces flush cache on each store operation. The code has been specifically written to not incur any overhead if these options are not enabled or are not available on non-x86 CPUs.

      This release also includes the stress-ng project mascot too: a humble CPU being tortured and stressed by a rather angry flame.

      For more information, visit the stress-ng project page, or check out the stress-ng manual.

      Read more
      Colin Ian King

      Before I started some analysis on benchmarking various popular file systems on Linux I was recommended to read "Systems Performance: Enterprise and the Cloud" by Brendan Gregg.

      In today's modern server and cloud based systems the multi-layered complexity can make it hard to pinpoint performance issues and bottlenecks. This book is packed full of useful analysis techniques covering tracing, kernel internals, tools and benchmarking.

      Critical to getting a well balanced and tuned system are all the different components, and the book has chapters covering CPU optimisation (cores, threading, caching and interconnects), memory optimisation (virtual memory, paging, swapping, allocators, busses), file system I/O, storage, networking (protocols, sockets, physical connections) and typical issues facing cloud computing.

      The book is full of very useful examples and practical instructions on how to drill down and discover performance issues in a system and also includes some real-world case studies too.

      It has helped me become even more focused on how to analyse performance issues and consider how to do deep system instrumentation to be able to understand where and why performance regressions occur.

      All-in-all, a most systematic and well written book that I'd recommend to anyone running large complex servers and cloud computing environments.

      Read more
      Colin Ian King

      even more stress in stress-ng

      Over the past few weeks in spare moments I've been adding more stress methods to stress-ng  ready for Ubuntu 15.04 Vivid Vervet.   My intention is to produce a rich set of stress methods that can stress and exercise many facets of a system to force out bugs, catch thermal over-runs and generally torture a kernel in a controlled repeatable manner.

      I've also re-structured the tool in several ways to enhance the features and make it easier to maintain.  The cpu stress method has been re-worked to include nearly 40 different ways to stress a processor, covering:

      • Bit manipulation: bitops, crc16, hamming
      • Integer operations: int8, int16, int32, int64, rand
      • Floating point:  long double, double,  float, ln2, hyperbolic, trig
      • Recursion: ackermann, hanoi
      • Computation: correlate, euler, explog, fibonacci, gcd, gray, idct, matrixprod, nsqrt, omega, phi, prime, psi, rgb, sieve, sqrt, zeta
      • Hashing: jenkin, pjw
      • Control flow: jmp, loop
      The intention was to have a wide enough eclectic mix of CPU exercising tests to cover the range of typical operations found in computationally intense software.   Use the new --cpu-method option to select the specific CPU stressor, or --cpu-method all to exercise all of them sequentially.

      I've also added more generic system stress methods too:
      • bigheap - re-allocs to force OOM killing
      • rename - rename files rapidly
      • utime - update file modification times to create lots of dirty file metadata
      • fstat - rapid fstat'ing of large quantities of files
      • qsort - sorting of large quantities of random data
      • msg - System V message sending/receiving
      • nice - rapid re-nicing processes
      • sigfpe - catch rapid division by zero errors using SIGFPE
      • rdrand - rapid reading of Intel random number generator using the rdrand instruction (Ivybridge and later CPUs only)
      Other new options:
      • metrics-brief - this dumps out only the bogo-op metrics that are relevant for just the tests that were run.
      • verify - this will sanity check the stress results per iteration to ensure memory operations and CPU computations are working as expected. Hopefully this will catch any errors on a hot machine that has errors in the hardware. 
      • sequential - this will run all the stress methods one by one (for a default of 60 seconds each) rather than all in parallel.   Use this with the --timeout option to run all the stress methods sequentially each for a specified amount of time. 
      • Specifying 0 instances of any stress method will run an instance of the stress method on all online CPUs. 
      The tool also builds and runs on Debian kFreeBSD and GNU HURD kernels although some stress methods or stress options are not included due to lack of support on these other kernels.
      The stress-ng man page gives far more explanation of each stress method and more detailed examples of how to use the tool.

      For more details, visit here or read the manual.

      Read more
      Jussi Pakkanen

      If you read discussions on the Internet about memory allocation (and who doesn’t, really), one surprising tidbit that always comes up is that in Linux, malloc never returns null because the kernel does a thing called memory overcommit. This is easy to verify with a simple test application.

      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv) {
        while(1) {
          char *x = malloc(1);
          if(!x) {
            printf("Malloc returned null.\n");
            return 0;
          }
          *x = 0;
        }
        return 1;
      }

      This app tries to malloc memory one byte at a time and writes to it. It keeps doing this until either malloc returns null or the process is killed by the OOM killer. When run, the latter happens. Thus we have now proved conclusively that malloc never returns null.

      Or have we?

      Let’s change the code a bit.

      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv) {
        long size=1;
        while(1) {
          char *x = malloc(size*1024);
          if(!x) {
            printf("Malloc returned null.\n");
            printf("Tried to alloc: %ldk.\n", size);
            return 0;
          }
          *x = 0;
          free(x); /* release the block before trying a bigger one */
          size++;
        }
        return 1;
      }

      In this application we try to allocate a block of ever increasing size. If the allocation is successful, we release the block before trying to allocate a bigger one. This program does receive a null pointer from malloc.

      When run on a machine with 16 GB of memory, the program will fail once the allocation grows to roughly 14 GB. I don’t know the exact reason for this, but it may be that the kernel reserves some part of the address space for itself and trying to allocate a chunk bigger than all remaining memory fails.

      Summarizing: malloc under Linux can either return null or not and the non-null pointer you get back is either valid or invalid and there is no way to tell which one it is.

      Happy coding.

      Read more

      The Ubuntu Developer Summit was held in Copenhagen last week, to discuss plans for the next six-month cycle of Ubuntu. This was the most productive UDS that I've been to — maybe it was the shorter four-day schedule, or the overlap with Linaro Connect, but it sure felt like a whirlwind of activity.

      I thought I'd share some details about some of the sessions that cover areas I'm working on at the moment. In no particular order:

      Improving cross-compilation

      Blueprint: foundations-r-improve-cross-compilation

      This plan is a part of a multi-cycle effort to improve cross-compilation support in Ubuntu. Progress is generally going well — the consensus from the session was that the components are fairly close to complete, but we still need some work to pull those parts together into something usable.

      So, this cycle we'll be working on getting that done. While we have a few bugfixes and infrastructure updates to do, one significant part of this cycle’s work will be to document the “best-practices” for cross builds in Ubuntu on the Ubuntu wiki. This process will be heavily based on existing pages on the Linaro wiki. Because most of the support for cross-building is already done, the actual process for cross-building should be fairly straightforward, but needs to be defined somewhere.

      I'll post an update when we have a working draft on the Ubuntu wiki; stay tuned for details.

      Rapid archive bringup for new hardware

      Blueprint: foundations-r-rapid-archive-bringup

      I'd really like for there to be a way to get an Ubuntu archive built “from scratch”, to enable custom toolchain/libc/other system components to be built and tested. This is typically useful when bringing up new hardware, or testing rebuilds with new compiler settings. Because we may be dealing with new hardware, doing this bootstrap through cross-compilation is something we'd like too.

      Eventually, it would be great to have something as straightforward as the OpenEmbedded or OpenWRT build process to construct a repository with a core set of Ubuntu packages (say, minbase), for previously-unsupported hardware.

      The archive bootstrap process isn't done often, and can require a large amount of manual intervention. At present, there's only a couple of folks who know how to get it working. The plan here is to document the bootstrap process in this cycle, so that others can replicate the process, and possibly improve the bits that are a little too janky for general consumption.

      ARM64 / ARMv8 / aarch64 support

      Blueprint: foundations-r-aarch64

      This session is an update for progress on the support for ARMv8 processors in Ubuntu. While no general-purpose hardware exists at the moment, we want to have all the pieces ready for when we start seeing initial implementations. Because we don't have hardware yet, this work has to be done in a cross-build environment; another reason to keep on with the foundations-r-improve-cross-compilation plan!

      So far, toolchain progress is going well, with initial cross toolchains available for quantal.

      Although kernel support isn’t urgent at the moment, we’ll be building an initial kernel-headers package for aarch64. There's also a plan to get a page listing the aarch64-cross build status of core packages, so we'll know what is blocked for 64-bit ARM enablement.

      We’ve also got a bunch of workitems for volunteers to fix cross-build issues as they arise. If you're interested, add a workitem in the blueprint, and keep an eye on it for updates.

      Secure boot support in Ubuntu

      Blueprint: foundations-r-secure-boot

      This session covered progress of secure boot support as at the 12.10 Quantal Quetzal release, items that are planned for 13.04, and backports for 12.04.2.

      As for 12.10, we’ve got the significant components of secure boot support into the release — the signed boot chain. The one part that hasn't hit 12.10 yet is the certificate management & update infrastructure, but that is planned to reach 12.10 by way of a not-too-distant-future update.

      The foundations team also mentioned that they were starting the 12.04.2 backport right after UDS, which will bring secure boot support to our current “Long Term Support” (LTS) release. Since the LTS release is often preferred in Ubuntu preinstall situations, this may be used as a base for hardware enablement on secure boot machines. Combined with the certificate management tools (described at sbkeysync & maintaining uefi key databases), and the requirement for “custom mode” in general-purpose hardware, this will allow for user-defined trust configuration in an LTS release.

      As for 13.04, we're planning to update the shim package to a more recent version, which will have Matthew Garrett's work on the Machine Owner Key plan from SuSE.

      We're also planning to figure out support for signed kernel modules, for users who wish to verify all kernel-level code. Of course, this will mean some changes to things like DKMS, which run custom module builds outside of the normal Ubuntu packages.

      Netboot with secure boot is still in progress, and will require some fixes to GRUB2.

      And finally, the sbsigntools codebase could do with some new testcases, particularly for the PE/COFF parsing code. If you're interested in contributing, please contact me at

      Read more

      Over the last couple of weeks I've been working on a set of secure-boot tools. Originally, this project was intended as a quick implementation of an EFI-image-signing utility, but it has since grown a little. I've now added code to help maintain the UEFI signature databases from within a running OS.

      A new utility, sbkeysync, reads the current EFI signature databases from firmware, and reads a set of keys from a standard location in the filesystem - for example, /etc/secureboot/keys. It then updates the firmware key databases with any keys that are not already present.

      A filesystem keystore will look something like this:

      • /etc/secureboot/keys/PK/<pk-file>
      • /etc/secureboot/keys/KEK/<kek-file-1>
      • /etc/secureboot/keys/KEK/<kek-file-2>
      • /etc/secureboot/keys/KEK/…
      • /etc/secureboot/keys/db/<db-file-1>
      • /etc/secureboot/keys/db/<db-file-2>
      • /etc/secureboot/keys/db/…
      • /etc/secureboot/keys/dbx/<dbx-file-1>
      • /etc/secureboot/keys/dbx/<dbx-file-2>
      • /etc/secureboot/keys/dbx/…

      These files need to be in a certain format: signed EFI_SIGNATURE_LIST data. There are two other utilities in the sbtools tree to help create the key files: sbsiglist and sbvarsign. The following example shows how you'd use these tools to do a basic secure boot key configuration.

      An example key setup

      If you're interested in trying sbkeysync, the following guide should get you set up. To start, you'll need:

      • A build of the secure boot tools (git repository information below);
      • A kernel with the efivars filesystem. Either build your own kernel with efivars-1f087c6.patch (from Matthew Garrett, with some minor changes), or use linux-image-3.5.0-13-generic_3.5.0-13.14~efivars1_amd64.deb, which should work on Ubuntu 12.04 or 12.10; and
      • A machine with firmware that implements UEFI secure boot, configured to be in setup mode (ie, no PK installed).

      Be warned that you're playing with three different layers of development code here: the secure boot tools are new, the efivars implementation hasn't had a lot of review yet, and firmware secure boot implementations are still fairly recent too. I'd recommend against doing this testing on a production machine.

      generating keys

      We'll generate a test key, and a self-signed certificate:

      [jk@pecola ~]$ openssl genrsa -out test-key.rsa 2048
      [jk@pecola ~]$ openssl req -new -x509 -sha256 \
              -subj '/CN=test-key' -key test-key.rsa -out test-cert.pem
      [jk@pecola ~]$ openssl x509 -in test-cert.pem -inform PEM \
              -out test-cert.der -outform DER

      We'll also need a GUID to represent the "key owner". Just generate one with uuidgen.

      [jk@pecola ~]$ guid=$(uuidgen)
      [jk@pecola ~]$ echo $guid

      generating key updates

      In order to install this key into the firmware signature databases, we need to create an EFI_SIGNATURE_LIST container for the key, and provide an EFI_VARIABLE_AUTHENTICATION_2 descriptor. The update data will be self-signed, to keep things simple.

      First, we create the EFI_SIGNATURE_LIST containing the certificate:

      [jk@pecola ~]$ sbsiglist --owner $guid --type x509 --output test-cert.der.siglist test-cert.der

      Next, we create a signed update for the EFI signature databases. The signed update consists of the certificate, prefixed with an EFI_VARIABLE_AUTHENTICATION_2 descriptor. The authentication descriptor signs the key data, plus the variable name and attributes. Because the variable name is included, we need to generate a separate signed update for each variable (PK, KEK and db):

      [jk@pecola ~]$ for n in PK KEK db
      > do
      >   sbvarsign --key test-key.rsa --cert test-cert.pem \
      >     --output test-cert.der.siglist.$n.signed \
      >     $n test-cert.der.siglist
      > done

      creating a keystore

      Next up, we'll put our keys into standard locations for sbkeysync to find:

      [jk@pecola ~]$ sudo mkdir -p /etc/secureboot/keys/{PK,KEK,db,dbx}
      [jk@pecola ~]$ sudo cp *.PK.signed /etc/secureboot/keys/PK/
      [jk@pecola ~]$ sudo cp *.KEK.signed /etc/secureboot/keys/KEK/
      [jk@pecola ~]$ sudo cp *.db.signed /etc/secureboot/keys/db/

      If you'd rather use a different location for the keystore, just use the --keystore and/or --no-default-keystores arguments to the sbkeysync commands that follow.

      using sbkeysync

      We can now use sbkeysync to synchronise the firmware key databases with the keystore we just created. Do a dry-run first to make sure all is OK:

      [jk@pecola ~]$ sbkeysync --verbose --pk --dry-run
      Filesystem keystore:
        /etc/secureboot/keys/db/test-cert.der.siglist.db.signed [2116 bytes]
        /etc/secureboot/keys/KEK/test-cert.der.siglist.KEK.signed [2116 bytes]
        /etc/secureboot/keys/PK/test-cert.der.siglist.PK.signed [2116 bytes]
      firmware keys:
      filesystem keys:
           from /etc/secureboot/keys/PK/test-cert.der.siglist.PK.signed
           from /etc/secureboot/keys/KEK/test-cert.der.siglist.KEK.signed
           from /etc/secureboot/keys/db/test-cert.der.siglist.db.signed
      New keys in filesystem:

      The output will list the keys that were found in the EFI key databases, the keys found in the filesystem keystore, and which keys should be inserted into the EFI key databases to bring them in sync with the keystore.

      If all looks good, we can remove the --dry-run argument to actually update the firmware key databases. However, be careful here - once a PK is enrolled, the machine is no longer in setup mode, and secure boot is enforced. At the very least, ensure that your firmware setup screens have a facility for returning your machine to setup mode, and/or removing the PK.

      Note that some firmware implementations may require a reboot for the changes to take effect in the EFI variables. Before you do this though, you'll probably want to sign your bootloader, so that you can actually boot something!

      signing a bootloader

      Now that secure boot is enabled, it'd be nice if we actually had something to boot. Although it isn't recommended for production systems, we'll just sign the GRUB2 binary that's already there:

      [jk@pecola ~]$ sbsign --key test-key.rsa --cert test-cert.pem \
              --output grubx64.efi /boot/efi/efi/ubuntu/grubx64.efi
      [jk@pecola ~]$ sudo cp /boot/efi/efi/ubuntu/grubx64.efi{,.bak}
      [jk@pecola ~]$ sudo cp grubx64.efi /boot/efi/efi/ubuntu/

      reverting to setup mode

      Theoretically, since we have the private-key component of PK, we can revert the machine from user-mode to setup mode. This requires writing an empty signed update to the PK variable:

      [jk@pecola ~]$ : > empty
      [jk@pecola ~]$ sbvarsign --key test-key.rsa --cert test-cert.pem \
              --include-attrs --output empty.PK.signed PK empty
      [jk@pecola ~]$ sudo dd bs=4k if=empty.PK.signed \

      However, I have not been able to reset the PK on all firmware implementations so far; there may be bugs in the signing tools (or firmware) that prevent the update from being properly verified. Because of this, I strongly suggest checking for the facility to clear the PK through your firmware setup screens before attempting to set the PK.

      secure boot tools resources

      If you'd like to check out the code, the following links may be useful:

      Read more
      Colin Ian King

      Testing eCryptfs

      Over the past several months I've been occasionally back-porting a bunch of eCryptfs patches onto older Ubuntu releases.  Each back-ported fix needs to be properly sanity checked and so I've been writing test cases for each one and adding them to the eCryptfs test suite.

      To get hold of the test suite, check it out using bzr:

       bzr checkout lp:ecryptfs  
      and install the dependencies so one can build the test suite:
       sudo apt-get install debhelper autotools-dev autoconf automake \
      intltool libtool libgcrypt11-dev libglib2.0-dev libkeyutils-dev \
      libnss3-dev libpam0g-dev pkg-config python-dev swig acl \
      If you want to test eCryptfs with xfs and btrfs as the lower file system onto which eCryptfs is mounted, then one needs to also install the tools for these:
       sudo apt-get install xfsprogs btrfs-tools  
      And then build the test programs:
       cd ecryptfs  
      autoreconf -ivf
      intltoolize -c -f
      ./configure --enable-tests --disable-pywrap
      To run the tests, one needs to create lower and upper mount points. The tests allow one to create ext2, ext3, ext4, xfs or btrfs loop-back mounted file systems on the lower mount point, and then eCryptfs is mounted on the upper mount point on top.   To create these, use something like:
       sudo mkdir /lower /upper  
      The loop-back file system image needs to be placed somewhere too, I generally place mine in a directory /tmp/image, so this needs creating too:
       mkdir /tmp/image  
      There are two categories of tests, "safe" and "destructive".  Safe tests should run in such a way as not to lock up the machine.  Destructive tests try hard to force bugs that can cause kernel oopses or panics. One specifies the test category with the -c option.  Now to run the tests, use:
       sudo ./tests/ -K -c safe -b 1000000 -D /tmp/image -l /lower -u /upper  
      The -K option tells the test suite to run the kernel specific tests. These are the ones I am generally interested in since I'm testing kernel patches.

      The -b option specifies the size in 1K blocks of the loop-back mounted /lower file system size.  I generally use 1000000 blocks as a minimum.

      The -D option specifies the path where the temporary loop-back mounted image is kept and the -l and -u options specified the paths of the lower and upper mount points.

      By default the tests will use an ext4 lower filesystem. One can also specify which file systems to run the tests on using the -f option; this can be a comma-separated list of one or more file systems, for example:
       sudo ./tests/ -K -c safe -b 1000000 -D /tmp/image -l /lower -u /upper \
      -f ext2,ext3,ext4,xfs
      And also, instead of running a bunch of tests, one can run just a particular test using the -t option:
       sudo ./tests/ -K -c safe -b 1000000 -D /tmp/image -l /lower -u /upper \
      -f ext2,ext3,ext4,xfs -t
      ..which tests the fix for LaunchPad bug 926292.
      We also run these tests regularly on new kernel images to ensure we don't introduce any regressions.   As it stands, I'm currently adding tests for each bug fix that we back-port and for most new bugs that require a thorough test. I hope to expand the breadth of the tests to ensure we get better general test coverage.

      And finally, thanks to Tyler Hicks for writing the test framework and for his valuable help in describing how to construct a bunch of these tests.

      Read more
      Colin Ian King

      A new Ubuntu portal is a jump-start page containing links to pages and documents useful for Original Design Manufacturers (ODMs), Original Equipment Manufacturers (OEMs) and Independent BIOS vendors.

      Some of the highlights include:

      • A BIOS/UEFI requirements document containing recommendations to ensure firmware is compatible with the Linux kernel.
      • Getting started links describing how to download, install, configure and debug Ubuntu.
      • Links to certified hardware, debugging tools, SystemTap guides, packaging guides, kernel building notes.
      • Debugging tips, covering: hotkeys, suspend/resume, sound, X and wireless and an A5 sized Ubuntu Debugging booklet.
      • Link to fwts-live, the Firmware Test Suite live image, and lots of useful technical resources to call upon.

      Kudos to Chris Van Hoof for organizing this useful portal.

      Read more

      The Internet has been alive with doom-saying since the IPv4 global address pool was parcelled out.  Now I do not subscribe to the view that the Internet is going to end imminently, but I do feel that if the technical people out there do not start playing with IPv6 soon then what hope is there for the masses?

      In the UK getting native IPv6 is not a trivial task, only one ISP I can find seems to offer it and of course it is not the one I am with.  So what options do I have?  Well there are a number of different types of IPv4 tunnelling techniques such as 6to4 but these seem to require the ability to handle the transition on your NAT router, not an option here.  The other is a proper 6in4 tunnel to a tunnel broker but this needs an end-point.

      I have a local server that makes a sensible anchor for such a tunnel.  Talking round with those in the know I settled on getting a tunnel from Hurricane Electric (HE), a company which gives out tunnels to individuals for free and seems to have local presence for their tunnel hosts.  HE even supply you with tools to cope with your endpoint having a dynamic address, handy.  So with an HE tunnel configuration in hand I set about making my backup server into my IPv6 gateway.

      First I had to ensure that protocol 41 (the tunnelling protocol) was being forwarded to the appropriate host.  This is a little tricky as this required me to talk to the configurator for my wireless router.  With that passed on to my server I was able to start configuring the tunnel.

      Following the instructions on my HE tunnel broker page, a simple cut-n-paste into /etc/network/interfaces added the new tunnel network device, a quick ifup and my server started using IPv6.  Interestingly my apt-cacher-ng immediately switched backhaul of its incoming IPv4 requests to IPv6, with no configuration needed.
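For anyone curious what that cut-n-paste amounts to, a 6in4 stanza in /etc/network/interfaces looks roughly like this sketch; every address below is a placeholder, and the real values come from your HE tunnel details page:

```
auto he-ipv6
iface he-ipv6 inet6 v4tunnel
        # placeholder addresses - substitute the values from the
        # tunnel broker configuration page
        address 2001:db8:1f0a::2    # your end of the tunnel
        netmask 64
        endpoint 203.0.113.1        # HE's IPv4 tunnel server
        local 192.0.2.10            # your public IPv4 address
        ttl 255
        gateway 2001:db8:1f0a::1    # HE's end of the tunnel
```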

      Enabling IPv6 for the rest of the network was surprisingly easy.  I had to install and configure radvd with my assigned prefix.  It also passed out information on the HE DNS servers, prioritising IPv6 in DNS lookup results.  No changes were required for any of the client systems; well, other than enabling firewalls.  Win.

      Overall IPv6 is still not simple as it is hard to obtain native IPv6 support, but if you can get it onto your network the client side is working very well indeed.

      Read more
      Andy Whitcroft

      We have uploaded a new Precise linux-ti-omap4 kernel. The most notable changes
      are as follows:

      * rebased on master branch 3.2.0-19.31

      The full changelog can be seen at:


      Read more
      Andy Whitcroft

      We have uploaded a new Precise linux-ti-omap4 kernel. The most notable changes
      are as follows:

      * rebased on master branch 3.2.0-18.29
      * a number of CVE fixes

      The full changelog can be seen at:

      Read more

      After 39 2.6.x releases Linus Torvalds has chosen to revisit the upstream kernel version.  The plan is to release what would have been 2.6.40 instead as version 3.0:

      "I decided to just bite the bullet, and call the next version 3.0. It
      will get released close enough to the 20-year mark, which is excuse
      enough for me, although honestly, the real reason is just that I can
      no longer comfortably count as high as 40."
      When 3.0-rc1 was released the Kernel Team had to decide what version to use for it in Ubuntu.  We typically upload every -rcN release within a couple of days of its release so the pressure was on.  We could simply call it 3.0.0 knowing that all the current scripting would cope, or as 3.0 better matching its official name knowing this would not be plain sailing.  This was not a decision we could delay as in Debian versioning 3.0 < 3.0.0 so we were likely to be committed for Oneiric if we uploaded using 3.0.0.  It is also not clear from upstream discussion what version number the final release will carry, as 3.0 clearly will cause breakage on older userspace.

      After much discussion we decided to bite the bullet and upload a 3.0 kernel.  At least we get a chance to identify problematic applications, while still keeping our options open to move to a 3.0.0 kernel for release should that be prudent.  As expected this was not smooth sailing, not least for the kernel packaging which needed much love to even correctly build this version.  Plus we had to hack the meta packages to allow them to be reversioned later too.

      Once successfully uploaded the problem applications started to crawl out of the woodwork:
      • depmod -- the depmod incantation to create the module dependencies identifies the kernel version in its command line but was assuming that a version contained three digits; this led it to miss the version entirely and rebuild the wrong dependencies;
      • libc6 -- both the runtime and the installation control scripts manipulate the kernel version number, in both cases assuming the version was three digits; enormous fun getting the pending updates installed;
      • ps/top -- the kernel version was checked at startup and mis-decoded, triggering a rather nasty-sounding version warning whenever they are started;
      • nfs-utils -- when attempting to read and identify the kernel version the nfs-utils would trigger a SIGSEGV and die, triggering boot failures on machines with NFS roots; and
      • lm-sensors-3 -- this package is only compatible with 2.6.5 and above; failed version detection led to this check failing and sensors being left unconfigured.
      Those are the ones we have found so far, I am sure there will be more.  If you do find one please file a bug against the failing package but tag it kernel-3.0 then we can find them.

      Read more

      During the early part of the Maverick cycle we once again revisited our Union Mount solution.  At that time VFS union-mounts was the hit of the day, finally set to produce something which might get into the kernel.  Since then the complexity of changing every filesystem to support whiteouts, its invasiveness, and its effects on POSIX semantics have led to it falling by the wayside.  In its place has sprung overlayfs.

      overlayfs is a small patch set which is a hybrid of the VFS union-mount approach and that of aufs/unionfs, in that it also provides a filesystem.  This greatly reduces the complexity of the patch set, reducing its invasiveness and thus increasing its chances of ever being merged.  So much simpler is it that your author is actually able to understand and debug it.  Win.

      We have been tickling overlayfs for most of the Natty cycle, but with Natty in the can I have had some time to catch up with its development and help out a little, both with testing and bug fixing.  Culminating today in my being able to inject a kernel containing overlayfs support into an Ubuntu LiveCD and boot it, then update it to the latest Natty, all without error.

      overlayfs may shortly be in a mergeable state, nirvana for all union mount lovers.  Only time and testing will tell.

      Read more
      Jeremy Foshee

      Hi Folks,
      It is that time again. Time for our regularly scheduled Bug Day [0]. Like last time we will be focusing on the bugs in the new state. We had a great amount of work done last time, and I’d like for us to keep that momentum going. There are a number of these bugs that have been improperly set to New when they should be either Triaged or Confirmed. My goal is for us to get as many of them properly set as possible. Additionally, we’d like to perform our basic triage on the ones that are, in fact, new. This includes a brief review of the information that has been provided and a request for what is missing so that we can get as many as possible into the triaged state for further review by the team.
      I’d like to take a moment and thank those people on the team for working on the bugs with patches that we have. Through their efforts the number is the lowest it has been since I started here. There has also been quite a bit of effort put into continual review of the regression-proposed bugs. We now have them in a good place. All of this is due to the extra effort the team has put in, in addition to the massive amounts of work they normally do to achieve this. Thanks guys for your efforts. :-)

      If you are interested in beginning triage, have questions about what to do, or just find yourself stuck on a bug and are not certain how to continue, please reach out to me in the #ubuntu-kernel channel on FreeNode. I’m always there, and I am always willing to help. :-)


      Read more