Canonical Voices

Posts tagged with 'ecryptfs'

Dustin Kirkland

February 2008, Canonical's office in Lexington, MA
10 years ago today, I joined Canonical, on the very earliest version of the Ubuntu Server Team!

And in the decade since, I've had the tremendous privilege to work with so many amazing people, and the opportunity to contribute so much open source software to the Ubuntu ecosystem.

Marking the occasion, I've reflected on much of my work over that time period and thought I'd put down in writing a few of the things I'm most proud of (in chronological order)...  Maybe one day, my daughters will read this and think their daddy was a real geek :-)

1. update-motd / motd.ubuntu.com (September 2008)

Throughout the history of UNIX, the "message of the day" was always manually edited and updated by the local system administrator.  Until Ubuntu's message-of-the-day.  In fact, I received an email from Dennis Ritchie and Jon "maddog" Hall, confirming this, in April 2010.  This started as a feature request for the Landscape team, but has turned out to be tremendously useful and informative to all Ubuntu users.  Just last year, we launched motd.ubuntu.com, which provides even more dynamic information about important security vulnerabilities and general news from the Ubuntu ecosystem.  Mathias Gug helped me with the design and publication.
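
For the curious, update-motd works by running the executable snippets installed under /etc/update-motd.d/ and concatenating their output into the dynamic message of the day.  A minimal sketch of adding your own snippet (the filename and message here are just examples):

# Hypothetical example: drop a custom snippet into the update-motd framework
sudo tee /etc/update-motd.d/60-local-news <<'EOF'
#!/bin/sh
# Anything this script prints becomes part of the dynamic MOTD
echo ""
echo " * Local reminder: backups run nightly at 02:00"
EOF
sudo chmod +x /etc/update-motd.d/60-local-news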

2. manpages.ubuntu.com (September 2008)

This was the first public open source project I worked on, in my spare time at Canonical.  I had a local copy of the Ubuntu archive and I was thinking about what sorts of automated jobs I could run on it.  So I wrote some scripts that extracted the manpages out of each package, formatted them as HTML, and published them into a structured set of web directories.  10 years later, it's still up and running, serving thousands of hits per day.  In fact, this was one of the ways we were able to shrink the Ubuntu minimal image, by removing the manpages, since they're readable online.  Colin Watson and Kees Cook helped me with the initial implementation, and Matthew Nuzum helped with the CSS and Ubuntu theme in the HTML.
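
The pipeline behind it is conceptually simple; here is a rough, illustrative sketch (not the real scripts behind the site, and the package name and paths are placeholders):

# Illustrative only -- unpack a package, pull out its manpages, render to HTML
dpkg-deb -x coreutils_8.30-3ubuntu2_amd64.deb extracted/
mkdir -p html
find extracted/usr/share/man -name '*.gz' | while read -r page; do
    zcat "$page" | groff -mandoc -Thtml > "html/$(basename "$page" .gz).html"
done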

3. Byobu (December 2008)

If you know me at all, you know my passion for the command line UI/UX that is "Byobu".  Byobu was born as the "screen-profiles" project, over lunch at Google in Mountain View, in December of 2008, at the Ubuntu Developer Summit.  Around the lunch table, several of us (including Nick Barcet, Dave Walker, Michael Halcrow, and others), shared our tips and tricks from our own ~/.screenrc configuration files.  In Cape Town, February 2010, at the suggestion of Gustavo Niemeyer, I ported Byobu from Screen to Tmux.  Since Ubuntu Servers don't generally have GUIs, Byobu is designed to be a really nice interface to the Ubuntu command line environment.
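
If you've never tried it, getting a feel for Byobu on any Ubuntu machine takes about a minute:

sudo apt install byobu
byobu            # start a session with the status bar and notifications
byobu-enable     # optional: launch Byobu automatically at every login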

4. eCryptfs / Ubuntu Encrypted Home Directories (October 2009)

I was familiar with eCryptfs from its inception in 2005, in the IBM Linux Technology Center's Security Team, sitting next to Michael Halcrow who was the original author.  When I moved to Canonical, I helped Michael maintain the userspace portion of eCryptfs (ecryptfs-utils) and I shepherded it into Ubuntu.  eCryptfs was super powerful, with hundreds of options and supported configurations, but all of that proved far too difficult for users at large.  So I set out to simplify it drastically, with an opinionated set of basic defaults.  I started with a simple command to mount a "Private" directory inside of your home directory, where you could stash your secrets.  A few months later, on a long flight to Paris, I managed to hack a new PAM module, pam_ecryptfs.c, that actually encrypted your entire home directory!  This was pretty revolutionary at the time -- predating Apple's FileVault or Microsoft's Bitlocker, even.  Today, tens of millions of Ubuntu users have used eCryptfs to secure their personal data.  I worked closely with Tyler Hicks, Kees Cook, Jamie Strandboge, Michael Halcrow, Colin Watson, and Martin Pitt on this project over the years.
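
Both of those entry points still ship in the ecryptfs-utils package.  A quick sketch (the username below is hypothetical, and the migration rewrites an entire home directory, so try it on a test system first):

sudo apt install ecryptfs-utils
ecryptfs-setup-private                 # create an encrypted ~/Private directory
# Encrypt an existing user's whole home directory (run from a different account):
sudo ecryptfs-migrate-home -u alice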

5. ssh-import-id (March 2010)

With the explosion of virtual machines and cloud instances in 2009 / 2010, I found myself constantly copying public SSH keys around.  Moreover, given Canonical's globally distributed nature, I also regularly found myself asking someone for their public SSH keys, so that I could give them access to an instance, perhaps for some pair programming or assistance debugging.  As it turns out, everyone I worked with had a Launchpad account, and had their public SSH keys available there.  So I created (at first) a simple shell script to securely fetch and install those keys.  Scott Moser helped clean up that earliest implementation.  Eventually, I met Casey Marshall, who helped rewrite it entirely in Python.  Moreover, we contacted the maintainers of Github, and asked them to expose user public SSH keys via the API -- which they did!  Now, ssh-import-id is integrated directly into Ubuntu's new subiquity installer and used by many other tools, such as cloud-init and MAAS.
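
Usage is a one-liner; the lp: and gh: prefixes select Launchpad or GitHub, and the second username below is only an example:

sudo apt install ssh-import-id
ssh-import-id lp:kirkland          # fetch and install public keys from Launchpad
ssh-import-id gh:some-colleague    # or from GitHub (hypothetical username)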

6. Orchestra / MAAS (August 2011)

In 2009, Canonical purchased 5 Dell laptops, which was the Ubuntu Server team's first "cloud".  These laptops were our very first lab for deploying and testing Eucalyptus clouds.  I was responsible for those machines at my house for a while, and I automated their installation with PXE, TFTP, DHCP, DNS, and a ton of nasty debian-installer preseed data.  That said -- it worked!  As it turned out, Scott Moser and Mathias Gug had both created similar setups at their houses for the same reason.  I was mentoring a new hire at Canonical, named Andres Rodriguez, at the time, and he took over our part-time hacks and we worked together to create the Orchestra project.  Orchestra itself was short-lived.  It was severely limited by Cobbler as a foundation technology.  So the Orchestra project was killed by Canonical.  But, six months later, a new project was created, based on the same general concept -- physical machine provisioning at scale -- with an entire squad of engineers led by...Andres Rodriguez :-)  MAAS today is easily one of the most important projects in the Ubuntu ecosystem and one of the most successful products in Canonical's portfolio.
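
For flavour, the DHCP/DNS/TFTP/PXE glue that kind of home lab needed can be approximated today with a single dnsmasq instance; this is only a hedged sketch, with the interface, address range, and paths invented:

# Hypothetical dnsmasq configuration for PXE-booting installers on a small lab
sudo tee /etc/dnsmasq.d/pxe.conf <<'EOF'
interface=eth0
dhcp-range=192.168.10.100,192.168.10.200,12h
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=pxelinux.0
EOF
sudo systemctl restart dnsmasq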

7. pollinate / pollen / entropy.ubuntu.com (February 2014)

In 2013, I set out to secure Ubuntu at large from a class of attacks arising from insufficient entropy at first boot.  This was especially problematic in virtual machine instances, in public clouds, where every instance is, by design, exactly identical to many others.  Moreover, the first thing that instance does, is usually ... generate SSH keys.  This isn't hypothetical -- it's quite real.  Raspberry Pi's running Debian were deemed susceptible to this exact problem in November 2015.  So I designed and implemented a client (a shell script that runs at boot, and fetches some entropy from one or more sources), as well as a high-performance server (golang).  The client is the 'pollinate' script, which runs on the first boot of every Ubuntu server, and the server is the cluster of physical machines processing hundreds of requests per minute at entropy.ubuntu.com.  Many people helped review the design and implementation, including Kees Cook, Jamie Strandboge, Seth Arnold, Tyler Hicks, James Troup, Scott Moser, Steve Langasek, Gustavo Niemeyer, and others.
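
Conceptually, the client side boils down to something like the sketch below; this is not the actual pollinate script, just the shape of the idea (note that writing to /dev/urandom mixes bytes into the kernel pool without crediting entropy):

# Simplified, illustrative sketch of a pollinate-style client
SERVER="https://entropy.ubuntu.com/"
# Fetch a response from the entropy server over TLS...
response=$(curl -fsS --max-time 10 "$SERVER")
# ...and stir it into the kernel's random pool
echo "$response" > /dev/urandom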

8. The Orange Box (May 2014)

In December of 2011, in my regular 1:1 with my manager, Mark Shuttleworth, I told him about these new "Intel NUCs", which I had bought and placed around my house.  I had 3, each of which was running Ubuntu, and attached to a TV around the house, as a media player (music, videos, pictures, etc).  In their spare time, though, they were OpenStack Nova nodes, capable of running a couple of virtual machines.  Mark immediately asked, "How many of those could you fit into a suitcase?"  Within 24 hours, Mark had reached out to the good folks at TranquilPC and introduced me to my new mission -- designing the Orange Box.  I worked with the Tranquil folks through Christmas, and we took our first delivery of 5 of these boxes in January of 2014.  Each chassis held 10 little Intel NUC servers, and a switch, as well as a few peripherals.  Effectively, it's a small data center that travels.  We spent the next 4 months working on the hardware under wraps and then unveiled them at the OpenStack Summit in Atlanta in May 2014.  We've gone through a couple of iterations on the hardware and software over the last 4 years, and these machines continue to deliver tremendous value, from live demos in the booth, to customer workshops on premises, or simply accelerating our own developer productivity by "shipping them a lab in a suitcase".  I worked extensively with Dan Poler on this project, over the course of a couple of years.

9. Hollywood (December 2014)

Perhaps the highlight of my professional career came in October of 2016.  Watching Saturday Night Live with my wife Kim, we were laughing at a skit that poked fun at another of my favorite shows, Mr. Robot.  On the computer screen behind the main character, I clearly spotted Hollywood!  Hollywood is just a silly, fun little project I created on a plane one day, mostly to amuse Kim.  But now, it's been used in Saturday Night Live, NBC Dateline News, and an Experian TV commercial!  Even Jess Frazelle created a Docker container for it!
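
If you want to see what caught the set designers' eye, it is packaged in Ubuntu; the Docker image name below is the community one I've seen referenced, so treat it as an example:

sudo apt install hollywood
hollywood
# or, via Docker (image name may vary):
docker run --rm -it jess/hollywood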

10. petname / golang-petname / python-petname (January 2015)

From "warty warthog" to "bionic beaver", we've always had a focus on fun, and user experience here in Ubuntu.  How hard is it to talk to your colleague about your Amazon EC2 instance, "i-83ab39f93e"?  Or your container "adfxkenw"?  We set out to make something a little more user-friendly with our "petnames".  Petnames are randomly generated "adjective-animal" names, which are easy to pronounce, spell, and remember.  I curated and created libraries that are easily usable in Shell, Golang, and Python.  With the help of colleagues like Stephane Graber and Andres Rodriguez, we now use these in many places in the Ubuntu ecosystem, such as LXD and MAAS.

If you've read this post, thank you for indulging me in a nostalgic little trip down memory lane!  I've had an amazing time designing, implementing, creating, and innovating with some of the most amazing people in the entire technology industry.  And here's to a productive, fun future!


Dustin Kirkland

Upon learning about the Heartbleed vulnerability in OpenSSL, my first thoughts were pretty desperate.  I basically lost all faith in humanity's ability to write secure software.  It's really that bad.

I spent the next couple of hours drowning in the sea of passwords and certificates I would personally need to change...ugh :-/

As the hangover of that sobering reality arrived, I then started thinking about various systems over the years that I've designed, implemented, or was otherwise responsible for, and how Heartbleed affected those services.  Another throbbing headache set in.

I patched within minutes of Ubuntu releasing an updated OpenSSL package, and re-keyed the SSL certificate as soon as GoDaddy declared that it was safe for re-keying.

Likewise, the Ubuntu entropy service was patched and re-keyed, along with all Ubuntu-related https services by Canonical IT.  I pushed a new package of the pollinate client with updated certificate changes to Ubuntu 14.04 LTS (trusty), the same day.

That said, I did enjoy a bit of measured satisfaction, in one controversial design decision that I made in January 2012, when creating Gazzang's zTrustee remote key management system.

All default network communications, between zTrustee clients and servers, are encrypted twice.  The outer transport layer network traffic, like any https service, is encrypted using OpenSSL.  But the inner payloads are also signed and encrypted using GnuPG.

Hundreds of times, zTrustee and I were questioned or criticized about that design -- by customers, prospects, partners, and probably competitors.

In fact, at one time, there was pressure from a particular customer/partner/prospect, to disable the inner GPG encryption entirely, and have zTrustee rely solely on the transport layer OpenSSL, for performance reasons.  Try as I might, I eventually lost that fight, and we added the "feature" (as a non-default option).  That someone might have some re-keying to do...

But even in the face of the Internet-melting Heartbleed vulnerability, I'm absolutely delighted that the inner payloads of zTrustee communications are still protected by GnuPG asymmetric encryption and are NOT vulnerable to Heartbleed style snooping.

In fact, these payloads are some of the very encryption keys that guard YOUR health care and financial data stored in public and private clouds around the world by Global 2000 companies.

Truth be told, the insurance against crypto library vulnerabilities that zTrustee bought by using GnuPG and OpenSSL in combination was really the secondary objective.

The primary objective was actually to leverage asymmetric encryption, to both sign AND encrypt all payloads, in order to cryptographically authenticate zTrustee clients, ensure payload integrity, and enforce key revocations.  We technically could have used OpenSSL for both layers and even realized a few performance benefits -- OpenSSL is faster than GnuPG in our experience, and can leverage crypto accelerator hardware more easily.  But I insisted that the combination of GPG over SSL would buy us protection against vulnerabilities in either protocol, and that was worth any performance cost in a key management product like zTrustee.
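
The shape of that inner layer is easy to sketch with stock tools; the following is an illustration of the idea rather than zTrustee's actual wire format, and the recipient key and URL are made up:

# Sign AND encrypt the payload with GnuPG before it ever leaves the client...
gpg --sign --encrypt --recipient keys@example.com \
    --output payload.gpg payload.json
# ...then ship it over an ordinary TLS connection (the outer layer)
curl -fsS --data-binary @payload.gpg https://ztrustee.example.com/put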

In retrospect, this makes me wonder why diverse, backup, redundant encryption isn't more prevalent in the design of security systems...

Every elevator you have ever used has redundant safety mechanisms.  Your car has both seat belts and air bags.  Your friendly cashier will double bag your groceries if you ask.  And I bet you've tied your shoes with a double knot before.

Your servers have redundant power supplies.  Your storage arrays have redundant hard drives.  You might even have two monitors.  You might be carrying a laptop, a tablet, and a smart phone.

Moreover, important services on the Internet are often highly available, redundant, fault tolerant or distributed by design.

But the underpinnings of the privacy and integrity of the very Internet itself are usually protected only once, with transport layer encryption of the traffic in motion.

At this point, can we afford the performance impact of additional layers of security?  Or, rather, at this point, can we afford not to use all available protection?


p.s. I use both dm-crypt and eCryptfs on my Ubuntu laptop ;-)

Colin Ian King

Testing eCryptfs

Over the past several months I've been occasionally back-porting a bunch of eCryptfs patches onto older Ubuntu releases.  Each back-ported fix needs to be properly sanity checked and so I've been writing test cases for each one and adding them to the eCryptfs test suite.

To get hold of the test suite, check it out using bzr:

 bzr checkout lp:ecryptfs  
and install the dependencies so one can build the test suite:
 sudo apt-get install debhelper autotools-dev autoconf automake \
intltool libtool libgcrypt11-dev libglib2.0-dev libkeyutils-dev \
libnss3-dev libpam0g-dev pkg-config python-dev swig acl
If you want to test eCryptfs with xfs and btrfs as the lower file system onto which eCryptfs is mounted, then one needs to also install the tools for these:
 sudo apt-get install xfsprogs btrfs-tools  
And then build the test programs:
 cd ecryptfs  
autoreconf -ivf
intltoolize -c -f
./configure --enable-tests --disable-pywrap
make
To run the tests, one needs to create lower and upper mount points. The tests allow one to create ext2, ext3, ext4, xfs or btrfs loop-back mounted file systems on the lower mount point, and then eCryptfs is mounted on the upper mount point on top.   To create these, use something like:
 sudo mkdir /lower /upper  
The loop-back file system image needs to be placed somewhere too, I generally place mine in a directory /tmp/image, so this needs creating too:
 mkdir /tmp/image  
There are two categories of tests, "safe" and "destructive".  Safe tests should run in such a way as to not lock up the machine.  Destructive tests try hard to force bugs that can cause kernel oopses or panics. One specifies the test category with the -c option.  Now to run the tests, use:
 sudo ./tests/run_tests.sh -K -c safe -b 1000000 -D /tmp/image -l /lower -u /upper  
The -K option tells the test suite to run the kernel specific tests. These are the ones I am generally interested in since I'm testing kernel patches.

The -b option specifies the size, in 1K blocks, of the loop-back mounted /lower file system.  I generally use 1000000 blocks as a minimum.

The -D option specifies the path where the temporary loop-back mounted image is kept, and the -l and -u options specify the paths of the lower and upper mount points.

By default the tests will use an ext4 lower filesystem. One can also specify which file systems to run the tests on using the -f option; this takes a comma-separated list of one or more file systems, for example:
 sudo ./tests/run_tests.sh -K -c safe -b 1000000 -D /tmp/image -l /lower -u /upper \
-f ext2,ext3,ext4,xfs
And also, instead of running a bunch of tests, one can run just a particular test using the -t option:
 sudo ./tests/run_tests.sh -K -c safe -b 1000000 -D /tmp/image -l /lower -u /upper \
-f ext2,ext3,ext4,xfs -t lp-926292.sh
..which tests the fix for Launchpad bug 926292.
We also run these tests regularly on new kernel images to ensure we don't introduce any regressions.   As it stands, I'm currently adding in tests for each bug fix that we back-port and for most new bugs that require a thorough test. I hope to expand the breadth of the tests to ensure we get better general test coverage.

And finally, thanks to Tyler Hicks for writing the test framework and for his valuable help in describing how to construct a bunch of these tests.

Dustin Kirkland

eCryptfs in the Wild

Perhaps you're aware of my involvement in the eCryptfs project, as the maintainer of the ecryptfs-utils userspace tools...

This post is just a collection of some recent news and headlines about the project...

  1. I'm thrilled that eCryptfs' kernel maintainer, Tyler Hicks, joined Canonical's Ubuntu Security Team last month!  He'll be working on the usual Security Updates for stable Ubuntu releases, but he'll also be helping develop, triage and fix eCryptfs kernel bugs, both in the Upstream Linux Kernel, and in Ubuntu's downstream Linux kernel packages.  Welcome Tyler!
  2. More and more and more products seem to be landing in the market, using eCryptfs encryption!  This is, all at the same time, impressive/intimidating/frightening to me :-)
    • Google's ChromeOS uses eCryptfs to securely store browser cache locally (this feature was in fact modeled after Ubuntu's Encrypted Private Directory feature, and the guys over at Google even sent me a Cr-48 to play with!)
    • We've spotted several NAS solutions on the market running eCryptfs, such as the Synology DS1010+ and the BlackArmor NAS 220 from Seagate
    • Do you know of any others?
  3. I've had several conversations with Android developers recently, who are also quite interested in using eCryptfs to efficiently secure local storage on their devices.  As an avid Android user, I'd love to see this!
  4. There's a company here in Austin, called Gazzang, that's developing Cloud Storage solutions (mostly database backends) backed by eCryptfs.
  5. And there's a start-up in the Bay Area investigating eCryptfs + LXC + MongoDB for added security to their personal storage solution.
Exciting times in eCryptfs-land, for sure!

Which brings me to the point of this post...  We could really use some more community interaction and developer involvement around eCryptfs!
  • Do you know anything about encryption?
  • What about Linux filesystems?
  • Perhaps you're a user who's interested in helping with some bug triage, or willing to help support some other users?
  • We have both kernel, and user space bug-fixing and new development to be done!
  • There's code in both C and Shell that needs some love.
  • Heck, even our documentation has plenty of room for improvement!
If you'd like to get involved, drop by #ecryptfs on IRC, and poke kirkland or tyhicks.


Chad Miller

I’ve spoken before (Barcamp Orlando, 2007) about how we, as a culture, do not protect data as well as we should.  We value physical things more than we value data, even though some of us spend far more of our lives creating and using data.

One way we can lose data is through theft of the hardware it’s on.  We can think about the value of encryption, not only in how much positive value some information has to you (how much is a random string worth, really?), but also in how much negative value it would have if someone else had control of it (like when your bank password is in the hands of someone else).

Ubuntu can’t help you value your data more, but it can help remove some of the negative impact of someone stealing your hardware.  Though an encrypted private directory was available only to advanced users back in Ubuntu 9.04, now all users can take advantage of having all personal data encrypted! That’s right — your entire home directory can be encrypted.  There’s a great article written by Dustin Kirkland that elucidates how to begin to use encrypted home, even for established users and systems.

I thought I understood the migration process more than I really did, so my first attempt failed.  I worked on it until I understood it, and so maybe someone will find a high-level summary of what I learned to be useful:

  1. Start ~/Private dir encryption.  Copy all of your home into it (excluding the “Private” therein).  Unmount the encryption.
  2. Make a new directory outside to hold config files.  Move your ~/.ecryptfs  into it, so there’s no chicken-egg problem in loading the config files.  Move your ~/.Private ciphertext into it also.
  3. Make a new directory that will be renamed to your home directory later, which will hold almost nothing except a symlink to the config files and a symlink to the ciphertext dir.
  4. Move your home out of the way, as a backup.  Move the new directory into place.
  5. Then, at login, the eCryptfs PAM module reads the configs, finds the ciphertext, mounts this on top of your home, and all works as it should.

That is, of course, just an overview of what one does, and should only help one grapple with the concepts, not help one actually do anything.  For more helpful advice, see the article above.
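
To make those steps a little more concrete, here is a rough shell sketch of the same flow; the username and paths are invented, ownership details are glossed over, and the supported route nowadays is simply sudo ecryptfs-migrate-home -u <user>:

# Conceptual sketch only -- do not run blindly on a live home directory
ecryptfs-setup-private                      # step 1: create the encrypted ~/Private
ecryptfs-mount-private
rsync -a --exclude=Private --exclude=.Private --exclude=.ecryptfs ~/ ~/Private/
ecryptfs-umount-private

sudo mkdir -p /home/.ecryptfs/alice         # step 2: config and ciphertext live outside $HOME
sudo mv ~/.ecryptfs ~/.Private /home/.ecryptfs/alice/

mkdir /tmp/newhome                          # step 3: a nearly-empty home with two symlinks
ln -s /home/.ecryptfs/alice/.ecryptfs /tmp/newhome/.ecryptfs
ln -s /home/.ecryptfs/alice/.Private /tmp/newhome/.Private

sudo mv /home/alice /home/alice.backup      # step 4: swap the new skeleton into place
sudo mv /tmp/newhome /home/alice
# step 5: log back in and pam_ecryptfs mounts the encrypted home automatically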

Encrypted home directories are new in Ubuntu 9.10 and are very easy to use if starting a new user from scratch.
