Canonical Voices

What Jamie Strandboge talks about

Posts tagged with 'ubuntu'

jdstrand

Compiz vs Ubuntu Classic Desktop

I am running the development version of Ubuntu (the Natty Narwhal). I’ve tried the Unity desktop (and will continue to do so) but for reasons I won’t go into here, I need to use the Ubuntu Classic Desktop for now. After today’s update I could no longer log in to a functional Ubuntu Classic Desktop because of bug #683686. There were a number of things that went wrong and I wasted an hour trying to work it out (thank you to didrocks and seb128 for helping me). Here is what I’ve learned:

  • Do not disable the Unity plugin in CompizConfig Settings Manager while in Unity (or enable it when not in Unity)
  • To use Unity, login to GDM with ‘Ubuntu Desktop’
  • To use the traditional desktop, login to GDM with ‘Ubuntu Classic Desktop’
  • If after logging in to the Ubuntu Classic Desktop your window manager does not start, this might be bug #683686. To work around it, log out, move your ~/.config/compiz-1 aside (logging into a console first; see the sketch after this list), then log back in with GDM like normal. This bug is actively being worked on.
  • There is a known bug with compiz and the gnome-panel that may cause applets to not load. Logging out and back in again usually solves this. This bug is actively being worked on.
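For the workaround in the fourth item, a minimal sketch (the .bak name is just an example):

# at a console (eg Ctrl+Alt+F1), after logging out of the graphical session:
$ mv ~/.config/compiz-1 ~/.config/compiz-1.bak
# then switch back to GDM (eg Ctrl+Alt+F7) and log in as usual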

Hope this helps anyone suffering from the same problems I did. Please file bugs if you are having other problems with Compiz, Unity or the Ubuntu Classic Desktop.


Filed under: ubuntu

jdstrand

So, like a lot of people, I get asked to install Ubuntu on friends’ and family’s computers. I talk to them about what their use cases are and more often than not (by far) I install it on their systems. That’s cool. What is less cool is tech support for said installation. Not that I mind doing it or that there is a lot to do, but it becomes problematic when they go home and are sitting behind the NAT router from their ISP, and I can’t just connect to them to fix something I forgot or to help them out of a jam. Before I go any further, let me say that what I am going to describe is never done on machines without the owner’s permission and that I am always upfront that my account on his/her machine is an administrative account with access to everything on the system (barring any encryption they use, of course). I should also mention that what I am describing is more in the ‘fun hack that other people might like’ category, and not in the ‘serious systems administration’ category. In other words, this is total crack, but it is fun crack. :)

Now, there are a lot of ways to do this, and I have tried some and surely missed many others. Here are a few I’ve tried:

  • Giving realtime tech support over the phone
  • Giving realtime tech support over some secure/encrypted chat mechanism
  • Email support
  • OpenSSH access from my machine to theirs, including adjusting their router for port forwarding
  • VPN access from their machine to my network, at which point I can OpenSSH to their machine

Rather than going through a comparison of all the different techniques, let’s just say they all have issues: realtime support almost always entails describing some obscure incantation to fix the problem, which is not confidence inspiring for them and is extremely time consuming. Email is too slow. Routers get reset and straight OpenSSH doesn’t work when they are away from home. VPN access is not bad, especially with the use of OpenVPN client and server certificates. It has the added benefit of being opt-in by the user, and is easy to use with the network-manager-openvpn-gnome package. It is probably the most legitimate form of access, and should be seriously considered, especially for corporate environments. The only real issues are that some draconian networks will block this VPN traffic and that my IP happens to change, so they have to fiddle with their connection settings. So I started doing something different.

Remotely triggered reverse OpenSSH connections
The basic idea is this: on the client install OpenSSH, harden it a bit, install a firewall so that no one can connect to it, then create a cron job to poll an HTTP server you have access to for an IP address to connect to, then create a reverse SSH connection to that IP address. If this sounds a little shady and a bit like a botnet, well, you’d be right, but again, I did ask for permission first. :)

So, more specifically, on the remote machine (ie, the one you want to administer):

  • Install openssh-server and set /etc/ssh/sshd_config with the following:
    # Force key authentication (ie, no passwords)
    PasswordAuthentication no
    # Only allow logins to my account
    AllowUsers me
    Obviously, you will need to copy your ssh key over to this machine (man ssh-copy-id) before restarting OpenSSH and putting the above into effect.
  • Set up a firewall. Eg:
    $ sudo ufw enable
  • Create some passwordless ssh keys. Eg:
    $ ssh-keygen -f revssh.id_rsa
  • Create a script to poll some HTTP server, then create the reverse connection (this is an abridged script; I’ve omitted error checking and locking for brevity):
    #!/bin/sh
    hname=`hostname | cut -f 1 -d '.'`
    ip=`elinks -no-home -dump "http://your.web.server/$hname" | head -1 | awk '{print $1}' | egrep '^[1-9][0-9\.]*$'` || {
        #echo "Could not obtain ip" >&2
        exit 0
    }
    echo "Connecting to $ip"
    # we use StrictHostKeyChecking=no so that keys get added without prompting
    ssh -oStrictHostKeyChecking=no -i $HOME/.ssh/revssh.id_rsa -p 8080 -NR 3333:localhost:22 revssh@"$ip" sleep 30 || true
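The polling mentioned earlier is driven by cron. A hypothetical crontab entry on the client (the script name and interval are illustrative):

# poll the web server every 5 minutes for an IP address to connect back to
*/5 * * * * $HOME/bin/revssh-poll.sh >/dev/null 2>&1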

To remotely administer a machine, create a file on the web server named after the client’s hostname, containing only the external IP address of your network. The remote client will pick it up and try to set up a reverse ssh connection to you on port 8080 (usually open even on the most draconian firewalls, but you could also use 53/tcp, 80/tcp or 443/tcp), after which you can connect to the remote machine with something like:
$ ssh -t -p 3333 localhost
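The server side is just file management. A minimal sketch (the web root and IP address are examples):

# publish your current external IP under the client's hostname
$ echo '203.0.113.5' > /var/www/clienthostname
# ... administer the machine over the reverse connection ...
# then remove the file so the client stops trying to connect back
$ rm /var/www/clienthostname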

Caveats
This is hardly perfect. For one thing, the client is polling an HTTP server, so it can easily be man-in-the-middled, but that isn’t a big deal because even if the attacker had full knowledge of this technique, all it gives is a connection to OpenSSH on the client, which is configured to only allow connections from ‘me’ and with my ssh key. This could of course be fixed by connecting with HTTPS and using connection.ssl.cert_verify=1 with elinks. Similarly, the HTTP server could be subverted and under attacker control. For the client, this is no different from the MITM attack in that the attacker really doesn’t have much to work with due to the OpenSSH configuration. Also, the client completely ignores the fingerprint of the server it is connecting to; again, this is not a huge deal because of our OpenSSH configuration on the client, but you will need to be extra careful in checking fingerprints when connecting to the reverse connection.

You need to remember to update your webserver (ie, just remove the file) so the client isn’t always trying to connect to you. Also, it is somewhat inconvenient that when you log out the reverse connection is still there, so instead of using ‘exit’ to log out of the remote machine, use something like:
$ kill -9 `ps auxww | grep [s]sh | grep 3333 | awk '{print $2}'`

While it is relatively easy to set up the client, it is somewhat harder to set up the initiating end. First off, you need to have a webserver that can be accessed by the client. Then you need to have the ssh server on your local machine listen on port 8080 (done either via a port redirect or a separate ssh server). You also need to set up a non-privileged ‘revssh’ user (eg, /bin/false for the shell, a disabled password, etc) on your machine. If you are behind a firewall/router you need to allow connections through your firewall to your machine so that the connection to port 8080 from the client is not blocked. Finally, if you remotely administer multiple clients you will want to keep track of their ssh fingerprints, because when you connect to ‘-p 3333 localhost’ they will conflict with each other (most annoying, but workable). I have written a ‘revssh_allow’ script to automate the above for me (not included, as it is highly site-specific). It will: fire up an sshd server on port 8080 that is specially configured for this purpose, adjust my local firewall to open port 8080/tcp from the client, connect to my router to set up the port redirection to my machine, then poll (via ‘netstat -atn | grep ":3333.*LISTEN"’) for the connection from the client, then remind me how to connect to the client and how to properly kill the connection.

Summary
So yeah, this is a fun hack. Is it something to put in production? Probably not. Does it work for administering friends’ and family’s computers? Absolutely, but I’ll have to see how well it works over the long haul.

Have fun!


Filed under: security, ubuntu

jdstrand

What I Do

I’m often asked, “So Jamie, what do you do?” I find my answer is usually quite different depending on whom I am talking to. Normally I say something fairly bland like, “I’m a security engineer for Ubuntu, which is a Free alternative to Windows and Mac.” I try to say something about freedom and beer, but really by the time I get to the word ‘engineer,’ many people’s eyes go glassy (maybe they’re tearing up at the thought of working on free software for a living and I am just not empathetic enough to notice). There might be a follow-up question or two and I may even offer a free CD, but usually the response is a simple, “Oh, you work with computers?”

Yes, I work with computers.

The truth is I would love to talk in depth about what I do with people who ask, so when my employer asked people to blog about what they do, I was pretty stoked. So where to start? How about where I got started.

I started using Free Software in 1996, when I went back to school to expand my education. Not long after that, my wife was pregnant and I found myself needing a way to work on my school assignments from home. My computer graphics professor gave me the new RedHat 5.0. I went straight home, installed it and was hooked. A little while later I installed a pre-release version of Debian Slink. Like many others, I loved Debian’s package management, its policy and how it is community-based. These gifts of Free Software and the community around them were, and still are, very meaningful for me.

Fast forward a few years and you’ll find me setting up a business with Debian Woody. Back then Debian stable still had Gnome 1.4, so I was keen on finding a newer desktop on top of the reliable, stable and secure foundation that I admired in Debian. I found Gustavo Noronha Silva’s unofficial Gnome 2 packages, but I really wanted Gnome 2.2. He didn’t plan on providing 2.2 packages, so I took up that work by providing a full, modern desktop including XFree86, Evolution, Mozilla, and a whole lot more. I realized that I had a pretty good thing going and thought others could benefit, so I released this as the Gnome 2.2 Backport for Debian Woody. I provided security support and an upgrade path for the backport for more than 3 years until Woody’s end of life. During this same time I developed an intense interest in secure servers, which led me to consulting and a strong advocacy of Free Software. These experiences helped me understand how much good you can bring to people by working on Free Software.

In 2007 I was interviewed for a position at Canonical and I’ll never forget Matt Zimmerman’s question in my interview: “What will stop you from quitting a year from now and going back to consulting?” Though I did not expect this question, the answer was immediate: “Because I know how much of an impact Free Software can have and I won’t have the opportunity to help more people than with Ubuntu.”

These days, I get paid by Canonical to work with computers.

As an Ubuntu Security engineer, I am on a team of people who are responsible for tending to known threats against Ubuntu. We track security vulnerabilities, triage bugs, interact with upstreams, coordinate with other vendors, sponsor patches from the community, liaise with upstreams and vendors on behalf of researchers, analyze vulnerabilities, add to the Debian CVE tracker and of course fix security bugs in Ubuntu. Quality assurance is an integral part of fixing a bug that can land on millions of users’ desktops, so I helped start and regularly participate in the QA Regression Testing (QRT) project. In addition to helping our team prevent regressions in our updates, QRT is regularly used by the Ubuntu QA team to test the development release and stable bug fix updates. The scripts in QRT have on several occasions found bugs in software in our development release that led to upstream and/or Debian bug reports and fixes. I also regularly update the Ubuntu CVE Tracker and the security team’s tools for tracking, building and publishing security updates.

Another part of what I do is help develop security features, tools and documentation for Ubuntu. I am the principal author of the Uncomplicated Firewall (ufw) which aims to help people unfamiliar with firewall concepts be safer while helping seasoned administrators get their job done faster. It’s the default firewall for Ubuntu and included in other distributions such as Debian and Arch Linux. Several projects have popped up around ufw and provide graphical frontends, and I coordinate features and bug fixes in ufw with those projects.

I have joined the AppArmor project. AppArmor is the default Mandatory Access Control (MAC) system in Ubuntu and openSUSE, and thanks to the tireless work of John Johansen and many others, is now included in the mainline Linux kernel. My upstream focus is on AppArmor testing, profiling, documentation, userspace tools and ease of distribution integration. In addition, I regularly participate in upstream planning discussions and meetings. For Ubuntu, I have authored many of the profiles shipped in Ubuntu and regularly provide testing, bug fixes and new versions of AppArmor in Ubuntu.

I’ve also authored a few smaller applications like auth-client-config, openssl-blacklist, and openvpn-blacklist. auth-client-config is a program for modifying nsswitch.conf and pam configuration, but has largely been superseded in Ubuntu by pam-auth-update. I developed the openssl-blacklist and openvpn-blacklist tools and lists to detect known-bad RSA keys; they are included in Debian. I’ve had patches accepted upstream for assorted software such as GnuCash and Gourmet, and have submitted many patches to Debian over the years.

I use virtualization for much of my development work and testing, so I regularly triage and fix bugs in libvirt and other parts of the virtualization stack with the Ubuntu Server team. I wrote and regularly maintain the AppArmor security driver in upstream libvirt. In Ubuntu I tend to focus on libvirt’s bug triage, AppArmor integration, merges with Debian, and testing. I also wrote much of the vm-tools in the Ubuntu QA Tools, and I hope these scripts help anyone be more efficient when manipulating several machines at one time, such as when performing ISO testing or testing a patch on many different operating systems.

When not working at home, I might be at a conference such as the Ubuntu Developer Summit (UDS), where I collaborate with people from all over the world in the Ubuntu community and upstreams to help plan and implement new features with security in mind. I’ve also attended security conferences such as DefCon and BlackHat.

Yes, I work with computers and am happy for it! On any given day I might publish an update, audit a piece of software, discuss a vulnerability with upstream, submit a profile to AppArmor, forward a patch to Debian, plan a ufw feature, test and refine a security fix with other vendors, triage and comment on a security vulnerability, write up some documentation, develop a test script, and/or fire up a bunch of virtual machines. What I do is fun, challenging… satisfying. It is hugely rewarding working on Free Software with so many talented and intelligent individuals in Canonical, the Ubuntu community, and the upstreams I interact with every day. I am blessed to work with these fantastic people who continually inspire me to stretch to learn and do more. I believe by working together on Free Software all of us are in our own way changing the world for the better. That’s why I do what I do.


Filed under: ubuntu

jdstrand

So yesterday I rebooted a Lucid server I administer, and fsck ran. Ok, that’s cool. Granted, it takes about 45 minutes on my RAID1 terabyte filesystem, but so be it. As in the past, I was slightly annoyed again that upstart/plymouth did not tell me that it was fscking my drive like my desktop does (maybe it’s because this was an upgrade from Hardy and not a fresh install? It would be nice to look into why), but I knew that was what was happening, so I went about my business.

Until… there was a failure that had to be manually resolved by fsck. Looking at the path, it was no big deal (easily restorable), so I just needed to run ‘fsck /dev/md2’. Hmmm, /dev/md2 is /var on this system, and mountall got stuck because /var couldn’t be mounted. Getting slightly more annoyed, I tried to reboot with ‘Ctrl+Alt+Del’, but that didn’t work, so I had to SysRq my way out (using Alt+SysRq+R, Alt+SysRq+S, Alt+SysRq+U, and finally Alt+SysRq+B) and boot into single user mode. Surely I could reboot to runlevel 1 and get a prompt… 45 minutes later (ie, after another failed fsck on /var) I was shown to be wrong. Thankfully I had an amd64 10.04 Server CD handy and booted into rescue mode, which in the end worked fine for fscking my drive manually.

Why was running fsck manually so hard? Why is plymouth/upstart so quiet on my server?

It turns out that because I removed ‘splash quiet’ from my kernel boot options, plymouth wouldn’t show the message to press ‘S’ (skip) or ‘M’ (manually recover) for /var. It was still running, though, so I could press ‘m’ to get to a maintenance shell (you might need to change your tty for this).

For the plymouth/upstart silence, I came across a workaround. In short, it has you add to /etc/modprobe.d/blacklist-custom.conf:
blacklist vga16fb

Then adjust your kernel boot line to have:
ro nosplash INIT_VERBOSE=yes init=/sbin/init noplymouth -v

This is not as pretty as the SysV bootup, but it is verbose enough to let you know what is happening. The problem is that because there is no plymouth, there is no way to enter a maintenance shell when you need to, so the above is hard to recommend. :(

To me, it boils down to the following two choices (see the sketch after this list for where these options are set):

  • Boot with ‘splash’ but without ‘quiet’ and lose boot messages but gain fsck feedback
  • Boot without ‘splash quiet’ and lose fsck feedback and remember you can press ‘m’ to enter a maintenance shell when there is a problem
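Since this server was upgraded from Hardy, it presumably still uses GRUB legacy, where these options live in the automagic kernels section of /boot/grub/menu.lst. A hedged sketch for the first choice above:

# in /boot/grub/menu.lst, adjust the commented defoptions directive, eg:
# defoptions=splash
# then regenerate the kernel stanzas:
$ sudo update-grub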

It would really be nice to have both fsck feedback and no splash, but this doesn’t seem possible at this time. If someone knows a way to do this on Lucid, please let me know. In the meantime, I have filed bug #613562.


Filed under: ubuntu, ubuntu-server

jdstrand

A proposed security update for chromium-browser on Ubuntu 10.04 LTS is available. If you are able, please test and comment in https://launchpad.net/bugs/602142.


Filed under: security, ubuntu

jdstrand

Fabien Tassin (fta) has prepared a security update for chromium-browser for Ubuntu 10.04 LTS (Lucid Lynx). Please test and provide positive and/or negative feedback in: https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+bug/591474.

This update addresses the following upstream issues:
http://googlechromereleases.blogspot.com/2010/06/stable-channel-update.html.

Thanks Fabien!


Filed under: security, ubuntu

jdstrand

I make use of the Master Password feature in Firefox. While not on by default, when enabled this feature encrypts your Firefox saved passwords on disk, and Firefox will prompt you when you need access to a saved password. When your browser is not running, your passwords are safe. There is a tool to try to brute force your master password if your machine is stolen, but as long as you use a strong password you should be ok (or, at the very least, you’ll have time to change your passwords). For more information, see http://kb.mozillazine.org/Master_password.

This is a nice feature, and one which Chromium lacks. If you let Chromium save your passwords, they are stored in the ‘~/.config/chromium/Default/Web Data’ sqlite database. Displaying them is surprisingly easy (this is 5.0.342.9~r43360-0ubuntu2 on Ubuntu 10.04 LTS, newer versions may save them somewhere else):

$ echo 'SELECT username_value, password_value FROM logins;' | sqlite3 ~/.config/chromium/Default/Web\ Data | grep -v '^|$'
username|password
username2|password2

As you can see, in essence your passwords are stored in plain text on your disk (though the ~/.config/chromium directory does have 0700 permissions). I won’t go into the reasons why Google hasn’t implemented this feature yet since people can read the bug, but it seems clear that:

  • Google is not going to fix this anytime soon
  • People need a way to protect themselves

There are some alternatives, such as LastPass and RoboForm, but these apparently require you to store your passwords online (I’ve not verified this personally). As it stands, there is no way to lock your saved passwords, so I encourage Chromium users to encrypt their data using eCryptfs or LUKS full disk encryption so that at least when you turn off your computer the passwords are not readily available. In Ubuntu, you can:

  • set up LUKS full disk encryption using the alternate installer
  • set up an encrypted home directory in all the Desktop and Server installers (or migrate an existing home directory by using ‘ecryptfs-migrate-home’)
  • set up an encrypted private directory using ‘ecryptfs-setup-private’ (if you go this route, you’ll want to move ~/.config/chromium and ~/.cache/chromium into the encrypted directory and use symlinks to point to them; see the sketch after this list)
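For the last option, a minimal sketch of the relocation (run while Chromium is closed; the names under ~/Private are just examples):

# move Chromium's data into the encrypted Private directory
$ mv ~/.config/chromium ~/Private/chromium-config
$ mv ~/.cache/chromium ~/Private/chromium-cache
# point the old locations at the encrypted copies
$ ln -s ~/Private/chromium-config ~/.config/chromium
$ ln -s ~/Private/chromium-cache ~/.cache/chromium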

In this scenario, normal DAC permissions will protect your passwords on multiuser systems (though you’ll need to be careful about the security of backups) and encrypted disks/folders will protect them in the case of theft. As always, please be vigilant about locking your screen when you leave your computer while logged in.


Filed under: security, ubuntu

jdstrand

A coworker turned me on to browser profiles in Firefox (thanks Kees!). Browser profiles are a great way to keep your passwords, bookmarks, preferences and even extensions separate. I like to use one for work and one for personal stuff (and a few others). For more information on how to use them in Firefox, see http://support.mozilla.com/en-US/kb/profiles.

I started playing with Chromium lately, and found that it also supports profiles (see http://www.chromium.org/user-experience/user-data-directory), but not quite as conveniently as Firefox. With Firefox, you can launch it like so and get a nice little dialog:

$ firefox -ProfileManager -no-remote

Well, I wanted the same in Chromium, so I hacked up this little script, which achieves the same:

#!/bin/sh
set -e

topdir="$HOME/.config/chromium"

# build the zenity radiolist arguments: pairs of <selected> <profile name>,
# with the Default profile first and pre-selected
profiles="True Default"
for d in `find -H "$topdir" -maxdepth 1 -mindepth 1 -type d` ; do
  if [ "$d" != "$topdir/Default" ] && [ "$d" != "$topdir/Dictionaries" ]; then
    profiles="$profiles False `basename $d`"
  fi
done

if ans=`zenity --title "Chromium profile chooser" --text "Choose a profile from the list below:" --list --radiolist --column "Profile" --column "Item" $profiles` ; then
  if [ "$ans" = "Default" ]; then
    chromium-browser "$@"
  else
    # non-default profiles each get their own user data directory
    chromium-browser --user-data-dir="$topdir/$ans" "$@"
  fi
else
  echo "Aborted"
fi

I saved this as $HOME/bin/chromium-launcher.sh, made it executable, and then created a launcher in Gnome using:

/home/<my username>/bin/chromium-launcher.sh %u

This should pick up new profiles as you add them and also works the first time you launch Chromium. Enjoy!


Filed under: ubuntu

jdstrand

Upstream ClamAV pushed out an update via freshclam that crashed versions 0.95 and earlier on 32-bit systems (Ubuntu 9.10 and earlier are affected). Upstream issued a fixed update via freshclam within 15 minutes, but affected users’ clamd daemons will not restart automatically. People running ClamAV should check that it is still running. For details see:

http://lurker.clamav.net/message/20100507.110656.573e90d7.en.html
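A quick way to check, assuming the Ubuntu clamav-daemon package (adjust the init script name for your setup):

# restart clamd if it is no longer running
$ pgrep clamd >/dev/null || sudo /etc/init.d/clamav-daemon restart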


Filed under: security, ubuntu, ubuntu-server

jdstrand

As posted to ubuntu-devel:

Hi!

At UDS Lucid we discussed[1] ways to improve the sponsorship of security
updates, particularly for community supported packages. As a result, we
have changed our process to work like other similar processes and
integrate into the SRU process for certain updates. The changes are:

* A new team has been created (ubuntu-security-sponsors) to review community contributed security updates
* ubuntu-security and motu-swat are members of ubuntu-security-sponsors
* The SecurityTeam/UpdatePreparation page has been updated for the changes
* SponsorshipProcess has been updated to include our new process
* The process for sponsoring large updates has been formalized, and utilizes the verification procedures of SRU
* SecurityTeam/SponsorsQueue has been created (based on MOTU’s) for the process of handling the security sponsorship queue

If you have any questions regarding the new process, don’t hesitate to ask. Hopefully these changes will make it easier for people to contribute security updates, make our team a little more transparent, and ultimately better integrate our teams.

Thanks!

Jamie

[1] https://blueprints.launchpad.net/ubuntu/+spec/security-lucid-sponsorship-review


Posted in ubuntu

jdstrand

Background
Ubuntu has been using libvirt as part of its recommended virtualization management toolkit since Ubuntu 8.04 LTS. In short, libvirt provides a set of tools and APIs for managing virtual machines (VMs) such as creating a VM, modifying its hardware configuration, starting and stopping VMs, and a whole lot more. For more information on using libvirt in Ubuntu, see http://doc.ubuntu.com/ubuntu/serverguide/C/libvirt.html.

Libvirt greatly eases the deployment and management of VMs, but because it has traditionally been limited to POSIX ACLs and sometimes needs to perform privileged actions, using libvirt (or any virtualization technology for that matter) can create a security risk, especially when the guest VM isn’t trusted. It is easy to imagine a bug in the hypervisor which would allow a compromised VM to modify other guests or files on the host. Considering that when using qemu:///system the guest VM process runs as root (this is configurable in 0.7.0 and later, but still the default in Fedora and Ubuntu 9.10), it is even more important to contain what a guest can do. To address these issues, SELinux developers created a fork of libvirt, called sVirt, which, when using kvm/qemu, allows libvirt to add SELinux labels to files required for the VM to run. This work was merged back into upstream libvirt in version 0.6.1, and its implementation, features and limitations can be seen in a blog post by Dan Walsh and an article on LWN.net. This is inspired work, and the sVirt folks did a great job implementing it by using a plugin framework so that others could create different security drivers for libvirt.

AppArmor Security Driver for Libvirt
While Ubuntu has SELinux support, by default it uses AppArmor. AppArmor is different from SELinux and some other MAC systems available on Linux in that it is path-based, allows for mixing of enforcement and complain mode profiles, uses include files to ease development, and typically has a far lower barrier to entry than other popular MAC systems. It has been an important security feature of Ubuntu since 7.10, where CUPS was confined by AppArmor in the default installation (more profiles have been added with each new release).

Since virtualization is becoming more and more prevalent, improving the security stance for libvirt users is of primary concern. It was very natural to look at adding an AppArmor security driver to libvirt, and as of libvirt 0.7.2 and Ubuntu 9.10, users have just that. In terms of supported features, the AppArmor driver should be on par with the SELinux driver, where the vast majority of libvirt functionality is supported by both drivers out of the box.

Implementation
First, the libvirtd process is confined with a lenient profile that allows the libvirt daemon to launch VMs, change into another AppArmor profile and use virt-aa-helper to manipulate AppArmor profiles. virt-aa-helper is a helper application that can add, remove, modify, load and unload AppArmor profiles in a limited and restricted way. Specifically, libvirtd is not allowed to adjust anything in /sys/kernel/security or to modify the profiles for the virtual machines directly. Instead, libvirtd must use virt-aa-helper, which is itself run under a very restrictive AppArmor profile. This architecture helps prevent a subverted libvirtd from changing its own profile (especially useful if the libvirtd profile is adjusted to be restrictive) or modifying other AppArmor profiles on the system.

Next, there are several profiles that comprise the system:

  • /etc/apparmor.d/usr.sbin.libvirtd
  • /etc/apparmor.d/usr.bin.virt-aa-helper
  • /etc/apparmor.d/abstractions/libvirt-qemu
  • /etc/apparmor.d/libvirt/TEMPLATE
  • /etc/apparmor.d/libvirt/libvirt-<uuid>
  • /etc/apparmor.d/libvirt/libvirt-<uuid>.files

/etc/apparmor.d/usr.sbin.libvirtd and /etc/apparmor.d/usr.bin.virt-aa-helper define the profiles for libvirtd and virt-aa-helper (note that in libvirt 0.7.2, virt-aa-helper is located in /usr/lib/libvirt/virt-aa-helper). /etc/apparmor.d/libvirt/TEMPLATE is consulted when creating a new profile when one does not already exist. /etc/apparmor.d/abstractions/libvirt-qemu is the abstraction shared by all running VMs. /etc/apparmor.d/libvirt/libvirt-<uuid> is the unique base profile for an individual VM, and /etc/apparmor.d/libvirt/libvirt-<uuid>.files contains rules for the guest-specific files required to run this individual VM.

The confinement process is as follows (assume the VM has a libvirt UUID of ‘a22e3930-d87a-584e-22b2-1d8950212bac’):

  1. When libvirtd is started, it determines if it should use a security driver. If so, it checks which driver to use (eg SELinux or AppArmor). If libvirtd is confined by AppArmor, it will use the AppArmor security driver
  2. When a VM is started, libvirtd decides whether to ask virt-aa-helper to create a new profile or modify an existing one. If no profile exists, libvirtd asks virt-aa-helper to generate the new base profile, in this case /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac, which it does based on /etc/apparmor.d/libvirt/TEMPLATE. Notice that the new profile’s name is based on the guest’s UUID. Once the base profile is created, virt-aa-helper works the same for create and modify: it determines what files are required for the guest to run (eg kernel, initrd, disk, serial, etc), updates /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac.files, then loads the profile into the kernel.
  3. libvirtd proceeds as normal from this point until just before it forks a qemu/kvm process, at which point it calls aa_change_profile() to transition into the profile ‘libvirt-a22e3930-d87a-584e-22b2-1d8950212bac’ (the one virt-aa-helper loaded into the kernel in the previous step)
  4. When the VM is shutdown, libvirtd asks virt-aa-helper to remove the profile, and virt-aa-helper unloads the profile from the kernel

It should be noted that due to current limitations of AppArmor, only qemu:///system is confined by AppArmor. In practice, this is fine because qemu:///session is run as a normal user and does not have privileged access to the system like qemu:///system does.

Basic Usage
By default in Ubuntu 9.10, both AppArmor and the AppArmor security driver for libvirt are enabled, so users benefit from the AppArmor protection right away. To see if libvirtd is using the AppArmor security driver, do:

$ virsh capabilities
Connecting to uri: qemu:///system
<capabilities>
 <host>
  ...
  <secmodel>
    <model>apparmor</model>
    <doi>0</doi>
  </secmodel>
 </host>
 ...
</capabilities>

Next, start a VM and see if it is confined:

$ virsh start testqemu
Connecting to uri: qemu:///system
Domain testqemu started

$ virsh domuuid testqemu
Connecting to uri: qemu:///system
a22e3930-d87a-584e-22b2-1d8950212bac

$ sudo aa-status
apparmor module is loaded.
16 profiles are loaded.
16 profiles are in enforce mode.
...
  /usr/bin/virt-aa-helper
  /usr/sbin/libvirtd
  libvirt-a22e3930-d87a-584e-22b2-1d8950212bac
...
0 profiles are in complain mode.
6 processes have profiles defined.
6 processes are in enforce mode :
...
  libvirt-a22e3930-d87a-584e-22b2-1d8950212bac (6089)
...
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

$ ps ww 6089
PID TTY STAT TIME COMMAND
6089 ? R 0:00 /usr/bin/qemu-system-x86_64 -S -M pc-0.11 -no-kvm -m 64 -smp 1 -name testqemu -uuid a22e3930-d87a-584e-22b2-1d8950212bac -monitor unix:/var/run/libvirt/qemu/testqemu.monitor,server,nowait -boot c -drive file=/var/lib/libvirt/images/testqemu.img,if=ide,index=0,boot=on -drive file=,if=ide,media=cdrom,index=2 -net nic,macaddr=52:54:00:86:5b:6e,vlan=0,model=virtio,name=virtio.0 -net tap,fd=17,vlan=0,name=tap.0 -serial none -parallel none -usb -vnc 127.0.0.1:1 -k en-us -vga cirrus

Here is the unique, restrictive profile for this VM:

$ cat /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac
#
# This profile is for the domain whose UUID
# matches this file.
#
 
#include <tunables/global>
 
profile libvirt-a22e3930-d87a-584e-22b2-1d8950212bac {
   #include <abstractions/libvirt-qemu>
   #include <libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac.files>
}

$ cat /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac.files
# DO NOT EDIT THIS FILE DIRECTLY. IT IS MANAGED BY LIBVIRT.
  "/var/log/libvirt/**/testqemu.log" w,
  "/var/run/libvirt/**/testqemu.monitor" rw,
  "/var/run/libvirt/**/testqemu.pid" rwk,
  "/var/lib/libvirt/images/testqemu.img" rw,

Now shut it down:

$ virsh shutdown testqemu
Connecting to uri: qemu:///system
Domain testqemu is being shutdown

$ virsh domstate testqemu
Connecting to uri: qemu:///system
shut off

$ sudo aa-status | grep 'a22e3930-d87a-584e-22b2-1d8950212bac'

The grep returns nothing: the guest’s profile was unloaded from the kernel when the VM shut down.

Advanced Usage
In general, you can forget about AppArmor confinement and just use libvirt like normal. The guests will be isolated from each other and user-space protection for the host is provided. However, the design allows for a lot of flexibility in the system. For example:

  • If you want to adjust the profile for all future, newly created VMs, adjust /etc/apparmor.d/libvirt/TEMPLATE
  • If you need to adjust access controls for all VMs, new or existing, adjust /etc/apparmor.d/abstractions/libvirt-qemu
  • If you need to adjust access controls for a single guest, adjust /etc/apparmor.d/libvirt/libvirt-<uuid>, where <uuid> is the UUID of the guest
  • To disable the driver, either adjust /etc/libvirt/qemu.conf to have ‘security_driver = "none"’ or remove the AppArmor profile for libvirtd from the kernel and restart libvirtd

Of course, you can also adjust the profiles for libvirtd and virt-aa-helper if desired. All the files are simple text files. See AppArmor for more information on using AppArmor in general.

Limitations and the Future
While the sVirt framework provides good guest isolation and user-space host protection, it does not provide protection against in-kernel attacks (eg, where a guest process is able to access the host kernel memory). The AppArmor security driver as included in Ubuntu 9.10 also does not handle access to host devices as well as it could. Allowing a guest to access a local PCI device or USB disk is a potentially dangerous operation anyway, and the driver blocks this access by default. Users can work around this by adjusting the base profile for the guest.
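A hedged sketch of such a workaround (the device path and rule are only examples, and the rule must go inside the braces of the guest’s base profile):

# edit the per-VM profile and add a rule such as:  "/dev/bus/usb/**" rw,
$ sudo vi /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac
# then reload the updated profile into the kernel
$ sudo apparmor_parser -r /etc/apparmor.d/libvirt/libvirt-a22e3930-d87a-584e-22b2-1d8950212bac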

There are a few missing features in the sVirt model, such as labeling state files. The AppArmor driver also needs to better support host devices. Once AppArmor provides the ability for regular users to define profiles, qemu:///session can be properly supported. Finally, it will be great when distributions take advantage of libvirt’s recently added ability to run guests as non-root when using qemu:///system (while the sVirt framework largely mitigates this risk, it is good to have security in depth).

Summary
While cloud computing feels like it is talked about everywhere and virtualization is becoming even more important in the data center, leveraging technologies like libvirt and AppArmor is a must. Virtualization removes the traditional barriers afforded to stand-alone computers, thus increasing the attack surface available to hostile users and compromised guests. By using the sVirt framework in libvirt, and in particular AppArmor on Ubuntu 9.10, administrators can better defend themselves against virtualization-specific attacks. Have fun and be safe!

More Information
http://libvirt.org/
https://wiki.ubuntu.com/AppArmor
https://wiki.ubuntu.com/SecurityTeam/Specifications/AppArmorLibvirtProfile


Posted in security, ubuntu, ubuntu-server

jdstrand

Happy Halloween!

[photos: Karmic Pumpkin; Karmic Pumpkin (Lit)]


Posted in ubuntu

jdstrand

Recently I decided to replace NFS on a small network with something that was more secure and more resistant to network failures (as in, Nautilus wouldn’t hang because of a symlink to a directory in an autofs-mounted NFS share). Most importantly, I wanted something that was secure, simple and robust. I naturally thought of SFTP, but there were at least two problems with a naive SFTP implementation, both of which I decided I must solve to meet the ‘secure’ criterion:

  • shell access to the file server
  • SFTP can’t restrict users to a particular directory

Of course, there are other alternatives that I could have pursued: sshfs, shfs, SFTP with scponly, patching OpenSSH to support chrooting (see ‘Update’ below), NFSv4, IPsec, etc. Adding all the infrastructure to properly support NFSv4 and IPsec did not meet the simplicity requirement, and neither did running a patched OpenSSH server. sshfs, shfs, and SFTP with scponly did not really fit the bill either.

What did I come up with? A combination of SFTP, a hardened sshd configuration and AppArmor on Ubuntu.

SFTP setup
This was easy because sftp is enabled by default on Ubuntu and Debian systems. This should be enough:

$ sudo apt-get install openssh-server
Just make sure you have the following set in /etc/ssh/sshd_config (see man sshd_config for details).

Subsystem sftp /usr/lib/openssh/sftp-server

With that in place, all I needed to do was add users with strong passwords:

$ sudo adduser sftpuser1
$ sudo adduser sftpuser2

Then you can test if it worked with:

$ sftp sftpuser1@server
sftp> ls /
/bin ...

Hardened sshd
Now that SFTP is working, we need to limit access. One way to do this is via a Match rule that uses a ForceCommand. Combined with AllowUsers, adding something like this to /etc/ssh/sshd_config is pretty powerful:

AllowUsers adminuser sftpuser1@192.168.0.10 sftpuser2
Match User sftpuser1,sftpuser2
    AllowTcpForwarding no
    X11Forwarding no
    ForceCommand /usr/lib/openssh/sftp-server -l INFO

Remember to restart ssh with ‘/etc/init.d/ssh restart’. The above allows normal shell access for adminuser, SFTP-only access for sftpuser1 from 192.168.0.10, and SFTP-only access for sftpuser2 from anywhere. One can imagine combining this with ‘PasswordAuthentication no’ or GSSAPI to enforce more stringent authentication so that access is even more tightly controlled (see the sketch below).
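A hypothetical variant of the Match block above that also forces key-based logins for the SFTP users (PasswordAuthentication is one of the keywords sshd_config allows inside Match blocks):

Match User sftpuser1,sftpuser2
    PasswordAuthentication no
    AllowTcpForwarding no
    X11Forwarding no
    ForceCommand /usr/lib/openssh/sftp-server -l INFO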

AppArmor
The above does a lot to increase security over the standard NFS shares that I had before. Good encryption, strong authentication and reliable UID mappings for DAC (POSIX discretionary access controls) are all in place. However, it doesn’t have the ability to confine access to a certain directory like NFS does. A simple AppArmor profile can achieve this and give even more granularity than just DAC. Imagine the following directory structure:

  • /var/exports (top-level ‘exported’ filesystem (where all the files you want to share are))
  • /var/exports/users (user-specific files, only to be accessed by the users themselves)
  • /var/exports/shared (a free-for-all shared directory where any user can put stuff. The ‘shared’ directory has 2775 permissions with group ‘shared’)

Now add to /etc/apparmor.d/usr.lib.openssh.sftp-server (and enable with ‘sudo apparmor_parser -r /etc/apparmor.d/usr.lib.openssh.sftp-server’):

#include <tunables/global>
/usr/lib/openssh/sftp-server {
  #include <abstractions/base>
  #include <abstractions/nameservice>
 
  # Served files
  # Need read access for every parent directory
  / r,
  /var/ r,
  /var/exports/ r,
 
  /var/exports/**/ r,
  owner /var/exports/** rwkl,
 
  # don't require ownership to read shared files
  /var/exports/shared/** rwkl,
}

This is a very simple profile for sftp-server itself, with access to files in /var/exports. Notice that by default the owner must match for any files in /var/exports, but in /var/exports/shared it does not. AppArmor works in conjunction with DAC, so that if DAC denies access, AppArmor is not consulted. If DAC permits access, then AppArmor is consulted and may deny access.

Summary
This is but one man’s implementation for a simple, secure and robust file service. There are limitations with the method as described, notably managing the sshd_config file and not supporting some traditional setups such as $HOME on NFS. That said, with a little creativity, a lot of possibilities exist for file serving with this technique. For my needs, the combination of standard OpenSSH and AppArmor on Ubuntu was very compelling. Enjoy!

Update
OpenSSH 4.8 and higher (available in Ubuntu 8.10 and later) contains the ChrootDirectory option, which may be enough for certain environments. It is simpler to set up (ie, AppArmor is not required), but doesn’t have the same granularity and sftp-server protection that the AppArmor method provides. See comment 32 and comment 34 for details. Combining ChrootDirectory and AppArmor would provide even more defense in depth. It’s great to have so many options for secure file sharing! :)


Posted in security, ubuntu, ubuntu-server

jdstrand

After a lot of hard work by John Johansen and the Ubuntu kernel team, bug #375422 is well on its way to being fixed. More than just forward ported for Ubuntu, AppArmor has been reworked to use the updated kernel infrastructure for LSMs. As seen in #apparmor on Freenode a couple of days ago:

11:24 < jjohansen> I am working to a point where I can try upstreaming again, base off of the security_path_XXX patches instead of the vfs patches
11:24 < jjohansen> so the module is mostly self contained again

These patches are in the latest 9.10 kernel, and help with testing AppArmor in Karmic is needed. To get started, verify you have at least 2.6.31-3.19-generic:

$ cat /proc/version_signature
Ubuntu 2.6.31-3.19-generic

AppArmor will be enabled by default for Karmic just like in previous Ubuntu releases, but it is off for now until a few kinks are worked out. To test it right away, you’ll need to reboot, adding ‘security=apparmor’ to the kernel command line (see the sketch after the aa-status output below). Then fire up ‘aa-status’ to see if it is enabled. A fresh install of 9.10 as of today should look something like:

$ sudo aa-status
apparmor module is loaded.
8 profiles are loaded.
8 profiles are in enforce mode.
/usr/lib/connman/scripts/dhclient-script
/usr/share/gdm/guest-session/Xsession
/usr/sbin/tcpdump
/usr/lib/cups/backend/cups-pdf
/sbin/dhclient3
/usr/sbin/cupsd
/sbin/dhclient-script
/usr/lib/NetworkManager/nm-dhcp-client.action
0 profiles are in complain mode.
2 processes have profiles defined.
2 processes are in enforce mode :
/sbin/dhclient3 (3271)
/usr/sbin/cupsd (2645)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
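To make the ‘security=apparmor’ option persistent, a hedged sketch assuming GRUB 2 (the default bootloader on fresh Karmic installs):

# edit /etc/default/grub so the kernel options include security=apparmor, eg:
#   GRUB_CMDLINE_LINUX="security=apparmor"
# then regenerate grub.cfg and reboot:
$ sudo update-grub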

Please throw all your crazy profiles at it, test the packages with existing profiles, and then file bugs:

  • For the kernel, add your comments (positive and negative) to bug #375422
  • AppArmor tools bugs should be filed with ‘ubuntu-bug apparmor’
  • Profile bugs should be filed against the individual source package with ‘ubuntu-bug <source package name>’. See DebuggingApparmor for details.

Thank you Ubuntu Kernel team and especially John for all the hard work on getting the “MAC system for human beings” (as I like to call it) not only working again, but upstreamable — this is really great stuff! :)


Posted in security, ubuntu, ubuntu-server

jdstrand

Short answer: Of course not!

Longer answer:
Ubuntu has had AppArmor1 on by default for a while now, and with each new release more and more profiles are added. The Ubuntu community has worked hard to make the installed profiles work well, and by far and large, most people happily use their Ubuntu systems without noticing AppArmor is even there.

Of course, like with any software, there are bugs. I know these AppArmor profile bugs can be frustrating, but because AppArmor is a path-based system, diagnosing, fixing and even working around profile bugs is actually quite easy. AppArmor has the ability to disable specific profiles rather than simply being turned on or off, yet I’ve seen people in IRC and forums advise others to disable AppArmor completely. This is totally misguided and YOU SHOULD NEVER DISABLE APPARMOR ENTIRELY to work around a profiling problem. That is like trying to open your front door with dynamite – it will work, but it’ll leave a big hole and you’ll likely hurt yourself. Think about it: on my regular ol’ Jaunty laptop system, I have 4 profiles in place installed via Ubuntu packages (to see the profiles on your system, look in /etc/apparmor.d). Why would I want to disable all of AppArmor (and therefore all of those profiles) instead of dealing with just the one that is causing me problems? Obviously, the more software you install with AppArmor protection, the more you have to lose by disabling AppArmor completely.

So, when dealing with a profile bug, there are only a few things you need to know:

  1. AppArmor messages show up in /var/log/kern.log (by default)
  2. AppArmor profiles are located in /etc/apparmor.d
  3. The profile file name is, by convention, the confined binary’s absolute path with ‘/’ replaced by ‘.’. Eg, ‘/etc/apparmor.d/sbin.dhclient3’ is the profile for ‘/sbin/dhclient3’.
  4. Profiles are simple text files

With this in mind, let’s say tcpdump is misbehaving. You can check /var/log/kern.log for entries like:

Jul 7 12:21:15 laptop kernel: [272480.175323] type=1503 audit(1246987275.634:324): operation="inode_create" requested_mask="a::" denied_mask="a::" fsuid=0 name="/opt/foo.out" pid=24113 profile="/usr/sbin/tcpdump"

That looks complicated, but it isn’t really, and it tells you everything you need to know to file a bug and fix the problem yourself. Specifically, “/usr/sbin/tcpdump” was denied “a” (append) access to “/opt/foo.out”.
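A hedged sketch of a fix (the rule simply matches the denial above; whether it belongs in the default profile is exactly what the bug report is for):

# inside /etc/apparmor.d/usr.sbin.tcpdump, add a rule allowing the access, eg:
#   /opt/foo.out a,
# ('a' is append; use 'w' for full write access), then reload the profile
# as shown further below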

So now what?

If you are using the program with a default configuration or non-default but common configuration, then by all means, file a bug. If unsure, ask on IRC, on a mailing list or just file it anyway.

If you are a non-technical user or just need to put debugging this issue on hold, then you can disable this specific profile (there are other ways of doing this, but this method works best):

$ sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.tcpdump
$ sudo ln -s /etc/apparmor.d/usr.sbin.tcpdump /etc/apparmor.d/disable/usr.sbin.tcpdump

What that does is remove the profile for tcpdump from the kernel, then disable the profile so that AppArmor won’t load it when it is started (eg, on reboot). Now you can use the application without AppArmor protection, while leaving all the other applications with profiles protected.
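To undo this later, a minimal sketch reversing the steps above:

# re-enable the profile and load it back into the kernel
$ sudo rm /etc/apparmor.d/disable/usr.sbin.tcpdump
$ sudo apparmor_parser -a /etc/apparmor.d/usr.sbin.tcpdump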

If you are technically minded, dive into /etc/apparmor.d/usr.sbin.tcpdump and adjust the profile, then reload it:

$ sudo <your favorite editor> /etc/apparmor.d/usr.sbin.tcpdump
$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.tcpdump

This will likely be an iterative process, but you can base your new or updated rules on what is already in the profile – it is pretty straightforward. After a couple of times, it will be second nature and you might want to start contributing to developing new profiles. Once the profile is working for you, please add your proposed fix to the bug report you filed earlier.

The DebuggingApparmor page has information on how to triage, fix and work-around AppArmor profile bugs. To learn more about AppArmor and the most frequently used access rules, install the apparmor-docs package, and read /usr/share/doc/apparmor-docs/techdoc.pdf.gz.

1. For those of you who don’t know, AppArmor is a path-based (as opposed to SELinux, which is inode-based) mandatory access control (MAC) system that limits the access a process has to a predefined set of files and operations. These access controls are known as ‘profiles’ in AppArmor parlance.


Posted in security, ubuntu, ubuntu-server

jdstrand

It all started back in the good ol’ days of the Jaunty development cycle when I heard this new fangled filesystem thingy called ext4 was going to be an option in Jaunty. It claimed to be faster with much shorter fsck times. So, like any good Ubuntu developer, I tried it. It was indeed noticeably faster and fsck times were much improved.

The honeymoon was over when I ended up hitting bug #317781. That was no fun, as it ate several virtual machines and quite a few other things (I had backups for all but the VMs). This machine is on a UPS, uses raid1, and is on modern hardware (a dual core Intel system with 4GB of RAM). In other words, this is not some flaky system but one that normally is only rebooted when there is a kernel upgrade (well, that is a white lie, but you get the point – it is a stable machine). According to Ted Ts’o, I shouldn’t be seeing this. Frustrated, I spent the better part of a weekend shuffling disks around to try to move my data off of ext4 and reformat the drive back to ext3. I was, how shall I put it, disappointed.

Some welcome patches were applied to Jaunty’s kernel soon after that to make ext4 behave more like ext3. By all accounts this stops ext4 from eating files under adverse conditions. So, now not only does the filesystem perform well, it doesn’t eat files. Life was good…

… until I noticed that under certain conditions I would get a total system freeze. Naturally, there was nothing in the logs (something I always appreciate ;). I thought it might have been several things, but I couldn’t prove any of them. Yesterday, however, I was able to reliably freeze my system. Basically, I was compiling a Jaunty kernel (2.6.28) in a schroot using this command:

$ CONCURRENCY_LEVEL=2 AUTOBUILD=1 NOEXTRAS=1 skipabi=true fakeroot debian/rules binary-generic
Things were going along fine until I tried to delete a ton of files in a deep directory during the compile; then it would freeze. I was able to reproduce this 3 times in a row. Finally, I shuffled some things around, put my /home partition back on ext3, and could not reproduce the freeze. There are several bugs in Launchpad talking about ext4 and system freezes, and after a bit more research I will add my comments, but for now I am simply hopeful that I will not see the freezes any more.

To be fair, ext4 is not the default filesystem on 9.04, and while it is supposed to be in 9.10, people I know running ext4 on 9.10 aren’t seeing these problems (yes, I’ve asked around). I do continue to use ext4 for ‘/’ on Jaunty systems with a separate /home partition on ext3, because ext4 really does perform better, and this seems to be a good compromise. Having been burned by ext4 a couple of times now, I think it’ll probably be a while before I trust ext4 for my important data though. Time will tell. :)


Posted in ubuntu, ubuntu-server

jdstrand

While not exactly news, as it happened sometime last month, ufw is now in Debian and is even available in Squeeze. What is new is that the fine folks in Debian have started to translate the debconf strings in ufw, and in the process the strings have become much better. Thanks, Debian!

In other news, ufw trunk now has support for filtering by interface. To use it, do something like:

$ sudo ufw allow in on eth0 to any port 80

See the man page for more information. This feature will be in ufw 0.28, which is targeted for Ubuntu Karmic. I also hope to add egress filtering this cycle; I haven’t started on it yet, but I have a good idea of how to proceed. Stay tuned!


Posted in security, ubuntu, ubuntu-server
