Canonical Voices

Posts tagged with 'security'

Dustin Kirkland

A StackExchange question, back in February of this year, inspired a new feature in Byobu that I had been thinking about for quite some time:

Wouldn't it be nice to have a hot key in Byobu that would send a command to multiple splits (or windows)?
This feature was added and is available in Byobu 5.73 and newer (in Ubuntu 14.04 and newer, and available in the Byobu PPA for older Ubuntu releases).

I actually use this feature all the time, to update packages across multiple computers.  Of course, Landscape is a fantastic way to do this as well.  But if you don't have access to Landscape, you can always do this very simply with Byobu!

Create some splits, using Ctrl-F2 and Shift-F2, and in each split, ssh into a target Ubuntu (or Debian) machine.

Now, use Shift-F9 to open up the purple prompt at the bottom of your screen.  Here, you enter the command you want to run on each split.  First, you might want to run:

sudo true

This will prompt you for your password in each split, if your sudo credentials aren't already cached.  You might need to use Shift-Up, Shift-Down, Shift-Left, Shift-Right to move around your splits, and enter passwords.

Now, update your package lists:

sudo apt-get update

And now, apply your updates:

sudo apt-get dist-upgrade

Here's a video to demonstrate!


On a related note, another user-requested feature has been added: simultaneously synchronizing this behavior among all splits.  You'll need the latest version of Byobu, 5.87, which will be in Ubuntu 14.10 (Utopic).  Here, you'll press Alt-F9 and just start typing!  Another demonstration video here...




Cheers,
Dustin

Read more
Dustin Kirkland

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!


Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mice
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom (see the sketch after this list)
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always
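
To illustrate that seeding pattern, here is a minimal Go sketch of a userspace PRNG being seeded from the kernel's pool (crypto/rand reads from the kernel's random source on Linux):

package main

import (
    crand "crypto/rand"
    "encoding/binary"
    "fmt"
    "math/rand"
)

func main() {
    // Draw a 64-bit seed from the kernel's entropy pool...
    var seed int64
    if err := binary.Read(crand.Reader, binary.BigEndian, &seed); err != nil {
        panic(err)
    }
    // ...and use it to seed a userspace PRNG, as described above.
    r := rand.New(rand.NewSource(seed))
    fmt.Println(r.Intn(100))
}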

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...
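
The dd line above snapshots the pool into $SAVEDFILE at shutdown. For illustration only, here is a rough Go equivalent of the save-and-restore cycle; the real init script is shell, and the seed path below is hypothetical:

package main

import (
    "io"
    "os"
)

// saveSeed reads poolBytes from /dev/urandom and stores them for the next
// boot, as the dd line above does at shutdown.
func saveSeed(path string, poolBytes int) error {
    src, err := os.Open("/dev/urandom")
    if err != nil {
        return err
    }
    defer src.Close()
    seed := make([]byte, poolBytes)
    if _, err := io.ReadFull(src, seed); err != nil {
        return err
    }
    return os.WriteFile(path, seed, 0600)
}

// restoreSeed writes the saved seed back into the pool at boot. This mixes
// the bytes in, but does not credit the kernel's entropy estimate.
func restoreSeed(path string) error {
    seed, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    dst, err := os.OpenFile("/dev/urandom", os.O_WRONLY, 0)
    if err != nil {
        return err
    }
    defer dst.Close()
    _, err = dst.Write(seed)
    return err
}

func main() {
    // Hypothetical seed location, standing in for $SAVEDFILE above.
    const path = "/var/lib/random-seed"
    if err := restoreSeed(path); err != nil {
        panic(err)
    }
    if err := saveSeed(path, 512); err != nil {
        panic(err)
    }
}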

Q: And what exactly is a random seed?

A: Basically, it's a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...
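
The hazard is easy to demonstrate: a deterministic PRNG seeded with a constant produces the identical "random" stream on every run. A quick Go illustration:

package main

import (
    "fmt"
    "math/rand"
)

func main() {
    // Always picking '18' as the seed: these five "random" numbers are
    // exactly the same every time the program runs.
    r := rand.New(rand.NewSource(18))
    for i := 0; i < 5; i++ {
        fmt.Println(r.Intn(100))
    }
}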

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • ie, The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


http://j.mp/1dHAK4V


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating:
    • SSH host keys
    • SSL certificates
    • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!


Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG.
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl (see the sketch after this list)
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate
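
The real client is a shell script around curl, but the flow is simple enough to sketch in Go. Everything below that isn't named above (the form field, the response layout) is an assumption for illustration, not taken from the pollinate source:

package main

import (
    "crypto/sha512"
    "encoding/hex"
    "io"
    "net/http"
    "net/url"
    "os"
    "strings"
)

func main() {
    // One server from the pool; the wire format below is assumed.
    const server = "https://entropy.ubuntu.com/"
    const challenge = "example-challenge"

    resp, err := http.PostForm(server, url.Values{"challenge": {challenge}})
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }

    // Assume the first line echoes sha512(challenge) as proof of work and
    // the second line carries the random seed.
    lines := strings.SplitN(string(body), "\n", 2)
    sum := sha512.Sum512([]byte(challenge))
    if lines[0] != hex.EncodeToString(sum[:]) {
        panic("server failed the challenge/response check")
    }

    // Stir the seed into the kernel pool; note this does not credit the
    // kernel's entropy estimate.
    dst, err := os.OpenFile("/dev/urandom", os.O_WRONLY, 0)
    if err != nil {
        panic(err)
    }
    defer dst.Close()
    if len(lines) == 2 {
        dst.Write([]byte(lines[1]))
    }
}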

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code; see the sketch below!

pollen.go
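
The slide embedded pollen.go itself at this point. As a stand-in, here is a minimal hedged sketch of the same idea, matching the assumed wire format in the client sketch earlier; it is not the real pollen source:

package main

import (
    "crypto/rand"
    "crypto/sha512"
    "encoding/hex"
    "fmt"
    "log"
    "net/http"
)

func seedHandler(w http.ResponseWriter, r *http.Request) {
    // If the client sent a challenge, echo its sha512sum as proof of work.
    if c := r.FormValue("challenge"); c != "" {
        sum := sha512.Sum512([]byte(c))
        fmt.Fprintln(w, hex.EncodeToString(sum[:]))
    }
    // Then send 512 bits (64 bytes) of entropy.
    seed := make([]byte, 64)
    if _, err := rand.Read(seed); err != nil {
        http.Error(w, "entropy unavailable", http.StatusInternalServerError)
        return
    }
    fmt.Fprintln(w, hex.EncodeToString(seed))
}

func main() {
    http.HandleFunc("/", seedHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}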

Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL
    • In /etc/default/pollinate
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affect this quantity estimate!!!

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (via an HTTP POST submission) affects the pollen server's PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state

Q: What if pollinate gets crappy or compromised or no random seeds?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted with an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.


  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override the user-agent string if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!


Stay safe out there!
:-Dustin

Read more
jdstrand

Last time I discussed AppArmor, I talked about new features in Ubuntu 13.10 and a bit about ApplicationConfinement for Ubuntu Touch. With the release of Ubuntu 14.04 LTS, several improvements were made:

  • Mediation of signals
  • Mediation of ptrace
  • Various policy updates for 14.04, including new tunables, better support for XDG user directories, and Unity7 abstractions
  • Parser policy compilation performance improvements
  • Google Summer of Code (SUSE-sponsored) Python rewrite of the userspace tools

Signal and ptrace mediation

Prior to Ubuntu 14.04 LTS, a confined process could send signals to other processes (subject to DAC) and ptrace other processes (subject to DAC and YAMA). AppArmor on 14.04 LTS adds mediation of both signals and ptrace which brings important security improvements for all AppArmor confined applications, such as those in the Ubuntu AppStore and qemu/kvm machines as managed by libvirt and OpenStack.

When developing policy for signal and ptrace rules, it is important to remember that AppArmor does a cross check such that AppArmor verifies that:

  • the process sending the signal/performing the ptrace is allowed to send the signal to/ptrace the target process
  • the target process receiving the signal/being ptraced is allowed to receive the signal from/be ptraced by the sender process

Signal(7) permissions use the ‘signal’ rule with the ‘receive/send’ permissions governing signals. PTrace permissions use the ‘ptrace’ rule with the ‘trace/tracedby’ permissions governing ptrace(2) and the ‘read/readby’ permissions governing certain proc(5) filesystem accesses, kcmp(2), futexes (get_robust_list(2)) and perf trace events.

Consider the following denial:

Jun 6 21:39:09 localhost kernel: [221158.831933] type=1400 audit(1402083549.185:782): apparmor="DENIED" operation="ptrace" profile="foo" pid=29142 comm="cat" requested_mask="read" denied_mask="read" peer="unconfined"

This demonstrates that the ‘cat’ binary running under the ‘foo’ profile was unable to read the contents of a /proc entry (in my test, /proc/11300/environ). To allow this process to read /proc entries for unconfined processes, the following rule can be used:

ptrace (read) peer=unconfined,

If the receiving process was confined, the log entry would say ‘peer="<profile name>"’ and you would adjust the ‘peer=unconfined’ in the rule to match that in the log denial. In this case, because unconfined processes implicitly allow ‘readby’ from all other processes, we don’t need to specify the cross-check rule. If the target process was confined, the profile for the target process would need a rule like this:

ptrace (readby) peer=foo,

Likewise for signal rules, consider this denial:

Jun 6 21:53:15 localhost kernel: [222005.216619] type=1400 audit(1402084395.937:897): apparmor="DENIED" operation="signal" profile="foo" pid=29069 comm="bash" requested_mask="send" denied_mask="send" signal=term peer="unconfined"

This shows that ‘bash’ running under the ‘foo’ profile tried to send the ‘term’ signal to an unconfined process (in my test, I used ‘kill 11300’) and was blocked. Signal rules use ‘receive’ and ‘send’ to determine access, so we can add a rule like so to allow sending of the signal:

signal (send) set=("term") peer=unconfined,

Like with ptrace, a cross-check is performed with signal rules but implicit rules allow unconfined processes to send and receive signals. If pid 11300 were confined, you would adjust the ‘peer=’ in the rule of the foo profile to match the denial in the log, and then adjust the target profile to have something like:

signal (receive) set=("term") peer=foo,

Signal and ptrace rules are very flexible and the AppArmor base abstraction in Ubuntu 14.04 LTS has several rules to help make profiling and transitioning to the new mediation easier:

# Allow other processes to read our /proc entries, futexes, perf tracing and
# kcmp for now
ptrace (readby),
 
# Allow other processes to trace us by default (they will need
# 'trace' in the first place). Administrators can override
# with:
# deny ptrace (tracedby) ...
ptrace (tracedby),
 
# Allow unconfined processes to send us signals by default
signal (receive) peer=unconfined,
 
# Allow us to signal ourselves
signal peer=@{profile_name},
 
# Checking for PID existence is quite common so add it by default for now
signal (receive, send) set=("exists"),

Note the above uses the new ‘@{profile_name}’ AppArmor variable, which is particularly handy with ptrace and signal rules. See man 5 apparmor.d for more details and examples.

14.10

Work still remains and some of the things we’d like to do for 14.10 include:

  • Finishing mediation for non-networking forms of IPC (eg, abstract sockets). This will be done in time for the phone release.
  • Have services integrate with AppArmor and the upcoming trust-store to become trusted helpers (also for phone release)
  • Continue work on networking IPC (for 15.04)
  • Continue to work with the upstream kernel on kdbus
  • Continue work on LXC stacking; we hope to have stacked profiles within the current namespace for 14.10. Full support for stacked profiles, allowing different host and container policy for the same binary at the same time, should be ready by 15.04
  • Various fixes to the Python userspace tools for remaining bugs. These will also be backported to 14.04 LTS

Until next time, enjoy!


Filed under: canonical, security, ubuntu, ubuntu-server

Read more
mark

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

 

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administrative overhead and coordination, so most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.

Read more
Dustin Kirkland



In about an hour, I have the distinct honor to address a room full of federal sector security researchers and scientists at the US Department of Energy's Oak Ridge National Labs, as part of the Cyber and Information Security Research Conference.

I'm delighted to share with you the slide deck I have prepared for this presentation.  You can download a PDF here.

To a great extent, I have simply reformatted the excellent Ubuntu Security Features wiki page our esteemed Ubuntu Security Team maintains, into a format I can deliver as a presentation.

Hopefully you'll learn something!  I certainly did, as I researched and built this presentation ;-)
On a related security note, it's probably worth mentioning that Canonical's IS team have updated all SSL services with patched OpenSSL from the Ubuntu security archive, and have restarted all relevant services (using Landscape, for the win), in response to the Heartbleed vulnerability. I will release an updated pollinate package in a few minutes, to ship the new public key for entropy.ubuntu.com.



Stay safe,
Dustin

Read more
jdstrand

Excellent blog post by my colleague Marc Deslauriers, in which he discusses how we are working to provide a safe and usable experience in the Ubuntu app store: http://mdeslaur.blogspot.com/2013/12/ubuntu-touch-and-user-privacy.html


Filed under: canonical, security

Read more
jdstrand

Last time I discussed AppArmor, I gave an overview of how AppArmor is used in Ubuntu. With the release of Ubuntu 13.10, a number of features have been added:

  • Support for fine-grained DBus mediation for bus, binding name, object path, interface and member/method
  • The return of named AF_UNIX socket mediation
  • Integration with several services as part of the ApplicationConfinement work in support of click packages and the Ubuntu appstore
  • Better support for policy generation via the aa-easyprof tool and apparmor-easyprof-ubuntu policy
  • Native AppArmor support in Upstart

DBus mediation

 
Prior to Ubuntu 13.10, access to the DBus system bus was on/off and there was no mediation of the session bus or any other DBus buses, such as the accessibility bus. 13.10 introduces fine-grained DBus mediation. In a nutshell, you define ‘dbus’ rules in your AppArmor policy just like any other rules. When an application that is confined by AppArmor uses DBus, the dbus-daemon queries the kernel about whether the application is allowed to perform this action. If it is, DBus proceeds normally; if not, DBus denies the access and logs it to syslog. An example denial is:
 
Oct 18 16:02:50 localhost dbus[3626]: apparmor="DENIED" operation="dbus_method_call" bus="session" path="/ca/desrt/dconf/Writer/user" interface="ca.desrt.dconf.Writer" member="Change" mask="send" name="ca.desrt.dconf" pid=30538 profile="/usr/lib/firefox/firefox{,*[^s][^h]}" peer_pid=3927 peer_profile="unconfined"

We can see that firefox tried to access gsettings (dconf) but was denied.

DBus rules are a bit more involved than most other AppArmor rules, but they are still quite readable and understandable. For example, consider the following rule:
 
dbus (send)
   bus=session
   path=/org/freedesktop/DBus
   interface=org.freedesktop.DBus
   member=Hello
   peer=(name=org.freedesktop.DBus),

This rule says that the application is allowed to use the ‘Hello’ method on the ‘org.freedesktop.DBus’ interface of the ‘/org/freedesktop/DBus’ object for the process bound to the ‘org.freedesktop.DBus’ name on the ‘session’ bus. That is fine-grained indeed!

However, rules don’t have to be that fine-grained. For example, all of the following are valid rules:
 
dbus,
dbus bus=accessibility,
dbus (send) bus=session peer=(name=org.a11y.Bus),

A couple of things to keep in mind:

  • Because dbus-daemon is the one performing the mediation, DBus denials are logged to syslog and not kern.log. Recent versions of Ubuntu log kernel messages to /var/log/syslog, so I’ve gotten in the habit of just looking there for everything
  • The message content of DBus traffic is not examined
  • The userspace tools don’t understand DBus rules yet. That means aa-genprof, aa-logprof and aa-notify don’t work with these new rules. The userspace tools are being rewritten and support will be added in a future release.
  • The less fine-grained the rule, the more access is permitted. So ‘dbus,’ allows unrestricted access to DBus.
  • Responses to messages are implicitly allowed, so if you allow an application to send a message to a service, the service is allowed to respond without needing a corresponding rule.
  • dbus-daemon is considered a trusted helper (it integrates with AppArmor to enforce the mediation) and is not confined by default.

As a transitional step, existing policy for packages in the Ubuntu archive that use DBus will continue to have full access to DBus, but future Ubuntu releases may provide fine-grained DBus rules for this software. See ‘man 5 apparmor.d’ for more information on DBus mediation and AppArmor.

Application confinement

 
Ubuntu will support an app store model where software that has not gone through the traditional Ubuntu archive process is made available to users. While this greatly expands the quantity of quality software available to Ubuntu users, it also introduces new security risks. An important part of addressing these risks is to run applications under confinement. In this manner, apps are isolated from each other and are limited in what they can do on the system. AppArmor is at the heart of the Ubuntu ApplicationConfinement story and is already working on Ubuntu 13.10 for phones in the appstore. A nice introduction for developers on what the Ubuntu trust model is and how apps work within it can be found at http://developer.ubuntu.com.

In essence, a developer will design software with the Ubuntu SDK, then declare what type of application it is (which determines the AppArmor template to use), then declare any additional policy groups that the app needs. The templates and policy groups define the AppArmor file, network, DBus and any other rules that are needed. The software is packaged as a lightweight click package and when it is installed, an AppArmor click hook is run which creates a versioned profile for the application based on the templates and policy groups. On Unity 8, application lifecycle makes sure that the app is launched under confinement via an upstart job. For other desktop environments, a desktop file is generated in ~/.local/share/applications that prepends ‘aa-exec-click’ to the Exec line. The upstart job and ‘aa-exec-click’ not only launch the app under confinement, but also set up the environment (eg, set TMPDIR to an application-specific directory). Various APIs have been implemented so apps can access files (eg, Pictures via the gallery app), connect to services (eg, location and online accounts) and work within Unity (eg, the HUD) safely and in a controlled and isolated manner.

The work is not done of course and several important features need to be implemented and bugs fixed, but application confinement has already added a very significant security improvement on Ubuntu 13.10 for phones.

14.04

As mentioned, work remains. Some of the things we’d like to do for 14.04 include:

  • Finishing IPC mediation for things like signals, networking and abstract sockets
  • Work on APIs and AppArmor integration of services to work better on the converged device (ie, with traditional desktop applications)
  • Work with the upstream kernel on kdbus so we are ready for when that is available
  • Finish the LXC stacking work to allow different host and container policy for the same binary at the same time
  • While Mir already handles keyboard and mouse sniffing, we’d like to integrate with Mir in other ways where applicable (note, X mediation for keyboard/mouse sniffing, clipboard, screen grabs, drag and drop, and xsettings is not currently scheduled nor is wayland support. Both are things we’d like to have though, so if you’d like to help out here, join us on #apparmor on OFTC to discuss how to contribute)

Until next time, enjoy!


Filed under: canonical, security, ubuntu

Read more
niemeyer

In an effort to polish the recently released draft of the strepr v1 specification, I’ve spent the last couple of days on a Go reference implementation.

The implemented algorithm is relatively simple, efficient, and consumes a conservative amount of memory. The aspect of it that deserved the most attention is the efficient encoding of a float number when it carries an integer value, as covered before. The provided tests are a useful reference as well.

The API offered by the implemented package is minimal, and matches existing conventions. For example, this simple snippet will generate a hash for the stable representation of the provided value:

value := map[string]interface{}{"a": 1, "b": []int{2, 3}}
hash := sha1.New()
strepr.NewEncoder(hash).Encode(value)
fmt.Printf("%x\n", hash.Sum(nil))
// Outputs: 29a77d09441528e02a27dc498d0a757da06250a0

Along with the reference implementation comes a simple command line tool to play with the concept. It allows easily arriving at the same result obtained above by processing a JSON value instead:

$ echo '{"a": 1.0, "b": [2, 3]}' | ./strepr -in-json -out-sha1
29a77d09441528e02a27dc498d0a757da06250a0

Or YAML:

$ cat | ./strepr -in-yaml -out-sha1                 
a: 1
b:
   - 2
   - 3
29a77d09441528e02a27dc498d0a757da06250a0

Or even BSON, the binary format used by MongoDB:

$ bsondump dump.bson
{ "a" : 1, "b" : [ 2, 3 ] }
1 objects found
$ cat dump.bson | ./strepr -in-bson -out-sha1
29a77d09441528e02a27dc498d0a757da06250a0

In all of those cases the hash obtained is the same, despite the fact that the processed values were typed differently on some occasions. For example, due to its JavaScript background, some JSON libraries may unmarshal numbers as binary floating point values, while others distinguish the value based on the formatting used. The strepr algorithm flattens out that distinction so that different platforms can easily agree on a common result.

To visualize (or debug) the stable representation defined by strepr, the reference implementation has a debug dump facility which is also exposed in the command line tool:

$ echo '{"a": 1.0, "b": [2, 3]}' | ./strepr -in-json -out-debug
map with 2 pairs (0x6d02):
   string of 1 byte (0x7301) "a" (0x61)
    => uint 1 (0x7001)
   string of 1 byte (0x7301) "b" (0x62)
    => list with 2 items (0x6c02):
          - uint 2 (0x7002)
          - uint 3 (0x7003)

Assuming a Go compiler and the go tool are available, the command line strepr tool may be installed with:

$ go get launchpad.net/strepr/cmd/strepr

As a result of the reference implementation work, a few clarifications and improvements were made to the specification:

  • Enforce the use of UTF-8 for Unicode strings and explain why normalization is being left out.
  • Enforce a single NaN representation for floats.
  • Explain that map key uniqueness refers to the representation.
  • Don’t claim the specification is easy to implement; floats require attention.
  • Mention reference implementation.

Read more
niemeyer

Note: This is a candidate version of the specification. This note will be removed once v1 is closed, and any changes will be described at the end. Please get in touch if you’re implementing it.

Contents


Introduction

This specification defines strepr, a stable representation that enables computing hashes and cryptographic signatures out of a defined set of composite values that is commonly found across a number of languages and applications.

Although the defined representation is a serialization format, it isn’t meant to be used as a traditional one. It may not be seen entirely in memory at once, or written to disk, or sent across the network. Its role is specifically in aiding the generation of hashes and signatures for values that are serialized via other means (JSON, BSON, YAML, HTTP headers or query parameters, configuration files, etc).

The format is designed with the following principles in mind:

Understandable — The representation must be easy to understand to increase the chances of it being implemented correctly.

Portable — The defined logic works properly when the data is being transferred across different platforms and implementations, independently from the choice of protocol and serialization implementation.

Unambiguous — As a natural requirement for producing stable hashes, there is a single way to process any supported value being held in the native form of the host language.

Meaning-oriented — The stable representation holds the meaning of the data being transferred, not its type. For example, the number 7 must be represented in the same way whether it’s being held in a float64 or in an uint16.


Supported values

The following values are supported:

  • nil: the nil/null/none singleton
  • bool: the true and false singletons
  • string: raw sequence of bytes
  • integers: positive, zero, and negative integer numbers
  • floats: IEEE754 binary floating point numbers
  • list: sequence of values
  • map: associative value→value pairs


Representation

nil = 'z'

The nil/null/none singleton is represented by the single byte 'z' (0x7a).

bool = 't' / 'f'

The true and false singletons are represented by the bytes 't' (0x74) and 'f' (0x66), respectively.

unsigned integer = 'p' <value>

Positive and zero integers are represented by the byte 'p' (0x70) followed by the variable-length encoding of the number.

For example, the number 131 is always represented as {0x70, 0x81, 0x03}, independently from the type that holds it in the host language.

negative integer = 'n' <absolute value>

Negative integers are represented by the byte 'n' (0x6e) followed by the variable-length encoding of the absolute value of the number.

For example, the number -131 is always represented as {0x6e, 0x81, 0x03}, independently from the type that holds it in the host language.

string = 's' <num bytes> <bytes>

Strings are represented by the byte 's' (0x73) followed by the variable-length encoding of the number of bytes in the string, followed by the specified number of raw bytes. If the string holds a list of Unicode code points, the raw bytes must contain their UTF-8 encoding.

For example, the string hi is represented as {0x73, 0x02, 'h', 'i'}

Due to the complexity involved in Unicode normalization, it is not required for the implementation of this specification. Consequently, Unicode strings that would be equal if normalized may have different stable representations.

binary float = 'd' <binary64>

32-bit or 64-bit IEEE754 binary floating point numbers that are not holding integers are represented by the byte 'd' (0x64) followed by the big-endian 64-bit IEEE754 binary floating point encoding of the number.

There are two exceptions to that rule:

1. If the floating point value is holding a NaN, it must necessarily be encoded by the following sequence of bytes: {0x64, 0x7f, 0xf8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}. This ensures all NaN values have a single representation.

2. If the floating point value is holding an integer number it must instead be encoded as an unsigned or negative integer, as appropriate. Floating point values that hold integer numbers are defined as those where floor(v) == v && abs(v) != ∞.

For example, the value 1.1 is represented as {0x64, 0x3f, 0xf1, 0x99, 0x99, 0x99, 0x99, 0x99, 0x9a}, but the value 1.0 is represented as {0x70, 0x01}, and -0.0 is represented as {0x70, 0x00}.

This distinction means all supported numbers have a single representation, independently from the data type used by the host language and serialization format.
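
The integer-versus-float decision is the subtle part, so here is the rule in Go (a non-normative sketch; the reference implementation is authoritative):

package main

import (
    "fmt"
    "math"
)

// isIntegerFloat reports whether a float takes the integer path per the rule
// above: floor(v) == v && abs(v) != ∞. NaN fails the floor comparison and so
// correctly stays on the float path.
func isIntegerFloat(v float64) bool {
    return math.Floor(v) == v && !math.IsInf(v, 0)
}

func main() {
    fmt.Println(isIntegerFloat(1.0))  // true: encoded as {0x70, 0x01}
    fmt.Println(isIntegerFloat(-0.0)) // true: encoded as {0x70, 0x00}
    fmt.Println(isIntegerFloat(1.1))  // false: encoded as 'd' + binary64
}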

list = 'l' <num items> [<item> ...]

Lists of values are represented by the byte 'l' (0x6c), followed by the variable-length encoding of the number of items in the list, followed by the stable representation of each item in the list in the original order.

For example, the value [131, -131] is represented as {0x6c, 0x02, 0x70, 0x81, 0x03, 0x6e, 0x81, 0x03}

map = 'm' <num pairs> [<item key> <item value>  ...]

Associative maps of values are represented by the byte 'm' (0x6d) followed by the variable-length encoding of the number of pairs in the map, followed by an ordered sequence of the stable representation of each key and value in the map. The pairs must be sorted so that the stable representation of the keys is in ascending lexicographical order. A map must not have multiple keys with the same representation.

For example, the map {"a": 4, 5: "b"} is always represented as {0x6d, 0x02, 0x70, 0x05, 0x73, 0x01, 'b', 0x73, 0x01, 'a', 0x70, 0x04}.


Variable-length encoding

Integers are variable-length encoded so that they can be represented in short space and with unbounded size. In an encoded number, the last byte holds the 7 least significant bits of the unsigned value, and zero as the eighth bit. If there are remaining non-zero bits, the previous byte holds the next 7 bits, and the eighth bit is set on to flag the continuation to the next byte. The process continues until there are no non-zero bits remaining. The most significant bits end up in the first byte of the encoded value, which must necessarily not be 0x80.

For example, the number 128 is variable-length encoded as {0x81, 0x00}.
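
A non-normative Go sketch of the encoding just described:

package main

import "fmt"

// encodeUint variable-length encodes v: the last byte carries the 7 least
// significant bits with the eighth bit clear, and each earlier byte carries
// the next 7 bits with the eighth bit set to flag continuation.
func encodeUint(v uint64) []byte {
    buf := []byte{byte(v & 0x7f)}
    for v >>= 7; v > 0; v >>= 7 {
        buf = append([]byte{byte(v&0x7f) | 0x80}, buf...)
    }
    return buf
}

func main() {
    fmt.Printf("%#v\n", encodeUint(128)) // []byte{0x81, 0x0}
    fmt.Printf("%#v\n", encodeUint(131)) // []byte{0x81, 0x3}
}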


Reference implementation

A reference implementation is available, including a test suite which should be considered when implementing the specification.


Changes

draft1 → draft2

  • Enforce the use of UTF-8 for Unicode strings and explain why normalization is being left out.
  • Enforce a single NaN representation for floats.
  • Explain that map key uniqueness refers to the representation.
  • Don’t claim the specification is easy to implement; floats require attention.
  • Mention reference implementation.

Read more
niemeyer

Today ubuntufinder.com was updated with the latest image data for Ubuntu 13.04 and all the previous releases as well. Rather than simply hardcoding the values again, though, the JavaScript code was changed so that it imports the new JSON-based feeds that Canonical has been publishing for the official Ubuntu images that are available in EC2, thanks to recent work by Scott Moser. This means the site is always up-to-date, with no manual intervention.

Although the new feeds made that quite straightforward, there was a small detail to sort out: the Ubuntu Finder is visually dynamic, but it is actually a fully static web site served from S3, and the JSON feeds are served from the Canonical servers. This means the same-origin policy won’t allow that kind of cross-domain import to be easily done without further action.

The typical workaround for this kind of situation is to put a tiny proxy within the site server to load the JSON and dispatch to the browser from the same origin. Unfortunately, this isn’t an option in this case because there’s no custom server backing the data. There’s a similar option that actually works, though: deploying that tiny proxy server in some other corner and forwarding the JSON payload as JSONP or with cross-origin resource sharing enabled, so that browsers can bypass the same-origin restriction, and that’s what was done.

Rather than once again building a special tiny server for that one service, though, this time around a slightly more general tool has emerged, and as an experiment it has been put live so anyone can use it. The server logic is pretty simple, and the idea is even simpler. Using the services from jsontest.com as an example, the following URL will serve a JSON document that can only be loaded from a page that is in a location allowed by the same-origin policy:

If one wanted to load that page from a different location, it might be transformed into a JSONP document by loading it from:

Alternatively, modern browsers that support the cross-origin resource sharing can simply load pure JSON by omitting the jsonpeercb parameter. The jsonpeer server will emit the proper header to allow the browser to load it:

This service is backed by a tiny Go server that lives in App Engine so it’s fast, secure (hopefully), and maintenance-less.
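
The jsonpeer source isn’t reproduced here, but the JSONP/CORS dispatch at its core is roughly the following sketch; fetchUpstreamJSON is a hypothetical stand-in for the upstream request, caching and validation logic:

package main

import (
    "fmt"
    "log"
    "net/http"
    "regexp"
)

// Callback names are restricted to [_.a-zA-Z0-9], as noted below.
var validCallback = regexp.MustCompile(`^[_.a-zA-Z0-9]+$`)

// fetchUpstreamJSON is a hypothetical stand-in for fetching, caching and
// validating the upstream JSON document.
func fetchUpstreamJSON(r *http.Request) []byte {
    return []byte(`{"example": true}`)
}

func handler(w http.ResponseWriter, r *http.Request) {
    payload := fetchUpstreamJSON(r)
    cb := r.FormValue("jsonpeercb")
    if cb == "" {
        // Default: plain JSON with cross-origin resource sharing enabled.
        w.Header().Set("Access-Control-Allow-Origin", "*")
        w.Header().Set("Content-Type", "application/json")
        w.Write(payload)
        return
    }
    if !validCallback.MatchString(cb) {
        http.Error(w, "bad callback name", http.StatusBadRequest)
        return
    }
    // JSONP: wrap the payload in the named callback.
    w.Header().Set("Content-Type", "application/javascript")
    fmt.Fprintf(w, "%s(%s);", cb, payload)
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}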

Some further details about the service:

  • Results are JSON with cross-origin resource sharing by default
  • With a query parameter jsonpeercb=<callback name>, results are JSONP
  • The callback name must consist of characters in the set [_.a-zA-Z0-9]
  • Query parameters provided to jsonpeer are used when doing the upstream request
  • HTTP headers are discarded in both directions
  • Results are cached for 5 minutes on memcache before being re-fetched
  • Upstream results must be valid JSON
  • Upstream results must have Content-Type application/json or text/plain
  • Upstream results must be under 500kb
  • Both http and https work; just tweak the URL and the path accordingly

Have fun if you need it, and please get in touch before abusing it.

UPDATE: The service and blog post were tweaked so that it defaults to returning plain JSON with CORS enabled, thanks to a suggestion by James Henstridge.

Read more
pitti

PostgreSQL just released security updates. 9.1 (as found in Debian testing and unstable and Ubuntu 11.10 and later) is affected by a critical remote vulnerability which potentially allows anyone who can access the TCP port (without credentials) to corrupt local files. If your PostgreSQL database exposes the TCP port to any potentially untrusted location, please shut down your servers and update now!

PostgreSQL 8.4 for Debian stable (squeeze) and Ubuntu 8.04 LTS and 10.04 LTS also got an update, but these are much less urgent.

Debian and Ubuntu advisories for all stable releases, as well as Debian testing are going out as we speak. The updates are already on security.debian.org and security.ubuntu.com.

I also uploaded updates for Debian unstable (8.4, 9.1, and 9.2 in experimental) and the Ubuntu backports PPA, but it will take a bit for these to build as we don’t have embargoed staging builds for those. Christoph updated the apt.postgresql.org repository as well.

Warning: If you use the current Ubuntu raring Beta-2 candidate images, you will still have the old version. So if you do anything serious with those installations, please make sure to upgrade immediately.

Update: Debian and Ubuntu security announcements have been sent out, and all packages in the backports PPA are built.

Please see the official FAQ if you want to know some more details about the nature of the vulnerabilities.

Read more
jdstrand

Last time I discussed AppArmor, I gave some background and some information on how it can be used. This time I’d like to discuss how AppArmor is used within Ubuntu. The information in this part applies to Ubuntu 12.10 and later, and unless otherwise noted, Ubuntu 12.04 LTS (and because we also push our changes into Debian, much of this will apply to Debian as well).

/etc/apparmor.d

  • Ubuntu follows the upstream policy layout and naming conventions and uses the abstractions extensively. A few abstractions are Ubuntu-specific and they are prefixed with ‘ubuntu’. Eg /etc/apparmor.d/abstractions/ubuntu-browsers.
  • Ubuntu encourages the use of the include files in /etc/apparmor.d/local in any shipped profiles. This allows administrators to make profile additions and apply overrides without having to change the shipped profile (you will need to reload the profile with apparmor_parser; see /etc/apparmor.d/local/README for more information)
  • All profiles in Ubuntu use ‘#include <tunables/global>’, which pulls in a number of tunables: ‘@{PROC}’ from ‘tunables/proc’, ‘@{HOME}’ and ‘@{HOMEDIRS}’ from ‘tunables/home’ and @{multiarch} from ‘tunables/multiarch’
  • In addition to ‘tunables/home’, Ubuntu utilizes the ‘tunables/home.d/ubuntu’ file so that ‘@{HOMEDIRS}’ is preseedable at installation time, or adjustable via ‘sudo dpkg-reconfigure apparmor’
  • Binary caches in /etc/apparmor.d/cache are used to speed up boot time.
  • Ubuntu uses the /etc/apparmor.d/disable and /etc/apparmor.d/force-complain directories. Touching (or symlinking) a file with the same name as the profile in one of these directories will cause AppArmor to either skip policy load (eg disable/usr.sbin.rsyslogd) or load in complain mode (force-complain/usr.bin.firefox)

Boot

Policy load happens in 2 stages during boot:

  1. within the job of a package with an Upstart job file
  2. via the SysV initscript (/etc/init.d/apparmor)

For packages with an Upstart job and an AppArmor profile (eg, cups), the job file must load the AppArmor policy prior to execution of the confined binary. As a convenience, the /lib/init/apparmor-profile-load helper is provided to simplify Upstart integration. For packages that ship policy but do not have a job file (eg, evince), policy must be loaded sometime before application launch, which is why stage 2 is needed. Stage 2 will (re)load all policy. Binary caches are used in both stages unless it is determined that policy must be recompiled (eg, booting a new kernel).

dhclient is a corner case because it needs to have its policy loaded very early in the boot process (ie, before any interfaces are brought up). To accommodate this, the /etc/init/network-interface-security.conf upstart job file is used.

For more information on why this implementation is used, please see the upstart mailing list.

Packaging

AppArmor profiles are shipped as Debian conffiles (ie, the package manager treats them specially during upgrades). AppArmor profiles in Ubuntu should use an include directive to include a file from /etc/apparmor.d/local so that local site changes can be made without modifying the shipped profile. When a package ships an AppArmor profile, it is added to /etc/apparmor.d then individually loaded into the kernel (with caching enabled). On package removal, any symlinks/files in /etc/apparmor.d/disable, /etc/apparmor.d/force-complain and /etc/apparmor.d are cleaned up. Developers can use dh_apparmor to aid in shipping profiles with their packages.

Profiles and applications

Ubuntu ships a number of AppArmor profiles. The philosophy behind AppArmor profiles on Ubuntu is that the profile should add a meaningful security benefit while at the same time not introduce regressions in default or common functionality. Because it is all too easy for a security mechanism to be turned off completely in order to get work done, Ubuntu won’t ship overly restrictive profiles. If we can provide a meaningful security benefit with an AppArmor profile in the default install while still maintaining functionality for the vast majority of users, we may ship an enforcing profile. Unfortunately, because applications are not designed to run under confinement or are designed to do many things, it can be difficult to confine these applications while still maintaining usability. Sometimes we will ship disabled-by-default profiles so that people can opt into them if desired. Usually profiles are shipped in the package that provides the confined binary (eg, tcpdump ships its own AppArmor profile). Some in progress profiles are also offered in the apparmor-profiles package and are in complain mode by default. When filing AppArmor bugs in Ubuntu, it is best to file the bug against the application that ships the profile.

In addition to shipped profiles, some applications have AppArmor integration built in or have AppArmor confinement applied in a non-standard way.

Libvirt

An AppArmor sVirt driver is provided and enabled by default for libvirt managed QEMU virtual machines. This provides strong guest isolation and userspace host protection for virtual machines. AppArmor profiles are dynamically generated in /etc/apparmor.d/libvirt and usually you won’t have to worry about AppArmor confinement. If needed, profiles for the individual machines can be adjusted in /etc/apparmor.d/libvirt/libvirt-<uuid> or in all virtual machines in /etc/apparmor.d/abstractions/libvirt-qemu. See /usr/share/doc/libvirt-bin/README.Debian.gz for details.

LXC

LXC in Ubuntu uses AppArmor to help make sure files in the container cannot access security-sensitive files on the host. lxc-start is confined by its own restricted profile which allows mounting in the container’s tree and, just before executing the container’s init, transitioning to the container’s own profile. See LXC in precise and beyond for details. Note, currently the libvirt-lxc sVirt driver does not have AppArmor support (confusingly, libvirt-lxc is a different project than LXC, but there are longer term plans to integrate the two and/or add AppArmor support to libvirt-lxc).

Firefox

Ubuntu ships a disabled-by-default profile for Firefox. While it is known to work well in the default Ubuntu installation, Firefox, like all browsers, is a very complex piece of software that can do much more than simply surf web pages, so enabling the profile in the default install could affect the overall usability of Firefox in Ubuntu. The goals of the profile are to provide a good usability experience with strong additional protection. The profile allows for the use of plugins and extensions, various helper applications, and access to files in the user’s HOME directory, removable media and network filesystems. The profile prevents execution of arbitrary code and malware, blocks reading and writing of sensitive files such as ssh and gpg keys, and disallows writing to files in the user’s default PATH. It also prevents reading of system and kernel files. All of this provides a level of protection exceeding that of normal UNIX permissions. Additionally, the profile’s use of includes allows for great flexibility for tightly confining Firefox. /etc/apparmor.d/usr.bin.firefox is a very restricted profile, but includes both /etc/apparmor.d/local/usr.bin.firefox and /etc/apparmor.d/abstractions/ubuntu-browsers.d/firefox. /etc/apparmor.d/abstractions/ubuntu-browsers.d/firefox contains other include files for tasks such as multimedia, productivity, etc, and the file can be manipulated via the aa-update-browser command to add or remove functionality as needed.
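
To opt into the Firefox profile, remove its entry from the disable directory and reload the policy:

$ sudo rm /etc/apparmor.d/disable/usr.bin.firefox
$ sudo apparmor_parser -r /etc/apparmor.d/usr.bin.firefox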

Chromium

A complain-mode profile for chromium-browser is available in the apparmor-profiles package. It uses the same methodology as the Firefox profile (see above) with a strict base profile including /etc/apparmor.d/abstractions/ubuntu-browsers.d/chromium-browser which can then include any number of additional abstractions (also configurable via aa-update-browser).

Lightdm guest session

The guest session in Ubuntu is protected via AppArmor. When selecting the guest session, Lightdm will transition to a restrictive profile that disallows access to others’ files.

libapache2-mod-apparmor

The libapache2-mod-apparmor package ships a disabled-by-default profile in /etc/apparmor.d/usr.lib.apache2.mpm-prefork.apache2. This profile provides a permissive profile for Apache itself but allows the administrator to add confinement policy for individual web applications as desired via AppArmor’s change_hat() mechanism (note, apache2-mpm-prefork must be used) and an example profile for phpsysinfo is provided in the apparmor-profiles package. For more information on how to use AppArmor with Apache, please see /etc/apparmor.d/usr.lib.apache2.mpm-prefork.apache2 as well as the upstream documentation.
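
A sketch of what this can look like, with the module enabled and a hat assigned to a web application (the ‘phpsysinfo’ hat name matches the example profile mentioned above; the Location path is an assumption):

$ sudo a2enmod apparmor

# in the site’s Apache configuration
<Location /phpsysinfo>
    AAHatName phpsysinfo
</Location>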

PAM

AppArmor also has a PAM module which allows great flexibility in setting up policies for different users. The idea behind pam_apparmor is simple: when someone uses a confined binary (such as login, su, sshd, etc), that binary will transition to an AppArmor role via PAM. Eg, if ‘su’ is configured for use with pam_apparmor, when a user invokes ‘su’, PAM is consulted and when the PAM session is started, pam_apparmor will change_hat() to a hat that matches the username, the primary group, or the DEFAULT hat (configurable). These hats (typically) provide rudimentary policy which declares the transition to a role profile when the user’s shell is started. The upstream documentation has an example of how to put this all together for ‘su’, unconfined admin users, a tightly confined user, and somewhat confined default users.
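
A minimal sketch for ‘su’, following the upstream example (the order option controls which hats pam_apparmor tries):

# appended to /etc/pam.d/su
session optional pam_apparmor.so order=user,group,default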

aa-notify

aa-notify is a very simple program that can alert you when there are denials on the system. aa-notify will report any new AppArmor denials by consulting /var/log/kern.log (or /var/log/audit/audit.log if auditd is installed). For desktop users who install apparmor-notify, aa-notify is started on session start via /etc/X11/Xsession.d/90apparmor-notify and will watch the logs for any new denials and report them via notify-osd. Server users can add something like ‘/usr/bin/aa-notify -s 1 -v’ to their shell startup files (eg, ~/.profile) to show any AppArmor denials within the last day whenever they log in. See ‘man 8 aa-notify’ for details.

Current limitations

For all AppArmor can currently do and all the places it is used in Ubuntu, there are limitations in AppArmor 2.8 and lower (ie, what is in Ubuntu and other distributions). Right now, it is great for servers, network daemons and tools/utilities that process untrusted input. While it can provide a security benefit to client applications, there are currently a number of gaps in this area:

  • Access to the DBus system bus is on/off and there is no mediation of the session bus or any other buses that rely on Unix abstract sockets, such as the accessibility bus
  • AppArmor does not provide environment filtering beyond having the ability to clear the very limited set of glibc secure-exec variables
  • Display management mediation is not present (eg, it doesn’t protect against X snooping)
  • Networking mediation is too coarse-grained (eg, you can allow ‘tcp’ but cannot restrict the binding port, integrate with a firewall, or utilize secmark)
  • LXC/containers support is functional, but not complete (eg, it doesn’t allow different host and container policy for the same binary at the same time)
  • AppArmor doesn’t currently integrate with other client technologies that might be useful (eg gnome-keyring, signon/gnome-online-accounts and gsettings) and there is no facility to dynamically update a profile via a user prompt like a file/open dialog

Next time I’ll discuss ongoing and future work to address these limitations.


Filed under: canonical, security, ubuntu

Read more
jdstrand

Mirror, mirror…

Recently I wanted to mirror traffic from one interface to another. There is a cool utility from Martin Roesch (of Snort fame) called daemonlogger. Its use is pretty simple (note: I created the ‘daemonlogger’ user and group on my Ubuntu system since I wanted it to drop privileges after starting):
$ sudo /usr/bin/daemonlogger -u daemonlogger \
  -g daemonlogger -i eth0 -o eth1

Sure enough, examining tcpdump output, anything coming in on eth0 was mirrored to eth1. I then used a crossover cable and used tcpdump on the other system and it all worked as advertised. This could be very useful for an IDS.
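
For example, watching the mirrored traffic from the second system looked something like this (the interface name is an assumption and will depend on your hardware):

$ sudo tcpdump -n -i eth0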

Then I thought how cool it would be to do this for multiple interfaces, but then I quickly realized that would be confusing for whatever was trying to interpret that traffic. I read that someone used a combination of daemonlogger and vtun for a remote snort monitor to work in EC2, which got me playing….

In my test environment, I again used a crossover cable to connect two systems (let’s call them the ‘server’ and the ‘monitor’). The server system runs vtund. The monitor system then connects as a vtun client. When this happens, tap devices are created on the server and the monitor. The vtund server is set up to start daemonlogger on connect to mirror the traffic from eth0 to the tap0 device. The monitor system can then listen on the tap device and get all the traffic. I then created a second vtun tunnel and mirrored from eth1 to tap1. The monitor could monitor on both tap devices. Neat!

To accomplish this, I used this /etc/vtund.conf on the server:
options {
 type stand;
 port 5000;
 syslog daemon;
 ifconfig /sbin/ifconfig;
}
default {
  type ether;
  proto udp;
  compress no;
  encrypt no;
  multi yes;
  keepalive yes;
  srcaddr {
   iface eth2;
   addr 10.0.3.1;
  };
}
internal {
  password pass0;
  device tap0;
  up {
   ifconfig "%d up";
   program /usr/bin/daemonlogger "-u daemonlogger -g daemonlogger -i eth0 -o %d";
  };
  down {
   ifconfig "%d down";
   program /usr/bin/pkill "-f -x '/usr/bin/daemonlogger -u daemonlogger -g daemonlogger -i eth0 -o %d'";
  };
}
external {
  password pass1;
  device tap1;
  up {
   ifconfig "%d up";
   program /usr/bin/daemonlogger "-u daemonlogger -g daemonlogger -i eth1 -o %d";
  };
  down {
   ifconfig "%d down";
   program /usr/bin/pkill "-f -x '/usr/bin/daemonlogger -u daemonlogger -g daemonlogger -i eth1 -o %d'";
  };
}

The server’s /etc/default/vtun has:
RUN_SERVER=yes
SERVER_ARGS="-P 5000"

And here is the client /etc/vtund.conf:
options {
  port 5000;
  ifconfig /sbin/ifconfig;
}
default {
  type ether;
  proto udp;
  compress no;
  encrypt no;
  persist yes;
}
internal {
  device tap0;
  password pass0;
  up {
   ifconfig "%d up";
  };
  down {
   ifconfig "%d down";
  };
}
external {
  device tap1;
  password pass1;
  up {
   ifconfig "%d up";
  };
  down {
   ifconfig "%d down";
  };
}

The monitor system’s /etc/default/vtun looks like this:
CLIENT0_NAME=internal
CLIENT0_HOST=10.0.2.1
CLIENT1_NAME=external
CLIENT1_HOST=10.0.2.1

With the above, I was able to start suricata (a snort-compatible rewrite that is multithreaded) to listen to both tap0 and tap1 on the monitoring system and see all the traffic. Very cool. :)
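
For reference, a sketch of that invocation (assuming the stock configuration path; recent suricata builds accept repeated -i options for live capture):

$ sudo suricata -c /etc/suricata/suricata.yaml -i tap0 -i tap1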

Now, this isn’t optimized for anything: sending 2 interfaces’ worth of traffic through one interface is likely going to result in dropped packets to the monitor if the links are busy. Compression might help there. Also, the server really needs to have an extra interface for vtun traffic that isn’t mirrored, otherwise you end up mirroring the vtun traffic itself (in my test environment, eth2 on the server used 10.0.2.1 and the monitor used 10.0.2.2). Finally, vtun doesn’t offer any meaningful line encryption, so if you are doing this for anything resembling production, you’re going to want to use a dedicated network segment between the server and the monitor (a crossover cable worked fine in my test environment).

Enjoy!


Filed under: security, ubuntu

Read more
jdstrand

A lot of exciting work has been going on with AppArmor and this multipart series will discuss where AppArmor is now, how it is currently used in Ubuntu and how it fits into the larger application isolation story moving forward.

Brief History and Background

AppArmor is a Mandatory Access Control (MAC) system which is a Linux Security Module (LSM) to confine programs to a limited set of resources. AppArmor’s security model is to bind access control attributes to programs rather than to users. AppArmor confinement is provided via profiles loaded into the kernel. AppArmor profiles can be in one of two modes: enforcement and complain. Profiles loaded in enforcement mode will result in enforcement of the policy defined in the profile as well as reporting policy violation attempts (either via syslog or auditd) such that what is not allowed in policy is denied. Profiles in complain mode will not enforce policy but instead report policy violation attempts. AppArmor is typically deployed on systems as a targeted policy where only some (eg high risk) applications have an AppArmor profile defined, but it also supports system wide policy.

Some defining characteristics of AppArmor are that it:

  • is root strong
  • is path-based
  • allows for mixing of enforcement and complain mode profiles
  • uses include files to ease development
  • is very lightweight in terms of resources
  • is easy to learn
  • is relatively easy to audit

AppArmor is an established technology first seen in Immunix and later integrated into SUSE and Ubuntu and their derivatives. AppArmor is also available in Debian, Mandriva, Arch, PLD, Pardus and others. Core AppArmor functionality is in the mainline Linux kernel starting with 2.6.36. AppArmor maintenance and development is ongoing.

An example AppArmor profile

Probably the easiest way to describe what AppArmor does and how it works is to look at an example, in this case the profile for tcpdump on Ubuntu 12.04 LTS:

#include <tunables/global>
/usr/sbin/tcpdump {
  #include <abstractions/base>
  #include <abstractions/nameservice>
  #include <abstractions/user-tmp>
  capability net_raw,
  capability setuid,
  capability setgid,
  capability dac_override,

  network raw,
  network packet,

  # for -D
  capability sys_module,
  @{PROC}/bus/usb/ r,
  @{PROC}/bus/usb/** r,

  # for finding an interface
  @{PROC}/[0-9]*/net/dev r,
  /sys/bus/usb/devices/ r,
  /sys/class/net/ r,
  /sys/devices/**/net/* r,

  # for tracing USB bus, which libpcap supports
  /dev/usbmon* r,
  /dev/bus/usb/ r,
  /dev/bus/usb/** r,

  # for init_etherarray(), with -e
  /etc/ethers r,

  # for USB probing (see libpcap-1.1.x/
  # pcap-usb-linux.c:probe_devices())
  /dev/bus/usb/**/[0-9]* w,

  # for -z
  /bin/gzip ixr,
  /bin/bzip2 ixr,

  # for -F and -w
  audit deny @{HOME}/.* mrwkl,
  audit deny @{HOME}/.*/ rw,
  audit deny @{HOME}/.*/** mrwkl,
  audit deny @{HOME}/bin/ rw,
  audit deny @{HOME}/bin/** mrwkl,
  owner @{HOME}/ r,
  owner @{HOME}/** rw,

  # for -r, -F and -w
  /**.[pP][cC][aA][pP] rw,

  # for convenience with -r (ie, read 
  # pcap files from other sources)
  /var/log/snort/*log* r,

  /usr/sbin/tcpdump r,

  # Site-specific additions and overrides. See 
  # local/README for details.
  #include <local/usr.sbin.tcpdump>
}

This profile is representative of traditional AppArmor profiling for a program that processes untrusted input over the network. As can be seen:

  • profiles are simple text files
  • comments are supported in the profile
  • absolute paths as well as file globbing can be used when specifying file access
  • various access controls for files are present. From the profile we see ‘r’ (read), ‘w’ (write), ‘m’ (memory map as executable), ‘k’ (file locking), ‘l’ (creation of hard links), and ‘ix’ to execute another program with the new program inheriting policy. Other access rules also exist, such as ‘Px’ (execute under another profile, after cleaning the environment), ‘Cx’ (execute under a child profile, after cleaning the environment), and ‘Ux’ (execute unconfined, after cleaning the environment)
  • access controls for capabilities are present
  • access controls for networking are present
  • explicit deny rules are supported, to override other allow rules (eg access to @{HOME}/bin/bad.sh is denied with auditing due to ‘audit deny @{HOME}/bin/** mrwkl,’ even though general access to @{HOME} is permitted with ‘@{HOME}/** rw,’)
  • include files are supported to ease development and simplify profiles (ie #include <abstractions/base>, #include <abstractions/nameservice>, #include <abstractions/user-tmp>, #include <local/usr.sbin.tcpdump>)
  • variables can be defined and manipulated outside the profile (#include <tunables/global> for @{PROC} and @{HOME})
  • AppArmor profiles are fairly easy to read and audit

Complete information on the profile language can be found in ‘man 5 apparmor.d’ as well as the AppArmor wiki.

Updating and creating profiles

AppArmor uses the directory hierarchy as described in policy layout, but most of the time, you are either updating an existing profile or creating a new one, and so the files you most care about are in /etc/apparmor.d.

The AppArmor wiki has a lot of information on debugging and updating existing profiles. AppArmor denials are logged to /var/log/kern.log (or /var/log/audit/audit.log if auditd is installed). If an application is misbehaving and you think it is because of AppArmor, check the logs first. If there is an AppArmor denial, adjust the policy in /etc/apparmor.d, then reload the policy and restart the program like so:

$ sudo apparmor_parser -R /etc/apparmor.d/usr.bin.foo
$ sudo apparmor_parser -a /etc/apparmor.d/usr.bin.foo
$ <restart application>

Oftentimes it is enough to just reload the policy without unloading/loading the profile or restarting the application:

$ sudo apparmor_parser -r /etc/apparmor.d/usr.bin.foo

Creating a profile can be done either with tools or by hand. Due to the current pace and focus of development, the tools are somewhat behind and lack some features. It is generally recommended that you profile by hand instead.
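
A minimal hand-written skeleton to start from (the application /usr/bin/foo and its dotdir are hypothetical):

#include <tunables/global>

/usr/bin/foo {
  #include <abstractions/base>

  /usr/bin/foo mr,
  owner @{HOME}/.foo/ rw,
  owner @{HOME}/.foo/** rw,
}

Save it as /etc/apparmor.d/usr.bin.foo, load it with apparmor_parser as shown above, and run aa-complain on the file if you prefer to iterate in complain mode.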

When profiling, keep in mind that:

  • AppArmor provides an additional permission check to traditional Discretionary Access Controls (DAC). DAC is always checked in addition to the AppArmor permission checks. As such, AppArmor cannot override DAC to provide more access than what would be normally allowed.
  • AppArmor normalizes path names. It resolves symlinks and considers each hard link as a different access path.
  • AppArmor evaluates file access by pathname rather than using on disk labeling. This eases profiling since AppArmor handles all the labelling behind the scenes.
  • Deny rules are evaluated after allow rules and cannot be overridden by an allow rule.
  • Creation of files requires the create permission (implied by w) on the path to be created. Separate rules for writing to the directory of where the file resides are not required. Deletion works like creation but requires the delete permission (implied by w). Copy requires ‘r’ of the source with create and write at the destination (implied by w). Move is like copy, but also requires delete at source.
  • The profile must be loaded before an application starts for the confinement to take effect. You will want to make sure that you load policy during boot before any confined daemons or applications.
  • The kernel will rate limit AppArmor denials, which can cause problems while profiling. You can avoid this by installing auditd or by adjusting rate limiting in the kernel:
    $ sudo sysctl -w kernel.printk_ratelimit=0

Resources

There is a lot of documentation on AppArmor (though some is still in progress); good starting points are ‘man 5 apparmor.d’ and the AppArmor wiki.

Next time I’ll discuss the specifics of how Ubuntu uses AppArmor in the distribution.

Thanks to Seth Arnold and John Johansen for their review.


Filed under: canonical, security, ubuntu

Read more
jdstrand

While this may be old news to some, I only just now figured out how to conveniently use multiple identities in OpenSSH. I have several ssh keys, but only two that I want to use with the agent: one for personal use and one for work. I’d like to be able to not specify which identity to use on the command line most of the time and just use ssh like so:

$ ssh <personal>
$ ssh <work>
$ ssh -i ~/.ssh/other.id_rsa <other>

where the first two use the agent with ~/.ssh/id_rsa and ~/.ssh/work.id_rsa respectively, and the last does not use the agent. ‘man ssh_config’ tells me that ssh looks at all the different IdentityFile configuration directives (in order) and also honors the IdentitiesOnly option. Therefore, I can set up my ~/.ssh/config to have something like:
# This makes it so that only my default identity
# and my work identity are used by the agent
IdentitiesOnly yes
IdentityFile ~/.ssh/id_rsa
IdentityFile ~/.ssh/work.id_rsa

# Default to using my work key with work domains
Host *.work1.com *.work2.com
    IdentityFile ~/.ssh/work.id_rsa

With the above, it all works the way I want. Cool! :)
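
One note: the first two invocations rely on the keys actually being loaded into the agent; loading and verifying them looks something like:

$ ssh-add ~/.ssh/id_rsa ~/.ssh/work.id_rsa
$ ssh-add -l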


Filed under: security, ubuntu

Read more