Canonical Voices

Posts tagged with 'ubuntu-server'

Dustin Kirkland


As promised last week, we're now proud to introduce Ubuntu Snappy images on another of our public cloud partners -- Google Compute Engine.
In the video below, you can follow along as we walk through the instructions we've published here.
Snap it up!
:-Dustin

Read more
Dustin Kirkland



A couple of months ago, I re-introduced an old friend -- Ubuntu JeOS (Just enough OS) -- the smallest functional OS image (merely 63MB compressed!) that we can still call “Ubuntu”.  In fact, we call it Ubuntu Core.

That post was a prelude to something we’ve been actively developing at Canonical for most of 2014 -- Snappy Ubuntu Core!  Snappy Ubuntu combines the best of the ground-breaking image-based Ubuntu remix known as Ubuntu Touch for phones and tablets with the base Ubuntu server operating system trusted by millions of instances in the cloud.

Snappy introduces transactional updates and atomic, image based workflows -- old ideas implemented in databases for decades -- adapted to Ubuntu cloud and server ecosystems for the emerging cloud design patterns known as microservice architectures.

The underlying, base operating system is a very lean Ubuntu Core installation, running on a read-only system partition, much like your iOS, Android, or Ubuntu phone.  One or more “frameworks” can be installed through the snappy command, which is an adaptation of the click packaging system we developed for the Ubuntu Phone.  Perhaps the best sample framework is Docker.  Applications are also packaged and installed using snappy, but apps run within frameworks.  This means that any of the thousands of Docker images available in DockerHub are trivially installable as snap packages, running on the Docker framework in Snappy Ubuntu.

Take Snappy for a Drive


You can try Snappy for yourself in minutes!

You can download Snappy and launch it in a local virtual machine like this:

$ wget http://cdimage.ubuntu.com/ubuntu-core/preview/ubuntu-core-alpha-01.img
$ kvm -m 512 -redir :2222::22 -redir :4443::443 ubuntu-core-alpha-01.img

Then, SSH into it with password 'ubuntu' (the kvm command above redirects local port 2222 to the guest's SSH port 22):

$ ssh -p 2222 ubuntu@localhost

At this point, you might want to poke around the system.  Take a look at the mount points, and perhaps try to touch or modify some files.


$ sudo rm /sbin/init
rm: cannot remove ‘/sbin/init’: Permission denied
$ sudo touch /foo
touch: cannot touch ‘/foo’: Permission denied
$ apt-get install docker
apt-get: command not found

Rather, let's have a look at the new snappy package manager:

$ sudo snappy --help



And now, let’s install the Docker framework:

$ sudo snappy install docker

At this point, we can do essentially anything available in the Docker ecosystem!

We’ve created some sample Snappy apps using existing Docker containers.  For example, let’s install OwnCloud:

$ sudo snappy install owncloud

This will take a little while to install, but eventually, you can point a browser at your own private OwnCloud image, running within a Docker container, on your brand new Ubuntu Snappy system.
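
Since the kvm command above redirects local port 4443 to the guest's port 443, that browser URL should look something like this (assuming OwnCloud serves HTTPS on the guest's port 443; accept the self-signed certificate):

$ xdg-open https://localhost:4443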

We can also update the entire system with a simple command and a reboot:
$ sudo snappy versions
$ sudo snappy update
$ sudo reboot

And we can rollback to the previous version!
$ sudo snappy rollback
$ sudo reboot

Here's a short screencast of all of the above...


While the downloadable image is available for your local testing today, you will very soon be able to launch Snappy Ubuntu instances in your favorite public (Azure, GCE, AWS) and private clouds (OpenStack).


Enjoy!
Dustin

Read more
jdstrand

Ubuntu Core with Snappy was recently announced, and a key ingredient for snappy is security. Snappy applications are confined by AppArmor, and the confinement story for snappy is an evolution of the security model for Ubuntu Touch. The basic concepts for confined applications and the AppStore model pertain to snappy applications as well. In short, snappy applications are confined using AppArmor by default, and this is achieved through a system that is easy to understand, easy to use, and developer-friendly. Read the snappy security specification for all the nitty gritty details.

A developer doc will be published soon.


Filed under: canonical, security, ubuntu, ubuntu-server

Read more
Dustin Kirkland


I had the great pleasure to deliver a 90 minute talk at the USENIX LISA14 conference, in Seattle, Washington.

During the course of the talk, we managed to:

  • Deploy OpenStack Juno across 6 physical nodes, on an Orange Box on stage
  • Explain all of the major components of OpenStack (Nova, Neutron, Swift, Cinder, Horizon, Keystone, Glance, Ceilometer, Heat, Trove, Sahara)
  • Explore the deployed OpenStack cloud's Horizon interface in depth
  • Configure Neutron networking with internal and external networks, as well as a gateway and a router
  • Set up our security groups to open ICMP and SSH ports
  • Upload an SSH keypair
  • Modify the flavor parameters
  • Update a bunch of quotas
  • Add multiple images to Glance
  • Launch some instances until we max out our hypervisor limits
  • Scale up the Nova Compute nodes from 3 units to 6 units
  • Deploy a real workload (Hadoop + Hive + Kibana + Elastic Search)
  • Then, delete the entire environment, and run it all over again from scratch, non-stop
Slides and a full video are below.  Enjoy!




Cheers,
Dustin

Read more
Dustin Kirkland

Say it with me, out loud.  Lex.  See.  Lex-see.  LXC.

Now, change the "see" to a "dee".  Lex.  Dee.  Lex-dee.  LXD.

Easy!

Earlier this week, here in Paris, at the OpenStack Design Summit, Mark Shuttleworth and Canonical introduced our vision and proof of concept for LXD.

You can find the official blog post on Canonical Insights, and a short video introduction on Youtube (by yours truly).

Our Canonical colleague Stephane Graber posted a bit more technical design detail here on the lxc-devel mailing list, which was picked up by HackerNews.  And LWN published a story yesterday covering another Canonical colleague of ours, Serge Hallyn, and his work on Cgroups and CGManager, all of which feeds into LXD.  As it happens, Stephane and Serge are upstream co-maintainers of Linux Containers.  Tycho Andersen, another colleague of ours, has been working on CRIU, which was the heart of his amazing demo this week: live migrating a container running the cult classic first-person shooter, Doom!, between two machines, back and forth.



Moreover, we've answered a few journalists' questions for excellent articles on ZDnet and SynergyMX.  Predictably, El Reg is skeptical (which isn't necessarily a bad thing).  But unfortunately, The Var Guy doesn't quite understand the technology (and unfortunately uses this article to conflate LXD with other random Canonical/Ubuntu complaints).

In any case, here's a bit more about LXD, in my own words...

Our primary design goal with LXD is to extend containers into process-based systems that behave like virtual machines.

We love KVM for its total machine abstraction, as a full virtualization hypervisor.  Moreover, we love what Docker does for application level development, confinement, packaging, and distribution.

But as an operating system and Linux distribution, our customers are, in fact, asking us for complete operating systems that boot and function within a Linux Container's execution space, natively.

Linux Containers are essential to our reference architecture of OpenStack, where we co-locate multiple services on each host.  Nearly every host is a Nova compute node, as well as a Ceph storage node, and also runs a couple of units of "OpenStack overhead", such as MySQL, RabbitMQ, MongoDB, etc.  Rather than running each of those services all on the same physical system, we actually put each of them in their own container, with their own IP address, namespace, cgroup, etc.  This gives us tremendous flexibility in the orchestration of those services.  We're able to move (migrate, even live migrate) those services from one host to another.  With that, it becomes possible to "evacuate" a given host, by moving each contained set of services elsewhere, perhaps a larger or smaller system, and then shut down the unit (perhaps to replace a hard drive or memory, or repurpose it entirely).

Containers also enable us to similarly confine services on virtual machines themselves!  Let that sink in for a second...  A contained workload is able, then, to move from one virtual machine to another, or to a bare metal system.  Even from one public cloud provider, to another public or private cloud!

The last two paragraphs capture a few best practices that we've learned over the last few years implementing OpenStack for some of the largest telcos and financial services companies in the world.  What we're hearing from Internet service and cloud providers is not too dissimilar...  These customers have their own customers who want cloud instances that perform at bare metal equivalence.  They also want to maximize the utilization of their server hardware, sometimes by more densely packing workloads on given systems.

As such, LXD is a convergence of several different customer requirements, and our experience deploying some massively complex, scalable workloads (a la OpenStack, Hadoop, and others) in enterprises.

The rapid evolution of a few key technologies under and around LXC has recently made this dream possible.  Namely: user namespaces, Cgroups, SECCOMP, AppArmor, and CRIU, as well as the library abstraction that our external tools use to manage these containers as systems.

LXD is a new "hypervisor" in that it provides (REST) APIs that can manage Linux Containers.  This is a step function beyond where we've been to date: able to start and stop containers with local commands and, to a limited extent, libvirt, but not much more.  "Booting" a system, in a container, running an init system, bringing up network devices (without nasty hacks in the container's root filesystem), etc. was challenging, but we've worked our way through all of these, and Ubuntu boots unmodified in Linux Containers today.

Moreover, LXD is a whole new semantic for turning any machine -- Intel, AMD, ARM, POWER, physical, or even a virtual machine (e.g. your cloud instances) -- into a system that can host and manage and start and stop and import and export and migrate multiple collections of services bundled within containers.

I've received a number of questions about the "hardware assisted" containerization slide in my deck.  We're under confidentiality agreements with vendors as to the details and timelines for these features.

What (I think) I can say, is that there are hardware vendors who are rapidly extending some of the key features that have made cloud computing and virtualization practical, toward the exciting new world of Linux Containers.  Perhaps you might read a bit about CPU VT extensions, No Execute Bits, and similar hardware security technologies.  Use your imagination a bit, and you can probably converge on a few key concepts that will significantly extend the usefulness of Linux Containers.

As soon as such hardware technology is enabled in Linux, you have our commitment that Ubuntu will bring those features to end users faster than anyone else!

If you want to play with it today, you can certainly see the primitives within Ubuntu's LXC.  Launch Ubuntu containers within LXC and you'll start to get the general, low level idea.  If you want to view it from one layer above, give our new nova-compute-flex (flex was the code name, before it was released as LXD) a try.  It's publicly available as a tech preview in Ubuntu OpenStack Juno (authored by Chuck Short, Scott Moser, and James Page).  Here, you can launch OpenStack instances as LXC containers (rather than KVM virtual machines), as "general purpose" system instances.
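
If you'd like a concrete starting point, here's roughly what exercising those primitives looks like with the stock LXC tools on Ubuntu 14.04 (a sketch of plain LXC, not LXD itself):

$ sudo apt-get install lxc
$ sudo lxc-create -t download -n trusty1 -- -d ubuntu -r trusty -a amd64
$ sudo lxc-start -n trusty1 -d
$ sudo lxc-attach -n trusty1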

Finally, perhaps lost in all of the activity here, are a couple of things we're doing differently for the LXD project.  We at Canonical have taken our share of criticism over the years about choice of code hosting (our own Bazaar and Launchpad.net), our preferred free software licence (GPLv3/AGPLv3), and our contributor license agreement (Canonical CLA).   [For the record: I love bzr/Launchpad, prefer GPL/AGPL, and am mostly ambivalent on the CLA; but I won't argue those points here.]
  1. This is a public, community project under LinuxContainers.org
  2. The code and design documents are hosted on Github
  3. Under an Apache License
  4. Without requiring signatures of the Canonical CLA
These have been very deliberate, conscious decisions, lobbied for and won by our engineers leading the project, in the interest of collaborating and garnering the participation of communities that have traditionally shunned Canonical-led projects, raising the above objections.  I, for one, am eager to see contribution and collaboration that too often, we don't see.

Cheers!
:-Dustin

Read more
Dustin Kirkland


This little snippet of ~200 lines of YAML is the exact OpenStack that I'm deploying tonight, at the OpenStack Austin Meetup.

Anyone with a working Juju and MAAS setup, and 7 registered servers should be able to deploy this same OpenStack setup, in about 12 minutes, with a single command.


$ wget http://people.canonical.com/~kirkland/icehouseOB.yaml
$ juju-deployer -c icehouseOB.yaml
$ cat icehouseOB.yaml

icehouse:
  overrides:
    openstack-origin: "cloud:trusty-icehouse"
    source: "distro"
  services:
    ceph:
      charm: "cs:trusty/ceph-27"
      num_units: 3
      constraints: tags=physical
      options:
        fsid: "9e7aac42-4bf4-11e3-b4b7-5254006a039c"
        "monitor-secret": AQAAvoJSOAv/NRAAgvXP8d7iXN7lWYbvDZzm2Q==
        "osd-devices": "/srv"
        "osd-reformat": "yes"
      annotations:
        "gui-x": "2648.6688842773438"
        "gui-y": "708.3873901367188"
    keystone:
      charm: "cs:trusty/keystone-5"
      num_units: 1
      constraints: tags=physical
      options:
        "admin-password": "admin"
        "admin-token": "admin"
      annotations:
        "gui-x": "2013.905517578125"
        "gui-y": "75.58013916015625"
    "nova-compute":
      charm: "cs:trusty/nova-compute-3"
      num_units: 3
      constraints: tags=physical
      to: [ceph=0, ceph=1, ceph=2]
      options:
        "flat-interface": eth0
      annotations:
        "gui-x": "776.1040649414062"
        "gui-y": "-81.22811031341553"
    "neutron-gateway":
      charm: "cs:trusty/quantum-gateway-3"
      num_units: 1
      constraints: tags=virtual
      options:
        ext-port: eth1
        instance-mtu: 1400
      annotations:
        "gui-x": "329.0572509765625"
        "gui-y": "46.4658203125"
    "nova-cloud-controller":
      charm: "cs:trusty/nova-cloud-controller-41"
      num_units: 1
      constraints: tags=physical
      options:
        "network-manager": Neutron
      annotations:
        "gui-x": "1388.40185546875"
        "gui-y": "-118.01156234741211"
    rabbitmq:
      charm: "cs:trusty/rabbitmq-server-4"
      num_units: 1
      to: mysql
      annotations:
        "gui-x": "633.8120727539062"
        "gui-y": "862.6530151367188"
    glance:
      charm: "cs:trusty/glance-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "1147.3269653320312"
        "gui-y": "1389.5643157958984"
    cinder:
      charm: "cs:trusty/cinder-4"
      num_units: 1
      to: nova-cloud-controller
      options:
        "block-device": none
      annotations:
        "gui-x": "1752.32568359375"
        "gui-y": "1365.716194152832"
    "ceph-radosgw":
      charm: "cs:trusty/ceph-radosgw-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "2216.68212890625"
        "gui-y": "697.16796875"
    cinder-ceph:
      charm: "cs:trusty/cinder-ceph-1"
      num_units: 0
      annotations:
        "gui-x": "2257.5515747070312"
        "gui-y": "1231.2130126953125"
    "openstack-dashboard":
      charm: "cs:trusty/openstack-dashboard-4"
      num_units: 1
      to: "keystone"
      options:
        webroot: "/"
      annotations:
        "gui-x": "2353.6898193359375"
        "gui-y": "-94.2642593383789"
    mysql:
      charm: "cs:trusty/mysql-1"
      num_units: 1
      constraints: tags=physical
      options:
        "dataset-size": "20%"
      annotations:
        "gui-x": "364.4567565917969"
        "gui-y": "1067.5167846679688"
    mongodb:
      charm: "cs:trusty/mongodb-0"
      num_units: 1
      constraints: tags=physical
      annotations:
        "gui-x": "-70.0399979352951"
        "gui-y": "1282.8224487304688"
    ceilometer:
      charm: "cs:trusty/ceilometer-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "-78.13333225250244"
        "gui-y": "919.3128051757812"
    ceilometer-agent:
      charm: "cs:trusty/ceilometer-agent-0"
      num_units: 0
      annotations:
        "gui-x": "-90.9158582687378"
        "gui-y": "562.5347595214844"
    heat:
      charm: "cs:trusty/heat-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "494.94012451171875"
        "gui-y": "1363.6024169921875"
    ntp:
      charm: "cs:trusty/ntp-4"
      num_units: 0
      annotations:
        "gui-x": "-104.57728099822998"
        "gui-y": "294.6641273498535"
  relations:
    - - "keystone:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:amqp"
      - "rabbitmq:amqp"
    - - "nova-cloud-controller:image-service"
      - "glance:image-service"
    - - "nova-cloud-controller:identity-service"
      - "keystone:identity-service"
    - - "glance:shared-db"
      - "mysql:shared-db"
    - - "glance:identity-service"
      - "keystone:identity-service"
    - - "cinder:shared-db"
      - "mysql:shared-db"
    - - "cinder:amqp"
      - "rabbitmq:amqp"
    - - "cinder:cinder-volume-service"
      - "nova-cloud-controller:cinder-volume-service"
    - - "cinder:identity-service"
      - "keystone:identity-service"
    - - "neutron-gateway:shared-db"
      - "mysql:shared-db"
    - - "neutron-gateway:amqp"
      - "rabbitmq:amqp"
    - - "neutron-gateway:quantum-network-service"
      - "nova-cloud-controller:quantum-network-service"
    - - "openstack-dashboard:identity-service"
      - "keystone:identity-service"
    - - "nova-compute:shared-db"
      - "mysql:shared-db"
    - - "nova-compute:amqp"
      - "rabbitmq:amqp"
    - - "nova-compute:image-service"
      - "glance:image-service"
    - - "nova-compute:cloud-compute"
      - "nova-cloud-controller:cloud-compute"
    - - "cinder:storage-backend"
      - "cinder-ceph:storage-backend"
    - - "ceph:client"
      - "cinder-ceph:ceph"
    - - "ceph:client"
      - "nova-compute:ceph"
    - - "ceph:client"
      - "glance:ceph"
    - - "ceilometer:identity-service"
      - "keystone:identity-service"
    - - "ceilometer:amqp"
      - "rabbitmq:amqp"
    - - "ceilometer:shared-db"
      - "mongodb:database"
    - - "ceilometer-agent:container"
      - "nova-compute:juju-info"
    - - "ceilometer-agent:ceilometer-service"
      - "ceilometer:ceilometer-service"
    - - "heat:shared-db"
      - "mysql:shared-db"
    - - "heat:identity-service"
      - "keystone:identity-service"
    - - "heat:amqp"
      - "rabbitmq:amqp"
    - - "ceph-radosgw:mon"
      - "ceph:radosgw"
    - - "ceph-radosgw:identity-service"
      - "keystone:identity-service"
    - - "ntp:juju-info"
      - "neutron-gateway:juju-info"
    - - "ntp:juju-info"
      - "ceph:juju-info"
    - - "ntp:juju-info"
      - "keystone:juju-info"
    - - "ntp:juju-info"
      - "nova-compute:juju-info"
    - - "ntp:juju-info"
      - "nova-cloud-controller:juju-info"
    - - "ntp:juju-info"
      - "rabbitmq:juju-info"
    - - "ntp:juju-info"
      - "glance:juju-info"
    - - "ntp:juju-info"
      - "cinder:juju-info"
    - - "ntp:juju-info"
      - "ceph-radosgw:juju-info"
    - - "ntp:juju-info"
      - "openstack-dashboard:juju-info"
    - - "ntp:juju-info"
      - "mysql:juju-info"
    - - "ntp:juju-info"
      - "mongodb:juju-info"
    - - "ntp:juju-info"
      - "ceilometer:juju-info"
    - - "ntp:juju-info"
      - "heat:juju-info"
  series: trusty

:-Dustin

Read more
Dustin Kirkland

What would you say if I told you, that you could continuously upload your own Software-as-a-Service  (SaaS) web apps into an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, autonomically managed by an open source Orchestration-Service… right now, today?

“An idea is resilient. Highly contagious. Once an idea has taken hold of the brain it's almost impossible to eradicate.”

“Now, before you bother telling me it's impossible…”

“No, it's perfectly possible. It's just bloody difficult.” 

Perhaps something like this...

“How could I ever acquire enough detail to make them think this is reality?”

“Don’t you want to take a leap of faith???”
Sure, let's take a look!

Okay, this looks kinda neat, what is it?

This is an open source Java Spring web application, called Spring-Music, deployed as an app, running inside of Linux containers in CloudFoundry.


Cloud Foundry?

CloudFoundry is an open source Platform-as-a-Service (PaaS) cloud, deployed into Linux virtual machine instances in OpenStack, by Juju.


OpenStack?

Juju?

OpenStack is an open source Infrastructure-as-a-Service (IaaS) cloud, deployed by Juju and Landscape on top of MAAS.

Juju is an open source Orchestration System that deploys and scales complex services across many public clouds, private clouds, and bare metal servers.

Landscape?

MAAS?

Landscape is a systems management tool that automates software installation, updates, and maintenance in both physical and virtual machines. Oh, and it too is deployed by Juju.

MAAS is an open source bare metal provisioning system, providing a cloud-like API to physical servers. Juju can deploy services to MAAS, as well as public and private clouds.

"Ready for the kick?"

If you recall these concepts of nesting cloud technologies...

These are real technologies, which exist today!

These are Software-as-a-Service  (SaaS) web apps served by an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, managed by an open source Orchestration-Service.

Spring Music, served by CloudFoundry, running on top of OpenStack, deployed on MAAS, managed by Juju and Landscape!

“The smallest seed of an idea can grow…”

Oh, and I won't leave you hanging...you're not dreaming!


:-Dustin

Read more
Dustin Kirkland



In case you missed the recent Cloud Austin MeetUp, you have another chance to see the Ubuntu Orange Box live and in action here in Austin!

This time, we're at the OpenStack Austin MeetUp, next Wednesday, September 10, 2014, at 6:30pm at Tech Ranch Austin, 9111 Jollyville Rd #100, Austin, TX!

If you join us, you'll witness all of OpenStack Icehouse, deployed in minutes to real hardware. Not an all-in-one DevStack; not a minimum viable set of components.  Real, rich, production-quality OpenStack!  Ceilometer, Ceph, Cinder, Glance, Heat, Horizon, Keystone, MongoDB, MySQL, Nova, NTP, Quantum, and RabbitMQ -- intelligently orchestrated and rapidly scaled across 10 physical servers sitting right up front on the podium.  Of course, we'll go under the hood and look at how all of this comes together on the fabulous Ubuntu Orange Box.

And like any good open source software developer, I generally like to make things myself, and share them with others.  In that spirit, I'll also bring a couple of growlers of my own home brewed beer, Ubrewtu ;-)  Free as in beer, of course!
Cheers,
Dustin

Read more
Dustin Kirkland


Docker 1.0.1 is available for testing, in Ubuntu 14.04 LTS!

Docker 1.0.1 has landed in the trusty-proposed archive, which we hope to SRU to trusty-updates very soon.  We would love to have your testing feedback, to ensure both upgrades from Docker 0.9.1, as well as new installs of Docker 1.0.1, behave well, and are of the highest quality you have come to expect from Ubuntu's LTS (Long Term Support) releases!  Please file any bugs or issues here.

Moreover, this new version of the Docker package now installs the Docker binary to /usr/bin/docker, rather than /usr/bin/docker.io in previous versions. This should help Ubuntu's Docker package more closely match the wealth of documentation and examples available from our friends upstream.

A big thanks to Paul Tagliamonte, James Page, Nick Stinemates, Tianon Gravi, and Ryan Harper for their help upstream in Debian and in Ubuntu to get this package updated in Trusty!  Also, it's probably worth mentioning that we're targeting Docker 1.1.2 (or perhaps 1.2.0) for Ubuntu 14.10 (Utopic), which will release on October 23, 2014.

Here are a few commands that might help your testing...

Check What Candidate Versions are Available

$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

If that shows 0.9.1~dfsg1-2 (as it should), then you need to enable the trusty-proposed pocket.

$ echo "deb http://archive.ubuntu.com/ubuntu/ trusty-proposed universe" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

And now you should see the new version, 1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1, available (probably in addition to 0.9.1~dfsg1-2).

Upgrades

Check if you already have Docker installed, using:

$ dpkg -l docker.io

If so, you can simply upgrade.

$ sudo apt-get upgrade

And now, you can check your Docker version:

$ sudo dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

New Installations

You can simply install the new package with:

$ sudo apt-get install docker.io

And ensure that you're on the latest version with:

$ dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

Running Docker

If you're already a Docker user, you probably don't need these instructions.  But in case you're reading this, and trying Docker for the first time, here's the briefest of quick start guides :-)

$ sudo docker pull ubuntu
$ sudo docker run -i -t ubuntu /bin/bash

And now you're running a bash shell inside of an Ubuntu Docker container.  And only bash!

root@1728ffd1d47b:/# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 13:42 ?        00:00:00 /bin/bash
root         8     1  0 13:43 ?        00:00:00 ps -ef

If you want to do something more interesting in Docker, well, that's a whole other post ;-)

:-Dustin

Read more
Dustin Kirkland


If you're interested in learning how to more effectively use your terminal as your integrated devops environment, consider taking 10 minutes and watching this video while enjoying the finale of Mozart's Symphony No. 40, Allegro Assai (part of which is rumored to have inspired Beethoven's 5th).

I'm often asked for a quick-start guide to using Byobu effectively.  This wiki page is a decent start, as is the manpage, and the various links on the upstream website.  But it seems that some of the past screencast videos have made the longest-lasting impressions on Byobu users over the years.
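
If you just want to dive in, a minimal quick start looks like this (the function keys shown are Byobu's defaults):

$ sudo apt-get install byobu
$ byobu

Once inside, F2 opens a new window, F3 and F4 move between windows, and F6 detaches the session, leaving everything running in the background.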

I was on a long, international flight from Munich to Newark this past Saturday with a bit of time on my hands, and I cobbled together this instructional video.    That recent international trip to Nuremberg inspired me to rediscover Mozart, and I particularly like this piece, which Mozart wrote in 1788, but sadly never heard performed.  You can hear it now, and learn how to be more efficient in command line environments along the way :-)


Enjoy!
:-Dustin

Read more
Dustin Kirkland



I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too.  I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less!

Be sure to RSVP, as space is limited.

http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Read more
Dustin Kirkland

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!


Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mouses
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...

Q: And what exactly is a random seed?

A: Basically, it’s a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.
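
You can see this determinism firsthand with bash's built-in PRNG; assigning to RANDOM reseeds it, so the same seed always yields the same "random" sequence:

$ RANDOM=18; echo $RANDOM $RANDOM $RANDOM
$ RANDOM=18; echo $RANDOM $RANDOM $RANDOM

Run it yourself, and you'll see both lines print an identical sequence of numbers.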

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • ie, The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


http://j.mp/1dHAK4V


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating
    • SSH host keys
    • SSL certificates
    • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!


Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen
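
You can poke at a pollen server manually with curl.  Something like the following should work, though note that the POST field name here is my assumption about the protocol; the canonical client is simply /usr/bin/pollinate:

$ challenge=$(head -c 64 /dev/urandom | sha512sum | awk '{print $1}')
$ curl -s -X POST -d "challenge=$challenge" https://entropy.ubuntu.com/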

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code, cool!

pollen.go

Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL
    • In /etc/default/pollinate
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
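
For example, a minimal /etc/default/pollinate might look like this (the second URL is hypothetical, and I'm assuming the pool is a space-separated list):

POOL="https://entropy.ubuntu.com/ https://pollen.internal.example.com/"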

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affect this quantity estimate!!!
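
You can inspect that estimate yourself; this prints the kernel's current count of available entropy bits (a number between 0 and the default pool size of 4096), and it will not jump when pollinate runs:

$ cat /proc/sys/kernel/random/entropy_avail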

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (via an HTTP POST submission) affects the pollen server's PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state

Q: What if pollinate gets crappy or compromised or no random seeds?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.


  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!


Stay safe out there!
:-Dustin

Read more
Dustin Kirkland

Transcoding video is a very resource intensive process.

It can take many minutes to process a small, 30-second clip, or even hours to process a full movie.  There are numerous, excellent, open source video transcoding and processing tools freely available in Ubuntu, including libav-tools, ffmpeg, mencoder, and handbrake.  Surprisingly, however, none of those support parallel computing easily or out of the box.  And disappointingly, I couldn't find any MPI support readily available either.

I happened to have an Orange Box for a few days recently, so I decided to tackle the problem and develop a scalable, parallel video transcoding solution myself.  I'm delighted to share the result with you today!

When it comes to commercial video production, it can take thousands of machines and hundreds of compute hours to render a full movie.  I had the distinct privilege some time ago to visit WETA Digital in Wellington, New Zealand, and tour the render farm that processed The Lord of the Rings trilogy, Avatar, The Hobbit, etc.  And just a few weeks ago, I visited another quite visionary, cloud savvy digital film processing firm in Hollywood, called Digital Film Tree.

While Windows and Mac OS may be the first platforms that come to mind when you think about front-end video production, Linux is far more widely used for batch video processing, with Ubuntu, in particular, being used extensively at both WETA Digital and Digital Film Tree, among others.

While I could have worked with any of a number of tools, I settled on avconv (the successor(?) of ffmpeg), as it was the first one that I got working well on my laptop, before scaling it out to the cluster.

I designed an approach on my whiteboard, in fact quite similar to some work I did parallelizing and scaling the john-the-ripper password quality checker.

At a high level, the algorithm looks like this:
  1. Create a shared network filesystem, simultaneously readable and writable by all nodes
  2. Have the master node split the work into even sized chunks for each worker
  3. Have each worker process their segment of the video, and raise a flag when done
  4. Have the master node wait for each of the all-done flags, and then concatenate the result
And that's exactly what I implemented in a new transcode charm and transcode-cluster bundle.  It provides linear scalability and performance improvements, as you add additional units to the cluster.  A transcode job that takes 24 minutes on a single node is down to 3 minutes on 8 worker nodes in the Orange Box, using Juju and MAAS against physical hardware nodes.
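
In shell terms, the per-worker arithmetic is straightforward.  Here's a hypothetical sketch of how the master might compute each worker's slice (num_nodes and current_node are illustrative variable names; the real logic lives in the charm's config-changed hook):

$ duration=$(avprobe -show_format "$filename" 2>/dev/null | awk -F= '/^duration/ {print int($2)}')
$ length=$((duration / num_nodes + 1))
$ start_time=$((current_node * length))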


For the curious, the real magic is in the config-changed hook, which has decent inline documentation.



The trick, for anyone who might make their way into this by way of various StackExchange questions and (incorrect) answers, is in the command that splits up the original video (around line 54):

avconv -ss $start_time -i $filename -t $length -s $size -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y ${filename}.part${current_node}.ts

And the one that puts it back together (around line 72):

avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y ${filename}_${size}_x264_aac.${format}
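
The $concat argument is simply the workers' output segments joined with '|', as the concat: protocol expects.  A sketch of how it might be built (assuming the part files sort correctly by name):

$ concat=$(ls ${filename}.part*.ts | sort | tr '\n' '|' | sed 's/|$//')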

I found this post and this documentation particularly helpful in understanding and solving the problem.

In any case, once deployed, my cluster bundle looks like this.  8 units of transcoders, all connected to a shared filesystem, and performance monitoring too.


I was able to leverage the shared-fs relation provided by the nfs charm, as well as the ganglia charm to monitor the utilization of the cluster.  You can see the spikes in the cpu, disk, and network in the graphs below, during the course of a transcode job.




For my testing, I downloaded the movie Code Rush, freely available under the CC-BY-NC-SA 3.0 license.  If you haven't seen it, it's an excellent documentary about the open source software around Netscape/Mozilla/Firefox and the dotcom bubble of the late 1990s.

Oddly enough, the stock, 746MB high quality MP4 video doesn't play in Firefox, since it's an mpeg4 stream, rather than H264.  Fail.  (Yes, of course I could have used mplayer, vlc, etc., that's not the point ;-)


Perhaps one of the most useful, intriguing features of HTML5 is its support for embedding multimedia, video, and sound into webpages.  HTML5 even supports multiple video formats.  Sounds nice, right?  If only it were that simple...  As it turns out, different browsers support, and lack support for, different formats.  While there is no one format to rule them all, MP4 is supported by the majority of browsers, including the two that I use (Chromium and Firefox).  This matrix from w3schools.com illustrates the mess.

http://www.w3schools.com/html/html5_video.asp

The file format, however, is only half of the story.  The audio and video contents within the file also have to be encoded and compressed with very specific codecs, in order to work properly within the browsers.  For MP4, the video has to be encoded with H264, and the audio with AAC.

Among the various brands of phones, webcams, digital cameras, etc., the output format and codecs are seriously all over the map.  If you've ever wondered what's happening, when you upload a video to YouTube or Facebook, and it's a while before it's ready to be viewed, it's being transcoded and scaled in the background. 

In any case, I find it quite useful to transcode my videos to MP4/H264/AAC format.  And for that, a scalable, parallel computing approach to video processing would be quite helpful.

During the course of the 3 minute run, I liked watching the avconv log files of all of the nodes, using Byobu and Tmux in a tiled split screen format, like this:


Also, the transcode charm installs an Apache2 webserver on each node, so you can expose the service and point a browser to any of the nodes, where you can find the input, output, and intermediary data files, as well as the logs and DONE flags.



Once the job completes, I can simply click on the output file, Code_Rush.mp4_1280x720_x264_aac.mp4, and see that it's now perfectly viewable in the browser!


In case you're curious, I have verified the same charm with a couple of other OGG, AVI, MPEG, and MOV input files, too.


Beyond transcoding the format and codecs, I have also added configuration support within the charm itself to scale the video frame size, too.  This is useful to take a larger video, and scale it down to a more appropriate size, perhaps for a phone or tablet.  Again, this resource intensive procedure perfectly benefits from additional compute units.


File format, audio/video codec, and frame size changes are hardly the extent of video transcoding workloads.  There are hundreds of options and thousands of combinations, as the manpages of avconv and mencoder attest.  All of my scripts and configurations are free software, open source.  Your contributions and extensions are certainly welcome!

In the mean time, I hope you'll take a look at this charm and consider using it, if you have the need to scale up your own video transcoding ;-)

Cheers,
Dustin

Read more
jdstrand

Last time I discussed AppArmor, I talked about new features in Ubuntu 13.10 and a bit about ApplicationConfinement for Ubuntu Touch. With the release of Ubuntu 14.04 LTS, several improvements were made:

  • Mediation of signals
  • Mediation of ptrace
  • Various policy updates for 14.04, including new tunables, better support for XDG user directories, and Unity7 abstractions
  • Parser policy compilation performance improvements
  • Google Summer of Code (SUSE sponsored) python rewrite of the userspace tools

Signal and ptrace mediation

Prior to Ubuntu 14.04 LTS, a confined process could send signals to other processes (subject to DAC) and ptrace other processes (subject to DAC and YAMA). AppArmor on 14.04 LTS adds mediation of both signals and ptrace which brings important security improvements for all AppArmor confined applications, such as those in the Ubuntu AppStore and qemu/kvm machines as managed by libvirt and OpenStack.

When developing policy for signal and ptrace rules, it is important to remember that AppArmor does a cross check such that AppArmor verifies that:

  • the process sending the signal/performing the ptrace is allowed to send the signal to/ptrace the target process
  • the target process receiving the signal/being ptraced is allowed to receive the signal from/be ptraced by the sender process

Signal(7) permissions use the ‘signal’ rule with the ‘receive/send’ permissions governing signals. PTrace permissions use the ‘ptrace’ rule with the ‘trace/tracedby’ permissions governing ptrace(2) and the ‘read/readby’ permissions governing certain proc(5) filesystem accesses, kcmp(2), futexes (get_robust_list(2)) and perf trace events.

Consider the following denial:

Jun 6 21:39:09 localhost kernel: [221158.831933] type=1400 audit(1402083549.185:782): apparmor="DENIED" operation="ptrace" profile="foo" pid=29142 comm="cat" requested_mask="read" denied_mask="read" peer="unconfined"

This demonstrates that the ‘cat’ binary running under the ‘foo’ profile was unable to read the contents of a /proc entry (in my test, /proc/11300/environ). To allow this process to read /proc entries for unconfined processes, the following rule can be used:

ptrace (read) peer=unconfined,

If the receiving process was confined, the log entry would say ‘peer=”<profile name>”‘ and you would adjust the ‘peer=unconfined’ in the rule to match that in the log denial. In this case, because unconfined processes implicitly can be readby all other processes, we don’t need to specify the cross check rule. If the target process was confined, the profile for the target process would need a rule like this:

ptrace (readby) peer=foo,

Likewise for signal rules, consider this denial:

Jun 6 21:53:15 localhost kernel: [222005.216619] type=1400 audit(1402084395.937:897): apparmor="DENIED" operation="signal" profile="foo" pid=29069 comm="bash" requested_mask="send" denied_mask="send" signal=term peer="unconfined"

This shows that ‘bash’ running under the ‘foo’ profile tried to send the ‘term’ signal to an unconfined process (in my test, I used ‘kill 11300’) and was blocked. Signal rules use ‘receive’ and ‘send’ to determine access, so we can add a rule like so to allow sending of the signal:

signal (send) set=("term") peer=unconfined,

Like with ptrace, a cross-check is performed with signal rules but implicit rules allow unconfined processes to send and receive signals. If pid 11300 were confined, you would adjust the ‘peer=’ in the rule of the foo profile to match the denial in the log, and then adjust the target profile to have something like:

signal (receive) set=("term") peer=foo,

Signal and ptrace rules are very flexible and the AppArmor base abstraction in Ubuntu 14.04 LTS has several rules to help make profiling and transitioning to the new mediation easier:

# Allow other processes to read our /proc entries, futexes, perf tracing and
# kcmp for now
ptrace (readby),
 
# Allow other processes to trace us by default (they will need
# 'trace' in the first place). Administrators can override
# with:
# deny ptrace (tracedby) ...
ptrace (tracedby),
 
# Allow unconfined processes to send us signals by default
signal (receive) peer=unconfined,
 
# Allow us to signal ourselves
signal peer=@{profile_name},
 
# Checking for PID existence is quite common so add it by default for now
signal (receive, send) set=("exists"),

Note the above uses the new ‘@{profile_name}’ AppArmor variable, which is particularly handy with ptrace and signal rules. See man 5 apparmor.d for more details and examples.
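
For instance, a profile whose processes need to trace one another can use the variable instead of hard-coding its own profile name; a hypothetical snippet:

# allow processes confined by this profile to trace each other
ptrace (trace, tracedby) peer=@{profile_name},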

14.10

Work still remains and some of the things we’d like to do for 14.10 include:

  • Finishing mediation for non-networking forms of IPC (eg, abstract sockets). This will be done in time for the phone release.
  • Have services integrate with AppArmor and the upcoming trust-store to become trusted helpers (also for phone release)
  • Continue work on networking IPC (for 15.04)
  • Continue to work with the upstream kernel on kdbus
  • Continue work on LXC stacking; we hope to have stacked profiles within the current namespace for 14.10. Full support for stacked profiles, where different host and container policy can apply to the same binary at the same time, should be ready by 15.04
  • Various fixes to the python userspace tools for remaining bugs. These will also be backported to 14.04 LTS

Until next time, enjoy!


Filed under: canonical, security, ubuntu, ubuntu-server

Read more
Dustin Kirkland

It was September of 2009.  I answered a couple of gimme trivia questions and dropped my business card into a hat at a Linux conference in Portland, Oregon.  A few hours later, I received an email...I had just "won" a developer edition HTC Dream -- the Android G1.  I was quite anxious to have a hardware platform where I could experiment with Android.  I had, of course, already downloaded the SDK, compiled Android from scratch, and fiddled with it in an emulator.  But that experience fell far short of Android running on real hardware.  Until the G1.  The G1 was the first device to truly showcase the power and potential of the Android operating system.

And with that context, we are delighted to introduce the Orange Box!


The Orange Box


Conceived by Canonical and custom built by TranquilPC, the Orange Box is a 10-node cluster computer that fits in a suitcase.

Ubuntu, MAAS, Juju, Landscape, OpenStack, Hadoop, CloudFoundry, and more!

The Orange Box provides a spectacular development platform, showcasing in mere minutes the power of hardware provisioning and service orchestration with Ubuntu, MAAS, Juju, and Landscape.  OpenStack, Hadoop, CloudFoundry, and hundreds of other workloads deploy in minutes, to real hardware -- not just instances in AWS!  It also makes one hell of a Steam server -- there's a charm for that ;-)


OpenStack, deployed by Juju, takes merely 6 minutes on an Orange Box

Most developers here certainly recognize the term "SDK", or "Software Development Kit"...  You can think of the Orange Box as an "HDK", or "Hardware Development Kit".  Pair an Orange Box with MAAS and Juju, and you have yourself a compact cloud.  Or a portable big data number cruncher.  Or a lightweight cluster computer.
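
To make that concrete, here's a minimal sketch of a Juju session on the box, assuming MAAS is already configured as your Juju environment (the charms shown are from the public charm store):

$ juju bootstrap
$ juju deploy mysql
$ juju deploy wordpress
$ juju add-relation wordpress mysql
$ juju expose wordpress

Each service lands on its own NUC, provisioned from bare metal by MAAS.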


The underside of an Orange Box, with its cover off


Want to get your hands on one?

Drop us a line, and we'd be delighted to hand-deliver an Orange Box to your office, and conduct 2 full days of technical training, covering MAAS, Juju, Landscape, and OpenStack.  The box is yours for 2 weeks, as you experiment with the industry leading Ubuntu ecosystem of cloud technologies at your own pace and with your own workloads.  We'll show back up, a couple of weeks later, to review what you learned and discuss scaling these tools up, into your own data center, on your own enterprise hardware.  (And if you want your very own Orange Box to keep, you can order one from our friends at TranquilPC.)


Manufacturers of the Orange Box

Gear head like me?  Interested in the technical specs?


Remember those posts late last year about Intel NUCs?  Someone took notice, and we set out to build this ;-)


Each Orange Box chassis contains:
  • 10x Intel NUCs, each containing:
    • Intel HD Graphics 4000 GPU
    • 16GB of DDR3 RAM
    • 120GB SSD root disk
    • Intel Gigabit ethernet
  • D-Link DGS-1100-16 managed gigabit switch with 802.1q VLAN support
    • All 10 nodes are internally connected to this gigabit switch
  • 100-240V AC/DC power supply
    • Adapter supplied for US, UK, and EU plug types
    • 19V DC power supplied to each NUC
    • 5V DC power supplied to internal network switch


Intel NUC D53427RKE board

That's basically an Amazon EC2 m3.xlarge ;-)

The first node, node0, additionally contains:
  • A 2TB Western Digital HDD, preloaded with a full Ubuntu archive mirror
  • USB and HDMI ports are wired and accessible from the rear of the box

Most planes fly in clouds...this cloud flies in planes!


In aggregate, this micro cluster effectively fields 40 cores, 160GB of RAM, 1.2TB of solid state storage, and is connected over an internal gigabit network fabric.  A single fan quietly cools the power supply, while all of the nodes are passively cooled by aluminum heat sinks spanning each side of the chassis. All in a chassis the size of a tower PC!

It fits in a suitcase, and can travel anywhere you go.


Pelican iM2875 Storm Case

How are we using them at Canonical?

If you're here at the OpenStack Summit in Atlanta, GA, you'll see at least a dozen Orange Boxes, in our booth, on stage during Mark Shuttleworth's keynote, and in our breakout conference rooms.


Canonical sales engineer, Ameet Paranjape,
demonstrating OpenStack on the Orange Box in the Ubuntu booth
at the OpenStack Summit in Atlanta, GA
We are also launching an update to our OpenStack Jumpstart program, where we'll deliver an Orange Box and 2 full days of training to your team, and leave you the box while you experiment with OpenStack, MAAS, Juju, Hadoop, and more for 2 weeks.  Without disrupting your core network or production data center workloads, you can prototype your OpenStack experience within a private sandbox environment, experiment with various storage alternatives, practice scaling services, and destroy and rebuild the environment repeatedly.  Safe.  Risk free.


This is Cloud, for the Free Man.

:-Dustin

Read more
Dustin Kirkland





This article is cross-posted on Docker's blog as well.

There is a design pattern, occasionally found in nature, whereby the most elegant and impressive solutions seem utterly intuitive in retrospect.



For me, Docker is just that sort of game-changing, hyper-innovative technology that, at its core, somehow seems straightforward, beautiful, and obvious.



Linux containers, repositories of popular base images, snapshots using modern copy-on-write filesystem features.  Brilliant, yet so simple.  Docker.io for the win!


I clearly recall nine long months ago, intrigued by a fervor of HackerNews excitement pulsing around a nascent Docker technology.  I followed a set of instructions on a very well designed and tastefully manicured web page, in order to launch my first Docker container.  Something like: start with Ubuntu 13.04, downgrade the kernel, reboot, add an out-of-band package repository, install an oddly named package, import some images, perhaps debug or ignore some errors, and then launch.  In a few moments, I could clearly see the beginnings of a brave new world of lightning fast, cleanly managed, incrementally saved, highly dense, operating system containers.

Ubuntu inside of Ubuntu, Inception style.  So.  Much.  Potential.



Fast forward to today -- April 18, 2014 -- and the combination of Docker and Ubuntu 14.04 LTS has raised the bar, introducing a new echelon of usability and convenience, coupled with the trust and track record of enterprise grade Long Term Support from Canonical and the Ubuntu community.
Big thanks, by the way, to Paul Tagliamonte, upstream Debian packager of Docker.io, as well as all of the early testers and users of Docker during the Ubuntu development cycle.
Docker is now officially in Ubuntu.  That makes Ubuntu 14.04 LTS the first enterprise grade Linux distribution to ship with Docker natively packaged, continuously tested, and instantly installable.  Millions of Ubuntu servers are now never more than three commands away from launching or managing Linux container sandboxes, thanks to Docker.


sudo apt-get install docker.io
sudo docker.io pull ubuntu
sudo docker.io run -i -t ubuntu /bin/bash


And after that last command, Ubuntu is now running within Docker, inside of a Linux container.
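
From there, the standard Docker workflow applies (note the binary is named docker.io in the Ubuntu package). For example:

$ sudo docker.io ps -a
$ sudo docker.io images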

Brilliant.

Simple.

Elegant.

User friendly.

Just the way we've been doing things in Ubuntu for nearly a decade. Thanks to our friends at Docker.io!


Cheers,
:-Dustin

Read more
Dustin Kirkland



In about an hour, I have the distinct honor of addressing a room full of federal sector security researchers and scientists at the US Department of Energy's Oak Ridge National Labs, during the Cyber and Information Security Research Conference.

I'm delighted to share with you the slide deck I have prepared for this presentation.  You can download a PDF here.

To a great extent, I have simply reformatted the excellent Ubuntu Security Features wiki page our esteemed Ubuntu Security Team maintains, into a format that I can deliver as a presentation.

Hopefully you'll learn something!  I certainly did, as I researched and built this presentation ;-)
On a related security note, it's probably worth mentioning that Canonical's IS team have updated all SSL services with patched OpenSSL from the Ubuntu security archive, and have restarted all relevant services (using Landscape, for the win), in response to the Heartbleed vulnerability. I will release an updated pollinate package in a few minutes, to ship the new public key for entropy.ubuntu.com.
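
If you administer your own Ubuntu machines, the client-side steps are a quick sketch away (exact patched version strings vary by release, so consult the relevant Ubuntu Security Notice; the service shown is just an example):

$ sudo apt-get update
$ sudo apt-get install openssl libssl1.0.0
$ apt-cache policy libssl1.0.0
$ sudo service apache2 restart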



Stay safe,
Dustin

Read more
Antonio Rosales

Meeting information

Agenda

  • Review ACTION points from previous meeting
  • T Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee)
  • Weekly Updates & Questions regarding Ubuntu ARM Server (rbasak)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair

Minutes

Summary

This week's meeting focused on items needed before Feature Freeze on Feb 20. This included conversations around high/essential bugs, red high/essential blueprints, and test failures.

Specific bugs discussed in this week's meeting were:

  • 1248283 in juju-core (Ubuntu Trusty) “juju userdata should not restart networking” [High,Triaged] https://launchpad.net/bugs/1248283
  • 1278897 in dovecot (Ubuntu Trusty) “dovecot warns about moved ssl certs on upgrade” [High,Triaged] https://launchpad.net/bugs/1278897
  • 1259166 in horizon (Ubuntu Trusty) “Fix lintian error” [High,Triaged]
  • 1273877 in neutron (Ubuntu Trusty) “neutron-plugin-nicira should be renamed to neutron-plugin-vmware” [High,Triaged]

Specific Blueprints discussed:

  • curtin, openstack charms, ceph, mysql alt, cloud-init, openstack (general)

The meeting closed by announcing that Marco and Jorge will be giving a talk at SCALE12x, so be sure to stop by if you are going to be at SCALE.

Review ACTION points from previous meeting

The discussion about “Review ACTION points from previous meeting” started at 16:04.

16:06 <arosales> gaughen follow up with jamespage on bug 1243076
16:06 <ubottu> bug 1243076 in mod-auth-mysql (Ubuntu Trusty) “libapache2-mod-auth-mysql is missing in 13.10 amd64” [High,Won't fix] https://launchpad.net/bugs/1243076
16:09 <jamespage> not got to that yet
16:10 <jamespage> working on a few pre-freeze items first
16:10 <arosales> ack I’ll take it that it’s appropriately on your radar :-) –thanks
16:10 <jamespage> it is

16:06 <arosales> gaughen follow up on dbus task for bug 1248283
16:06 <ubottu> bug 1248283 in juju-core (Ubuntu Trusty) “juju userdata should not restart networking” [High,Triaged] https://launchpad.net/bugs/1248283

16:07 <arosales> jamespage to follow up on bug 1278897 (policy compliant)
16:07 <ubottu> bug 1278897 in dovecot (Ubuntu Trusty) “dovecot warns about moved ssl certs on upgrade” [High,Triaged] https://launchpad.net/bugs/1278897

16:07 <arosales> smoser update servercloud-1311-curtin bp
16:07 <smoser> i updated it .
16:07 <smoser> i’ll file a ffe today

16:07 <arosales> hallyn follow up on 1248283 from an lxc pov, ping serue to coordinate
16:08 <serue> Done
16:08 <arosales> smoser update cloud-init BP
16:08 <smoser> we’ll say same there.

Trusty Development

The discussion about “Trusty Development” started at 16:10.

Weekly Updates & Questions for the QA Team (psivaa)

The discussion about “Weekly Updates & Questions for the QA Team (psivaa)” started at 16:27.

Ubuntu Server Team Events

The discussion about “Ubuntu Server Team Events” started at 16:35.

Action items, by person

  • gaughen
    • gaughen ensure BPs are updated
  • coreycb
    • follow up on bug 1273877

Announce next meeting date and time

Next meeting will be on Tuesday, February 25th at 16:00 UTC in #ubuntu-meeting.

People present (lines said)

  • arosales (77)
  • jamespage (19)
  • psivaa (12)
  • smoser (10)
  • ubottu (9)
  • meetingology (5)
  • serue (2)
  • zul (2)
  • sforshee (1)
  • rbasak (1)
  • gaughen (1)
  • smb (1)

Read more
Dustin Kirkland

and bought 3 more with the i5-3427u CPU!



A couple of weeks ago, I waxed glowingly about Ubuntu running on a handful of Intel NUCs that I picked up on Amazon, replacing some aging PCs serving various purposes around the house.  I have since returned all three of those...and upgraded to the i5 version!!!  Read on to find out why...
Whenever I publish an article here, the Blogger/G+ integration immediately posts a link to my G+ feed.  In that thread, Mark Shuttleworth asked if these NUCs supported IPMI or a similar technology, such that they could be enabled in MAAS.  I responded in kind, that, sadly, no, they only support tried-and-trusty-but-dumb-old-Wake-on-LAN.

Alas, an old friend, fellow homebrewer, and new Canonicaler, Ryan Harper, noted that the i5-3427u version of the NUC (performance specs here) actually supports Intel AMT, which is similar to IPMI.  Actually, it's an implementation of WBEM, which itself is fundamentally an implementation of the CIM standard.

That's a healthy dose of alphabet soup for you.  MAAS, NUC, AMT, IPMI, WBEM, CIM.  What does all of this mean?

Let's do a quick round of introductions for the uninitiated!
  • NUC - Intel's Next Unit of Computing.  It's a palm sized computer, probably intended to be a desktop, but actually functions quite well as a Linux server too.  Drawing about 10W, it has roughly the same power as an AWS m1.xlarge, and costs about as much as 45 days of an m1.xlarge's EC2 bill.
  • MAAS - Metal as a Service.  Installing Ubuntu servers (or desktops, for that matter), one by one, with a CD/DVD/USB-key is so 2004.  MAAS is your PXE/DHCP/TFTP/DNS (shit, more alphabet soup...) solution, all-in-one, ready to install Ubuntu onto lots of systems at scale!  Oh, and good news...  Juju supports MAAS as one of its environments, which is cool, in that you can deploy any charmed Juju workload to bare metal, in addition to AWS and OpenStack clouds.
  • AMT - Intel's Active Management Technology.  This is a feature found on some Intel platforms (specifically, those whose CPU and motherboard support vPro technology), which enables remote management of the system.  Specifically, if you can authenticate successfully to the system, you can retrieve detailed information about the hardware, power it on and off, and modify the boot sequence.  These are the essential functions that MAAS requires to support a system.  (See the short sketch after this list.)
  • IPMI - Intelligent Platform Management Interface.  Also pioneered by Intel, this is a more server-focused technology for remote network management of systems, providing power on/off and other capabilities.
  • WBEM - Web Based Enterprise Management.  Remote system management technology available through a web browser, based on some internet standards, including CIM.
  • CIM - Common Information Model.  An open standard that defines how systems in an IT environment are represented and managed.  Does that sound meta to you?  Well, yes, yes it is.
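
As a taste of what AMT control looks like from another Ubuntu machine, here's a minimal sketch using amttool from the amtterm package (the address and password are, of course, hypothetical):

$ sudo apt-get install amtterm
$ AMT_PASSWORD='hunter2' amttool 10.0.0.5 info
$ AMT_PASSWORD='hunter2' amttool 10.0.0.5 powerup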
Okay, we have our vocabulary...now what?

So I actually returned all 3 of my Intel NUCs, which had the i3 processor, in favor of the more powerful (and slightly more expensive) i5 versions.  Note that I specifically bought the i5 Ivy Bridge versions, rather than the newer i5 Haswell, because only the Ivy Bridge actually supports AMT (for reasons that I cannot explain).  In fact, in comparison to Haswell, the Ivy Bridge systems:
  1. have AMT
  2. are less expensive
  3. have a higher maximum clock speed
  4. support a higher maximum memory
The only advantage I can see of the newer Haswells is a slightly lower energy footprint, and a slightly better video processor.

When 3 of my shiny new NUCs arrived, I was quite excited to try out this fancy new AMT feature.  In fact, I had already enabled it and experimented with it on a couple of my development i7 Thinkpads, so I more or less knew what to expect.

At this point, I split this post in two.  You're welcome to read on, to learn what you need to know about Intel AMT + Ubuntu + the i5-3427u NUC...

:-Dustin

Read more