Canonical Voices

Dustin Kirkland

A StackExchange question back in February of this year inspired a new feature in Byobu that I had been thinking about for quite some time:

Wouldn't it be nice to have a hot key in Byobu that would send a command to multiple splits (or windows)?
This feature was added and is available in Byobu 5.73 and newer (in Ubuntu 14.04 and newer, and available in the Byobu PPA for older Ubuntu releases).

I actually use this feature all the time, to update packages across multiple computers.  Of course, Landscape is a fantastic way to do this as well.  But if you don't have access to Landscape, you can always do this very simply with Byobu!

Create some splits, using Ctrl-F2 and Shift-F2, and in each split, ssh into a target Ubuntu (or Debian) machine.

Now, use Shift-F9 to open up the purple prompt at the bottom of your screen.  Here, you enter the command you want to run on each split.  First, you might want to run:

sudo true

This will prompt you for your password, if you don't already have root or sudo access.  You might need to use Shift-Up, Shift-Down, Shift-Left, Shift-Right to move around your splits, and enter passwords.

Now, update your package lists:

sudo apt-get update

And now, apply your updates:

sudo apt-get dist-upgrade
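
If you're feeling confident, you could even send both steps as a single command from that same Shift-F9 prompt, with a one-liner like this (the -y flag pre-answers the upgrade prompt):

sudo apt-get update && sudo apt-get dist-upgrade -y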

Here's a video to demonstrate!


On a related note, another user-requested feature has been added, to simultaneously synchronize this behavior among all splits.  You'll need the latest version of Byobu, 5.87, which will be in Ubuntu 14.10 (Utopic).  Here, you'll press Alt-F9 and just start typing!  Another demonstration video here...




Cheers,
Dustin

Read more
Dustin Kirkland


This little snippet of ~200 lines of YAML is the exact OpenStack that I'm deploying tonight, at the OpenStack Austin Meetup.

Anyone with a working Juju and MAAS setup and 7 registered servers should be able to deploy this same OpenStack setup in about 12 minutes, with a single command.


$ wget http://people.canonical.com/~kirkland/icehouseOB.yaml
$ juju-deployer -c icehouseOB.yaml
$ cat icehouseOB.yaml

icehouse:
  overrides:
    openstack-origin: "cloud:trusty-icehouse"
    source: "distro"
  services:
    ceph:
      charm: "cs:trusty/ceph-27"
      num_units: 3
      constraints: tags=physical
      options:
        fsid: "9e7aac42-4bf4-11e3-b4b7-5254006a039c"
        "monitor-secret": AQAAvoJSOAv/NRAAgvXP8d7iXN7lWYbvDZzm2Q==
        "osd-devices": "/srv"
        "osd-reformat": "yes"
      annotations:
        "gui-x": "2648.6688842773438"
        "gui-y": "708.3873901367188"
    keystone:
      charm: "cs:trusty/keystone-5"
      num_units: 1
      constraints: tags=physical
      options:
        "admin-password": "admin"
        "admin-token": "admin"
      annotations:
        "gui-x": "2013.905517578125"
        "gui-y": "75.58013916015625"
    "nova-compute":
      charm: "cs:trusty/nova-compute-3"
      num_units: 3
      constraints: tags=physical
      to: [ceph=0, ceph=1, ceph=2]
      options:
        "flat-interface": eth0
      annotations:
        "gui-x": "776.1040649414062"
        "gui-y": "-81.22811031341553"
    "neutron-gateway":
      charm: "cs:trusty/quantum-gateway-3"
      num_units: 1
      constraints: tags=virtual
      options:
        ext-port: eth1
        instance-mtu: 1400
      annotations:
        "gui-x": "329.0572509765625"
        "gui-y": "46.4658203125"
    "nova-cloud-controller":
      charm: "cs:trusty/nova-cloud-controller-41"
      num_units: 1
      constraints: tags=physical
      options:
        "network-manager": Neutron
      annotations:
        "gui-x": "1388.40185546875"
        "gui-y": "-118.01156234741211"
    rabbitmq:
      charm: "cs:trusty/rabbitmq-server-4"
      num_units: 1
      to: mysql
      annotations:
        "gui-x": "633.8120727539062"
        "gui-y": "862.6530151367188"
    glance:
      charm: "cs:trusty/glance-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "1147.3269653320312"
        "gui-y": "1389.5643157958984"
    cinder:
      charm: "cs:trusty/cinder-4"
      num_units: 1
      to: nova-cloud-controller
      options:
        "block-device": none
      annotations:
        "gui-x": "1752.32568359375"
        "gui-y": "1365.716194152832"
    "ceph-radosgw":
      charm: "cs:trusty/ceph-radosgw-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "2216.68212890625"
        "gui-y": "697.16796875"
    cinder-ceph:
      charm: "cs:trusty/cinder-ceph-1"
      num_units: 0
      annotations:
        "gui-x": "2257.5515747070312"
        "gui-y": "1231.2130126953125"
    "openstack-dashboard":
      charm: "cs:trusty/openstack-dashboard-4"
      num_units: 1
      to: "keystone"
      options:
        webroot: "/"
      annotations:
        "gui-x": "2353.6898193359375"
        "gui-y": "-94.2642593383789"
    mysql:
      charm: "cs:trusty/mysql-1"
      num_units: 1
      constraints: tags=physical
      options:
        "dataset-size": "20%"
      annotations:
        "gui-x": "364.4567565917969"
        "gui-y": "1067.5167846679688"
    mongodb:
      charm: "cs:trusty/mongodb-0"
      num_units: 1
      constraints: tags=physical
      annotations:
        "gui-x": "-70.0399979352951"
        "gui-y": "1282.8224487304688"
    ceilometer:
      charm: "cs:trusty/ceilometer-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "-78.13333225250244"
        "gui-y": "919.3128051757812"
    ceilometer-agent:
      charm: "cs:trusty/ceilometer-agent-0"
      num_units: 0
      annotations:
        "gui-x": "-90.9158582687378"
        "gui-y": "562.5347595214844"
    heat:
      charm: "cs:trusty/heat-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "494.94012451171875"
        "gui-y": "1363.6024169921875"
    ntp:
      charm: "cs:trusty/ntp-4"
      num_units: 0
      annotations:
        "gui-x": "-104.57728099822998"
        "gui-y": "294.6641273498535"
  relations:
    - - "keystone:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:amqp"
      - "rabbitmq:amqp"
    - - "nova-cloud-controller:image-service"
      - "glance:image-service"
    - - "nova-cloud-controller:identity-service"
      - "keystone:identity-service"
    - - "glance:shared-db"
      - "mysql:shared-db"
    - - "glance:identity-service"
      - "keystone:identity-service"
    - - "cinder:shared-db"
      - "mysql:shared-db"
    - - "cinder:amqp"
      - "rabbitmq:amqp"
    - - "cinder:cinder-volume-service"
      - "nova-cloud-controller:cinder-volume-service"
    - - "cinder:identity-service"
      - "keystone:identity-service"
    - - "neutron-gateway:shared-db"
      - "mysql:shared-db"
    - - "neutron-gateway:amqp"
      - "rabbitmq:amqp"
    - - "neutron-gateway:quantum-network-service"
      - "nova-cloud-controller:quantum-network-service"
    - - "openstack-dashboard:identity-service"
      - "keystone:identity-service"
    - - "nova-compute:shared-db"
      - "mysql:shared-db"
    - - "nova-compute:amqp"
      - "rabbitmq:amqp"
    - - "nova-compute:image-service"
      - "glance:image-service"
    - - "nova-compute:cloud-compute"
      - "nova-cloud-controller:cloud-compute"
    - - "cinder:storage-backend"
      - "cinder-ceph:storage-backend"
    - - "ceph:client"
      - "cinder-ceph:ceph"
    - - "ceph:client"
      - "nova-compute:ceph"
    - - "ceph:client"
      - "glance:ceph"
    - - "ceilometer:identity-service"
      - "keystone:identity-service"
    - - "ceilometer:amqp"
      - "rabbitmq:amqp"
    - - "ceilometer:shared-db"
      - "mongodb:database"
    - - "ceilometer-agent:container"
      - "nova-compute:juju-info"
    - - "ceilometer-agent:ceilometer-service"
      - "ceilometer:ceilometer-service"
    - - "heat:shared-db"
      - "mysql:shared-db"
    - - "heat:identity-service"
      - "keystone:identity-service"
    - - "heat:amqp"
      - "rabbitmq:amqp"
    - - "ceph-radosgw:mon"
      - "ceph:radosgw"
    - - "ceph-radosgw:identity-service"
      - "keystone:identity-service"
    - - "ntp:juju-info"
      - "neutron-gateway:juju-info"
    - - "ntp:juju-info"
      - "ceph:juju-info"
    - - "ntp:juju-info"
      - "keystone:juju-info"
    - - "ntp:juju-info"
      - "nova-compute:juju-info"
    - - "ntp:juju-info"
      - "nova-cloud-controller:juju-info"
    - - "ntp:juju-info"
      - "rabbitmq:juju-info"
    - - "ntp:juju-info"
      - "glance:juju-info"
    - - "ntp:juju-info"
      - "cinder:juju-info"
    - - "ntp:juju-info"
      - "ceph-radosgw:juju-info"
    - - "ntp:juju-info"
      - "openstack-dashboard:juju-info"
    - - "ntp:juju-info"
      - "mysql:juju-info"
    - - "ntp:juju-info"
      - "mongodb:juju-info"
    - - "ntp:juju-info"
      - "ceilometer:juju-info"
    - - "ntp:juju-info"
      - "heat:juju-info"
  series: trusty
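
While juju-deployer churns through that bundle, you can watch the services come up from another terminal with:

$ watch juju status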

:-Dustin

Read more
Dustin Kirkland

What would you say if I told you, that you could continuously upload your own Software-as-a-Service  (SaaS) web apps into an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, autonomically managed by an open source Orchestration-Service… right now, today?

“An idea is resilient. Highly contagious. Once an idea has taken hold of the brain it's almost impossible to eradicate.”

“Now, before you bother telling me it's impossible…”

“No, it's perfectly possible. It's just bloody difficult.” 

Perhaps something like this...

“How could I ever acquire enough detail to make them think this is reality?”

“Don’t you want to take a leap of faith???”
Sure, let's take a look!

Okay, this looks kinda neat, what is it?

This is an open source Java Spring web application, called Spring-Music, deployed as an app, running inside of Linux containers in CloudFoundry.


Cloud Foundry?

CloudFoundry is an open source Platform-as-a-Service (PaaS) cloud, deployed into Linux virtual machine instances in OpenStack, by Juju.


OpenStack?

Juju?

OpenStack is an open source Infrastructure-as-a-Service (IaaS) cloud, deployed by Juju and Landscape on top of MAAS.

Juju is an open source Orchestration System that deploys and scales complex services across many public clouds, private clouds, and bare metal servers.

Landscape?

MAAS?

Landscape is a systems management tool that automates software installation, updates, and maintenance in both physical and virtual machines. Oh, and it too is deployed by Juju.

MAAS is an open source bare metal provisioning system, providing a cloud-like API to physical servers. Juju can deploy services to MAAS, as well as public and private clouds.

"Ready for the kick?"

If you recall these concepts of nesting cloud technologies...

These are real technologies, which exist today!

These are Software-as-a-Service  (SaaS) web apps served by an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, managed by an open source Orchestration-Service.

Spring Music, served by CloudFoundry, running on top of OpenStack, deployed on MAAS, managed by Juju and Landscape!

“The smallest seed of an idea can grow…”

Oh, and I won't leave you hanging...you're not dreaming!


:-Dustin

Read more
Dustin Kirkland



In case you missed the recent Cloud Austin MeetUp, you have another chance to see the Ubuntu Orange Box live and in action here in Austin!

This time, we're at the OpenStack Austin MeetUp, next Wednesday, September 10, 2014, at 6:30pm at Tech Ranch Austin, 9111 Jollyville Rd #100, Austin, TX!

If you join us, you'll witness all of OpenStack Icehouse, deployed in minutes to real hardware. Not an all-in-one DevStack; not a minimum viable set of components.  Real, rich, production-quality OpenStack!  Ceilometer, Ceph, Cinder, Glance, Heat, Horizon, Keystone, MongoDB, MySQL, Nova, NTP, Quantum, and RabbitMQ -- intelligently orchestrated and rapidly scaled across 10 physical servers sitting right up front on the podium.  Of course, we'll go under the hood and look at how all of this comes together on the fabulous Ubuntu Orange Box.

And like any good open source software developer, I generally like to make things myself, and share them with others.  In that spirit, I'll also bring a couple of growlers of my own home brewed beer, Ubrewtu ;-)  Free as in beer, of course!
Cheers,
Dustin

Read more
Dustin Kirkland


Docker 1.0.1 is available for testing, in Ubuntu 14.04 LTS!

Docker 1.0.1 has landed in the trusty-proposed archive, which we hope to SRU to trusty-updates very soon.  We would love to have your testing feedback, to ensure that both upgrades from Docker 0.9.1, as well as new installs of Docker 1.0.1, behave well, and are of the highest quality you have come to expect from Ubuntu's LTS (Long Term Support) releases!  Please file any bugs or issues here.

Moreover, this new version of the Docker package now installs the Docker binary to /usr/bin/docker, rather than /usr/bin/docker.io as in previous versions. This should help Ubuntu's Docker package more closely match the wealth of documentation and examples available from our friends upstream.

A big thanks to Paul Tagliamonte, James Page, Nick Stinemates, Tianon Gravi, and Ryan Harper for their help upstream in Debian and in Ubuntu to get this package updated in Trusty!  Also, it's probably worth mentioning that we're targeting Docker 1.1.2 (or perhaps 1.2.0) for Ubuntu 14.10 (Utopic), which will release on October 23, 2014.

Here are a few commands that might help your testing...

Check What Candidate Versions are Available

$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

If that shows 0.9.1~dfsg1-2 (as it should), then you need to enable the trusty-proposed pocket.

$ echo "deb http://archive.ubuntu.com/ubuntu/ trusty-proposed universe" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

And now you should see the new version, 1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1, available (probably in addition to 0.9.1~dfsg1-2).
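
If you'd rather not pull anything else from trusty-proposed with a blanket upgrade, you can target just this one package at that pocket, using apt's target-release option:

$ sudo apt-get install -t trusty-proposed docker.io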

Upgrades

Check if you already have Docker installed, using:

$ dpkg -l docker.io

If so, you can simply upgrade.

$ sudo apt-get upgrade

And now, you can check your Docker version:

$ sudo dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

New Installations

You can simply install the new package with:

$ sudo apt-get install docker.io

And ensure that you're on the latest version with:

$ dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

Running Docker

If you're already a Docker user, you probably don't need these instructions.  But in case you're reading this, and trying Docker for the first time, here's the briefest of quick start guides :-)

$ sudo docker pull ubuntu
$ sudo docker run -i -t ubuntu /bin/bash

And now you're running a bash shell inside of an Ubuntu Docker container.  And only bash!

root@1728ffd1d47b:/# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 13:42 ?        00:00:00 /bin/bash
root         8     1  0 13:43 ?        00:00:00 ps -ef
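
Type exit to leave the container; back on the host, you can list it (now stopped) with:

$ sudo docker ps -a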

If you want to do something more interesting in Docker, well, that's a whole other post ;-)

:-Dustin

Read more
Dustin Kirkland


If you're interested in learning how to more effectively use your terminal as your integrated devops environment, consider taking 10 minutes and watching this video while enjoying the finale of Mozart's Symphony No. 40, Allegro Assai (part of which is rumored to have inspired Beethoven's 5th).

I'm often asked for a quick-start guide to using Byobu effectively.  This wiki page is a decent start, as is the manpage, and the various links on the upstream website.  But it seems that some of the past screencast videos have made the longest-lasting impressions on Byobu users over the years.

I was on a long international flight from Munich to Newark this past Saturday with a bit of time on my hands, and I cobbled together this instructional video.  That recent international trip to Nuremberg inspired me to rediscover Mozart, and I particularly like this piece, which Mozart wrote in 1788, but sadly never heard performed.  You can hear it now, and learn how to be more efficient in command line environments along the way :-)


Enjoy!
:-Dustin

Read more
Dustin Kirkland



I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too.  I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less!

Be sure to RSVP, as space is limited.

http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Read more
Dustin Kirkland

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA. It will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!


Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mice
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...

Q: And what exactly is a random seed?

A: Basically, it's a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • i.e., the cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


http://j.mp/1dHAK4V


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating
  • SSH host keys
  • SSL certificates
  • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!


Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG.
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl (conceptually sketched below)
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate
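
Conceptually, the client side boils down to something like this (a deliberately simplified sketch, not the actual pollinate source; it assumes the server answers a plain request with seed bytes, and skips the real client's challenge/response and pool handling):

SERVER="https://entropy.ubuntu.com/"
curl -fsS "$SERVER" | head -c 64 | sudo tee /dev/urandom >/dev/null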

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code, cool!

pollen.go

Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL
    • In /etc/default/pollinate (see the example below)
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
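
For example, your /etc/default/pollinate might look something like this (pollen.example.com is a hypothetical server of your own):

# Space separated list of entropy servers to contact
POOL="https://entropy.ubuntu.com/ https://pollen.example.com/"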

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affect this quantity estimate!!!
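
You can inspect the kernel's estimate yourself at any time; the value fluctuates constantly, but the check looks like this (3072 is just an illustrative reading):

$ cat /proc/sys/kernel/random/entropy_avail
3072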

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (via an HTTP POST submission) affects the pollen server's PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state

Q: What if pollinate gets crappy or compromised or no random seeds?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.


  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!


Stay safe out there!
:-Dustin

Read more
Dustin Kirkland

Transcoding video is a very resource intensive process.

It can take many minutes to process a small, 30-second clip, or even hours to process a full movie.  There are numerous, excellent, open source video transcoding and processing tools freely available in Ubuntu, including libav-tools, ffmpeg, mencoder, and handbrake.  Surprisingly, however, none of those support parallel computing easily or out of the box.  And disappointingly, I couldn't find any MPI support readily available either.

I happened to have an Orange Box for a few days recently, so I decided to tackle the problem myself, and develop a scalable, parallel video transcoding solution myself.  I'm delighted to share the result with you today!

When it comes to commercial video production, it can take thousands of machines and hundreds of compute hours to render a full movie.  I had the distinct privilege some time ago to visit WETA Digital in Wellington, New Zealand and tour the render farm that processed The Lord of the Rings trilogy, Avatar, The Hobbit, etc.  And just a few weeks ago, I visited another quite visionary, cloud savvy digital film processing firm in Hollywood, called Digital Film Tree.

While Windows and Mac OS may be the first platforms that come to mind when you think about front end video production, Linux is far more widely used for batch video processing, with Ubuntu, in particular, being used extensively at both WETA Digital and Digital Film Tree, among others.

While I could have worked with any of a number of tools, I settled on avconv (the successor(?) of ffmpeg), as it was the first one that I got working well on my laptop, before scaling it out to the cluster.

I designed an approach on my whiteboard, in fact quite similar to some work I did parallelizing and scaling the john-the-ripper password quality checker.

At a high level, the algorithm looks like this:
  1. Create a shared network filesystem, simultaneously readable and writable by all nodes
  2. Have the master node split the work into even sized chunks for each worker
  3. Have each worker process their segment of the video, and raise a flag when done
  4. Have the master node wait for each of the all-done flags, and then concatenate the result
And that's exactly what I implemented in a new transcode charm and transcode-cluster bundle.  It provides linear scalability and performance improvements as you add additional units to the cluster.  A transcode job that takes 24 minutes on a single node is down to 3 minutes on 8 worker nodes in the Orange Box, using Juju and MAAS against physical hardware nodes.
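
To make step 2 concrete, the even-split arithmetic amounts to something like this hypothetical sketch (the real logic lives in the charm's config-changed hook; num_nodes and current_node are assumed to be set elsewhere):

# Probe the total duration, then compute this node's chunk
duration=$(avprobe -show_format "$filename" 2>/dev/null | awk -F= '/^duration/ {print int($2)}')
length=$(( duration / num_nodes + 1 ))
start_time=$(( current_node * length ))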


For the curious, the real magic is in the config-changed hook, which has decent inline documentation.



The trick, for anyone who might make their way into this by way of various StackExchange questions and (incorrect) answers, is in the command that splits up the original video (around line 54):

avconv -ss $start_time -i $filename -t $length -s $size -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y ${filename}.part${current_node}.ts

And the one that puts it back together (around line 72):

avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y ${filename}_${size}_x264_aac.${format}
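
One hypothetical way to build that $concat list, assuming the per-node segments follow the naming scheme above:

concat=$(ls ${filename}.part*.ts | tr '\n' '|' | sed 's/|$//')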

I found this post and this documentation particularly helpful in understanding and solving the problem.

In any case, once deployed, my cluster bundle looks like this.  8 units of transcoders, all connected to a shared filesystem, and performance monitoring too.


I was able to leverage the shared-fs relation provided by the nfs charm, as well as the ganglia charm to monitor the utilization of the cluster.  You can see the spikes in the cpu, disk, and network in the graphs below, during the course of a transcode job.




For my testing, I downloaded the movie Code Rush, freely available under the CC-BY-NC-SA 3.0 license.  If you haven't seen it, it's an excellent documentary about the open source software around Netscape/Mozilla/Firefox and the dotcom bubble of the late 1990s.

Oddly enough, the stock, 746MB high quality MP4 video doesn't play in Firefox, since it's an mpeg4 stream, rather than H264.  Fail.  (Yes, of course I could have used mplayer, vlc, etc., that's not the point ;-)


Perhaps one of the most useful, intriguing features of HTML5 is its support for embedding multimedia, video, and sound into webpages.  HTML5 even supports multiple video formats.  Sounds nice, right?  If it only were that simple...  As it turns out, different browsers have, and lack, support for different formats.  While there is no one format to rule them all, MP4 is supported by the majority of browsers, including the two that I use (Chromium and Firefox).  This matrix from w3schools.com illustrates the mess.

http://www.w3schools.com/html/html5_video.asp

The file format, however, is only half of the story.  The audio and video contents within the file also have to be encoded and compressed with very specific codecs, in order to work properly within the browsers.  For MP4, the video has to be encoded with H264, and the audio with AAC.

Among the various brands of phones, webcams, digital cameras, etc., the output format and codecs are seriously all over the map.  If you've ever wondered what's happening, when you upload a video to YouTube or Facebook, and it's a while before it's ready to be viewed, it's being transcoded and scaled in the background. 

In any case, I find it quite useful to transcode my videos to MP4/H264/AAC format.  And for that, a scalable, parallel computing approach to video processing would be quite helpful.

During the course of the 3 minute run, I liked watching the avconv log files of all of the nodes, using Byobu and Tmux in a tiled split screen format, like this:


Also, the transcode charm installs an Apache2 webserver on each node, so you can expose the service and point a browser to any of the nodes, where you can find the input, output, and intermediary data files, as well as the logs and DONE flags.



Once the job completes, I can simply click on the output file, Code_Rush.mp4_1280x720_x264_aac.mp4, and see that it's now perfectly viewable in the browser!


In case you're curious, I have verified the same charm with a couple of other OGG, AVI, MPEG, and MOV input files, too.


Beyond transcoding the format and codecs, I have also added configuration support within the charm itself to scale the video frame size, too.  This is useful to take a larger video, and scale it down to a more appropriate size, perhaps for a phone or tablet.  Again, this resource intensive procedure perfectly benefits from additional compute units.


File format, audio/video codec, and frame size changes are hardly the extent of video transcoding workloads.  There are hundreds of options and thousands of combinations, as the manpages of avconv and mencoder attest.  All of my scripts and configurations are free software, open source.  Your contributions and extensions are certainly welcome!

In the mean time, I hope you'll take a look at this charm and consider using it, if you have the need to scale up your own video transcoding ;-)

Cheers,
Dustin

Read more
Dustin Kirkland

It's that simple.
It was about 4pm on Friday afternoon, when I had just about wrapped up everything I absolutely needed to do for the day, and I decided to kick back and have a little fun with the remainder of my work day.

 It's now 4:37pm on Friday, and I'm now done.

Done with what?  The Yo charm, of course!

The Internet has been abuzz this week about how the Yo app received a whopping $1 million in venture funding.  (Forbes notes that this is a pretty surefire indication that there's another internet bubble about to burst...)

It's little more than the first program any kid writes -- hello world!

Subsequently I realized that we don't really have a "hello world" charm.  And so here it is, yo.

$ juju deploy yo

Deploying a webpage that says "Yo" is hardly the point, of course.  Rather, this is a fantastic way to see the absolute simplest form of a Juju charm.  Grab the source, and go explore it yo-self!

$ charm get yo
$ tree yo
├── config.yaml
├── copyright
├── hooks
│   ├── config-changed
│   ├── install
│   ├── start
│   ├── stop
│   ├── upgrade-charm
│   └── website-relation-joined
├── icon.svg
├── metadata.yaml
└── README.md
1 directory, 11 files



  • The config.yaml lets you set and dynamically change the service itself (the color and size of the font that renders "Yo").
  • The copyright is simply boilerplate GPLv3
  • The icon.svg is just a vector graphics "Yo."
  • The metadata.yaml explains what this charm is, and how it can relate to other charms
  • The README.md is a simple getting-started document
  • And the hooks...
    • config-changed is the script that runs when you change the configuration -- basically, it uses sed to inline edit the index.html Yo webpage
    • install simply installs apache2 and overwrites /var/www/index.html
    • start and stop simply starts and stops the apache2 service
    • upgrade-charm is currently a no-op
    • website-relation-joined sets and exports the hostname and port of this system
The website relation is very important here...  Declaring and defining this relation instantly lets me relate this charm with dozens of other services.  As you can see in the screenshot at the top of this post, I was able to easily relate the varnish website accelerator in front of the Yo charm.
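
To make that anatomy concrete, a trivial install hook like this one boils down to roughly the following (a sketch under my own assumptions, not necessarily the charm's exact source):

#!/bin/sh
# Install apache2 and replace the default page with "Yo"
apt-get install -qy apache2
echo "<html><body>Yo</body></html>" > /var/www/index.html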

Hopefully this simple little example might help you examine the anatomy of a charm for the first time, and perhaps write your own first charm!

Cheers,

Dustin

Read more
Dustin Kirkland

It's hard for me to believe that I have sat on this draft blog post for almost 6 years.  But I'm stuck on a plane this evening, inspired by Elon Musk and Tesla's (cleverly titled) announcement, "All Our Patents Are Belong To You."  Musk writes:

Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.
When I get home, I'm going to take down a plaque that has proudly hung in my own home office for nearly 10 years now.  In 2004, I was named an IBM Master Inventor, recognizing sustained contributions to IBM's patent portfolio.

Musk continues:
When I started out with my first company, Zip2, I thought patents were a good thing and worked hard to obtain them. And maybe they were good long ago, but too often these days they serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors. After Zip2, when I realized that receiving a patent really just meant that you bought a lottery ticket to a lawsuit, I avoided them whenever possible.
And I feel the exact same way!  When I was an impressionable newly hired engineer at IBM, I thought patents were wonderful expressions of my own creativity.  IBM rewarded me for the work, and recognized them as important contributions to my young career.  Remember, in 2003, IBM was defending the Linux world against evil SCO.  (Confession: I think I read Groklaw every single day.)

Yeah, I filed somewhere around 75 patents in about 4 years, 47 of which have been granted by the USPTO to date.

I'm actually really, really proud of a couple of them.  I was the lead inventor on a couple of early patents defining the invention you might know today as Swype (Android) or Shapewriter (iPhone) on your mobile devices.  In 2003, I called it QWERsive, as it was basically applying "cursive handwriting" to a "qwerty keyboard."  Along with one of my co-inventors, we actually presented a paper at the 27th UNICODE conference in Berlin in 2005, and IBM sold the patent to Lenovo a year later.  (To my knowledge, thankfully that patent has never been enforced, as I use Swype every single day.)

QWERsive

But that enthusiasm evaporated very quickly between 2005 and 2007, as I reviewed thousands of invention disclosures by my IBM colleagues, and hundreds of software patents by IBM competitors in the industry.

I spent most of 2005 working onsite at Red Hat in Westford, MA, and came to appreciate how much more efficiently innovation happened in a totally open source world, free of invention disclosures, black out periods, gag orders, and software patents.  I met open source activists in the free software community, such as Jon maddog Hall, who explained the wake of destruction behind, and the impending doom ahead, in a world full of software patents.

Finally, in 2008, I joined an amazing little free software company called Canonical, which was far too busy releasing Ubuntu every 6 months on time, and building an amazing open source software ecosystem, to fart around with software patents.  To my delight, our founder, Mark Shuttleworth, continues to share the same enlightened view, as he states in this TechCrunch interview (2012):
“People have become confused,” Shuttleworth lamented, “and think that a patent is incentive to create at all.” No one invents just to get a patent, though — people invent in order to solve problems. According to him, patents should incentivize disclosure. Software is not something you can really keep secret, and as such Shuttleworth’s determination is that “society is not benefited by software patents at all.”

Software patents, he said, are a bad deal for society. The remedy is to shorten the duration of patents, and reduce the areas people are allowed to patent. “We’re entering a third world war of patents,” Shuttleworth said emphatically. “You can’t do anything without tripping over a patent!” One cannot possibly check all possible patents for your invention, and the patent arms race is not about creation at all.
And while I'm still really proud of some of my ideas today, I'm ever so ashamed that they're patented.

If I could do what Elon Musk did with Tesla's patent portfolio, you have my word, I absolutely would.  However, while my name is listed as the "inventor" on four dozen patents, all of them are "assigned" to IBM (or Lenovo).  That is to say, they're not mine to give, or open up.

What I can do, is speak up, and formally apologize.  I'm sorry I filed software patents.  A lot of them.  I have no intention on ever doing so again.  The system desperately needs a complete overhaul.  Both the technology and business worlds are healthier, better, more innovative environment without software patents.

I do take some consolation that IBM seems to be "one of the good guys", in so much as our modern day IBM has not been as litigious as others, and hasn't, to my knowledge, used any of the patents for which I'm responsible in an offensive manner.

No longer hanging on my wall.  Tucked away in a box in the attic.
But there are certainly those that do.  Patent trolls.

Another former employer of mine, Gazzang was acquired earlier this month (June 3rd) by Cloudera -- a super sharp, up-and-coming big data open source company with very deep pockets and tremendous market potential.  Want to guess what happened 3 days later?  A super shady patent infringement lawsuit is filed, of course!
Protegrity Corp v. Gazzang, Inc.
Complaint for Patent Infringement
Civil Action No. 3:14-cv-00825; no judge yet assigned. Filed on June 6, 2014 in the U.S. District Court for the District of Connecticut.
Patents in case: 7,305,707, “Method for intrusion detection in a database system” by Mattsson. Prosecuted by Neuner, George W.; Cohen, Steven M.; Edwards Angell Palmer & Dodge LLP. Includes 22 claims (2 indep.). Was application 11/510,185. Granted 12/4/2007.
Yuck.  And the reality is that happens every single day, and in places where the stakes are much, much higher.  See: Apple v. Google, for instance.

Musk concludes his post:
Technology leadership is not defined by patents, which history has repeatedly shown to be small protection indeed against a determined competitor, but rather by the ability of a company to attract and motivate the world’s most talented engineers. We believe that applying the open source philosophy to our patents will strengthen rather than diminish Tesla’s position in this regard.
What a brave, bold, ballsy, responsible assertion!

I've never been more excited to see someone back up their own rhetoric against software patents, with such a substantial, palpable, tangible assertion.  Kudos, Elon.

Moreover, I've also never been more interested in buying a Tesla.   Coincidence?

Maybe it'll run an open source operating system and apps, too.  Do that, and I'm sold.

:-Dustin

Read more
Dustin Kirkland


It was September of 2009.  I answered a couple of gimme trivia questions and dropped my business card into a hat at a Linux conference in Portland, Oregon.  A few hours later, I received an email...I had just "won" a developer edition HTC Dream -- the Android G1.  I was quite anxious to have a hardware platform where I could experiment with Android.  I had, of course, already downloaded the SDK, compiled Android from scratch, and fiddled with it in an emulator.  But that experience fell far short of Android running on real hardware.  Until the G1.  The G1 was the first device to truly showcase the power and potential of the Android operating system.

And with that context, we are delighted to introduce the Orange Box!


The Orange Box


Conceived by Canonical and custom built by TranquilPC, the Orange Box is a 10-node cluster computer that fits in a suitcase.

Ubuntu, MAAS, Juju, Landscape, OpenStack, Hadoop, CloudFoundry, and more!

The Orange Box provides a spectacular development platform, showcasing in mere minutes the power of hardware provisioning and service orchestration with Ubuntu, MAAS, Juju, and Landscape.  OpenStack, Hadoop, CloudFoundry, and hundreds of other workloads deploy in minutes, to real hardware -- not just instances in AWS!  It also makes one hell of a Steam server -- there's a charm for that ;-)


OpenStack deployed by Juju, takes merely 6 minutes on an Orange Box

Most developers here certainly recognize the term "SDK", or "Software Development Kit"...  You can think of the Orange Box as an "HDK", or "Hardware Development Kit".  Pair an Orange Box with MAAS and Juju, and you have yourself a compact cloud.  Or a portable big data number cruncher.  Or a lightweight cluster computer.


The underside of an Orange Box, with its cover off


Want to get your hands on one?

Drop us a line, and we'd be delighted to hand-deliver an Orange Box to your office, and conduct 2 full days of technical training, covering MAAS, Juju, Landscape, and OpenStack.  The box is yours for 2 weeks, as you experiment with the industry leading Ubuntu ecosystem of cloud technologies at your own pace and with your own workloads.  We'll show back up, a couple of weeks later, to review what you learned and discuss scaling these tools up, into your own data center, on your own enterprise hardware.  (And if you want your very own Orange Box to keep, you can order one from our friends at TranquilPC.)


Manufacturers of the Orange Box

Gear head like me?  Interested in the technical specs?


Remember those posts late last year about Intel NUCs?  Someone took notice, and we set out to build this ;-)


Each Orange Box chassis contains:
  • 10x Intel NUCs
  • All 10x Intel NUCs contain
    • Intel HD Graphics 4000 GPU
    • 16GB of DDR3 RAM
    • 120GB SSD root disk
    • Intel Gigabit ethernet
  • D-Link DGS-1100-16 managed gigabit switch with 802.1q VLAN support
    • All 10 nodes are internally connected to this gigabit switch
  • 100-240V AC/DC power supply
    • Adapter supplied for US, UK, and EU plug types
    • 19V DC power supplied to each NUC
    • 5V DC power supplied to internal network switch


Intel NUC D53427RKE board

That's basically an Amazon EC2 m3.xlarge ;-)

The first node, node0, additionally contains:
  • A 2TB Western Digital HDD, preloaded with a full Ubuntu archive mirror
  • USB and HDMI ports are wired and accessible from the rear of the box

Most planes fly in clouds...this cloud flies in planes!


In aggregate, this micro cluster effectively fields 40 cores, 160GB of RAM, 1.2TB of solid state storage, and is connected over an internal gigabit network fabric.  A single fan quietly cools the power supply, while all of the nodes are passively cooled by aluminum heat sinks spanning each side of the chassis. All in a chassis the size of a tower PC!

It fits in a suitcase, and can travel anywhere you go.


Pelican iM2875 Storm Case

How are we using them at Canonical?

If you're here at the OpenStack Summit in Atlanta, GA, you'll see at least a dozen Orange Boxes, in our booth, on stage during Mark Shuttleworth's keynote, and in our breakout conference rooms.


Canonical sales engineer, Ameet Paranjape,
demonstrating OpenStack on the Orange Box in the Ubuntu booth
at the OpenStack Summit in Atlanta, GA
We are also launching an update to our OpenStack Jumpstart program, where we'll deliver an Orange Box and 2 full days of training to your team, and leave you the box while you experiment with OpenStack, MAAS, Juju, Hadoop, and more for 2 weeks.  Without disrupting your core network or production data center workloads, prototype your OpenStack experience within a private sandbox environment.  You can experiment with various storage alternatives, practice scaling services, and destroy and rebuild the environment repeatedly.  Safe.  Risk free.


This is Cloud, for the Free Man.

:-Dustin

Read more
Dustin Kirkland


Upon learning about the Heartbleed vulnerability in OpenSSL, my first thoughts were pretty desperate.  I basically lost all faith in humanity's ability to write secure software.  It's really that bad.

I spent the next couple of hours drowning in the sea of passwords and certificates I would personally need to change...ugh :-/

As the hangover of that sobering reality arrived, I then started thinking about the various systems over the years that I've designed, implemented, or was otherwise responsible for, and how Heartbleed affected those services.  Another throbbing headache set in.

I patched DivItUp.com within minutes of Ubuntu releasing an updated OpenSSL package, and re-keyed the SSL certificate as soon as GoDaddy declared that it was safe for re-keying.

Likewise, the Ubuntu entropy service was patched and re-keyed, along with all Ubuntu-related https services, by Canonical IT.  I pushed a new package of the pollinate client with updated certificate changes to Ubuntu 14.04 LTS (trusty) the same day.

That said, I did enjoy a bit of measured satisfaction, in one controversial design decision that I made in January 2012, when creating Gazzang's zTrustee remote key management system.

All default network communications, between zTrustee clients and servers, are encrypted twice.  The outer transport layer network traffic, like any https service, is encrypted using OpenSSL.  But the inner payloads are also signed and encrypted using GnuPG.
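
In shell terms, the idea looks something like this hypothetical sketch (keyserver.example.com, server-key, and the file names are placeholders, not zTrustee's actual interface):

# Sign and encrypt the payload with GnuPG, then carry it over TLS
gpg --sign --encrypt -r server-key -o payload.gpg payload.json
curl --cacert server.pem --data-binary @payload.gpg https://keyserver.example.com/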

Hundreds of times, zTrustee and I were questioned or criticized about that design -- by customers, prospects, partners, and probably competitors.

In fact, at one time, there was pressure from a particular customer/partner/prospect to disable the inner GPG encryption entirely, and have zTrustee rely solely on the transport layer OpenSSL, for performance reasons.  Try as I might, I eventually lost that fight, and we added the "feature" (as a non-default option).  That someone might have some re-keying to do...

But even in the face of the Internet-melting Heartbleed vulnerability, I'm absolutely delighted that the inner payloads of zTrustee communications are still protected by GnuPG asymmetric encryption and are NOT vulnerable to Heartbleed style snooping.

In fact, these payloads are some of the very encryption keys that guard YOUR health care and financial data stored in public and private clouds around the world by Global 2000 companies.

Truth be told, the insurance that zTrustee bought against crypto library vulnerabilities, by using GnuPG and OpenSSL in combination, was really the secondary objective.

The primary objective was actually to leverage asymmetric encryption, to both sign AND encrypt all payloads, in order to cryptographically authenticate zTrustee clients, ensure payload integrity, and enforce key revocations.  We technically could have used OpenSSL for both layers and even realized a few performance benefits -- OpenSSL is faster than GnuPG in our experience, and can leverage crypto accelerator hardware more easily.  But I insisted that the combination of GPG over SSL would buy us protection against vulnerabilities in either protocol, and that was worth any performance cost in a key management product like zTrustee.

In retrospect, this makes me wonder why diverse, backup, redundant encryption, isn't more prevalent in the design of security systems...

Every elevator you have ever used has redundant safety mechanisms.  Your car has both seat belts and air bags.  Your friendly cashier will double bag your groceries if you ask.  And I bet you've tied your shoes with a double knot before.

Your servers have redundant power supplies.  Your storage arrays have redundant hard drives.  You might even have two monitors.  You might be carrying a laptop, a tablet, and a smart phone.

Moreover, important services on the Internet are often highly available, redundant, fault tolerant or distributed by design.

But the underpinnings of the privacy and integrity of the very Internet itself are usually protected only once, with transport layer encryption of the traffic in motion.

At this point, can we afford the performance impact of additional layers of security?  Or, rather, at this point, can we afford not to use all available protection?

Dustin

p.s. I use both dm-crypt and eCryptFS on my Ubuntu laptop ;-)

Read more
Dustin Kirkland





This article is cross-posted on Docker's blog as well.

There is a design pattern, occasionally found in nature, whereby some of the most elegant and impressive solutions seem, in retrospect, so intuitive.



For me, Docker is just that sort of game changing, hyper-innovative technology, that, at its core,  somehow seems straightforward, beautiful, and obvious.



Linux containers, repositories of popular base images, snapshots using modern copy-on-write filesystem features.  Brilliant, yet so simple.  Docker.io for the win!


I clearly recall, nine long months ago, being intrigued by a fervor of HackerNews excitement pulsing around the nascent Docker technology.  I followed a set of instructions on a very well designed and tastefully manicured web page, in order to launch my first Docker container.  Something like: start with Ubuntu 13.04, downgrade the kernel, reboot, add an out-of-band package repository, install an oddly named package, import some images, perhaps debug or ignore some errors, and then launch.  In a few moments, I could clearly see the beginnings of a brave new world of lightning fast, cleanly managed, incrementally saved, highly dense operating system containers.

Ubuntu inside of Ubuntu, Inception style.  So.  Much.  Potential.



Fast forward to today -- April 18, 2014 -- and the combination of Docker and Ubuntu 14.04 LTS has raised the bar, introducing a new echelon of usability and convenience, coupled with the trust and track record of enterprise grade Long Term Support from Canonical and the Ubuntu community.
Big thanks, by the way, to Paul Tagliamonte, upstream Debian packager of Docker.io, as well as all of the early testers and users of Docker during the Ubuntu development cycle.
Docker is now officially in Ubuntu.  That makes Ubuntu 14.04 LTS the first enterprise grade Linux distribution to ship with Docker natively packaged, continuously tested, and instantly installable.  Millions of Ubuntu servers are now never more than three commands away from launching or managing Linux container sandboxes, thanks to Docker.


sudo apt-get install docker.io
sudo docker.io pull ubuntu
sudo docker.io run -i -t ubuntu /bin/bash


And after that last command, Ubuntu is now running within Docker, inside of a Linux container.
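
From another terminal, you can confirm that container is running with, for instance:

$ sudo docker.io ps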

Brilliant.

Simple.

Elegant.

User friendly.

Just the way we've been doing things in Ubuntu for nearly a decade. Thanks to our friends at Docker.io!


Cheers,
:-Dustin

Read more
Dustin Kirkland



In about an hour, I have the distinct honor to address a room full of federal sector security researchers and scientists at the US Department of Energy's Oak Ridge National Labs, within the Cyber and Information Security Research Conference.

I'm delighted to share with you the slide deck I have prepared for this presentation.  You can download a PDF here.

To a great extent, I have simply reformatted the excellent Ubuntu Security Features wiki page our esteemed Ubuntu Security Team maintains, into a format by which I can deliver as a presentation.

Hopefully you'll learn something!  I certainly did, as I researched and built this presentation ;-)
On a related security note, it's probably worth mentioning that Canonical's IS team have updated all SSL services with patched OpenSSL from the Ubuntu security archive, and have restarted all relevant services (using Landscape, for the win), against the Heartbleed vulnerability. I will release an updated pollinate package in a few minutes, to ship the new public key for entropy.ubuntu.com.



Stay safe,
Dustin

Read more
Dustin Kirkland

and bought 3 more with the i5-3427u CPU!



A couple of weeks ago, I waxed glowingly about Ubuntu running on a handful of Intel NUCs that I picked up on Amazon, replacing some aging PCs serving various purposes around the house.  I have since returned all three of those...and upgraded to the i5 version!!!  Read on to find out why...
Whenever I publish an article here, the Blogger/G+ integration immediately posts a link to my G+ feed.  In that thread, Mark Shuttleworth asked if these NUCs supported IPMI or a similar technology, such that they could be enabled in MAAS.  I responded that, sadly, no, they only support tried-and-trusty-but-dumb-old Wake-on-LAN.

Happily, an old friend, fellow homebrewer, and new Canonicaler, Ryan Harper, noted that the i5-3427u version of the NUC (performance specs here) actually supports Intel AMT, which is similar to IPMI.  Actually, it's an implementation of WBEM, which itself is fundamentally an implementation of the CIM standard.

That's a healthy dose of alphabet soup for you: MAAS, NUC, AMT, IPMI, WBEM, CIM.  What does all of this mean?

Let's do a quick round of introductions for the uninitiated!
  • NUC - Intel's Next Unit of Computing.  It's a palm-sized computer, probably intended to be a desktop, but it actually functions quite well as a Linux server too.  Drawing about 10W, it has roughly the same power as an AWS m1.xlarge, and costs about as much as 45 days of an m1.xlarge's EC2 bill.
  •  MAAS - Metal as a Service.  Installing Ubuntu servers (or desktops, for that matter), one by one, with a CD/DVD/USB-key is so 2004.  MAAS is your PXE/DHCP/TFTP/DNS (shit, more alphabet soup...) solution, all-in-one, ready to install Ubuntu onto lots of systems at scale!  Oh, and good news...  Juju supports MAAS as one of its environments, which is cool, in that you can deploy any charmed Juju workload to bare metal, in addition to AWS and OpenStack clouds.
  • AMT - Intel's Active Management Technology.  This is a feature found on some Intel platforms (specifically, those whose CPU and motherboard support vPro technology), which enables remote management of the system.  Specifically, if you can authenticate successfully to the system, you can retrieve detailed information about the hardware, power cycle it on and off, and modify the boot sequence.  These are the essential functions that MAAS requires to support a system.
  • IPMI - Intelligent Platform Management Interface.  Also pioneered by Intel, this is a more server-focused remote management interface, providing power on/off and other capabilities over the network.
  • WBEM - Web Based Enterprise Management.  Remote system management technology available through a web browser, based on some internet standards, including CIM.
  • CIM - Common Information Model.  An open standard that defines how systems in an IT environment are represented and managed.  Does that sound meta to you?  Well, yes, yes it is.
Okay, we have our vocabulary...now what?

So I actually returned all 3 of my Intel NUCs, which had the i3 processor, in favor of the more powerful (and slightly more expensive) i5 versions.  Note that I specifically bought the i5 Ivy Bridge versions, rather than the newer i5 Haswell, because only the Ivy Bridge actually supports AMT (for reasons that I cannot explain).  In fact, in comparison to Haswell, the Ivy Bridge systems:
  1. have AMT
  2. are less expensive
  3. have a higher maximum clock speed
  4. support a higher maximum memory
The only advantages I can see in the newer Haswells are a slightly lower energy footprint and a slightly better video processor.

When 3 of my shiny new NUCs arrived, I was quite excited to try out this fancy new AMT feature.  In fact, I had already enabled it and experimented with it on a couple of my development i7 Thinkpads, so I more or less knew what to expect.

At this point, I split this post in two.  You're welcome to read on, to learn what you need to know about Intel AMT + Ubuntu + the i5-3427u NUC...

:-Dustin

Read more
Dustin Kirkland


A couple of weeks ago, I waxed glowingly about Ubuntu running on a handful of Intel NUCs that I picked up on Amazon, replacing some aging PCs serving various purposes around the house.  I have since returned all three of those, and upgraded to the i5-3427u version, since it supports Intel AMT.  Why would I do that?  Read on...
When my shiny new NUCs arrived, I was quite excited to try out this fancy new AMT feature.  In fact, I had already enabled it and experimented with it on a couple of my development i7 Thinkpads, so I more or less knew what to expect.

But what followed was 6 straight hours of complete and utter frustration :-(  Like slam-your-fist-into-the-keyboard and shout-obscenities-into-Cheese frustration.
Actually, on that last point, I find it useful, when I'm mad, to open up Cheese (the webcam app) on my desktop and get visibly angry.  Once I realize how dumb I look when I'm angry, it's a bit easier to stop being angry.  Seriously, try it sometime.
Okay, so I posted a couple of support requests on Intel's community forums.

Basically, I found it nearly impossible (maybe a 1-in-100 chance) to actually get into the AMT configuration menu using the required Ctrl-P.  And in the 2 or 3 times I did get in there, the default password, "admin", did not work.

After putting the kids to bed, downing a few pints of homebrewed beer, and attempting sleep (with a 2-week-old in the house), I lay in bed, awake in the middle of the night and it crossed my mind that...
No, no.  No way.  That couldn't be it.  Surely not.  That's really, really dumb.  Is it possible that the NUC's BIOS...  Nah.  Maybe, though.  It's worth a try at this point?  Maybe, just maybe, the NumLock key is enabled at boot???  It can't be.  The NumLock key is effin' useless, and almost as dumb as its braindead cousin, the CapsLock key.  OMFG!!!
Yep, that was it.  Unbelievable.  The system boots with the NumLock key toggled on.  My keyboard doesn't have an LED indicator that tells me such inane nonsense is the case.  And the BIOS doesn't expose a setting to toggle this behavior.  The "P" key is one of the keys that is NumLocked to "*".


So there must be some incredibly unlikely race condition, which I could win maybe 1 in 100 times, where pressing Ctrl-P frantically enough actually sneaks me into the AMT configuration.  Seriously, Intel peeps, please make this an F-key, like the rest of the BIOS and early boot options...

And once I was in there, the default password, "admin", includes two more keys that are NumLocked.  For security reasons, the password looks like "*****" no matter what I'm typing.  So when I thought I was typing "admin", I was actually typing "ad05n".  And of course, there's no scratch pad where I can test my keyboard and see that this is the case.  In fact, I'm not the only person hitting similar issues.  It seems that most people using keyboards other than US-English are quite confused when they type "admin" over and over and over again, to their frustration.

Okay, rant over.  I posted my solution back to my own questions on the forum.  And finally started playing with AMT!

The synopsis: AMT is really, really impressive!

First, you need to enter the BIOS and ensure that AMT is enabled.  Then, you need to do whatever it takes to enter Intel's MEBx interface, using Ctrl-P (NumLock notwithstanding).  You'll be prompted for a password, which on your first login should be "admin" (NumLock notwithstanding).  Then you'll need to choose your own strong password.  Once in there, you'll need to enable a couple of settings, including networking/DHCP auto setup.  You can, at your option, also install some TLS certificates to secure your communications with your device.

AMT has a very simple, intuitive web interface.  Here is a comprehensive set of screenshots of all of the individual pages.

Once AMT is enabled on the target system, point a browser to port 16992, and click "Log On..."
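For example, if your NUC picked up the address 10.0.0.14 (as mine did below), that's simply:

http://10.0.0.14:16992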

The username is always "admin".  You'll set this password in the MEBx interface, using Ctrl-P just after BIOS post.

Here's the basic system status/overview.

The System Information page contains basic information about the system itself, including some of its capabilities.

The processor information page gives you the low down on your CPU.  Search ark.intel.com for your Intel CPU type to see all of its capabilities.

Check your memory capacity, type, speed, etc.

And your disk type, size, and serial number.

NUCs don't have battery information, but my Thinkpad does.

An event log has some interesting early boot and debug information here.

Arguably the most useful page, here you can power a system on, off, or hard reboot it.

If you have wireless capability, you can choose whether you want that enabled or disabled while the system is off, suspended, or hibernated.

Here you can configure the network settings.  Unlike a BMC (Baseboard Management Controller) on most server-class hardware, which has its own dedicated interface, Intel AMT actually shares the network interface with the Operating System.

AMT actually supports IPv6 networking as well, though I haven't played with it yet.

Configure the hostname and Dynamic DNS here.

You can set up independent user accounts, if necessary.

And with a BIOS update, you can actually use Intel AMT over a wireless connection (if you have an Intel wireless card).
So this pointy/clicky web interface is nice, but it's not terribly scriptable (without some nasty screen-scraping).  What about the command line interface?

The amttool command (provided by the amtterm package in Ubuntu) offers a nice command line interface to some of the functionality exposed by AMT.  You need to export an environment variable, AMT_PASSWORD, and then you can query the system remotely.
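First, export the password (this value is just a placeholder for whatever you set in MEBx):

export AMT_PASSWORD='the-password-you-set-in-mebx'

Then you can get some remote information about the system: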

kirkland@x230:~⟫ amttool 10.0.0.14 info
### AMT info on machine '10.0.0.14' ###
AMT version: 7.1.20
Hostname: nuc1.
Powerstate: S0
Remote Control Capabilities:
IanaOemNumber 0
OemDefinedCapabilities IDER SOL BiosSetup BiosPause
SpecialCommandsSupported PXE-boot HD-boot cd-boot
SystemCapabilitiesSupported powercycle powerdown powerup reset
SystemFirmwareCapabilities f800

You can also retrieve the networking information:

kirkland@x230:~⟫ amttool 10.0.0.14 netinfo
Network Interface 0:
DhcpEnabled true
HardwareAddressDescription Wired0
InterfaceMode SHARED_MAC_ADDRESS
LinkPolicy 31
MACAddress 00-aa-bb-cc-dd-ee
DefaultGatewayAddress 10.0.0.1
LocalAddress 10.0.0.14
PrimaryDnsAddress 10.0.0.1
SecondaryDnsAddress 0.0.0.0
SubnetMask 255.255.255.0
Network Interface 1:
DhcpEnabled true
HardwareAddressDescription Wireless1
InterfaceMode SHARED_MAC_ADDRESS
LinkPolicy 0
MACAddress ee-ff-aa-bb-cc-dd
DefaultGatewayAddress 0.0.0.0
LocalAddress 0.0.0.0
PrimaryDnsAddress 0.0.0.0
SecondaryDnsAddress 0.0.0.0
SubnetMask 0.0.0.0

Far more handy than WoL alone, you can power up, power down, and power cycle the system.

kirkland@x230:~⟫ amttool 10.0.0.14 powerdown
host x220., powerdown [y/N] ? y
execute: powerdown
result: pt_status: success

kirkland@x230:~⟫ amttool 10.0.0.14 powerup
host x220., powerup [y/N] ? y
execute: powerup
result: pt_status: success

kirkland@x230:~⟫ amttool 10.0.0.14 powercycle
host x220., powercycle [y/N] ? y
execute: powercycle
result: pt_status: success

I was a little disappointed that amttool's info command didn't provide nearly as much information as the web interface.  However, I did find a fork of Gerd Hoffmann's original Perl script on SourceForge here.  I don't know the upstream-ability of this code, but it worked very well for me, and I'm considering sponsoring/merging it into Ubuntu for 14.04.  Anyone have further experience with these enhancements?

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data BIOS
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'BIOS' (1 item):
(data struct.ver. 1.0)
Vendor: 'Intel Corp.'
Version: 'RKPPT10H.86A.0028.2013.1016.1429'
Release date: '10/16/2013'
BIOS characteristics: 'PCI' 'BIOS upgradeable' 'BIOS shadowing
allowed' 'Boot from CD' 'Selectable boot' 'EDD spec' 'int13h 5.25 in
1.2 mb floppy' 'int13h 3.5 in 720 kb floppy' 'int13h 3.5 in 2.88 mb
floppy' 'int5h print screen services' 'int14h serial services'
'int17h printer services'

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data ComputerSystem
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'ComputerSystem' (1 item):
(data struct.ver. 1.0)
Manufacturer: ' '
Product: ' '
Version: ' '
Serial numb.: ' '
UUID: 7ae34e30-44ab-41b7-988f-d98c74ab383d

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data Baseboard
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'Baseboard' (1 item):
(data struct.ver. 1.0)
Manufacturer: 'Intel Corporation'
Product: 'D53427RKE'
Version: 'G87971-403'
Serial numb.: '27XC63723G4'
Asset tag: 'To be filled by O.E.M.'
Replaceable: yes

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data Processor
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'Processor' (1 item):
(data struct.ver. 1.0)
ID: 0x4529f9eaac0f
Max Socket Speed: 2800 MHz
Current Speed: 1800 MHz
Processor Status: Enabled
Processor Type: Central
Socket Populated: yes
Processor family: 'Intel(R) Core(TM) i5 processor'
Upgrade Information: [0x22]
Socket Designation: 'CPU 1'
Manufacturer: 'Intel(R) Corporation'
Version: 'Intel(R) Core(TM) i5-3427U CPU @ 1.80GHz'

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data MemoryModule
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'MemoryModule' (2 items):
(* No memory device in the socket *)
(data struct.ver. 1.0)
Size: 8192 Mb
Form Factor: 'SODIMM'
Memory Type: 'DDR3'
Memory Type Details: 'Synchronous'
Speed: 1333 MHz
Manufacturer: '029E'
Serial numb.: '123456789'
Asset Tag: '9876543210'
Part Number: 'GE86sTBF5emdppj '

kirkland@x230:/tmp⟫ ./amttool 10.0.0.37 hwasset data VproVerificationTable
## '10.0.0.37' :: AMT Hardware Asset
Data for the asset 'VproVerificationTable' (1 item):
(data struct.ver. 1.0)
CPU: VMX=Enabled SMX=Enabled LT/TXT=Enabled VT-x=Enabled
MCH: PCI Bus 0x00 / Dev 0x08 / Func 0x00
Dev Identification Number (DID): 0x0000
Capabilities: VT-d=NOT_Capable TXT=NOT_Capable Bit_50=Enabled
Bit_52=Enabled Bit_56=Enabled
ICH: PCI Bus 0x00 / Dev 0xf8 / Func 0x00
Dev Identification Number (DID): 0x1e56
ME: Enabled
Intel_QST_FW=NOT_Supported Intel_ASF_FW=NOT_Supported
Intel_AMT_FW=Supported Bit_13=Enabled Bit_14=Enabled Bit_15=Enabled
ME FW ver. 8.1 hotfix 40 build 1416
TPM: Disabled
TPM on board = NOT_Supported
Network Devices:
Wired NIC - PCI Bus 0x00 / Dev 0xc8 / Func 0x00 / DID 0x1502
BIOS supports setup screen for (can be editable): VT-d TXT
supports VA extensions (ACPI Op region) with maximum ver. 2.6
SPI Flash has Platform Data region reserved.

On a different note, I recently sponsored a package, wsmancli, into Ubuntu Universe for Trusty, at the request of Kent Baxley (Canonical) and Jared Dominguez (Dell), which provides the wsman command.  Jared writes more about it here in this Dell technical post.  With Kent's help, I did manage to get wsman to remotely power on a system.  I must say that it's a bit less user-friendly than the equivalent amttool functionality above...

kirkland@x230:~⟫ wsman invoke -a RequestPowerStateChange -J request.xml \
    http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService?SystemCreationClassName="CIM_ComputerSystem",SystemName="Intel(r)AMT",CreationClassName="CIM_PowerManagementService",Name="Intel(r) AMT Power Management Service" \
    --port 16992 -h 10.0.0.14 --username admin -p "ABC123abc123#" -V -v
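That -J request.xml argument supplies the XML payload describing the power state change.  I won't claim this is byte-for-byte what Kent and Jared used, but a request to CIM_PowerManagementService's RequestPowerStateChange method looks roughly like the following sketch (PowerState 2 requests power-on; element names follow the DMTF CIM schema -- treat this as an illustration, not a verified, copy-paste-ready payload):

<p:RequestPowerStateChange_INPUT
    xmlns:p="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService">
  <!-- 2 = power on; other values select off/cycle states -->
  <p:PowerState>2</p:PowerState>
  <p:ManagedElement>
    <a:Address xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
    <a:ReferenceParameters xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
        xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
      <w:ResourceURI>http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ComputerSystem</w:ResourceURI>
      <w:SelectorSet>
        <w:Selector Name="Name">ManagedSystem</w:Selector>
      </w:SelectorSet>
    </a:ReferenceParameters>
  </p:ManagedElement>
</p:RequestPowerStateChange_INPUT>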

I'm really enjoying the ability to remotely administer these systems.  And I'm really, really looking forward to the day when I can use MAAS to provision these systems!

:-Dustin

Read more
Dustin Kirkland

Last week, I posed a question on Google+, looking for suggestions on a minimal physical format, x86 machine.  I was looking for something like a Raspberry Pi (of which I already have one), but really it had to be x86.

I was aware of a few options out there, but I was very fortunately introduced to one spectacular little box...the Intel NUC!

The unboxing experience is nothing short of pure marketing genius!



The "NUC" stands for Intel's Next Unit of Computing.  It's a compact little device, that ships barebones.  You need to add DDR3 memory (up to 16GB), an mSATA hard drive (if you want to boot locally), and an mSATA WiFi card (if you want wireless networking).

The physical form factor of all models is identical:

  • 4.6" x 4.4" x 1.6"
  • 11.7cm x 11.2cm x 4.1cm

There are 3 different processor options:


And there are three different peripheral setups:

  • HDMI 1.4a (x2) + USB 2.0 (x3) + Gigabit ethernet
  • HDMI 1.4a (x1) + Thunderbolt supporting DisplayPort 1.1a (x1) + USB 2.0 (x3)
  • HDMI 1.4a (x1) + Mini DisplayPort 1.1a (x2) + USB 2.0 (x2); USB 3.0 (x1)
I ended up buying 3 of these last week, and reworked the audio/video and baby monitoring setup around the house.  I bought 2 of these (i3 + Ethernet), and 1 of these (i3 + Thunderbolt).

Quite simply, I couldn't be happier with these little devices!

I used one of these to replace the dedicated audio/video PC (an x201 Thinkpad) hooked up in my theater.  The x201 was a beefy machine, with plenty of CPU and video capability.  But it was pretty bulky, rather noisy, and drew too much power.

And the other two are Baby-buntu baby monitors, as previously blogged here, replacing a real piece-of-crap Lenovo Q100 (Atom + SiS307DV, and all the horror associated with that sick chipset).

All 3 are now running Ubuntu 13.10, spectacularly I might add!  All of the hardware cooperated perfectly.




Here are the two views that I really wanted Amazon to show me, as I was buying the device...what the inside looks like!  You can see two mSATA ports and red/black WiFi antenna leads on the left, and two DDR3 slots on the right.


On the left, you can now see a 24GB mSATA SSD, and beneath it (not visible) is an Intel Centrino Advanced-N 6235 WiFi adapter.  On the right, I have two 8GB DDR3 memory modules.

Note, to get wireless working properly I did have to:

echo "options iwlwifi 11n_disable=1" | sudo tee -a /etc/modprobe.d/iwlwifi.conf


The BIOS is really super fancy :-)  There's a mouse and everything.  I made a few minor tweaks: adjusted the boot order, assigned 512MB of memory to the display adapter, and configured it to power itself back on after any power loss.


Speaking of power, it draws about 10 watts at idle, which costs me about $11/year in electricity.
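(Back-of-the-envelope, assuming roughly $0.12/kWh: 10 W × 8,760 hours/year ≈ 88 kWh/year ≈ $10.50 -- so that estimate checks out.)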


Some of you might be interested in some rough disk IO statistics...

kirkland@living:~⟫ sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 11306 MB in 2.00 seconds = 5657.65 MB/sec
Timing buffered disk reads: 1478 MB in 3.00 seconds = 492.32 MB/sec

And the lshw output...

    description: Desktop Computer
    product: (To be filled by O.E.M.)
    width: 64 bits
    capabilities: smbios-2.7 dmi-2.7 vsyscall32
    configuration: boot=normal chassis=desktop family=To be filled by O.E.M. sku=To be filled by O.E.M. uuid=[redacted]
  *-core
       description: Motherboard
       product: D33217CK
       vendor: Intel Corporation
       physical id: 0
       version: G76541-300
       serial: [redacted]
     *-firmware
          description: BIOS
          vendor: Intel Corp.
          physical id: 0
          version: GKPPT10H.86A.0025.2012.1011.1534
          date: 10/11/2012
          size: 64KiB
          capacity: 6336KiB
          capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int14serial int17printer acpi usb biosbootspecification uefi
     *-cache:0
          width: 32 bits
          clock: 66MHz
          capabilities: storage msi pm ahci_1.0 bus_master cap_list
          configuration: driver=ahci latency=0
          resources: irq:40 ioport:f0b0(size=8) ioport:f0a0(size=4) ioport:f090(size=8) ioport:f080(size=4) ioport:f060(size=32) memory:f6906000-f69067ff
     *-serial UNCLAIMED
          description: SMBus
          product: 7 Series/C210 Series Chipset Family SMBus Controller
          vendor: Intel Corporation
          physical id: 1f.3
          bus info: pci@0000:00:1f.3
          version: 04
          width: 64 bits
          clock: 33MHz
          configuration: latency=0
          resources: memory:f6905000-f69050ff ioport:f040(size=32)
     *-scsi
          physical id: 1
          logical name: scsi0
          capabilities: emulated
        *-disk
             description: ATA Disk
             product: BP4 mSATA SSD
             physical id: 0.0.0
             bus info: scsi@0:0.0.0
             logical name: /dev/sda
             version: S8FM
             serial: [redacted]
             size: 29GiB (32GB)
             capabilities: gpt-1.00 partitioned partitioned:gpt
             configuration: ansiversion=5 guid=be0ab026-45c1-4bd5-a023-1182fe75194e sectorsize=512
           *-volume:0
                description: Windows FAT volume
                vendor: mkdosfs
                physical id: 1
                bus info: scsi@0:0.0.0,1
                logical name: /dev/sda1
                logical name: /boot/efi
                version: FAT32
                serial: 2252-bc3f
                size: 486MiB
                capacity: 486MiB
                capabilities: boot fat initialized
                configuration: FATs=2 filesystem=fat mount.fstype=vfat mount.options=rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro state=mounted
           *-volume:1
                description: EXT4 volume
                vendor: Linux
                physical id: 2
                bus info: scsi@0:0.0.0,2
                logical name: /dev/sda2
                logical name: /
                version: 1.0
                serial: [redacted]
                size: 25GiB
                capabilities: journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized
                configuration: created=2013-11-06 13:01:57 filesystem=ext4 lastmountpoint=/ modified=2013-11-12 15:38:33 mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,data=ordered mounted=2013-11-12 15:38:33 state=mounted
           *-volume:2
                description: Linux swap volume
                vendor: Linux
                physical id: 3
                bus info: scsi@0:0.0.0,3
                logical name: /dev/sda3
                version: 1
                serial: [redacted]
                size: 3994MiB
                capacity: 3994MiB
                capabilities: nofs swap initialized
                configuration: filesystem=swap pagesize=4095

It also supports: virtualization technology, S3/S4/S5 sleep states, Wake-on-LAN, and PXE boot.  Sadly, it does not support IPMI :-(

Finally, it's worth noting that I bought the model with the i3 for a specific purpose...  These three machines all have full virtualization capabilities (KVM).  Which means these little boxes, with their dual-core hyper-threaded CPUs and 16GB of RAM are about to become Nova compute nodes in my local OpenStack cluster ;-)  That will be a separate blog post ;-)

Dustin

Read more
Dustin Kirkland


Necessity is truly the mother of invention.  I was working from the Isle of Man recently, and really, really enjoyed my stay!  There's no better description for the Isle of Man than "quaint":
quaint
/kwānt/
adjective: 1. attractively unusual or old-fashioned.
"quaint country cottages"
synonyms: picturesque, charming, sweet, attractive, old-fashioned, old-world
Though that description applies to the Internet connectivity, as well :-)  Truth be told, most hotel WiFi is pretty bad.  But nestle a lovely little old hotel on a forgotten little Viking/Celtic island and you will really see the problem exacerbated.

I worked around most of my downstream issues with a couple of new extensions to the run-one project, and I'm delighted as always to share these with you in Ubuntu's package!

As a reminder, the run-one package already provides:
  • run-one COMMAND [ARGS]
    • This is a wrapper script that runs no more than one unique instance of some command with a unique set of arguments.
    • This is often useful with cronjobs, when you want no more than one copy running at a time.
  • run-this-one COMMAND [ARGS]
    • This is exactly like run-one, except that it will use pgrep and kill to find and kill any running processes owned by the user and matching the target commands and arguments.
    • Note that run-this-one will block while trying to kill matching processes, until all matching processes are dead.
    • This is often useful when you want to kill any previous copies of the process you want to run (like VPN, SSL, and SSH tunnels).
  • keep-one-running COMMAND [ARGS]
    • This command operates exactly like run-one except that it respawns the command with its arguments if it exits for any reason (zero or non-zero).
    • This is useful when you want to ensure that you always have a copy of a command or process running, in case it dies or exits for any reason.
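For instance (host names and paths here are hypothetical, just to illustrate the pattern):

run-one rsync -az ~/Pictures backup-host:/srv/pictures
run-this-one ssh -N -L 8080:localhost:80 gateway-host

The first is the sort of line you might put in a cronjob, guaranteeing overlapping runs never stack up; the second kills any stale tunnel before establishing a fresh one.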
Newly added, you can now:
  • run-one-constantly COMMAND [ARGS]
    • This is simply an alias for keep-one-running.
    • I've never liked the fact that this command started with "keep-" instead of "run-one-", from a namespace and discoverability perspective.
  • run-one-until-success COMMAND [ARGS]
    • This command operates exactly like run-one-constantly except that it respawns "COMMAND [ARGS]" until COMMAND exits successfully (ie, exits zero).
    • This is useful when downloading something, perhaps using wget --continue or rsync, over a crappy quaint hotel WiFi connection.
  • run-one-until-failure COMMAND [ARGS]
    •  This command operates exactly like run-one-constantly except that it respawns "COMMAND [ARGS]" until COMMAND exits with failure (ie, exits non-zero).
    • This is useful when you want to run something until something goes wrong.
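So, for example, to keep retrying a big download over that crappy quaint hotel WiFi until it finally completes (URL hypothetical):

run-one-until-success wget --continue http://example.com/ubuntu-13.10.iso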
I am occasionally asked about the difference between these tools and the nohup command...
  1. First, the "one" part of run-one-constantly is important, in that it uses run-one to protect you from running more than one instances of the specified command. This is handy for something like an ssh tunnel, that you only really want/need one of.
  2. Second, nohup doesn't rerun the specified command if it exits cleanly, or forcibly gets killed. nohup only ignores the hangup signal.
So you might say that the run-one tools are a bit more resilient than nohup.

You can use all of these as of Ubuntu 13.10 (Saucy), by simply:

sudo apt-get install run-one

Or, for older Ubuntu releases:

sudo apt-add-repository ppa:run-one/ppa
sudo apt-get update
sudo apt-get install run-one

I was also asked about the difference between these tools and upstart...

Upstart is Ubuntu's event driven replacement for sysvinit.  It's typically used to start daemons and other scripts, utilities, and "jobs" at boot time.  It has a really cool feature/command/option called respawn, which can be used to provide a very similar effect as run-one-constantly.  In fact, I've used respawn in several of the upstart jobs I've written for the Ubuntu server, so I'm happy to credit upstart's respawn for the idea.
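To make that comparison concrete, here's a minimal sketch of an upstart job using respawn (the job name, tunnel command, and host are hypothetical):

# /etc/init/my-tunnel.conf
description "keep an ssh tunnel alive"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec ssh -N -L 8080:localhost:80 gateway-host

With that file in place, sudo start my-tunnel launches the tunnel, and upstart relaunches it whenever it dies -- very much in the spirit of run-one-constantly.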

That said, I think the differences between upstart and run-one are certainly different enough to merit both tools, at least on my servers.

  1. An upstart job is defined by its own script-like syntax.  You can see many examples in Ubuntu's /etc/init/*.conf.  On my system the average upstart job is 25 lines long.  The run-one commands are simply prepended onto the beginning of any command line program and arguments you want to run.  You can certainly use run-one and friends inside of a script, but they're typically used in an interactive shell command line.
  2. An upstart job typically runs at boot time, or when "started" using the start command, and these jobs live in the root-writable /etc/init/.  Can a non-root user write their own upstart job, and start and stop it?  Not that I can tell (and I'm happy to be corrected here)...  Turns out I was wrong about that: per a set of recently added features to Upstart (thanks, James and Stuart, for pointing out!), non-root users can now write and run their own upstart jobs.  Still, any user on the system can launch run-one jobs, and their own command+arguments namespace is unique to them.
  3. run-one is easily usable on systems that do not have upstart available; the only hard dependency is on the flock(1) utility.
Hope that helps!


Happy running,
:-Dustin

Read more
Dustin Kirkland

tl;dr? 
From within byobu, just run:
byobu-enable-prompt

Still reading?

I've helped bring a touch of aubergine to the Ubuntu server before.  Along those lines, it has long bothered me that Ubuntu's bash package, out of the box, creates a situation where full color command prompts are almost always disabled.

Of course I carry around my own, highly customized ~/.bashrc on my desktop, but whenever I start new instances of the Ubuntu server in the cloud, without fail, I end up back at a colorless, drab command prompt, like this:


You can, however, manually override this by setting color_prompt=yes at the top of your ~/.bashrc, or your administrator can set that system-wide in /etc/bash.bashrc.  After that, your plain white prompt will show two new colors: bright green and blue.
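That is, something like:

# at the top of your ~/.bashrc (or, system-wide, in /etc/bash.bashrc)
color_prompt=yes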


That's a decent start, but there are two things I don't like about this prompt:
  1. There are 3 disparate pieces of information, but only two color distinctions:
    • a user name
    • a host name
    • a current working directory
  2. The colors themselves are
    • a little plain
    • 8-color
    • and non-communicative
Both of these problems are quite easy to solve.  Within Ubuntu, our top notch design team has invested countless hours defining a spectacular color palette and extensive guidelines on their usage.  Quoting our palette guidelines:


"Colour is an effective, powerful and instantly recognisable medium for visual communications. To convey the brand personality and brand values, there is a sophisticated colour palette. We have introduced a palette which includes both a fresh, lively orange, and a rich, mature aubergine. The use of aubergine indicates commercial involvement, while orange is a signal of community engagement. These colours are used widely in the brand communications, to convey the precise, reliable and free personality."
With this inspiration, I set out to apply these rules to a beautiful, precise Ubuntu server command prompt within Byobu.

First, I needed to do a bit of research, as I would really need a 256-color palette to accomplish anything reasonable, as the 8-color and 16-color palettes are really just atrocious.

The 256-color palette is actually reasonable.  I would have the following colors to choose from:


That's not quite how these colors are rendered on a modern Ubuntu system, but it's close enough to get started.

I then spent quite a bit of time trying to match Ubuntu color tints against this chart and narrowed down the color choices that would actually fit within the Ubuntu design team's color guidelines.


This is the color balance choice that seemed most appropriate to me:


A majority of white text, on a darker aubergine background.  In fact, if you open gnome-terminal on an Ubuntu desktop, this is exactly what you're presented with.  White text on a dark aubergine background.  But we're missing the orange, grey, and lighter purple highlights!


Those 3 distinct elements I cited above -- user, host, and directory -- are quite important now, as they map exactly to our 3 supporting colors.

Against our 256-color mapping above, I chose:
  • Username: 245 (grey)
  • Hostname: 5 (light aubergine)
  • Working directory: 5 (orange)
  • Separators: 255 (white)
And in the interest of being just a little more "precise", I actually replaced the trailing $ character with the UTF-8 symbol ❭.  This is Unicode's U+276D character, "MEDIUM RIGHT-POINTING ANGLE BRACKET ORNAMENT".  It's a very pointed, attention-grabbing character that directs your eye straight to the flashing cursor, or the command at your fingertips.
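If you're curious what such a prompt looks like under the hood, here's a rough sketch of a PS1 string along these lines (the exact line Byobu ships differs, and the 214 here is just a plausible stand-in for the orange, per the palette above):

PS1='\[\e[38;5;245m\]\u\[\e[38;5;255m\]@\[\e[38;5;5m\]\h\[\e[38;5;255m\]:\[\e[38;5;214m\]\w\[\e[0m\]❭ '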


Gnome-terminal is, by default, set to use the system's default color scheme, but you can easily change that to several other settings.  I often use the higher-contrast white-on-black or white-on-light-yellow color schemes when I'm in a very bright location, like outdoors.


I took great care in choosing those 3 colors, making sure they were readable across each of the stock schemes shipped by gnome-terminal.



I also tested it in Terminator and Konsole, where it seemed to work well enough, while xterm and putty aren't as pretty.

Currently, this functionality is easy to enable from within your Byobu environment.  If you're on the latest Byobu release (currently 5.57), which you can install from ppa:byobu/ppa, simply run the command:

byobu-enable-prompt

Of course, this prompt most certainly won't be for everyone :-)  You can easily disable the behavior at any time with:

byobu-disable-prompt

New installations of Byobu (where there is no ~/.byobu directory) will automatically see the new prompt starting in Ubuntu 13.10 (unless you've modified your $PS1 in your ~/.bashrc), but existing, upgraded Byobu users will need to run byobu-enable-prompt to add this into their environment.

As will undoubtedly be noted in the comments below, your mileage may vary on non-Ubuntu systems.  However, if /etc/issue does not start with the string "Ubuntu", byobu-enable-prompt will still provide a tri-color prompt, but employs hopefully-less-opinionated primary colors: green, light blue, and red:



If you want to run this outside of Byobu, well that's quite doable too :-)  I'll leave it as an exercise for motivated users to ferret out the one-liner you need from lp:byobu and paste into your ~/.bashrc ;-)

Cheers,
:-Dustin

Read more