Canonical Voices

Posts tagged with 'juju'

Dustin Kirkland

Earlier this month, I spoke at ContainerDays, part of the excellent DevOpsDays series of conferences -- this one in lovely Portland, Oregon.

I gave a live demo of Kubernetes running directly on bare metal.  I was running it on an 11-node Ubuntu Orange Box -- but I used the exact same tools Canonical's world class consulting team uses to deploy Kubernetes onto racks of physical machines.
You see, the ability to run Kubernetes on bare metal, behind your firewall is essential to the yin-yang duality of Cloud Native computing.  Sometimes, what you need is actually a Native Cloud.
Deploying Kubernetes into virtual machines in the cloud is fairly straightforward; there are dozens of tools that can handle that now.

But there's only one tool today that can deploy the exact same Kubernetes to AWS, Azure, GCE, as well as VMware, OpenStack, and bare metal machines.  That tool is conjure-up, which acts as a command line front end to several essential Ubuntu tools: MAAS, LXD, and Juju.
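
If you want to try the same thing yourself, the basic flow is only a couple of commands. This is a sketch based on the conjure-up documentation; the install method and spell name may vary with your release:

$ sudo snap install conjure-up --classic
$ conjure-up kubernetes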

I don't know if the presentation was recorded, but I'm happy to share with you my slides for download, and embedded here below.  There are a few screenshots within that help convey the demo.




Cheers,
Dustin

Read more
Louis

Introduction

For a while now I have been actively maintaining the sosreport Debian package, and I also help make it available on Ubuntu.

I have also had multiple requests to make sosreport easier to use in a Juju environment, so I have finally written a charm for sosreport that simplifies its usage with Juju.

Theory of operation

As you already know, sosreport is a tool that will collect information about your running environment. In the context of a Juju deployment, what we are after is the running environments of the units providing the services. So in order for the sosreport charm to be useful, it needs to be deployed on an existing unit.

The charm has two actions:

  • collect: Generate the sosreport tarball
  • cleanup: Clean up existing tarballs

You use the collect action to create the sosreport tarball on the unit where it runs, and the cleanup action to remove those tarballs once you are done.

Each action has optional parameters:

  • homedir: Home directory where the sosreport files will be copied (common to both the collect and cleanup actions)
  • options: Command line options to be passed to sosreport (collect only)
  • minfree: Minimum free disk space required to run sosreport, expressed in percent, megabytes, or gigabytes. Valid suffixes are %, M, or G (collect only)
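
Parameters are passed as key=value arguments when running the action. For example, a hypothetical invocation that limits collection to a couple of sosreport plugins and lowers the free space requirement could look like this (adjust the values to your environment):

$ juju run-action sosreport/1 collect minfree=2G options="-o mysql,juju"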

Practical example using Juju 2.0

Suppose that you are encountering problems with the mysql service being used by your MediaWiki service (yes, I know, yet one more MediaWiki example). You would have an environment similar to the following (Juju 2.0):

$ juju status
Model Controller Cloud/Region Version
default MyLocalController localhost/localhost 2.0.0

App Version Status Scale Charm Store Rev OS Notes
mediawiki unknown 1 mediawiki jujucharms 5 ubuntu 
mysql error 1 mysql jujucharms 55 ubuntu

Unit Workload Agent Machine Public address Ports Message
mediawiki/0* unknown idle 1 10.0.4.48 
mysql/0* error idle 2 10.0.4.140 hook failed: "start"

Machine State DNS Inst id Series AZ
1 started 10.0.4.48 juju-53ced1-1 trusty 
2 started 10.0.4.140 juju-53ced1-2 trusty

Relation Provides Consumes Type
cluster mysql mysql peer

Here the mysql start hook failed for some reason that we want to investigate. One solution is to ssh to the unit and try to find out. You may also be asked by a support representative to provide the data for remote analysis. This is where sosreport becomes useful.

Deploy the sosreport charm

The sosreport charm will help us collect information from the unit where the mysql service runs. In our example, the service runs on machine 2, so this is where the sosreport charm needs to be deployed:

$ juju deploy cs:~sosreport-charmers/sosreport --to=2

Once the charm is done deploying, you will have the following juju status:

$ juju status
Model Controller Cloud/Region Version
default MyLocalController localhost/localhost 2.0.0

App Version Status Scale Charm Store Rev OS Notes
mediawiki unknown 1 mediawiki jujucharms 5 ubuntu 
mysql error 1 mysql jujucharms 55 ubuntu 
sosreport active 1 sosreport jujucharms 1 ubuntu

Unit Workload Agent Machine Public address Ports Message
mediawiki/0* unknown idle 1 10.0.4.48 
mysql/0* error idle 2 10.0.4.140 hook failed: "start"
sosreport/1* active idle 2 10.0.4.140 sosreport is installed

Machine State DNS Inst id Series AZ
1 started 10.0.4.48 juju-53ced1-1 trusty 
2 started 10.0.4.140 juju-53ced1-2 trusty

Relation Provides Consumes Type
cluster mysql mysql peer

Collect the sosreport information

In order to collect the sosreport tarball, you will issue an action to the sosreport service, telling it to collect the data:

$ juju run-action sosreport/1 collect
Action queued with id: 95d405b3-9b78-468b-840f-d24df5751351

To check the progress of the action, you can use the show-action-status command:

$ juju show-action-status 95d405b3-9b78-468b-840f-d24df5751351
actions:
- id: 95d405b3-9b78-468b-840f-d24df5751351
 status: running
 unit: sosreport/1

After completion, the action will show as completed:

$ juju show-action-status 95d405b3-9b78-468b-840f-d24df5751351
actions:
- id: 95d405b3-9b78-468b-840f-d24df5751351
 status: completed
 unit: sosreport/1

Using the show-action-output command, you can see the result of the collect action:

$ juju show-action-output 95d405b3-9b78-468b-840f-d24df5751351
results:
 outcome: success
 result-map:
 message: sosreport-juju-53ced1-2-20161221163645.tar.xz and sosreport-juju-53ced1-2-20161221163645.tar.xz.md5
 available in /home/ubuntu
status: completed
timing:
 completed: 2016-12-21 16:37:06 +0000 UTC
 enqueued: 2016-12-21 16:36:40 +0000 UTC
 started: 2016-12-21 16:36:45 +0000 UTC

If we look at the mysql/0 unit's $HOME directory, we will see that the tarball is indeed present:

$ juju ssh mysql/0 "ls -l"
total 26149
-rw------- 1 root root 26687372 Dec 21 16:36 sosreport-juju-53ced1-2-20161221163645.tar.xz
-rw-r--r-- 1 root root 33 Dec 21 16:37 sosreport-juju-53ced1-2-20161221163645.tar.xz.md5
Connection to 10.0.4.140 closed.

One thing to be aware of is that, as with any environment using sosreport, the owner of the tarball and md5 file is root. This is to protect access to the unit’s configuration data contained in the tarball. In order to copy the files from the mysql/0 unit, you would first need to change their ownership:

$ juju ssh mysql/0 "sudo chown ubuntu:ubuntu sos*"
Connection to 10.0.4.140 closed.

$ juju ssh mysql/0 "ls -l"
total 26149
-rw------- 1 ubuntu ubuntu 26687372 Dec 21 16:36 sosreport-juju-53ced1-2-20161221163645.tar.xz
-rw-r--r-- 1 ubuntu ubuntu 33 Dec 21 16:37 sosreport-juju-53ced1-2-20161221163645.tar.xz.md5
Connection to 10.0.4.140 closed.

The files can be copied off the unit by using juju scp.
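
For example, using the file name from the collect output above (substitute your own):

$ juju scp mysql/0:/home/ubuntu/sosreport-juju-53ced1-2-20161221163645.tar.xz .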

Cleanup obsolete sosreport information

To clean up the tarballs that have been previously created, use the cleanup action of the charm as outlined here:

$ juju run-action sosreport/1 cleanup
Action queued with id: 3df3dcb8-0850-414e-87d5-746a52ef9b53

$ juju show-action-status 3df3dcb8-0850-414e-87d5-746a52ef9b53
actions:
- id: 3df3dcb8-0850-414e-87d5-746a52ef9b53
 status: completed
 unit: sosreport/1

$ juju show-action-output 3df3dcb8-0850-414e-87d5-746a52ef9b53
results:
 outcome: success
 result-map:
 message: Directory /home/ubuntu cleaned up
status: completed
timing:
 completed: 2016-12-21 16:49:35 +0000 UTC
 enqueued: 2016-12-21 16:49:30 +0000 UTC
 started: 2016-12-21 16:49:35 +0000 UTC

Practical example using Juju 1.25

Deploy the sosreport charm

Given the same environment with the mysql and MediaWiki services deployed, we need to deploy the sosreport charm to the machine where the mysql service is deployed:

$ juju deploy cs:~sosreport-charmers/sosreport --to=2

Once deployed, we have an environment that looks like this:

$ juju status --format=tabular
[Environment]
UPGRADE-AVAILABLE
1.25.9

[Services]
NAME STATUS EXPOSED CHARM
mediawiki unknown false cs:trusty/mediawiki-5
mysql unknown false cs:trusty/mysql-55
sosreport active false cs:~sosreport-charmers/trusty/sosreport-2

[Units]
ID WORKLOAD-STATE AGENT-STATE VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
mediawiki/0 unknown idle 1.25.6.1 1 192.168.122.246
mysql/0 unknown idle 1.25.6.1 2 3306/tcp 192.168.122.6
sosreport/0 active idle 1.25.6.1 2 192.168.122.6 sosreport is installed

[Machines]
ID STATE VERSION DNS INS-ID SERIES HARDWARE
0 started 1.25.6.1 localhost localhost zesty
1 started 1.25.6.1 192.168.122.246 caribou-local-machine-1 trusty arch=amd64
2 started 1.25.6.1 192.168.122.6 caribou-local-machine-2 trusty arch=amd64

Collect the sosreport information

With the previous version of Juju, the syntax for actions is slightly different. To run the collect action, we need to issue:

$ juju action do sosreport/0 collect

We then get the status of our action:

$ juju action status 2176fad0-9b9f-4006-88cb-4adbf6ad3da1
actions:
- id: 2176fad0-9b9f-4006-88cb-4adbf6ad3da1
 status: failed
 unit: sosreport/0

And to our surprise, the action has failed! To identify why, we can fetch the result of our action:

$ juju action fetch 2176fad0-9b9f-4006-88cb-4adbf6ad3da1
message: 'Not enough space in /home/ubuntu (minfree: 5% )'
results:
 outcome: failure
status: failed
timing:
 completed: 2016-12-22 10:32:15 +0100 CET
 enqueued: 2016-12-22 10:32:09 +0100 CET
 started: 2016-12-22 10:32:14 +0100 CET

So there is not enough space on our unit to safely run sosreport. This gives me the opportunity to talk about one of the parameters of the collect action: minfree. But first, we need to look at how much disk space is available.

$ juju ssh sosreport/0 "df -h"
Warning: Permanently added '192.168.122.6' (ECDSA) to the list of known hosts.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-root 222G 200G 11G 95% /

We see that there is about 11 GB available. While that is below the default 5% threshold, we can work around it by using the minfree parameter. Here is its description:

 * minfree : Minimum free disk space required to run sosreport, expressed in
             percent, megabytes, or gigabytes. Valid suffixes are %, M, or G
             (default: 5%)

Since we have 11 GB available, let us set minfree to 5G:

$ juju action do sosreport/0 collect minfree=5G
Action queued with id: b741aa7c-537d-4175-8af9-548b1e0e6f7b

We can now fetch the result of our command, waiting up to 100 seconds for the result:


$ juju action fetch b741aa7c-537d-4175-8af9-548b1e0e6f7b --wait=100

results:
 outcome: success
 result-map:
 message: sosreport-caribou-local-machine-1-20161222153903.tar.xz and sosreport-caribou-local-machine-1-20161222153903.tar.xz.md5
 available in /home/ubuntu
 status: completed
 timing:
 completed: 2016-12-22 15:40:01 +0100 CET
 enqueued: 2016-12-22 15:38:58 +0100 CET
 started: 2016-12-22 15:39:03 +0100 CET

Cleanup obsolete sosreport information

As with the previous example, the cleanup of old tarballs is rather simple:

$ juju action do sosreport/0 cleanup
Action queued with id: edf199cd-2a79-4605-8f00-40ec37aa25a9
$ juju action fetch edf199cd-2a79-4605-8f00-40ec37aa25a9 --wait=600
results:
 outcome: success
 result-map:
 message: Directory /home/ubuntu cleaned up
status: completed
timing:
 completed: 2016-12-22 15:47:14 +0100 CET
 enqueued: 2016-12-22 15:47:12 +0100 CET
 started: 2016-12-22 15:47:13 +0100 CET

Conclusion

This charm makes collecting information in a Juju environment much simpler. Don’t hesitate to test it, and please report any bugs you may encounter.

Read more
Nicholas Skaggs

Getting your daily dose of juju

One of the first pain points I've been attempting to help smooth out is how Juju is packaged and consumed. The Juju QA Team have put together a new daily ppa you can use, dubbed the Juju Daily ppa. It contains the latest blessed builds from CI testing. Installing this ppa and upgrading regularly allows you to stay in sync with the absolute latest version of Juju that passes our CI testing.

Naturally, this ppa is intended for those who like living on the edge, so it's not recommended for production use. If you find bugs, we'd love to hear about them!

To add the ppa, you will need to add ppa:juju/daily to your software sources.

sudo add-apt-repository ppa:juju/daily

Do be aware that adding this ppa will upgrade any version of Juju you may have installed. Also note that this ppa contains builds without published streams, so you will need to generate or acquire streams on your own. For most users, this means you should pass --upload-tools during the bootstrap process. However, you may also pass the agent-metadata-url and agent-stream as config options. See the ppa description and simplestreams documentation for more details.
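
As a minimal sketch, assuming you take the --upload-tools route (the exact bootstrap syntax depends on your Juju version and provider):

$ juju bootstrap --upload-tools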

Finally, should you wish to revert to a stable version of Juju, you can use the ppa-purge tool to remove the daily ppa and the installed version of Juju.
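
For example (ppa-purge downgrades any packages from the daily ppa back to the versions available in your remaining sources):

$ sudo apt-get install ppa-purge
$ sudo ppa-purge ppa:juju/daily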

I'd love to hear your feedback, and encourage you to give it a try.

Read more
Anthony Dillon

The Juju web resources are made up of two entities: a website jujucharms.com and an app called Juju GUI, which can be demoed at demo.jujucharms.com.

Applying Vanilla to jujucharms.com

Luckily the website was already using our old style guidelines, which we refactored and improved to become Vanilla, so I removed the guidelines link from the head and the site fell to pieces. Once I npm-installed vanilla-framework and included it in the main Sass file, things started to look up.

A few areas of the site needed to be updated, like moving the search markup outside of the nav element. This is due to header improvements in the transition from guidelines to Vanilla. Also we renamed and BEMed our inline-list component, so I updated its markup in the process. The mobile navigation was also replaced with the new non-JavaScript version from Vanilla.

To my relief, with these minor changes the site looked almost exactly as it did before. There were some padding differences, which resulted in some larger spacing between rows, but this was a purposeful update.

All in all the process of replacing guidelines with Vanilla on the website was quick and easy.

Now into the unknown…

Applying Vanilla to Juju GUI

I expected this step to be trickier as the GUI had not started life using guidelines and was using entirely bespoke CSS. So I thought: let’s install it, link the Vanilla framework and see how it looks.

To my surprise, the app stayed together, apart from some element movement and overridden input styling. We didn’t need the entire framework to be included, so I selectively included only the core modules like typography, grid, etc.

The only major difference is that Vanilla applies bottom margin to lists, which did not exist on the app before, so I applied “margin-bottom: 0” to each list component as a local override.

Once I completed these changes it looked exactly as before.

What’s the benefit

You might be thinking, as I did at the beginning of the project, “that is a lot of work to have both projects look exactly the same”. In fact, it brings a number of benefits.

Now we have consistent styling across the Juju real estates, which are tied together with one single base CSS framework. This means we have exactly the same grid, buttons, typography, padding and much more. The tech debt of keeping these in sync has been cut, and designers can now work from a single component list.

Future

We’re not finished there. As Vanilla is a bare-bones CSS framework, it also has a concept of theming. The next step will be to refactor the SCSS on both projects and identify the common components. The theme itself depends on Vanilla, so we have logical layering.

In the end

It is exciting to see how versatile Vanilla is. Whether it’s a web app or web site, Vanilla helps us keep our styles consistent. The layered inheritance gives us the flexibility to selectively include modules and extend them when required.

Read more
Louis

A little before the summer vacation, I decided that it was a good time to get acquainted with writing Juju charms.  Since I am heavily involved with the kernel crash dump tools, I thought that it would be a good start to allow Juju users to enable kernel crash dumps using a charm.  Since it means acting on existing units, a subordinate charm is the answer.

Theory of operation

Enabling kernel crash dumps on Ubuntu and Debian involves the following:

  • Installing
    • kexec-tools
    • makedumpfile
    • kdump-tools
    • crash
  • Adding the crashkernel= boot parameter
  • Enabling kdump-tools in /etc/default/kdump-tools
  • Rebooting

On Ubuntu, installing the linux-crashdump meta-package takes care of installing all of these packages.
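
For reference, here is roughly what the manual equivalent looks like on a single machine (a sketch of the steps the charm automates; file locations may vary between releases):

$ sudo apt-get install linux-crashdump
$ # set USE_KDUMP=1 in /etc/default/kdump-tools and make sure crashkernel= is on the kernel command line
$ sudo reboot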

The crashdump subordinate charm does just that: it installs the packages, enables the crashkernel= boot parameter as well as kdump-tools, and reboots the unit.

Since this charm enables a kernel-specific service, it can only be used in a context where the kernel itself is accessible.  This means that testing the charm using the local provider, which relies on LXC, will fail, since the charm needs to interact with the kernel.  One solution to that restriction is to use KVM with the local provider, as outlined in the example.

Deploying the crashdump charm

The crashdump charm being a subordinate charm, it can only be used in a context where there are already existing services running. For this example, we will deploy a simple Ubuntu service:

$ juju bootstrap
$ juju deploy ubuntu --to=kvm:0
$ juju deploy crashdump
$ juju add-relation ubuntu crashdump

This will install the required packages, rebuild the grub configuration file to use the crashkernel= value and reboot the unit.

To confirm that the charm has been deployed adequately, you can run:

$ juju ssh ubuntu/0 "kdump-config show"
Warning: Permanently added '192.168.122.227' (ECDSA) to the list of known hosts.
USE_KDUMP:        1
KDUMP_SYSCTL:     kernel.panic_on_oops=1
KDUMP_COREDIR:    /var/crash
crashkernel addr: 0x17000000
current state:    ready to kdump
kexec command:
  /sbin/kexec -p --command-line="BOOT_IMAGE=/boot/vmlinuz-3.13.0-65-generic root=UUID=1ff353c2-3fed-48fb-acc0-6d18086d030b ro console=tty1 console=ttyS0 irqpoll maxcpus=1 nousb" --initrd=/boot/initrd.img-3.13.0-65-generic /boot/vmlinuz-3.13.0-65-generic

The next time a kernel panic occurs, you should find the kernel crash dump in /var/crash of the unit that failed.
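
If you want to verify the whole chain without waiting for a real panic, the usual technique is to trigger a test crash by hand. Be warned that this really does crash the unit, so only try it on a disposable deployment:

$ juju ssh ubuntu/0 "echo c | sudo tee /proc/sysrq-trigger"

Once the unit comes back up, a dump should be waiting in /var/crash.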

Deploying from the GUI

As an alternate method for adding the crashdump subordinate charm to an existing service, you can use the Juju GUI.

In order to get the Juju GUI started, a few simple commands are needed, assuming that you do not have any environment bootstrapped:

$ juju bootstrap
$ juju deploy juju-gui
$ juju expose juju-gui
$ juju deploy ubuntu --to=kvm:0


Here are a few captures of the process, once the ubuntu service is started:

Juju environment with one service

Locate the crashdump charm

The charm is added to the environment

Add the relation between the charms

Crashdump is now enabled in your service

Do not hesitate to leave comments or questions. I’ll do my best to reply in a timely manner.

Read more
Dustin Kirkland


As you probably remember from grade school math class, primes are numbers that are only divisible by 1 and themselves.  2, 3, 5, 7, and 11 are the first 5 prime numbers, for example.

Many computer operations, such as public-key cryptography, depend entirely on prime numbers.  In fact, RSA encryption, invented in 1978, uses a modulo of a product of two very large primes for encryption and decryption.  The security of asymmetric encryption is tightly coupled with the computational difficulty of factoring large numbers.  I actually use prime numbers as the status update intervals in Byobu, in order to improve performance and distribute the update spikes.

Euclid proved, around 300 BC, that there are infinitely many prime numbers.  But the Prime Number Theorem (proven in the 19th century) says that the probability that any given number is prime is inversely proportional to its number of digits.  That means that larger prime numbers are notoriously harder to find, and it gets harder as they get bigger!
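
(For the curious: the prime number theorem says the density of primes near a number n is about 1/ln(n), so a randomly chosen d-digit number is prime with probability roughly 1/(d * ln 10), or about 1/(2.3 * d).)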
What's the largest known prime number in the world?

Well, it has 17,425,170 decimal digits!  If you wanted to print it out, size 11 font, it would take 6,543 pages -- or 14 reams of paper!

That number is actually one less than a very large power of 2: 2^57,885,161 - 1.  It was discovered by Curtis Cooper on January 25, 2013, on an Intel Core2 Duo.

Actually, each of the last 14 record largest prime numbers discovered (between 1996 and today) has been of that form, 2^P - 1.  Numbers of that form are called Mersenne Prime Numbers, named after Friar Marin Mersenne, a French priest who studied them in the 1600s.


Friar Mersenne's work continues today in the form of the Great Internet Mersenne Prime Search, and the mprime program, which has been used to find those 14 huge prime numbers since 1996.

mprime is a massively parallel, CPU-scavenging utility, much like SETI@home or the Protein Folding Project.  It runs in the background, consuming resources, working on its little piece of the problem.  mprime is open source code, and also distributed as a statically compiled binary.  And it will make a fine example of how to package a service into a Docker container, a Juju charm, and a Snappy snap.


Docker Container

First, let's build the Docker container, which will serve as our fundamental building block.  You'll first need to download the mprime tarball from here.  Extract it, and the directory structure should look a little like this (or you can browse it here):

├── license.txt
├── local.txt
├── mprime
├── prime.log
├── prime.txt
├── readme.txt
├── results.txt
├── stress.txt
├── undoc.txt
├── whatsnew.txt
└── worktodo.txt

And then create a Dockerfile that copies the files we need into the image.  Here's our example:

FROM ubuntu
MAINTAINER Dustin Kirkland email@example.com
COPY ./mprime /opt/mprime/
COPY ./license.txt /opt/mprime/
COPY ./prime.txt /opt/mprime/
COPY ./readme.txt /opt/mprime/
COPY ./stress.txt /opt/mprime/
COPY ./undoc.txt /opt/mprime/
COPY ./whatsnew.txt /opt/mprime/
CMD ["/opt/mprime/mprime", "-w/opt/mprime/"]

Now, build your Docker image with:

$ sudo docker build .
Sending build context to Docker daemon 36.02 MB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
...
Successfully built de2e817b195f

Then publish the image to Dockerhub.

$ sudo docker push kirkland/mprime

You can see that image, which I've publicly shared here: https://registry.hub.docker.com/u/kirkland/mprime/



Now you can run this image anywhere you can run Docker.

$ sudo docker run -d kirkland/mprime

And verify that it's running:

$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9233f626c85 kirkland/mprime:latest "/opt/mprime/mprime 24 seconds ago Up 23 seconds furious_pike

Juju Charm

So now, let's create a Juju Charm that uses this Docker container.  Actually, we're going to create a subordinate charm.  Subordinate services in Juju are often monitoring and logging services, things that run alongside primary services.  Something like mprime is a good example of something that could be a subordinate service, attached to one or many other services in a Juju model.

Our directory structure for the charm looks like this (or you can browse it here):

└── trusty
    └── mprime
        ├── config.yaml
        ├── copyright
        ├── hooks
        │   ├── config-changed
        │   ├── install
        │   ├── juju-info-relation-changed
        │   ├── juju-info-relation-departed
        │   ├── juju-info-relation-joined
        │   ├── start
        │   ├── stop
        │   └── upgrade-charm
        ├── icon.png
        ├── icon.svg
        ├── metadata.yaml
        ├── README.md
        └── revision

3 directories, 15 files

The three key files we should look at here are metadata.yaml, hooks/install and hooks/start:

$ cat metadata.yaml
name: mprime
summary: Search for Mersenne Prime numbers
maintainer: Dustin Kirkland
description: |
  A Mersenne prime is a prime of the form 2^P-1.
  The first Mersenne primes are 3, 7, 31, 127
  (corresponding to P = 2, 3, 5, 7).
  There are only 48 known Mersenne primes, and
  the 13 largest known prime numbers in the world
  are all Mersenne primes.
  This charm uses a Docker image that includes the
  statically built, 64-bit Linux binary mprime
  which will consume considerable CPU and Memory,
  searching for the next Mersenne prime number.
  See http://www.mersenne.org/ for more details!
tags:
  - misc
subordinate: true
requires:
  juju-info:
    interface: juju-info
    scope: container

And:

$ cat hooks/install
#!/bin/bash
apt-get install -y docker.io
docker pull kirkland/mprime

And:

$ cat hooks/start
#!/bin/bash
service docker restart
docker run -d kirkland/mprime

Now, we can add the mprime service to any other running Juju service.  As an example here, I'll bootstrap, deploy the Apache2 charm, and attach mprime to it.

$ juju bootstrap
$ juju deploy apache2
$ juju deploy cs:~kirkland/mprime
$ juju add-relation apache2 mprime

Looking at our services, we can see everything deployed and running here:

$ juju status
services:
  apache2:
    charm: cs:trusty/apache2-14
    exposed: false
    service-status:
      current: unknown
      since: 20 Jul 2015 11:55:59-05:00
    relations:
      juju-info:
      - mprime
    units:
      apache2/0:
        workload-status:
          current: unknown
          since: 20 Jul 2015 11:55:59-05:00
        agent-status:
          current: idle
          since: 20 Jul 2015 11:56:03-05:00
          version: 1.24.2
        agent-state: started
        agent-version: 1.24.2
        machine: "1"
        public-address: 23.20.147.158
        subordinates:
          mprime/0:
            workload-status:
              current: unknown
              since: 20 Jul 2015 11:58:52-05:00
            agent-status:
              current: idle
              since: 20 Jul 2015 11:58:56-05:00
              version: 1.24.2
            agent-state: started
            agent-version: 1.24.2
            upgrading-from: local:trusty/mprime-1
            public-address: 23.20.147.158
  mprime:
    charm: local:trusty/mprime-1
    exposed: false
    service-status: {}
    relations:
      juju-info:
      - apache2
    subordinate-to:
    - apache2


Snappy Ubuntu Core Snap

Finally, let's build a Snap.  Snaps are applications that run in Ubuntu's transactional, atomic OS, Snappy Ubuntu Core.

We need the simple directory structure below (or you can browse it here):

├── meta
│   ├── icon.png
│   ├── icon.svg
│   ├── package.yaml
│   └── readme.md
└── start.sh
1 directory, 5 files

The package.yaml describes what we're actually building, and what capabilities the service needs.  It looks like this:

name: mprime
vendor: Dustin Kirkland
architecture: [amd64]
icon: meta/icon.png
version: 28.5-11
frameworks:
  - docker
services:
  - name: mprime
    description: "Search for Mersenne Prime Numbers"
    start: start.sh
    caps:
      - docker_client
      - networking

And the start.sh launches the service via Docker.

#!/bin/sh
PATH=$PATH:/apps/docker/current/bin/
docker rm -v -f mprime
docker run --name mprime -d kirkland/mprime
docker wait mprime

Now, we can build the snap like so:

$ snappy build .
Generated 'mprime_28.5-11_amd64.snap' snap
$ ls -halF *snap
-rw-rw-r-- 1 kirkland kirkland 9.6K Jul 20 12:38 mprime_28.5-11_amd64.snap

First, let's install the Docker framework, upon which we depend:

$ snappy-remote --url ssh://snappy-nuc install docker
=======================================================
Installing docker from the store
Installing docker
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

And now, we can install our locally built Snap:

$ snappy-remote --url ssh://snappy-nuc install mprime_28.5-11_amd64.snap
=======================================================
Installing mprime_28.5-11_amd64.snap from local environment
Installing /tmp/mprime_28.5-11_amd64.snap
2015/07/20 17:44:26 Signature check failed, but installing anyway as requested
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
mprime 2015-07-20 28.5-11 sideload
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

Alternatively, you can install the snap directly from the Ubuntu Snappy store, where I've already uploaded the mprime snap:

$ snappy-remote --url ssh://snappy-nuc install mprime.kirkland
=======================================================
Installing mprime.kirkland from the store
Installing mprime.kirkland
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
mprime 2015-07-20 28.5-11 kirkland
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

Conclusion

How long until this Docker image, Juju charm, or Ubuntu Snap finds a Mersenne Prime?  Almost certainly never :-)  I want to be clear: that was never the point of this exercise!

Rather, I hope you learned how easy it is to run a Docker image inside either a Juju charm or an Ubuntu snap.  And maybe learned something about prime numbers along the way ;-)

Join us in #docker, #juju, and #snappy on irc.freenode.net.

Cheers,
Dustin

Read more
Junien Fridrick

I recently got my OpenStack password reset, and juju was unable to deploy new services even after I replaced all occurrences of the old password with the new one in ~/.juju (or $JUJU_HOME, if you’re into that sort of thing).

The thing is, juju also stores the password in the MongoDB database on node 0, so you’ll want to change it there as well.

$ juju ssh 0
machine-0:~$ mongo --ssl -u admin -p $(sudo grep oldpassword /var/lib/juju/agents/machine-0/agent.conf | awk -e '{print $2}') localhost:37017/admin
MongoDB shell version: 2.4.9
connecting to: localhost:37017/admin
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
juju:PRIMARY> db.settings.update({'_id': "e"}, { $set : { password: "l33tpassword" } })
juju:PRIMARY> db.settings.find({'_id': "e"}).pretty()
{
[...]
"password" : "l33tpassword",
[...]
}
juju:PRIMARY>
bye
machine-0:~$ logout

And voilà! This was with juju 1.23, by the way.

Read more
Dustin Kirkland


I had the great pleasure to deliver a 90 minute talk at the USENIX LISA14 conference, in Seattle, Washington.

During the course of the talk, we managed to:

  • Deploy OpenStack Juno across 6 physical nodes, on an Orange Box on stage
  • Explain all of the major components of OpenStack (Nova, Neutron, Swift, Cinder, Horizon, Keystone, Glance, Ceilometer, Heat, Trove, Sahara)
  • Explore the deployed OpenStack cloud's Horizon interface in depth
  • Configure Neutron networking with internal and external networks, as well as a gateway and a router
  • Set up our security groups to open ICMP and SSH ports
  • Upload an SSH keypair
  • Modify the flavor parameters
  • Update a bunch of quotas
  • Add multiple images to Glance
  • Launch some instances until we max out our hypervisor limits
  • Scale up the Nova Compute nodes from 3 units to 6 units
  • Deploy a real workload (Hadoop + Hive + Kibana + Elastic Search)
  • Then, we deleted the entire environment, and ran it all over again from scratch, non-stop
Slides and a full video are below.  Enjoy!




Cheers,
Dustin

Read more
Dustin Kirkland


This little snippet of ~200 lines of YAML is the exact OpenStack that I'm deploying tonight, at the OpenStack Austin Meetup.

Anyone with a working Juju and MAAS setup, and 7 registered servers should be able to deploy this same OpenStack setup, in about 12 minutes, with a single command.


$ wget http://people.canonical.com/~kirkland/icehouseOB.yaml
$ juju-deployer -c icehouseOB.yaml
$ cat icehouseOB.yaml

icehouse:
  overrides:
    openstack-origin: "cloud:trusty-icehouse"
    source: "distro"
  services:
    ceph:
      charm: "cs:trusty/ceph-27"
      num_units: 3
      constraints: tags=physical
      options:
        fsid: "9e7aac42-4bf4-11e3-b4b7-5254006a039c"
        "monitor-secret": AQAAvoJSOAv/NRAAgvXP8d7iXN7lWYbvDZzm2Q==
        "osd-devices": "/srv"
        "osd-reformat": "yes"
      annotations:
        "gui-x": "2648.6688842773438"
        "gui-y": "708.3873901367188"
    keystone:
      charm: "cs:trusty/keystone-5"
      num_units: 1
      constraints: tags=physical
      options:
        "admin-password": "admin"
        "admin-token": "admin"
      annotations:
        "gui-x": "2013.905517578125"
        "gui-y": "75.58013916015625"
    "nova-compute":
      charm: "cs:trusty/nova-compute-3"
      num_units: 3
      constraints: tags=physical
      to: [ceph=0, ceph=1, ceph=2]
      options:
        "flat-interface": eth0
      annotations:
        "gui-x": "776.1040649414062"
        "gui-y": "-81.22811031341553"
    "neutron-gateway":
      charm: "cs:trusty/quantum-gateway-3"
      num_units: 1
      constraints: tags=virtual
      options:
        ext-port: eth1
        instance-mtu: 1400
      annotations:
        "gui-x": "329.0572509765625"
        "gui-y": "46.4658203125"
    "nova-cloud-controller":
      charm: "cs:trusty/nova-cloud-controller-41"
      num_units: 1
      constraints: tags=physical
      options:
        "network-manager": Neutron
      annotations:
        "gui-x": "1388.40185546875"
        "gui-y": "-118.01156234741211"
    rabbitmq:
      charm: "cs:trusty/rabbitmq-server-4"
      num_units: 1
      to: mysql
      annotations:
        "gui-x": "633.8120727539062"
        "gui-y": "862.6530151367188"
    glance:
      charm: "cs:trusty/glance-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "1147.3269653320312"
        "gui-y": "1389.5643157958984"
    cinder:
      charm: "cs:trusty/cinder-4"
      num_units: 1
      to: nova-cloud-controller
      options:
        "block-device": none
      annotations:
        "gui-x": "1752.32568359375"
        "gui-y": "1365.716194152832"
    "ceph-radosgw":
      charm: "cs:trusty/ceph-radosgw-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "2216.68212890625"
        "gui-y": "697.16796875"
    cinder-ceph:
      charm: "cs:trusty/cinder-ceph-1"
      num_units: 0
      annotations:
        "gui-x": "2257.5515747070312"
        "gui-y": "1231.2130126953125"
    "openstack-dashboard":
      charm: "cs:trusty/openstack-dashboard-4"
      num_units: 1
      to: "keystone"
      options:
        webroot: "/"
      annotations:
        "gui-x": "2353.6898193359375"
        "gui-y": "-94.2642593383789"
    mysql:
      charm: "cs:trusty/mysql-1"
      num_units: 1
      constraints: tags=physical
      options:
        "dataset-size": "20%"
      annotations:
        "gui-x": "364.4567565917969"
        "gui-y": "1067.5167846679688"
    mongodb:
      charm: "cs:trusty/mongodb-0"
      num_units: 1
      constraints: tags=physical
      annotations:
        "gui-x": "-70.0399979352951"
        "gui-y": "1282.8224487304688"
    ceilometer:
      charm: "cs:trusty/ceilometer-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "-78.13333225250244"
        "gui-y": "919.3128051757812"
    ceilometer-agent:
      charm: "cs:trusty/ceilometer-agent-0"
      num_units: 0
      annotations:
        "gui-x": "-90.9158582687378"
        "gui-y": "562.5347595214844"
    heat:
      charm: "cs:trusty/heat-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "494.94012451171875"
        "gui-y": "1363.6024169921875"
    ntp:
      charm: "cs:trusty/ntp-4"
      num_units: 0
      annotations:
        "gui-x": "-104.57728099822998"
        "gui-y": "294.6641273498535"
  relations:
    - - "keystone:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:amqp"
      - "rabbitmq:amqp"
    - - "nova-cloud-controller:image-service"
      - "glance:image-service"
    - - "nova-cloud-controller:identity-service"
      - "keystone:identity-service"
    - - "glance:shared-db"
      - "mysql:shared-db"
    - - "glance:identity-service"
      - "keystone:identity-service"
    - - "cinder:shared-db"
      - "mysql:shared-db"
    - - "cinder:amqp"
      - "rabbitmq:amqp"
    - - "cinder:cinder-volume-service"
      - "nova-cloud-controller:cinder-volume-service"
    - - "cinder:identity-service"
      - "keystone:identity-service"
    - - "neutron-gateway:shared-db"
      - "mysql:shared-db"
    - - "neutron-gateway:amqp"
      - "rabbitmq:amqp"
    - - "neutron-gateway:quantum-network-service"
      - "nova-cloud-controller:quantum-network-service"
    - - "openstack-dashboard:identity-service"
      - "keystone:identity-service"
    - - "nova-compute:shared-db"
      - "mysql:shared-db"
    - - "nova-compute:amqp"
      - "rabbitmq:amqp"
    - - "nova-compute:image-service"
      - "glance:image-service"
    - - "nova-compute:cloud-compute"
      - "nova-cloud-controller:cloud-compute"
    - - "cinder:storage-backend"
      - "cinder-ceph:storage-backend"
    - - "ceph:client"
      - "cinder-ceph:ceph"
    - - "ceph:client"
      - "nova-compute:ceph"
    - - "ceph:client"
      - "glance:ceph"
    - - "ceilometer:identity-service"
      - "keystone:identity-service"
    - - "ceilometer:amqp"
      - "rabbitmq:amqp"
    - - "ceilometer:shared-db"
      - "mongodb:database"
    - - "ceilometer-agent:container"
      - "nova-compute:juju-info"
    - - "ceilometer-agent:ceilometer-service"
      - "ceilometer:ceilometer-service"
    - - "heat:shared-db"
      - "mysql:shared-db"
    - - "heat:identity-service"
      - "keystone:identity-service"
    - - "heat:amqp"
      - "rabbitmq:amqp"
    - - "ceph-radosgw:mon"
      - "ceph:radosgw"
    - - "ceph-radosgw:identity-service"
      - "keystone:identity-service"
    - - "ntp:juju-info"
      - "neutron-gateway:juju-info"
    - - "ntp:juju-info"
      - "ceph:juju-info"
    - - "ntp:juju-info"
      - "keystone:juju-info"
    - - "ntp:juju-info"
      - "nova-compute:juju-info"
    - - "ntp:juju-info"
      - "nova-cloud-controller:juju-info"
    - - "ntp:juju-info"
      - "rabbitmq:juju-info"
    - - "ntp:juju-info"
      - "glance:juju-info"
    - - "ntp:juju-info"
      - "cinder:juju-info"
    - - "ntp:juju-info"
      - "ceph-radosgw:juju-info"
    - - "ntp:juju-info"
      - "openstack-dashboard:juju-info"
    - - "ntp:juju-info"
      - "mysql:juju-info"
    - - "ntp:juju-info"
      - "mongodb:juju-info"
    - - "ntp:juju-info"
      - "ceilometer:juju-info"
    - - "ntp:juju-info"
      - "heat:juju-info"
  series: trusty

:-Dustin

Read more
Dustin Kirkland

What would you say if I told you, that you could continuously upload your own Software-as-a-Service  (SaaS) web apps into an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, autonomically managed by an open source Orchestration-Service… right now, today?

“An idea is resilient. Highly contagious. Once an idea has taken hold of the brain it's almost impossible to eradicate.”

“Now, before you bother telling me it's impossible…”

“No, it's perfectly possible. It's just bloody difficult.” 

Perhaps something like this...

“How could I ever acquire enough detail to make them think this is reality?”

“Don’t you want to take a leap of faith???”
Sure, let's take a look!

Okay, this looks kinda neat, what is it?

This is an open source Java Spring web application, called Spring-Music, deployed as an app, running inside of Linux containers in CloudFoundry.


Cloud Foundry?

CloudFoundry is an open source Platform-as-a-Service (PAAS) cloud, deployed into Linux virtual machine instances in OpenStack, by Juju.


OpenStack?

Juju?

OpenStack is an open source Infrastructure-as-a-Service (IAAS) cloud, deployed by Juju and Landscape on top of MAAS.

Juju is an open source Orchestration System that deploys and scales complex services across many public clouds, private clouds, and bare metal servers.

Landscape?

MAAS?

Landscape is a systems management tool that automates software installation, updates, and maintenance in both physical and virtual machines. Oh, and it too is deployed by Juju.

MAAS is an open source bare metal provisioning system, providing a cloud-like API to physical servers. Juju can deploy services to MAAS, as well as public and private clouds.

"Ready for the kick?"

If you recall these concepts of nesting cloud technologies...

These are real technologies, which exist today!

These are Software-as-a-Service  (SaaS) web apps served by an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, managed by an open source Orchestration-Service.

Spring Music, served by CloudFoundry, running on top of OpenStack, deployed on MAAS, managed by Juju and Landscape!

“The smallest seed of an idea can grow…”

Oh, and I won't leave you hanging...you're not dreaming!


:-Dustin

Read more
Dustin Kirkland



In case you missed the recent Cloud Austin MeetUp, you have another chance to see the Ubuntu Orange Box live and in action here in Austin!

This time, we're at the OpenStack Austin MeetUp, next Wednesday, September 10, 2014, at 6:30pm at Tech Ranch Austin, 9111 Jollyville Rd #100, Austin, TX!

If you join us, you'll witness all of OpenStack Ice House, deployed in minutes to real hardware. Not an all-in-one DevStack; not a minimum viable set of components.  Real, rich, production-quality OpenStack!  Ceilometer, Ceph, Cinder, Glance, Heat, Horizon, Keystone, MongoDB, MySQL, Nova, NTP, Quantum, and RabbitMQ -- intelligently orchestrated and rapidly scaled across 10 physical servers sitting right up front on the podium.  Of course, we'll go under the hood and look at how all of this comes together on the fabulous Ubuntu Orange Box.

And like any good open source software developer, I generally like to make things myself, and share them with others.  In that spirit, I'll also bring a couple of growlers of my own home brewed beer, Ubrewtu ;-)  Free as in beer, of course!
Cheers,
Dustin

Read more
Dustin Kirkland



I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too.  I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less!

Be sure to RSVP, as space is limited.

http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Read more
Dustin Kirkland

Transcoding video is a very resource intensive process.

It can take many minutes to process a small, 30-second clip, or even hours to process a full movie.  There are numerous, excellent, open source video transcoding and processing tools freely available in Ubuntu, including libav-tools, ffmpeg, mencoder, and handbrake.  Surprisingly, however, none of those support parallel computing easily or out of the box.  And disappointingly, I couldn't find any MPI support readily available either.

I happened to have an Orange Box for a few days recently, so I decided to tackle the problem and develop a scalable, parallel video transcoding solution myself.  I'm delighted to share the result with you today!

When it comes to commercial video production, it can take thousands of machines and hundreds of compute hours to render a full movie.  I had the distinct privilege some time ago to visit WETA Digital in Wellington, New Zealand and tour the render farm that processed The Lord of the Rings trilogy, Avatar, and The Hobbit, etc.  And just a few weeks ago, I visited another quite visionary, cloud savvy digital film processing firm in Hollywood, called Digital Film Tree.

While Windows and Mac OS may be the first platforms that come to mind when you think about front end video production, Linux is far more widely used for batch video processing, with Ubuntu, in particular, being used extensively at both WETA Digital and Digital Film Tree, among others.

While I could have worked with any of a number of tools, I settled on avconv (the successor(?) of ffmpeg), as it was the first one that I got working well on my laptop, before scaling it out to the cluster.

I designed an approach on my whiteboard, in fact quite similar to some work I did parallelizing and scaling the john-the-ripper password quality checker.

At a high level, the algorithm looks like this:
  1. Create a shared network filesystem, simultaneously readable and writable by all nodes
  2. Have the master node split the work into even sized chunks for each worker
  3. Have each worker process their segment of the video, and raise a flag when done
  4. Have the master node wait for each of the all-done flags, and then concatenate the result
And that's exactly what I implemented in a new transcode charm and transcode-cluster bundle.  It provides linear scalability and performance improvements, as you add additional units to the cluster.  A transcode job that takes 24 minutes on a single node is down to 3 minutes on 8 worker nodes in the Orange Box, using Juju and MAAS against physical hardware nodes.
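
To illustrate step 2 above, the "even sized chunks" calculation boils down to a little arithmetic along these lines (a sketch with made-up values and variable names, not the charm's actual code):

#!/bin/bash
duration=1440   # total length of the input video in seconds (a hypothetical 24-minute clip)
nodes=8         # number of worker units in the cluster
length=$(( (duration + nodes - 1) / nodes ))   # seconds per chunk, rounded up
for i in $(seq 0 $(( nodes - 1 ))); do
    start_time=$(( i * length ))
    echo "worker $i transcodes from ${start_time}s for ${length}s"
done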


For the curious, the real magic is in the config-changed hook, which has decent inline documentation.



The trick, for anyone who might make their way into this by way of various StackExchange questions and (incorrect) answers, is in the command that splits up the original video (around line 54):

avconv -ss $start_time -i $filename -t $length -s $size -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y ${filename}.part${current_node}.ts

And the one that puts it back together (around line 72):

avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y ${filename}_${size}_x264_aac.${format}

I found this post and this documentation particularly helpful in understanding and solving the problem.

In any case, once deployed, my cluster bundle looks like this.  8 units of transcoders, all connected to a shared filesystem, and performance monitoring too.


I was able to leverage the shared-fs relation provided by the nfs charm, as well as the ganglia charm to monitor the utilization of the cluster.  You can see the spikes in the cpu, disk, and network in the graphs below, during the course of a transcode job.




For my testing, I downloaded the movie Code Rush, freely available under the CC-BY-NC-SA 3.0 license.  If you haven't seen it, it's an excellent documentary about the open source software around Netscape/Mozilla/Firefox and the dotcom bubble of the late 1990s.

Oddly enough, the stock, 746MB high quality MP4 video doesn't play in Firefox, since it's an mpeg4 stream, rather than H264.  Fail.  (Yes, of course I could have used mplayer, vlc, etc., that's not the point ;-)


Perhaps one of the most useful, intriguing features of HTML5 is its support for embedding multimedia, video, and sound into webpages.  HTML5 even supports multiple video formats.  Sounds nice, right?  If only it were that simple...  As it turns out, different browsers have, and lack, support for the different formats.  While there is no one format to rule them all, MP4 is supported by the majority of browsers, including the two that I use (Chromium and Firefox).  This matrix from w3schools.com illustrates the mess.

http://www.w3schools.com/html/html5_video.asp

The file format, however, is only half of the story.  The audio and video contents within the file also have to be encoded and compressed with very specific codecs, in order to work properly within the browsers.  For MP4, the video has to be encoded with H264, and the audio with AAC.

Among the various brands of phones, webcams, digital cameras, etc., the output format and codecs are seriously all over the map.  If you've ever wondered what's happening, when you upload a video to YouTube or Facebook, and it's a while before it's ready to be viewed, it's being transcoded and scaled in the background. 

In any case, I find it quite useful to transcode my videos to MP4/H264/AAC format.  And for that, a scalable, parallel computing approach to video processing would be quite helpful.
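
On a single machine, that conversion looks something like this (a hedged example using the same codec flags as the cluster commands above; the file names are hypothetical):

$ avconv -i input.mov -vcodec libx264 -acodec aac -strict experimental output.mp4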

During the course of the 3 minute run, I liked watching the avconv log files of all of the nodes, using Byobu and Tmux in a tiled split screen format, like this:


Also, the transcode charm installs an Apache2 webserver on each node, so you can expose the service and point a browser to any of the nodes, where you can find the input, output, and intermediary data files, as well as the logs and DONE flags.



Once the job completes, I can simply click on the output file, Code_Rush.mp4_1280x720_x264_aac.mp4, and see that it's now perfectly viewable in the browser!


In case you're curious, I have verified the same charm with a couple of other OGG, AVI, MPEG, and MOV input files, too.


Beyond transcoding the format and codecs, I have also added configuration support within the charm itself to scale the video frame size, too.  This is useful to take a larger video, and scale it down to a more appropriate size, perhaps for a phone or tablet.  Again, this resource intensive procedure perfectly benefits from additional compute units.


File format, audio/video codec, and frame size changes are hardly the extent of video transcoding workloads.  There are hundreds of options and thousands of combinations, as the manpages of avconv and mencoder attest.  All of my scripts and configurations are free software, open source.  Your contributions and extensions are certainly welcome!

In the mean time, I hope you'll take a look at this charm and consider using it, if you have the need to scale up your own video transcoding ;-)

Cheers,
Dustin

Read more
Michael

After recently writing the re-usable wsgi-app role for juju charms, I wanted to see how easy it would be to apply this to an existing service. I chose the existing ubuntu-reviews service because it’s open-source and something that we’ll probably need to deploy with juju soon (albeit only the functionality for reviewing newer click applications).

I’ve tried to include the workflow below that I used to create the charm, including the detailed steps and some mistakes, in case it’s useful for others. If you’re after more of an overview about using reusable ansible roles in your juju charms, check out these slides.

1. Create the new charm from the bootstrap-wsgi charm

First grab the charm-bootstrap-wsgi code and rename it to ubuntu-reviews:

$ mkdir -p ~/charms/rnr/precise && cd ~/charms/rnr/precise
$ git clone https://github.com/absoludity/charm-bootstrap-wsgi.git
$ mv charm-bootstrap-wsgi ubuntu-reviews && cd ubuntu-reviews/

Then update the charm metadata to reflect the ubuntu-reviews service:

--- a/metadata.yaml
+++ b/metadata.yaml
@@ -1,6 +1,6 @@
-name: charm-bootstrap-wsgi
-summary: Bootstrap your wsgi service charm.
-maintainer: Developer Account <Developer.Account@localhost>
+name: ubuntu-reviews
+summary: A review API service for Ubuntu click packages.
+maintainer: Michael Nelson <michael.nelson@canonical.com>
 description: |
   <Multi-line description here>
 categories:

I then updated the playbook to expect a tarball named rnr-server.tgz, and gave it a more relevant app_label (which controls the directories under which the service is setup):

--- a/playbook.yml
+++ b/playbook.yml
@@ -2,13 +2,13 @@
 - hosts: localhost
 
   vars:
-    - app_label: wsgi-app.example.com
+    - app_label: click-reviews.ubuntu.com
 
   roles:
     - role: wsgi-app
       listen_port: 8080
-      wsgi_application: example_wsgi:application
-      code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
+      wsgi_application: wsgi:app
+      code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''

Although when deploying this service we’d be deploying a built asset published somewhere else, for development it’s easier to work with a local file in the charm – and the reusable wsgi-app role allows you to switch between those two options. So the last step here is to add some Makefile targets to enable pulling in the application code and creating the tarball (you can see the details on the git commit for this step).

With that done, I can deploy the charm with `make deploy` – expecting it to fail because the wsgi object doesn’t yet exist.

Aside: if you’re deploying locally, it’s sometimes useful to watch the deployment progress with:

$ tail -F ~/.juju/local/log/unit-ubuntu-reviews-0.log

At this point, juju status shows no issues, but curling the service does (as expected):

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080"
curl: (56) Recv failure: Connection reset by peer

Checking the logs (which, oh joy, are already setup and configured with log rotation etc. by the wsgi-app role) shows the expected error:

$ juju ssh ubuntu-reviews/0 "tail -F /srv/reviews.click.ubuntu.com/logs/reviews.click.ubuntu.com-error.log"
...
ImportError: No module named wsgi

2. Adding the wsgi, settings and urls

The current rnr-server code does have a django project setup, but it’s specific to the current setup (requiring a 3rd party library for settings, and serves the non-click reviews service too). I’d like to simplify that here, so I’m bundling a project configuration in the charm. First the standard (django-generated) manage.py, wsgi.py and near-default settings.py (which you can see in the actual commit).

An extra task is added which copies the project configuration into place during install/upgrade, and both the wsgi application location and the required PYTHONPATH are specified:

--- a/playbook.yml
+++ b/playbook.yml
@@ -7,7 +7,8 @@
   roles:
     - role: wsgi-app
       listen_port: 8080
-      wsgi_application: wsgi:app
+      python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
+      wsgi_application: clickreviewsproject.wsgi:application
       code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''
 
@@ -18,20 +19,18 @@
 
   tasks:
 
-    # Pretend there are some package dependencies for the example wsgi app.
     - name: Install any required packages for your app.
-      apt: pkg={{ item }} state=latest update_cache=yes
+      apt: pkg={{ item }}
       with_items:
-        - python-django
-        - python-django-celery
+        - python-django=1.5.4-1ubuntu1~ctools0
       tags:
         - install
         - upgrade-charm
 
     - name: Write any custom configuration files
-      debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
+      copy: src=django-project dest={{ application_dir }} owner={{ wsgi_user }} group={{ wsgi_group }}
       tags:
-        - config-changed
-        # Also any backend relation-changed events, such as databases.
+        - install
+        - upgrade-charm
       notify:
         - Restart wsgi

I can now run juju upgrade-charm and see that the application now responds, but with a 500 error highlighting the first (of a few) missing dependencies…

3. Adding missing dependencies

At this point, it’s easiest in my opinion to run debug-hooks:

$ juju debug-hooks ubuntu-reviews/0

and directly test and install missing dependencies, adding them to the playbook as you go:

ubuntu@michael-local-machine-1:~$ curl http://localhost:8080 | less
ubuntu@michael-local-machine-1:~$ sudo apt-get install python-missing-library -y (and add it to the playbook in your charm editor).
ubuntu@michael-local-machine-1:~$ sudo service gunicorn restart

Rinse and repeat. You may want to occasionally run upgrade-charm in a separate terminal:

$ juju upgrade-charm --repository=../.. ubuntu-reviews

to verify that you’ve not broken things, running the hooks as they are triggered in your debug-hooks window.

Other times you might want to destroy the environment to redeploy from scratch.

Sometimes you’ll have dependencies which are not available in the distro. For our deployments, we always ensure we have these included in our tarball (in some form). In the case of the reviews server, there was an existing script which pulls in a bunch of extra branches, so I’ve updated to use that in the Makefile target that builds the tarball. But there was another dependency, south, which wasn’t included by that task, so for simplicity here, I’m going to install that via pip (which you don’t want to do in reality – you’d update your build process instead).

You can see the extra dependencies in the commit.

4. Customise settings

At this point, the service deploys but errors due to a missing setting, which makes complete sense as I’ve not added any app-specific settings yet.

So, remove the vanilla settings.py and add a custom settings.py.j2 template, and add an extra django_secret_key option to the charm config.yaml:

--- a/config.yaml
+++ b/config.yaml
@@ -10,8 +10,13 @@ options:
     current_symlink:
         default: "latest"
         type: string
-        description: |
+        description: >
             The symlink of the code to run. The default of 'latest' will use
             the most recently added build on the instance.  Specifying a
             differnt label (eg. "r235") will symlink to that directory assuming
             it has been previously added to the instance.
+    django_secret_key:
+        default: "you-really-should-set-this"
+        type: string
+        description: >
+            The secret key that django should use for each instance.

--- /dev/null
+++ b/templates/settings.py.j2
@@ -0,0 +1,41 @@
+DEBUG = True
+TEMPLATE_DEBUG = DEBUG
+SECRET_KEY = '{{ django_secret_key }}'
+TIME_ZONE = 'UTC'
+ROOT_URLCONF = 'clickreviewsproject.urls'
+WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
+INSTALLED_APPS = (
+    'django.contrib.auth',
+    'django.contrib.contenttypes',
+    'django.contrib.sessions',
+    'django.contrib.sites',
+    'django_openid_auth',
+    'django.contrib.messages',
+    'django.contrib.staticfiles',
+    'clickreviews',
+    'south',
+)
...

Again, full diff in the commit.
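
The playbook task that renders the template isn’t shown in the diff above, but judging from the cleanup diff later in this post it is a standard ansible template task roughly like the following (the task name and destination path here are my guesses):

    - name: Write the django settings
      template:
        src: "templates/settings.py.j2"
        dest: "{{ application_dir }}/django-project/clickreviewsproject/settings.py"
        owner: "{{ wsgi_user }}"
        group: "{{ wsgi_group }}"
      tags:
        - install
        - upgrade-charm
      notify:
        - Restart wsgi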

With the extra settings defined and one more dependency added, I’m now able to ping the service successfully:

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}

But that only works because the input validation is fired before anything tries to use the (non-existent) database…

5. Adding the database relation

I had expected this step to be simple – add the database relation, call syncdb/migrate, but there were three factors conspiring to complicate things:

  1. The reviews application uses a custom postgres “debversion” column type
  2. A migration for the older non-click reviewsapp service requires postgres superuser creds (specifically, reviews/0013_add_debversion_field.py)
  3. The clickreviews app, which I wanted to deploy on its own, is currently dependent on the older non-click reviews app, so it’s necessary to have both enabled and therefore the tables from both sync’d

So to work around the first point, I split the ‘deploy’ task into two tasks so I can install and set up the custom debversion field on the postgres units before running syncdb:

--- a/Makefile
+++ b/Makefile
@@ -37,8 +37,17 @@ deploy: create-tarball
        @juju set ubuntu-reviews build_label=r$(REVIEWS_REVNO)
        @juju deploy gunicorn
        @juju deploy nrpe-external-master
+       @juju deploy postgresql
        @juju add-relation ubuntu-reviews gunicorn
        @juju add-relation ubuntu-reviews nrpe-external-master
+       @juju add-relation ubuntu-reviews postgresql:db
+       @juju set postgresql extra-packages=postgresql-9.1-debversion
+       @echo "Once the services are related, run 'make setup-db'"
+
+setup-db:
+       @echo "Creating custom debversion postgres type and syncing the ubuntu-reviews database."
+       @juju run --service postgresql 'psql -U postgres ubuntu-reviews -c "CREATE EXTENSION debversion;"'
+       @juju run --unit=ubuntu-reviews/0 "hooks/syncdb"
        @echo See the README for explorations after deploying.

I’ve separated the syncdb out to a manual step (rather than running syncdb on a hook like config-changed) so that I can run it after the postgresql service has been updated with the custom field, and also because juju doesn’t currently have a concept of a leader (yes, syncdb could be run on every unit, and it might be safe, but I don’t like it conceptually).

--- a/playbook.yml
+++ b/playbook.yml
@@ -3,13 +3,13 @@
 
   vars:
     - app_label: click-reviews.ubuntu.com
+    - code_archive: "{{ build_label }}/rnr-server.tgz"
 
   roles:
     - role: wsgi-app
       listen_port: 8080
       python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
       wsgi_application: clickreviewsproject.wsgi:application
-      code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''
 
     - role: nrpe-external-master
@@ -25,6 +25,7 @@
         - python-django=1.5.4-1ubuntu1~ctools0
         - python-tz
         - python-pip
+        - python-psycopg2
       tags:
         - install
         - upgrade-charm
@@ -46,13 +47,24 @@
       tags:
         - install
         - upgrade-charm
+        - db-relation-changed
       notify:
         - Restart wsgi
+
     # XXX Normally our deployment build would ensure these are all available in the tarball.
     - name: Install any non-distro dependencies (Don't depend on pip for a real deployment!)
       pip: name={{ item.key }} version={{ item.value }}
       with_dict:
         south: 0.7.6
+        django-openid-auth: 0.2
       tags:
         - install
         - upgrade-charm
+
+    - name: sync the database
+      django_manage: >
+        command="syncdb --noinput"
+        app_path="{{ application_dir }}/django-project"
+        pythonpath="{{ code_dir }}/current/src"
+      tags:
+        - syncdb

To work around the second and third points above, I updated the settings to remove the ‘south’ django application so that syncdb will set up the correct current tables without using migrations (while leaving it on the pythonpath, as some code currently imports it), and added the old non-click reviews app to satisfy some of the imported dependencies.

By far the most complicated part of this change though, was the database settings:

--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -1,6 +1,31 @@
 DEBUG = True
 TEMPLATE_DEBUG = DEBUG
 SECRET_KEY = '{{ django_secret_key }}'
+
+{% if 'db' in relations %}
+DATABASES = {
+  {% for dbrelation in relations['db'] %}
+  {% if loop.first %}
+    {% set services=relations['db'][dbrelation] %}
+    {% for service in services %}
+      {% if service.startswith('postgres') %}
+      {% set dbdetails = services[service] %}
+        {% if 'database' in dbdetails %}
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql_psycopg2',
+        'NAME': '{{ dbdetails["database"] }}',
+        'HOST': '{{ dbdetails["host"] }}',
+        'USER': '{{ dbdetails["user"] }}',
+        'PASSWORD': '{{ dbdetails["password"] }}',
+    }
+        {% endif %}
+      {% endif %}
+    {% endfor %}
+  {% endif %}
+  {% endfor %}
+}
+{% endif %}
+
 TIME_ZONE = 'UTC'
 ROOT_URLCONF = 'clickreviewsproject.urls'
 WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
@@ -12,8 +37,8 @@ INSTALLED_APPS = (
     'django_openid_auth',
     'django.contrib.messages',
     'django.contrib.staticfiles',
+    'reviewsapp',
     'clickreviews',
-    'south',
 )
 AUTHENTICATION_BACKENDS = (
     'django_openid_auth.auth.OpenIDBackend',

Why are the database settings so complicated? I’m currently rendering all the settings on any config change as well as on database relation changes, which means I need to use the global juju state for the relation data, and as far as juju knows, there could be multiple database relations for this stack. The current charm helpers give you this as a dict (above it’s ‘relations’). The ‘db’ key of that dict contains all the database relations, so I’m grabbing the first database relation and taking the details from the postgres side of that relation (there are two keys for the relation, one for postgres, the other for the connecting service). Yuk (see below for a better solution).
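
To make that concrete, the relation data the template iterates over has roughly the following shape (sketched as YAML; the relation and unit keys, and all of the values, are illustrative):

relations:
  db:
    "db:0":                 # the first (and only) database relation
      postgresql/0:         # the postgres side of the relation
        database: ubuntu-reviews
        host: 10.0.3.100
        user: db_user
        password: db_password
      ubuntu-reviews/0:     # the connecting service's side
        private-address: 10.0.3.101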

Full details are in the commit. With those changes I can now use the service:

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]

A. Cleanup: Simplify database settings

In retrospect, the complicated database settings are only needed because the settings are rendered on any config change, not just when the database relation changes. So I’ll clean this up in the next step by moving the database settings to a separate file which is only written on the db-relation-changed hook, where we can use the more convenient current_relation dict (given that I know this service only has one database relation).

This simplifies things quite a bit:

--- a/playbook.yml
+++ b/playbook.yml
@@ -45,11 +45,21 @@
         owner: "{{ wsgi_user }}"
         group: "{{ wsgi_group }}"
       tags:
-        - install
-        - upgrade-charm
+        - config-changed
+      notify:
+        - Restart wsgi
+
+    - name: Write the database settings
+      template:
+        src: "templates/database_settings.py.j2"
+        dest: "{{ application_dir }}/django-project/clickreviewsproject/database_settings.py"
+        owner: "{{ wsgi_user }}"
+        group: "{{ wsgi_group }}"
+      tags:
         - db-relation-changed
       notify:
         - Restart wsgi
+      when: "'database' in current_relation"
 
--- /dev/null
+++ b/templates/database_settings.py.j2
@@ -0,0 +1,9 @@
+DATABASES = {
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql_psycopg2',
+        'NAME': '{{ current_relation["database"] }}',
+        'HOST': '{{ current_relation["host"] }}',
+        'USER': '{{ current_relation["user"] }}',
+        'PASSWORD': '{{ current_relation["password"] }}',
+    }
+}

--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -2,29 +2,7 @@ DEBUG = True
 TEMPLATE_DEBUG = DEBUG
 SECRET_KEY = '{{ django_secret_key }}'
 
-{% if 'db' in relations %}
-DATABASES = {
-  {% for dbrelation in relations['db'] %}
-  {% if loop.first %}
-    {% set services=relations['db'][dbrelation] %}
-    {% for service in services %}
-      {% if service.startswith('postgres') %}
-      {% set dbdetails = services[service] %}
-        {% if 'database' in dbdetails %}
-    'default': {
-        'ENGINE': 'django.db.backends.postgresql_psycopg2',
-        'NAME': '{{ dbdetails["database"] }}',
-        'HOST': '{{ dbdetails["host"] }}',
-        'USER': '{{ dbdetails["user"] }}',
-        'PASSWORD': '{{ dbdetails["password"] }}',
-    }
-        {% endif %}
-      {% endif %}
-    {% endfor %}
-  {% endif %}
-  {% endfor %}
-}
-{% endif %}
+from database_settings import DATABASES
 
 TIME_ZONE = 'UTC'

With those changes, the deploy still works as expected:

$ make deploy
$ make setup-db
Creating custom debversion postgres type and syncing the ubuntu-reviews database.
...
$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]

Summary

Reusing the wsgi-app ansible role allowed me to focus just on the things that make this service different from other wsgi-app services:

  1. Creating the tarball for deployment (not normally part of the charm, but useful for developing the charm)
  2. Handling the application settings
  3. Handling the database relation and custom setup

The wsgi-app role already provides the functionality to deploy and upgrade code tarballs via a config set (i.e. `juju set ubuntu-reviews build_label=r156`) from a configured URL. It provides the functionality for the complete directory layout, user and group setup, log rotation etc. All of that comes for free.

Things I’d do if I was doing this charm for a real deployment now:

  1. Use the postgres db-admin relation for syncdb/migrations (thanks Simon) and ensure that the normal app uses the non-admin settings (and can’t change the schema etc.). This functionality could be factored out into another reusable role.
  2. Remove the dependency on the older non-click reviews app so that the clickreviews service can be deployed in isolation.
  3. Ensure all dependencies are either available in the distro, or included in the tarball

Filed under: cloud automation, juju

Read more
Dustin Kirkland

It's that simple.
It was about 4pm on Friday afternoon, when I had just about wrapped up everything I absolutely needed to do for the day, and I decided to kick back and have a little fun with the remainder of my work day.

 It's now 4:37pm on Friday, and I'm now done.

Done with what?  The Yo charm, of course!

The Internet has been abuzz this week about how the Yo app received a whopping $1 million in venture funding.  (Forbes notes that this is a pretty surefire indication that there's another internet bubble about to burst...)

It's little more than the first program any kid writes -- hello world!

Subsequently I realized that we don't really have a "hello world" charm.  And so here it is, yo.

$ juju deploy yo

Deploying a webpage that says "Yo" is hardly the point, of course.  Rather, this is a fantastic way to see the absolute simplest form of a Juju charm.  Grab the source, and go explore it yo-self!

$ charm-get yo
$ tree yo
├── config.yaml
├── copyright
├── hooks
│   ├── config-changed
│   ├── install
│   ├── start
│   ├── stop
│   ├── upgrade-charm
│   └── website-relation-joined
├── icon.svg
├── metadata.yaml
└── README.md
1 directory, 11 files



  • The config.yaml lets you set and dynamically change the service itself (the color and size of the font that renders "Yo").
  • The copyright is simply boilerplate GPLv3
  • The icon.svg is just a vector graphics "Yo."
  • The metadata.yaml explains what this charm is, how it can relate to other charms
  • The README.md is a simple getting-started document
  • And the hooks...
    • config-changed is the script that runs when you change the configuration -- basically, it uses sed to inline edit the index.html Yo webpage
    • install simply installs apache2 and overwrites /var/www/index.html
    • start and stop simply start and stop the apache2 service
    • upgrade-charm is currently a no-op
    • website-relation-joined sets and exports the hostname and port of this system
The website relation is very important here...  Declaring and defining this relation instantly lets me relate this charm with dozens of other services.  As you can see in the screenshot at the top of this post, I was able to easily relate the varnish website accelerator in front of the Yo charm.
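
For the curious, providing a relation like that is only a few lines of metadata.yaml.  Here's a rough sketch (the names and wording are illustrative rather than copied from the charm):

name: yo
summary: Serve a simple "Yo" webpage over HTTP.
provides:
  website:
    interface: http

Any charm that speaks the http interface (haproxy, varnish, and friends) can then be related to it with a single juju add-relation, which is exactly what the varnish relation mentioned above does.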

Hopefully this simple little example might help you examine the anatomy of a charm for the first time, and perhaps write your own first charm!

Cheers,

Dustin

Read more
David Murphy (schwuk)

Ars Technica has a great write up by Lee Hutchinson on our Orange Box demo and training unit.

You can’t help but have your attention grabbed by it!

As the comments are quick to point out – at the expense of the rest of the piece – the hardware isn’t the compelling story here. You can buy your own, and you can almost certainly hand-build an equivalent-or-better setup for less money1, but Ars recognises this:

Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it’s making them to use as revenue driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud.

The Orange Box is about showing off MAAS & Juju, and – usually – OpenStack.

To see what Ars think of those, you should read the article.

I definitely echo Lee’s closing statement:

I wish my closet had an Orange Box in it. That thing is hella cool.


  1. Or make one out of wood like my colleague Gavin did! 

Read more
Michael

I’ve been writing Juju charms to automate the deployment of a few different services at work, which all happen to be wsgi applications… some Django apps, others with other frameworks. I’ve been using the ansible support for writing charms which makes charm authoring simpler, but even then, essentially each wsgi service charm needs to do the same thing:

  1. Setup specific users
  2. Install the built code into a specific location
  3. Install any package dependencies
  4. Relate to a backend (could be postgresql, could be elasticsearch)
  5. Render the settings
  6. Setup a wsgi (gunicorn) service (via a subordinate charm)
  7. Setup log rotation
  8. Support updating to a new codebase without upgrading the charm
  9. Support rolling updates to a new codebase

Only three of the above really change slightly from service to service: which package dependencies are required, the rendering of the settings and the backend relation (which usually just causes the settings to be rerendered anyway).

After trying (and failing) to create a nice reusable wsgi-app charm, I’ve switched to utilise ansible’s built-in support for reusable roles and created a charm-bootstrap-wsgi repo on github, which demonstrates all of the above out of the box (the README has an example rolling upgrade). The charm’s playbook is very simple, just reusing the wsgi-app role:

 

roles:
    - role: wsgi-app
      listen_port: 8080
      wsgi_application: example_wsgi:application
      code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
      when: build_label != ''

 

and only needs to do two things itself:

tasks:
    - name: Install any required packages for your app.
      apt: pkg={{ item }} state=latest update_cache=yes
      with_items:
        - python-django
        - python-django-celery
      tags:
        - install
        - upgrade-charm

    - name: Write any custom configuration files
      debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
      tags:
        - config-changed
        # Also any backend relation-changed hooks for databases etc.
      notify:
        - Restart wsgi

 

Everything else is provided by the reusable wsgi-app role. For the moment I’ve got the source of the reusable roles in a separate github repo, but I’d like to get these into the charm-helpers project itself eventually. Of course there will be cases where the service charm may need to do quite a bit more custom functionality, but if we can encapsulate and reuse as much as possible, it’s a win for all of us.
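
For completeness, the 'Restart wsgi' handler notified above comes from the role rather than from the charm itself. Conceptually it is little more than the following sketch (assuming the gunicorn service set up in step 6 can simply be restarted on the unit; the real handler lives in the separate github repo mentioned above):

handlers:
    - name: Restart wsgi
      service: name=gunicorn state=restarted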

If you’re interested in chatting about easier charms with ansible (or any issues you can see), we’ll be getting together for a hangout tomorrow (Jun 11, 2014 at 1400UTC).


Filed under: ansible, juju, python

Read more
Dustin Kirkland

It was September of 2009.  I answered a couple of gimme trivia questions and dropped my business card into a hat at a Linux conference in Portland, Oregon.  A few hours later, I received an email...I had just "won" a developer edition HTC Dream -- the Android G1.  I was quite anxious to have a hardware platform where I could experiment with Android.  I had, of course, already downloaded the SDK, compiled Android from scratch, and fiddled with it in an emulator.  But that experience fell far short of Android running on real hardware.  Until the G1.  The G1 was the first device to truly showcase the power and potential of the Android operating system.

And with that context, we are delighted to introduce the Orange Box!


The Orange Box


Conceived by Canonical and custom built by TranquilPC, the Orange Box is a 10-node cluster computer that fits in a suitcase.

Ubuntu, MAAS, Juju, Landscape, OpenStack, Hadoop, CloudFoundry, and more!

The Orange Box provides a spectacular development platform, showcasing in mere minutes the power of hardware provisioning and service orchestration with Ubuntu, MAAS, Juju, and Landscape.  OpenStack, Hadoop, CloudFoundry, and hundreds of other workloads deploy in minutes, to real hardware -- not just instances in AWS!  It also makes one hell of a Steam server -- there's a charm for that ;-)


OpenStack, deployed by Juju, takes merely 6 minutes on an Orange Box

Most developers here certainly recognize the term "SDK", or "Software Development Kit"...  You can think of the Orange Box as an "HDK", or "Hardware Development Kit".  Pair an Orange Box with MAAS and Juju, and you have yourself a compact cloud.  Or a portable big data number cruncher.  Or a lightweight cluster computer.


The underside of an Orange Box, with its cover off


Want to get your hands on one?

Drop us a line, and we'd be delighted to hand-deliver an Orange Box to your office, and conduct 2 full days of technical training, covering MAAS, Juju, Landscape, and OpenStack.  The box is yours for 2 weeks, as you experiment with the industry leading Ubuntu ecosystem of cloud technologies at your own pace and with your own workloads.  We'll show back up, a couple of weeks later, to review what you learned and discuss scaling these tools up, into your own data center, on your own enterprise hardware.  (And if you want your very own Orange Box to keep, you can order one from our friends at TranquilPC.)


Manufacturers of the Orange Box

Gear head like me?  Interested in the technical specs?


Remember those posts late last year about Intel NUCs?  Someone took notice, and we set out to build this ;-)


Each Orange Box chassis contains:
  • 10x Intel NUCs
  • All 10x Intel NUCs contain
    • Intel HD Graphics 4000 GPU
    • 16GB of DDR3 RAM
    • 120GB SSD root disk
    • Intel Gigabit ethernet
  • D-Link DGS-1100-16 managed gigabit switch with 802.1q VLAN support
    • All 10 nodes are internally connected to this gigabit switch
  • 100-240V AC/DC power supply
    • Adapter supplied for US, UK, and EU plug types
    • 19V DC power supplied to each NUC
    • 5V DC power supplied to internal network switch


Intel NUC D53427RKE board

That's basically an Amazon EC2 m3.xlarge ;-)

The first node, node0, additionally contains:
  • A 2TB Western Digital HDD, preloaded with a full Ubuntu archive mirror
  • USB and HDMI ports are wired and accessible from the rear of the box

Most planes fly in clouds...this cloud flies in planes!


In aggregate, this micro cluster effectively fields 40 cores, 160GB of RAM, 1.2TB of solid state storage, and is connected over an internal gigabit network fabric.  A single fan quietly cools the power supply, while all of the nodes are passively cooled by aluminum heat sinks spanning each side of the chassis. All in a chassis the size of a tower PC!

It fits in a suit case, and can travel anywhere you go.


Pelican iM2875 Storm Case

How are we using them at Canonical?

If you're here at the OpenStack Summit in Atlanta, GA, you'll see at least a dozen Orange Boxes, in our booth, on stage during Mark Shuttleworth's keynote, and in our breakout conference rooms.


Canonical sales engineer, Ameet Paranjape,
demonstrating OpenStack on the Orange Box in the Ubuntu booth
at the OpenStack Summit in Atlanta, GA
We are also launching an update to our OpenStack Jumpstart program, where we'll deliver an Orange Box and 2 full days of training to your team, and leave you the box while you experiment with OpenStack, MAAS, Juju, Hadoop, and more for 2 weeks.  Without disrupting your core network or production data center workloads, prototype your OpenStack experience within a private sandbox environment. You can experiment with various storage alternatives, practice scaling services, and destroy and rebuild the environment repeatedly. Safe. Risk free.


This is Cloud, for the Free Man.

:-Dustin

Read more