Canonical Voices

Posts tagged with 'juju'

ZhengPeng Hou

Last night I spent some time with the latest juju-core in saucy; I was interested in the local provider support that was added recently. A couple of things are worth knowing:
1. juju-core now uses MongoDB in place of ZooKeeper, so to play with the local provider you need to install lxc and mongodb. I have filed a wishlist bug against the juju-core packaging asking for a meta package that installs those dependencies.
2. After installing mongodb, do remember to stop the server manually, because juju bootstrap will create its own upstart scripts to handle starting and stopping the service.
3. The configuration of the local provider is quite simple now; you can copy and paste the one produced by running juju init and don't need to modify anything. I did comment out root-dir, which caused me to run into a bug.
4. Two commands need to be run with sudo: one is bootstrap, the other is destroy-environment.
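
Putting that together, a minimal session looks something like the following (a sketch only; the package names are the ones mentioned above, and the exact behaviour of juju init may vary slightly between versions):

sudo apt-get install juju-core lxc mongodb-server
sudo service mongodb stop               # juju bootstrap manages mongodb with its own upstart job
juju init                               # emits a sample environments.yaml, including a "local" environment
sudo juju bootstrap -e local
# ... deploy and test charms locally ...
sudo juju destroy-environment -e local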

Read more
Dustin Kirkland

UPDATE: I wrote this charm and blog post before I saw the unfortunate news that UbuntuForums.org, along with Apple's Developer website, had very recently been compromised and their user databases stolen.  Given that such events have sadly become commonplace, the instructions below can actually be used by proactive administrators to identify and disable weak user passwords, expecting that the bad guys are already doing the same.

It's been about 2 years since I've written a Juju charm.  And even those that I wrote were not scale-out applications.

I've been back at Canonical for two weeks now, and I've been spending some time bringing myself up to speed on the cloud projects that form the basis for the Cloud Solution products, for which I'm responsible. First, I deployed MAAS, and then brought up a small Ubuntu OpenStack cluster.  Finally, I decided to tackle Juju and rather than deploying one of the existing charms, I wanted to write my own.

Installing Juju

Juju was originally written in Python, but has since been ported to Golang over the last 2+ years.  My previous experience was exclusively with the Python version of Juju, but all new development is now focused on the Golang version of Juju, also known as juju-core.  So at this point, I decided to install juju-core from the 13.04 (raring) archive.

sudo apt-get install juju-core

I immediately hit a couple of bugs in the version of juju-core in 13.04 (1.10.0.1-0ubuntu1~ubuntu13.04.1), particularly Bug #1172973.  Life is more fun on the edge anyway, so I upgraded to a daily snapshot from the PPA.

sudo apt-add-repository ppa:juju/devel
sudo apt-get update
sudo apt-get install juju-core

Now I'm running juju-core 1.11.2-3~1414~raring1, and it's currently working.

Configuring Juju

Juju can be configured to use a number of different cloud backends as "providers", notably, Amazon EC2, OpenStack, MAAS, and HP Cloud.

For my development, I'm using Canonical's internal deployment of OpenStack, and so I configured my environment accordingly in ~/.juju/environments.yaml:

default: openstack
environments:
  openstack:
    type: openstack
    admin-secret: any-secret-you-choose-randomly
    control-bucket: any-bucket-name-you-choose-randomly
    default-series: precise
    auth-mode: userpass

Using OpenStack (or even AWS for that matter) also requires defining a number of environment variables in an rc-file.  Basically, you need to be able to launch instances using euca2ools or ec2-api-tools.  That's outside of the scope of this post, and expected as a prerequisite.
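
For reference, such an rc-file is typically just a handful of exported variables along these lines (all values below are placeholders; the exact set you need depends on whether you drive the cloud through the native OpenStack API or the EC2-compatible one):

export OS_AUTH_URL=https://keystone.example.com:5000/v2.0/
export OS_USERNAME=your-username
export OS_PASSWORD=your-password
export OS_TENANT_NAME=your-project
export OS_REGION_NAME=your-region
# EC2-compatible credentials, for euca2ools / ec2-api-tools
export EC2_URL=https://nova.example.com:8773/services/Cloud
export EC2_ACCESS_KEY=your-access-key
export EC2_SECRET_KEY=your-secret-key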

The official documentation for configuring your Juju environment can be found here.

Choosing a Charm-able Application

I have previously charmed two small (but useful!) webapps that I've written and continue to maintain -- Pictor and Musica.  These are both standalone web applications that allow you to organize, serve, share, and stream your picture archive and music collection.  But neither of these "scale out", really.  They certainly could, perhaps, use a caching proxy on the front end, and shared storage on the back end.  But, as I originally wrote them, they do not.  Maybe I'll update that, but I don't know of anyone using either of those charms.

In any case, for this experiment, I wanted to write a charm that would "scale out", with Juju's add-unit command.  I wanted to ensure that adding more units to a deployment would result in a bigger and better application.

For these reasons, I chose the program known as John-the-Ripper, or just john.  You can trivially install it on any Ubuntu system, with:

sudo apt-get install john

John has been used by Linux system administrators for over a decade to test the quality of their users' passwords.  A root user can view the hashes that protect user passwords in files like /etc/shadow or even application level password hashes in a database.  Effectively, it can be used to "crack" weak passwords.  There are almost certainly evil people using programs like john to do malicious things.  But as long as the good guys have access to a program like john too, they can ensure that their own passwords are impossible to crack.

John can work in a number of different "modes".  It can use a dictionary of words, and simply hash each of those words looking for a match.  The john-data package ships a word list in /usr/share/john/password.lst that contains 3,000+ words.  You can find much bigger wordlists online as well, such as this one, which contains over 2 million words.

John can also generate "twists" on these words according to some rules (like changing E's to 3's, and so on).  And it can also work in a complete brute force mode, generating every possible password from various character sets.  This, of course, will take exponentially longer run times, depending on the length of the password.

Fortunately, John can run in parallel, with as many workers as you have at your disposal.  You can run multiple processes on the same system, or you can scale it out across many systems.  There are many different approaches to parallelizing John, using OpenMP, MPI, and others.

I took a very simple approach, explained in the manpage and configuration file called "External".  Basically, in the /etc/john/john.conf configuration file, you tell each node how many total nodes exist, and which particular node they are.  Each node uses the same wordlist or sequential generation algorithm, and indexes these.  The node modulates the current index by the total number of nodes, and tries the candidate passwords that match their own id.  Dead simple :-)  I like it.

# Trivial parallel processing example
[List.External:Parallel]
/*
 * This word filter makes John process some of the words only, for running
 * multiple instances on different CPUs. It can be used with any cracking
 * mode except for "single crack". Note: this is not a good solution, but
 * is just an example of what can be done with word filters.
 */
int node, total; // This node's number, and node count
int number;      // Current word number
void init()
{
    node = 1; total = 2; // Node 1 of 2, change as appropriate
    number = node - 1;   // Speedup the filter a bit
}
void filter()
{
    if (number++ % total) // Word for a different node?
        word = 0;         // Yes, skip it
}

This does, however, require some way of sharing the inputs, logs, and results across all nodes.  Basically, I need a shared filesystem.  The Juju charm collection has a number of shared filesystem charms already implemented.  I chose to use NFS in my deployment, though I could have just as easily used Ceph, Hadoop, or others.

Writing a Charm

The official documentation on writing charms can be found here.  That's certainly a good starting point, and I read all of that before I set out.  I also spent considerable time in the #juju IRC channel on irc.freenode.net, talking to Jorge and Marco.  Thanks, guys!

The base template of the charm is pretty simple.  The convention is to create a charm directory like this, and put it under revision control.

mkdir -p precise/john
bzr init .

I first needed to create the metadata that will describe my charm to Juju.  My charm is named john, which is an application known as "John the Ripper", which can test the quality of your passwords.  I list myself as the maintainer.  This charm requires a shared filesystem that implements the mount interface, as my charm will call some hooks that make use of that mount interface.  Most importantly, this charm may have other peers, which I arbitrarily called workers.  They have a dummy interface (not used) called john.  Here's the metadata.yaml:

name: john
summary: "john the ripper"
description: |
  John the Ripper tests the quality of system passwords
maintainer: "Dustin Kirkland"
requires:
  shared-fs:
    interface: mount
peers:
  workers:
    interface: john

I also have one optional configuration parameter, called target_hashes.  This configuration string will include the input data that john will work on, trying to break.  This can be one to many different password hashes to crack.  If this isn't specified, this charm actually generates some random ones, and then tries to break those.  I thought that would be nice, so that it's immediately useful out of the box.  Here's config.yaml:

options:
  target_hashes:
    type: string
    description: input password hashes

There are a couple of other simple files to create, such as copyright:

Format: http://dep.debian.net/deps/dep5/

Files: *
Copyright: Copyright 2013, Dustin Kirkland, All Rights Reserved.
License: GPL-3
 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation, either version 3 of the License, or
 (at your option) any later version.
 .
 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 GNU General Public License for more details.
 .
 You should have received a copy of the GNU General Public License
 along with this program. If not, see .

README and revision are also required.
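
Both are trivial: the revision file holds a single integer that you bump whenever the charm changes, and the README is free-form text. Something like this is enough (contents illustrative):

echo 1 > revision
echo "john: a charm that scales John the Ripper out across Juju units" > README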

And the Magic -- Hooks!

The real magic happens in a set of very specifically named hooks.  These are specially named executables, which can be written in any language.  For my purposes, shell scripts are more than sufficient.

The Install Hook

The install hook is what is run at installation time on each worker node.  I need to install the john and john-data packages, as well as the nfs-common client binaries.  I also make use of the mkpasswd utility provided by the whois package.  And I will also use the keep-one-running tool provided by the run-one package.  Finally, I need to tweak the configuration file, /etc/john/john.conf, on each node, to use all of the CPU, save results every 10 seconds (instead of every 10 minutes), and to use the much bigger wordlist that we're going to fetch.  Here's hooks/install:

#!/bin/bash
set -eu
juju-log "Installing all components"
apt-get update
apt-get install -qqy nfs-common john john-data whois run-one
DIR=/var/lib/john
mkdir -p $DIR
ln -sf $DIR /root/.john
sed -i -e "s/^Idle = .*/Idle = N/" /etc/john/john.conf
sed -i -e "s/^Save = .*/Save = 10/" /etc/john/john.conf
sed -i -e "s:^Wordlist = .*:Wordlist = $DIR\/passwords.txt:" /etc/john/john.conf
juju-log "Installed packages"

The Start Hook

The start hook defines how to start this application.  Ideally, the john package would provide an init script or upstart job that cleanly daemonizes its workers, but it currently doesn't.  But for a poor-man's daemonizer, I love the keep-one-running utility (written by yours truly).  I'm going to start two copies of the john utility: one that runs in wordlist mode, trying every one of the 2 million words in my wordlist, and a second that tries every combination of characters in an incremental, brute-force mode.  These binaries are going to operate entirely in the shared /var/lib/john NFS mount point.  Each copy on each worker node will need to have its own session file.  Here's hooks/start:

#!/bin/bash
set -eu
juju-log "Starting john"
DIR=/var/lib/john
keep-one-running john -incremental -session:$DIR/session-incremental-$(hostname) -external:Parallel $DIR/target_hashes &
keep-one-running john -wordlist:$DIR/passwords.txt -session:$DIR/session-wordlist-$(hostname) -external:Parallel $DIR/target_hashes &

The Stop Hook

The stop hook defines how to stop the application.  Here, I'll need to kill the keep-one-running processes which wrap john, since we don't have an upstart job or init script.  This is perhaps a little sloppy, but perfectly functional.  Here's hooks/stop:

#!/bin/bash
set -eu
juju-log "Stopping john"
killall keep-one-running || true

The Workers Relation Changed Hook

This hook defines the actions that need to be taken each time another john worker unit is added to the service.  Basically, each worker needs to recount how many total workers there are (using the relation-list command), determine their own id (from $JUJU_UNIT_NAME), update their /etc/john/john.conf (using sed), and then restart their john worker processes.  The last part is easy since we're using keep-one-running; we simply need to killall john processes, and keep-one-running will automatically respawn new processes that will read the updated configuration file.  This is hooks/workers-relation-changed:

#!/bin/bash
set -eu
DIR="/var/lib/john"
update_unit_count() {
    node=$(echo $JUJU_UNIT_NAME | awk -F/ '{print $2}')
    node=$((node+1))
    total=$(relation-list | wc -l)
    total=$((total+1))
    sed -i -e "s/^\s\+node = .*; total = .*;.*$/ node = $node; total = $total;/" /etc/john/john.conf
}
restart_john() {
    killall john || true
    # It'll restart itself via keep-one-running, if we kill it
}
update_unit_count
restart_john

The Configuration Changed Hook

All john worker nodes will operate on a file in the shared filesystem called /var/lib/john/target_hashes.  I'd like the administrator who deployed this service to be able to dynamically update that file and signal all of her worker nodes to restart their john processes.  Here, I used the config-get juju command, and again restart by simply killing the john processes and letting keep-one-running sort out the restart.  This is handled here in hooks/config-changed:

#!/bin/bash
set -e
DIR=/var/lib/john
target_hashes=$(config-get target_hashes)
if [ -n "$target_hashes" ]; then
    # Install the user's supplied hashes
    echo "$target_hashes" > $DIR/target_hashes
    # Restart john
    killall john || true
fi

The Shared Filesystem Relation Changed Hook

By far, the most complicated logic is in hooks/shared-fs-relation-changed.  There's quite a bit of work we need to do here, as soon as we can be assured that this node has successfully mounted its shared filesystem.  There's a bit of boilerplate mount work that I borrowed from the owncloud charm.  Beyond that, there's a bit of john-specific work.  I'm downloading the aforementioned larger wordlist.  I install the target hashes, if specified in the configuration; otherwise, we just generate 10 random target passwords to try and crack.  We also symlink a bunch of john's runtime shared data into the NFS directory.  For no good reason, john expects a bunch of stuff to be in the same directory.  Of course, this code could really use some cleanup.  Here it is, not perfect, but functional: hooks/shared-fs-relation-changed:

#!/bin/bash
set -eu

remote_host=`relation-get private-address`
export_path=`relation-get mountpoint`
mount_options=`relation-get options`
fstype=`relation-get fstype`
DIR="/var/lib/john"

if [ -z "${export_path}" ]; then
    juju-log "remote host not ready"
    exit 0
fi

local_mountpoint="$DIR"

create_local_mountpoint() {
    juju-log "creating local mountpoint"
    umask 022
    mkdir -p $local_mountpoint
    chown -R ubuntu:ubuntu $local_mountpoint
}
[ -d "${local_mountpoint}" ] || create_local_mountpoint

share_already_mounted() {
    `mount | grep -q $local_mountpoint`
}

mount_share() {
    for try in {1..3}; do
        juju-log "mounting share"
        [ ! -z "${mount_options}" ] && options="-o ${mount_options}" || options=""
        mount -t $fstype $options $remote_host:$export_path $local_mountpoint \
            && break

        juju-log "mount failed: ${local_mountpoint}"
        sleep 10

    done
}

download_passwords() {
    if [ ! -s $DIR/passwords.txt ]; then
        # Grab a giant dictionary of passwords, 20MB, 2M passwords
        juju-log "Downloading password dictionary"
        cd $DIR
        # http://www.breakthesecurity.com/2011/12/large-password-list-free-download.html
        wget http://dazzlepod.com/site_media/txt/passwords.txt
        juju-log "Done downloading password dictionary"
    fi
}

install_target_hashes() {
    if [ ! -s $DIR/target_hashes ]; then
        target_hashes=$(config-get target_hashes)
        if [ -n "$target_hashes" ]; then
            # Install the user's supplied hashes
            echo "$target_hashes" > $DIR/target_hashes
        else
            # Otherwise, grab some random ones
            i=0
            for p in $(shuf -n 10 $DIR/passwords.txt); do
                # http://openwall.info/wiki/john/Generating-test-hashes
                printf "user${i}:%s\n" $(mkpasswd -m md5 $p) >> $DIR/target_hashes
                i=$((i+1))
            done
        fi
    fi
    for i in /usr/share/john/*; do
        ln -sf $i /var/lib/john
    done
}

apt-get -qqy install rpcbind nfs-common
share_already_mounted || mount_share
download_passwords
install_target_hashes

Deploying the Service

If you're still with me, we're ready to deploy this service and try cracking some passwords!  We need to bootstrap our environment, and deploy the stock nfs charm.  Next, branch my charm's source code, and deploy it.  I deployed it here across a whopping 18 units!  I currently have a quota of 20 small instances that I can run on our private OpenStack.  Two of those instances are used by the Juju bootstrap node and by the NFS server.  So the other 18 will be NFS clients running john processes.

juju bootstrap
juju deploy nfs
bzr branch lp:~kirkland/+junk/john precise
juju deploy -n 18 --repository=precise local:precise/john
juju add-relation john nfs
juju status

Once everything is up and ready, running and functional, my status looks like this:

machines:
  "0":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.230
    instance-id: 98090098-2e08-4326-bc73-22c7c6879b95
    series: precise
  "1":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.7
    instance-id: 449c6c8c-b503-487b-b370-bb9ac7800225
    series: precise
  "2":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.193
    instance-id: 576ffd6f-ddfa-4507-960f-3ac2e11ea669
    series: precise
  "3":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.215
    instance-id: 70bfe985-9e3f-4159-8923-60ab6d9f7d43
    series: precise
  "4":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.221
    instance-id: f48364a9-03c0-496f-9287-0fb294bfaf24
    series: precise
  "5":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.223
    instance-id: 62cc52c4-df7e-448a-81b1-5a3a06af6324
    series: precise
  "6":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.231
    instance-id: f20dee5d-762f-4462-a9ef-96f3c7ab864f
    series: precise
  "7":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.239
    instance-id: 27c6c45d-18cb-4b64-8c6d-b046e6e01f61
    series: precise
  "8":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.240
    instance-id: 63cb9c91-a394-4c23-81bd-c400c8ec4f93
    series: precise
  "9":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.242
    instance-id: b2239923-b642-442d-9008-7d7e725a4c32
    series: precise
  "10":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.249
    instance-id: 90ab019c-a22c-41d3-acd2-d5d7c507c445
    series: precise
  "11":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.252
    instance-id: e7abe8e1-1cdf-4e08-8771-4b816f680048
    series: precise
  "12":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.254
    instance-id: ff2b6ba5-3405-4c80-ae9b-b087bedef882
    series: precise
  "13":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.255
    instance-id: 2b019616-75bc-4227-8b8b-78fd23d6b8fd
    series: precise
  "14":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.1
    instance-id: ecac6e11-c89e-4371-a4c0-5afee41da353
    series: precise
  "15":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.3
    instance-id: 969f3d1c-abfb-4142-8cc6-fc5c45d6cb2c
    series: precise
  "16":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.4
    instance-id: 6bb24a01-d346-4de5-ab0b-03f51271e8bb
    series: precise
  "17":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.5
    instance-id: 924804d6-0893-4e56-aef2-64e089cda1be
    series: precise
  "18":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.11
    instance-id: 5c96faca-c6c0-4be4-903e-a6233325caec
    series: precise
  "19":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.15
    instance-id: 62b48da2-60ea-4c75-b5ed-ffbb2f8982b5
    series: precise
services:
  john:
    charm: local:precise/john-3
    exposed: false
    relations:
      shared-fs:
      - nfs
      workers:
      - john
    units:
      john/0:
        agent-state: started
        agent-version: 1.11.0
        machine: "2"
        public-address: 10.99.60.193
      john/1:
        agent-state: started
        agent-version: 1.11.0
        machine: "3"
        public-address: 10.99.60.215
      john/2:
        agent-state: started
        agent-version: 1.11.0
        machine: "4"
        public-address: 10.99.60.221
      john/3:
        agent-state: started
        agent-version: 1.11.0
        machine: "5"
        public-address: 10.99.60.223
      john/4:
        agent-state: started
        agent-version: 1.11.0
        machine: "6"
        public-address: 10.99.60.231
      john/5:
        agent-state: started
        agent-version: 1.11.0
        machine: "7"
        public-address: 10.99.60.239
      john/6:
        agent-state: started
        agent-version: 1.11.0
        machine: "8"
        public-address: 10.99.60.240
      john/7:
        agent-state: started
        agent-version: 1.11.0
        machine: "9"
        public-address: 10.99.60.242
      john/8:
        agent-state: started
        agent-version: 1.11.0
        machine: "10"
        public-address: 10.99.60.249
      john/9:
        agent-state: started
        agent-version: 1.11.0
        machine: "11"
        public-address: 10.99.60.252
      john/10:
        agent-state: started
        agent-version: 1.11.0
        machine: "12"
        public-address: 10.99.60.254
      john/11:
        agent-state: started
        agent-version: 1.11.0
        machine: "13"
        public-address: 10.99.60.255
      john/12:
        agent-state: started
        agent-version: 1.11.0
        machine: "14"
        public-address: 10.99.61.1
      john/13:
        agent-state: started
        agent-version: 1.11.0
        machine: "15"
        public-address: 10.99.61.3
      john/14:
        agent-state: started
        agent-version: 1.11.0
        machine: "16"
        public-address: 10.99.61.4
      john/15:
        agent-state: started
        agent-version: 1.11.0
        machine: "17"
        public-address: 10.99.61.5
      john/16:
        agent-state: started
        agent-version: 1.11.0
        machine: "18"
        public-address: 10.99.61.11
      john/17:
        agent-state: started
        agent-version: 1.11.0
        machine: "19"
        public-address: 10.99.61.15
  nfs:
    charm: cs:precise/nfs-3
    exposed: false
    relations:
      nfs:
      - john
    units:
      nfs/0:
        agent-state: started
        agent-version: 1.11.0
        machine: "1"
        public-address: 10.99.60.7

Obtaining the Results

And now, let's monitor the results.  To do this, I'll ssh to any of the john worker nodes, move over to the shared NFS directory, and use the john -show command in a watch loop.

keep-one-running juju ssh john/0
sudo su -
cd /var/lib/john
watch john -show target_hashes

And the results...
Every 2.0s: john -show target_hashes

user:260775
user1:73832100
user2:829171kzh
user3:pf1vd4nb
user4:7788521312229
user5:saksak
user6:rongjun2010
user7:2312010
user8:davied
user9:elektrohobbi

10 password hashes cracked, 0 left

Within a few seconds, this 18-node cluster has cracked all 10 of the randomly chosen passwords from the dictionary.  That's only mildly interesting, as my laptop can do the same in a few minutes, if the passwords are already in the wordlist.  What's far more interesting is randomly generating a password, passing that as a new configuration to our running cluster, and letting it crack that instead.

Modifying the Configuration Target Hash

Let's generate a random password using apg.  We'll then need to hash this and create a string in the form of username:pwhash that john can understand.  Finally, we'll pass this to our cluster using Juju's set action.

passwd=$(apg -a 0 -n 1 -m 6 -x 6)
target=$(printf "user0:%s\n" $(mkpasswd -m md5 $passwd))
juju set john target_hashes="$target"

This was a 6-character password drawn from 52 possible characters (a-z, A-Z), so it is almost certainly not in our dictionary.  52^6 = 19,770,609,664, or about 19 billion letter combinations we need to test.  According to the john -test command, a single one of my instances can test about 12,500 MD5 hashes per second.  So with a single instance, this would take a maximum of 52^6 / 12,500 / 60 / 60 = 439 hours, or 18 days :-)  Well, I happen to have exactly 18 instances, so we should be able to test the entire wordspace in about 24 hours.
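
A quick sanity check of that arithmetic, using shell integer math (results are truncated):

echo $((52**6))                      # 19770609664 candidate passwords
echo $((52**6 / 12500 / 3600))       # ~439 hours on a single instance
echo $((52**6 / 12500 / 3600 / 18))  # ~24 hours across 18 instances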

So I threw all 18 instances at this very problem and let it run over a weekend. And voila, we got a little lucky, and cracked the password, Uvneow, in 16 hours!

In Conclusion

I don't know if this charm will ever land in the official charm store.  That really wasn't the goal of this exercise for me.  I simply wanted to bring myself back up to speed on Juju, play with the port to Golang, experiment with OpenStack as a provider for Juju, and most importantly, write a scalable Juju charm.

This particular application, john, is actually just one of a huge class of MPI-compatible parallelizable applications that could be charmed for Juju.  The general design, I think, should be very reusable by you, if you're interested.  Between the shared file system and the keep-one-running approach, I bet you could charm any one of a number of scalable applications.  While I'm not eligible, perhaps you might consider competing for cash prizes in the Juju Charm Championship.

Happy charming,
:-Dustin

Read more
Mark Baker

Juju, the leading tool for continuous deployment, continuous integration (CI/CD), and cloud-neutral orchestration, now has a refreshed GUI with smoother workflows for integration professionals spinning up many services across clouds like Amazon EC2 and a range of public OpenStack providers. The new GUI speeds up service design – conceptual modelling of service relationships – as well as actual deployment, providing a visual map of the relationships between services.

“The GUI is now a first-class part of the Juju experience,” said Gary Poster, whose team led the work, “with an emphasis on rapid access to the collection of service charms and better visualisation of the deployment in question.” In this milestone the Juju GUI can act as a whiteboard, so a user can mock up the service orchestration they intend to create using the same Juju GUI that they will use to manage their real, live deployments. Users can experience the new interface for themselves at jujucharms.com with no need to set up software in advance.

Juju is used by organisations that are constantly deploying and redeploying collections of services. Companies focused on media, professional services, and systems integration are the heaviest users, who benefit from having repeatable best-practice deployments across a range of cloud environments.

Juju uniquely enables the reuse of shared components called ‘charms’ for common parts of a complex service. A large portfolio of existing open source components is available from a public Charm collection, and browsing that collection is built into the new GUI. Charms are easy to find and review in the GUI, with full documentation instantly accessible. Featured, recommended and popular charms are highlighted for easy discovery. Each Charm now has more detailed information including test results from all supported providers, download count, related Charms, and a Charm code quality rating. The Charm collection includes both certified, supported Charms, and a wider range of ad-hoc Charms that are published by a large community of contributors.

The simple browser-based interface makes it easy to find reusable open source charms that define popular services like Hadoop, Storm, Ceph, OpenStack, MySQL, RabbitMQ, MongoDB, Cassandra, Mediawiki and WordPress. Information about each service, such as configuration options, is immediately available, and the charms can then be dragged and dropped directly on a canvas where they can be connected to other services, deployed and scaled. It’s also possible to export these service topologies into a human-readable and -editable format that can be shared within a team or published as a reference architecture for that deployment.
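
As a rough illustration, an exported topology is a short, human-readable YAML description in the style of a juju-deployer bundle (the exact keys produced by the GUI's export may differ; this sketch only shows the idea):

wordpress-demo:
  series: precise
  services:
    wordpress:
      charm: cs:precise/wordpress
      num_units: 10
      expose: true
    mysql:
      charm: cs:precise/mysql
      num_units: 1
  relations:
    - [wordpress, mysql]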

Recent additions to the public Charm collection include OpenVPN AS, Liferay, Storm and Varnish. For developers the new GUI and Charm Browser mean that their Charms are now much more discoverable. For those taking part in the Charm Championship, it’s easier to upload their Charms and use the GUI to connect them into a full solution for entry into the competition. Submit your best Charmed solution for the possibility of winning $10,000.

The management interface for Charm authors has also been enhanced and is available at  http://manage.jujucharms.com/ immediately.

See how you can use Juju to deploy OpenStack:

The current version of Juju supports Amazon EC2, HP Cloud and many other OpenStack clouds, as well as in-memory deployment for test and dev scenarios. Juju is on track for a 1.12 release in time for Ubuntu 13.10 that will enhance scalability for very large deployments, and a 2.0 release in time for Ubuntu 14.04 LTS.

See it demoed: We’ll be showing off the new Juju GUI and charm browser at OSCON on Tuesday 23rd at 9:00AM in the Service Orchestration In the Cloud with Juju workshop.

Read more
Mark Baker

Ubuntu developer contest offers $10,000 for the most innovative charms

Developers around the world are already saving time and money thanks to Juju, and now they have the opportunity to win money too. Today marks the opening of the Juju Charm Championship, in which developers can reap big rewards for getting creative with Juju charms.

If you haven’t met Juju yet, now’s the ideal time to dive in. Juju is a service orchestration tool, a simple way to build entire cloud environments and deploy, scale and manage complex workloads using only a few commands. It takes all the knowledge of an application and wraps it up into a re-usable Juju charm, ready to be quickly deployed anywhere. And you can modify and combine charms to create a custom deployment that meets your needs.

Juju is a powerful tool, and its flexibility means it’s capable of things we haven’t even imagined yet. So we’re kicking off the Charm Championship to discover what happens when the best developers bring Juju into their clouds — with big rewards on offer.

The prizes

As well as showing off the best achievements to the community, our panel of judges will award $10,000 cash prizes to the best charmed solutions in a range of categories.

That’s not all. Qualifying participants will be eligible for a joint marketing programme with Canonical, including featured application slots on ubuntu.com,  joint webinars and more. Win the Charm Championship and your app will reach a whole new audience.

Get started today

If you’re a Juju wizard, we want to see what magic you’re already creating. If you’re not, now’s a great time to start — it only takes five minutes to get going with Juju.
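
For example, with a cloud environment already configured, a first deployment is only a handful of commands (the charm names here are from the public collection):

juju bootstrap
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql
juju expose wordpress
juju status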

The Charm Championship runs until 1 October 2013, and it’s open to individuals, teams, companies and organisations. For more details and full competition rules, visit the Charm Championship page.


Read more
Michael

Have you ever wished you could just declare the installed state of your juju charm like this?

deploy_user:
    group.present:
        - gid: 1800
    user.present:
        - uid: 1800
        - gid: 1800
        - createhome: False
        - require:
            - group: deploy_user

exampleapp:
    group.present:
        - gid: 1500
    user.present:
        - uid: 1500
        - gid: 1500
        - createhome: False
        - require:
            - group: exampleapp


/srv/{{ service_name }}:
    file.directory:
        - group: exampleapp
        - user: exampleapp
        - require:
            - user: exampleapp
        - recurse:
            - user
            - group


/srv/{{ service_name }}/{{ instance_type }}-logs:
    file.directory:
        - makedirs: True

While writing charms for Juju a long time ago, one of the things that I struggled with was testing the hook code – specifically the install hook code where the machine state is set up (i.e. packages installed, directories created with correct permissions, config files set up etc.). Often the test code would be fragile – at best you can patch some attributes of your module (like “code_location = ‘/srv/example.com/code'”) to a tmp dir and test the state correctly, but at worst you end up testing the behaviour of your code (i.e. that os.mkdir was called with the correct user/group etc.). Either way, it wasn’t fun to write and iterate those tests.

But support has improved over the past year with the charmhelpers library. And recently I landed a branch adding support for declaring saltstack states in yaml, like the above example. That means that the install hook of your hooks.py can be reduced to something like:

import sys

import charmhelpers.core.hookenv
import charmhelpers.payload.execd
import charmhelpers.contrib.saltstack


hooks = charmhelpers.core.hookenv.Hooks()


@hooks.hook()
def install():
    """Setup the machine dependencies and installed state."""
    charmhelpers.contrib.saltstack.install_salt_support()
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/dependencies.yaml')
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/installed.yaml')


# Other hooks...

if __name__ == "__main__":
    hooks.execute(sys.argv)

…letting you focus on testing and writing the actual hook functionality (like relation-set calls etc.). I’d like to add some test helpers that will automatically check the syntax of the state yaml files and template rendering output, but haven’t yet.

Hopefully we can add similar support for puppet and Ansible soon too, so that the charmer can choose the tools they want to use to declare the local machine state.

A few other things that I found valuable while writing my charm:

  • Use a branch for charmhelpers – this way you can make improvements to the library as you develop and not be dependent on your changes landing straight away to deploy (thanks Sidnei – I think I just copied that idea from one of his charms). The easiest way that I found for that was to install the branch into mycharm/lib so that it’s included in both dev and when you deploy (with a small snippet in your hooks.py).
  • Make it easy to deploy your local charm from the branch… the easiest way I found was a link-test-juju-repo make target – I’m not sure what other people do here?
  • In terms of writing actual hook functionality (like relation-set events etc), I found the easiest way to develop the charm was to iterate within a debug-hook session. Something like:
    1. write new test+code then juju upgrade-charm or add-relation
    2. run the hook and if it fails…
    3. fix and test right there within the debug-hook
    4. put the code back into my actual charm branch and update the test
    5. restore the system state in debug hook
    6. then juju upgrade-charm again to ensure it works, if it fails, iterate from 3.
  • Use the built-in support of template rendering from saltstack for rendering any config files that you need – see the sketch below.
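
For instance, a configuration file can be rendered through salt's jinja templating with a state along these lines (the paths, template name and context values here are hypothetical):

/srv/exampleapp/exampleapp.conf:
    file.managed:
        - source: salt://templates/exampleapp.conf
        - template: jinja
        - user: exampleapp
        - group: exampleapp
        - mode: 644
        - context:
            listen_port: 8080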

I don’t think I’d really appreciated the beauty of what juju is doing until, after testing my charm locally together with a gunicorn charm and a solr backend, I then set up a config using juju-deployer to create a full stack with an Apache front-end, a cache load balancer for multiple squid caches, a load balancer in front of potentially multiple instances of my charm’s wsgi app, and then a back-end load balancer between my app and the (multiple) solr backends… and it just works.


Filed under: juju, python, testing

Read more
roaksoax

For a while I have been wanting to write about MAAS and how it can easily deploy workloads (especially OpenStack) with Juju, and the time has finally come. This will be the first in a series of posts where I’ll provide an overview of how to quickly get started with MAAS and Juju.

What is MAAS?

I think that MAAS does not require an introduction, but if you really need to know, this awesome video provides a far better explanation than the one I can give in this blog post.

http://youtu.be/J1XH0SQARgo

 

Components and Architecture

MAAS has been designed in such a way that it can be deployed in different architectures and network environments. MAAS can be deployed in either a single-node or multi-node architecture, which allows MAAS to scale to meet your needs. It has two basic components: the MAAS Region Controller and the MAAS Cluster Controller.

MAAS Architectures

Region Controller

The MAAS Region Controller is the component that users interface with, and it controls the Cluster Controllers. It hosts the WebUI and API, as well as the MAAS meta-data server for cloud-init and the DNS server. The Region Controller also runs an rsyslogd server to log the installation process, and a proxy (squid-deb-proxy) that is used to cache Debian packages. The preseeds used for the different stages of the process are also stored here.

Cluster Controller

The MAAS Cluster Controller only interfaces with the Region Controller and is in charge of provisioning in general. The Cluster Controller is where the TFTP and DHCP server(s) are located, and where both the PXE files and ephemeral images are stored. It is also the Cluster Controller’s job to power the managed nodes on and off (if configured).

The Architecture

As you can see in the image above, MAAS can be deployed on a single node or across multiple nodes. The way MAAS has been designed makes it highly scalable, allowing you to add more Cluster Controllers, each managing a different pool of machines. A single-node scenario can become a multi-node scenario by simply adding more Cluster Controllers. Each Cluster Controller has to register with the Region Controller, and each can be configured to manage a different network. The intent is that each Cluster Controller manages a different pool of machines on a different network (for provisioning), allowing MAAS to manage hundreds of machines. This is completely transparent to users, because MAAS presents the machines as a single pool, all of which can be used for deploying and orchestrating your services with juju.

How Does It Work?

MAAS has 3 basic stages. These are Enlistment, Commissioning and Deployment, which are explained below:

MAAS Process

Enlistment

The enlistment process is the process by which a new machine is registered with MAAS. When a new machine is started, it will obtain an IP address and PXE boot from the MAAS Cluster Controller. The PXE boot process will instruct the machine to load an ephemeral image that runs and performs an initial discovery process (via a preseed fed to cloud-init). This discovery process obtains basic information such as network interfaces, MAC addresses and the machine’s architecture. Once this information is gathered, a request to register the machine is made to the MAAS Region Controller. Once this happens, the machine will appear in MAAS in the Declared state.

Commissioning

The commissioning process is the process where MAAS collects hardware information, such as the number of CPU cores, RAM, disk size, etc., which can later be used as constraints. Once the machine has been enlisted (Declared state), it must be accepted into MAAS in order for the commissioning process to begin and for it to become ready for deployment. For example, in the WebUI, an “Accept & Commission” button will be present. Once the machine is accepted into MAAS, it will PXE boot from the MAAS Cluster Controller and will be instructed to run the same ephemeral image (again). This time, however, the commissioning process will be instructed to gather more information about the machine, which will be sent back to the MAAS Region Controller (via cloud-init, from the MAAS meta-data server). Once this process has finished, the machine information will be updated and it will change to the Ready state. This status means that the machine is ready for deployment.

Deployment

Once the machines are in the Ready state, they can be used for deployment. Deployment can happen with either juju or the maas-cli (or even the WebUI). The maas-cli will only allow you to install Ubuntu on the machine, while juju will not only deploy Ubuntu on it, but will also allow you to orchestrate services. When a machine has been deployed, its state will change to Allocated to <user>. This state means that the machine is in use by the user who requested its deployment.
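
To make that concrete, deploying through juju against a MAAS environment looks roughly like this (the MAAS server URL and API key are placeholders; the maas stanza goes under environments in ~/.juju/environments.yaml):

maas:
  type: maas
  maas-server: 'http://your-region-controller/MAAS'
  maas-oauth: '<your-MAAS-API-key>'
  admin-secret: 'any-secret-you-like'
  default-series: precise

juju bootstrap -e maas
juju deploy mysql -e maas
juju status -e maas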

Releasing Machines

Once a user doesn’t need the machine anymore, it can be released and its status will change from Allocated to <user> back to Ready. This means that the machine will be turned off and will be made available for later use.

But… How do Machines Turn On/Off?

Now, you might be wondering how the machines are turned on and off, and who is in charge of that. MAAS can manage power devices, such as IPMI/iLO, Sentry Switch CDUs, or even virsh. By default, we expect that all the machines controlled by MAAS have IPMI/iLO cards. If your machines do, MAAS will attempt to auto-detect and auto-configure your IPMI/iLO cards during the Enlistment and Commissioning processes. Once the machines are accepted into MAAS (after enlistment), they will be turned on automatically and commissioned (that is, if IPMI was discovered and configured correctly). This also means that every time a machine is deployed, it will be turned on automatically.

Note that MAAS not only handles physical machines, it can also handle Virtual Machines, hence the virsh power management type. However, you will have to manually configure the details in order for MAAS to manage these virtual machines and turn them on/off automatically.

Read more
Darryl Weaver

Introduction

In this article I will show you how to set up a new WordPress blog on Amazon EC2 public cloud and then migrate it to HP Public Cloud using Juju Jitsu, from Canonical, the company behind Ubuntu.

Prerequisites

  • Amazon EC2 Account
  • HP Public Cloud Account
  • Ubuntu Desktop or Server 12.04 or above with root or sudo access

Juju Environment Setup

First of all we need to install Juju and Jitsu from the PPA archive to make them available for use, so add the PPA to the installation sources:

sudo apt-get -y install python-software-properties
sudo add-apt-repository ppa:juju/pkgs

Now update apt and install juju, charm-tools and juju-jitsu

sudo apt-get update
sudo apt-get install juju charm-tools juju-jitsu

You will now need to set up your ~/.juju/environments.yaml file for Amazon EC2, see here: https://juju.ubuntu.com/get-started/amazon/

and then for HP cloud also, so see here:

https://juju.ubuntu.com/get-started/hp-cloud/

So you should end up with an environments.yaml file that will look something like this:

default: amazon
environments:
  amazon:
    type: ec2
    control-bucket: juju-b1bb8e0313d14bf1accb8a198a389eed
    admin-secret: [any-unique-string-shared-among-admins-u-like]
    access-key: [PUT YOUR ACCESS KEY HERE]
    secret-key: [PUT YOUR SECRET KEY HERE]
    default-series: precise
    juju-origin: ppa
    ssl-hostname-verification: true
  hpcloud:
    juju-origin: ppa
    control-bucket: juju-hpc-az1-cb
    admin-secret: [any-unique-string-shared-among-admins-u-like]
    default-image-id: [8419]
    region: az-1.region-a.geo-1
    project-name: [your@hp-cloud.com-tenant-name]
    default-instance-type: standard.small
    auth-url: https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/
    auth-mode: keypair
    type: openstack
    default-series: precise
    access-key: [PUT YOUR ACCESS KEY HERE]
    secret-key: [PUT YOUR SECRET KEY HERE]

Deploying WordPress to Amazon EC2

Now we need to bootstrap the Amazon EC2 environment.

juju bootstrap -e amazon

Check it finishes bootstrapping correctly after a few minutes using:

juju status -e amazon

Which should output something like this:

machines:
  0:
    agent-state: running
    dns-name: ec2-50-17-169-153.compute-1.amazonaws.com
    instance-id: i-78d4781b
    instance-state: running
services: {}

To give a good view of what is going on and to also allow modification from a web control panel we can deploy juju-gui to the bootstrap node, using juju-jitsu:

jitsu deploy-to 0 juju-gui -e amazon

juju expose juju-gui -e amazon

This will take a few minutes to deploy.
Once complete, the output of “juju status -e amazon” should look something like this:

machines:
  0:
    agent-state: running
    dns-name: ec2-50-17-169-153.compute-1.amazonaws.com
    instance-id: i-78d4781b
    instance-state: running
services:
  juju-gui:
    charm: cs:precise/juju-gui-3
    exposed: true
    relations: {}
    units:
      juju-gui/0:
        agent-state: started
        machine: 0
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: ec2-50-17-169-153.compute-1.amazonaws.com

Then use the “public-address” entry in your web browser to connect to juju-gui and see what is going on visually.

Juju-gui currently works well on Google Chrome or Chromium. It uses a self-signed SSL certificate, so you will be presented with a security warning when you connect, which you can safely ignore before proceeding.

Initially you should see the login page, with the username already filled in as “admin” and the password is the same as your password for the admin-secret in your ~/.juju/environments.yaml file.

Once logged in you should see a page that looks like this showing that only juju-gui is deployed to your environment, so far:

Juju-gui screenshot

First login

First we need to deploy a MySQL Database to store your blog’s data:

juju deploy mysql -e amazon

This will take a few minutes to deploy, so go ahead and also deploy a wordpress application server:

juju deploy wordpress -e amazon

While deployment continues you should see them appear in Juju-gui too:

Juju gui with wordpress and mysql deployed

Showing MySQL and WordPress deployed


Once deployment is complete you can check the name of the new servers with:

juju status -e amazon

Which should output something like this:

machines:
  0:
    agent-state: running
    dns-name: ec2-50-17-169-153.compute-1.amazonaws.com
    instance-id: i-78d4781b
    instance-state: running
  1:
    agent-state: running
    dns-name: ec2-23-22-68-159.compute-1.amazonaws.com
    instance-id: i-3a9bd554
    instance-state: running
  2:
    agent-state: running
    dns-name: ec2-54-234-249-131.compute-1.amazonaws.com
    instance-id: i-f9e56696
    instance-state: running
services:
  juju-gui:
    charm: cs:precise/juju-gui-3
    exposed: true
    relations: {}
    units:
      juju-gui/0:
        agent-state: started
        machine: 0
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: ec2-50-17-169-153.compute-1.amazonaws.com
  mysql:
    charm: cs:precise/mysql-16
    relations: {}
    units:
      mysql/0:
        agent-state: started
        machine: 1
        public-address: ec2-23-22-68-159.compute-1.amazonaws.com
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: false
    relations:
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 2
        public-address: ec2-54-234-249-131.compute-1.amazonaws.com

Now we need to add a relationship between the wordpress application server and the MySQL database server. This will set up the SQL backend database for your blog and configure the usernames and passwords and database tables needed, all automatically.

juju add-relation wordpress mysql -e amazon

Finally, we need to expose the wordpress instance so you can connect to it using your web browser:

juju expose wordpress -e amazon

Now your Juju gui should look like this:
Juju Gui showing relations

Setting up WordPress and adding your first post

Then connect to the wordpress server using your web browser, by using the public-address from the status output above, i.e. http://ec2-54-234-249-131.compute-1.amazonaws.com/
This will then show you the initial set up page for your wordpress blog, like this:

You will need to enter some configuration details such as a site name and password:

After you have saved the new details you will get a confirmation page:

Confirmation Page

So, click on Login to login to your new blog on Amazon EC2.

Now in order to make sure we are testing a live blog we need to enter some data. So, let’s post a blog entry.
First click on New Post on the top left menu:

Now, type in the details of your new blog post and click on Publish on the top right:

Now you have a new blog on Amazon EC2 with your first blog entry posted.

Migrating from Amazon EC2 to HP Cloud

So, now that we have a live blog running on Amazon EC2, it is time to migrate to HP Cloud.

We could just run the commands above with the “-e hpcloud” switch to deploy the services to HP Cloud and then migrate the data.
But a more satisfying way is to use Juju-jitsu again to export the current layout from the Amazon EC2 environment and then replicate that on HP Cloud.

So, we can use:

jitsu export -e amazon > wordpress-deployment.json

This will save a file in JSON format detailing the deployed services and their relationships.

First we need to bootstrap our HP Cloud environment:

juju bootstrap -e hpcloud

This will take a few minutes to deploy a new instance and install the Juju bootstrap node.
Once the bootstrap is complete you should be able to check the status by using:

juju status -e hpcloud

The output should be something like this:

machines:
  0:
    agent-state: running
    dns-name: 15.185.102.93
    instance-id: 1064649
    instance-state: running
services: {}

So, let us now deploy the replica of the environment on Amazon to HP:

jitsu import -e hpcloud wordpress-deployment.json

This will then deploy the replicated environment from Amazon EC2. You can check progress with:

juju status -e hpcloud

When completed your output should be as follows:


So we now have a replica of the environment from Amazon EC2 on HP Cloud, but we have no data, yet.
We also need to copy the SQL data from the existing Amazon EC2 MySQL database to the HP Cloud MySQL database to get all your live blog data across to the new environment.
Let’s login to the MySQL DB node on Amazon EC2:

juju ssh mysql/0 -e amazon

Now that we are logged in, we can get the root password for the database:

sudo cat /var/lib/juju/mysql.passwd

This will output the root password for the MySQL DB so you can take a copy of the data with:

sudo mysqldump -p wordpress > wordpress.sql

When prompted, copy and paste the password that you recovered in the previous step.

Now exit the login using:

exit

Now copy the SQL backup file from Amazon EC2 to your local machine:

juju scp mysql/0:wordpress.sql ./ -e amazon

This will download the wordpress.sql file.
You will now need to know your new wordpress server IP address for HP Cloud.
You can find this from juju status:

juju status wordpress -e hpcloud

The output should look like this:

machines:
  3:
    agent-state: running
    dns-name: 15.185.102.121
    instance-id: 1064677
    instance-state: running
services:
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: false
    relations:
      db:
      - mysql
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 3
        public-address: 15.185.102.121

In order to fix your WordPress server name you will have to replace your Amazon EC2 WordPress public-address with your HP Cloud WordPress server public-address.
So, you will need to do a find and replace in the wordpress.sql file as follows:

sed -e 's/ec2-54-234-249-131.compute-1.amazonaws.com/15.185.102.121/g' wordpress.sql > wordpress-hp.sql

Obviously you will need to customise the command above to use your own server addresses from Amazon and HP Cloud.
NB: This step can be problematic; if you need more detailed information on changing the server name of a WordPress installation and moving servers, see the more detailed instructions here:
http://codex.wordpress.org/Moving_WordPress
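
If the sed replacement misses anything (WordPress stores some URLs inside serialised option data), another option is to update the site URL options directly on the HP Cloud database after the import step below. A rough sketch, assuming the default ‘wp_’ table prefix:

sudo mysql -p wordpress -e "UPDATE wp_options SET option_value='http://15.185.102.121' WHERE option_name IN ('siteurl','home');"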

Now upload to your new HP Cloud MySQL server the database backup file, fixed with the new server public-address:

juju scp wordpress-hp.sql mysql/0: -e hpcloud

Now let’s import the data into your wordpress database on HP Cloud.
First we need to log in to the database server, as before:

juju ssh mysql/0 -e hpcloud

Now let’s get the root password for the Database:

sudo cat /var/lib/juju/mysql.passwd

Now we can import the data using:

sudo mysql -p wordpress < wordpress-hp.sql

And when you are prompted for the password enter the password you retrieved in the previous step, and then exit.

Finally you will still need to expose the wordpress instance on HP Cloud to the outside world using:

juju expose wordpress -e hpcloud

Now connect to your new wordpress blog migrated to HP Cloud from Amazon by connecting to the public-address of the wordpress node.
You can find the address from the output of juju status as follows:

juju status wordpress -e hpcloud

The output should look like this:

machines:
  3:
    agent-state: running
    dns-name: 15.185.102.121
    instance-id: 1064677
    instance-state: running
services:
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: true
    relations:
      db:
      - mysql
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 3
        open-ports:
        - 80/tcp
        public-address: 15.185.102.121

Now connect to http://15.185.102.121/ and your blog is now hosted on HP Cloud.

Read more
Mark Baker

As clouds for IT infrastructure become commonplace, admins and devops need quick, easy ways of deploying and orchestrating cloud services.  As we mentioned in October, Ubuntu now has a GUI for Juju, the service orchestration tool for server and cloud. In this post we wanted to expand a bit more on how Juju makes it even easier to visualise and keep track of complex cloud environments.

Juju provides the ability to rapidly deploy cloud services on OpenStack, HP Cloud, AWS and other platforms using a library of 100 ‘charms’ which cover applications from node.js to Hadoop. Juju GUI makes the Juju command line interface even easier, giving the ability to deploy, manage and track progress visually as your cloud grows (or shrinks).

Juju GUI is easy and totally intuitive.  To start, you simply search for the service you want in the Juju GUI charm search bar (top right of the screen).  In this case I want to deploy WordPress to host my blog site.  I have the chance to alter the WordPress settings, and with a few clicks the service is ready.  It’s displayed as an icon on the GUI.

I then want a MySQL service to go alongside.  Again I search for the charm, set the parameters (or accept the defaults) and away we go.

It’s even easier to build the relations between these services with a point and a click. Juju knows that the relationship needs a suitable database link.

I can expose WordPress to users by setting the expose flag, at the bottom of the settings screen, to on. To scale up WordPress I can add more units, creating identical copies of the WordPress deployment, including any relationships.  I have selected ten in total, and this shows in the center of the WordPress icon.

And that’s it.

For a simple cloud, the Juju command line or other tools might be sufficient.  But as your cloud grows, the Juju GUI will be a wonderful way not only to provision and orchestrate services, but more importantly to validate and check that you have the correct links and relationships.  It’s an ideal way to replicate and scale cloud services as you need.

For more details of Juju, go to juju.ubuntu.com.  To try Juju GUI for yourself, go to http://uistage.jujucharms.com:8080/

Read more
Matt Fischer

Getting Juju With It

At the UDS in Copenhagen I finally had time to attend a session on Juju Charms. I knew the theory of Juju, which is that it allows you to easily deploy and link services on public clouds, locally, or even on bare metal, but I never had time to try it out. The Charm School (registration required) session in Copenhagen really showed me the power of what Juju can give you. For example, when I first set up my blog, I had to find a webhost, get an ssh account, download WordPress, install it and its dependencies, set up MySQL, configure WordPress, debug why they weren’t communicating, etc. It was super annoying and took way too long. Now, imagine you want to set up ten blogs, or ten instances of couchdb, or one hundred, or one thousand, and it quickly becomes untenable.  With juju, setting up a blog is as simple as:

  • juju deploy wordpress
  • juju deploy mysql
  • juju add-relation wordpress mysql
  • juju expose wordpress

A few minutes later, and I have a functioning WordPress install. For more complex setups and installs Juju helps to manage the relationships between charms and sends events that the charms react to. This makes it easy to add and remove services like haproxy and memcached to an existing webapp. This interaction between charms implies that the more charms that are available the more useful they all become; the network effect applies to charms!

So after I got home, Charm School had left me energized and ready to write a charm, but I didn’t have any great ideas, until I remembered an app that I’ve used before called Tracks. Tracks is a GTD app, in other words, a fancy todo list. I’d used it hosted before, but my free host went offline and I lost all my to do items. Hosting my own would be much safer. So I started working on a Tracks charm.

If you need an idea for a charm, think about what tools you use that you have to setup, what software have you installed and configured recently? If you need an idea and nothing stands out, you can check out the list of “Charm Needed” bugs. Actually you should check that list regardless to make sure nobody else is already writing the same one.

With an idea in hand, I sat down to write my Charm. Step one is the documentation, most of which was contained on this page, “Writing a Charm”. I fully expected to spend three weeks learning a new programming language with arcane black magic commands, but I was pleasantly surprised to learn that you can write a charm in any language you want. Most charms seem to be shell scripts or Python and my charm was simple enough that I wrote it in bash.

During the process of charm writing you may have some questions, and there’s plenty of help to be had. First, the examples that are contained in the juju trunk are OLD and I wouldn’t recommend you follow them. They are missing things like README files and don’t expose http interfaces, which was requested for my charm. Instead I’d recommend you pull the wordpress, mysql, and drupal charms from the charm store. If the examples aren’t enough, you can always ask in #juju on freenode or use askubuntu.com. Once your charm works, you can submit it for review. You’ll probably learn a lot during the review, every person I’ve talked to has.

Finally after a bit of work off and on, my charm was done! I submitted it for review, made a few fixes and it made it into the store.

I can now have a Tracks instance up and running in just a few minutes.
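
Assuming the charm kept the name “tracks” in the store and uses a MySQL backend (check the charm’s README for the exact relation), bringing one up looks roughly like:

juju deploy tracks
juju deploy mysql
juju add-relation tracks mysql
juju expose tracks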

I’ve barely scratched the surface here with this post, but I hope someone will be energized to go investigate charms and write one. Charms do not use black magic and you don’t need to learn a new language to write one. Help is available if you need it and we’d love to have your contributions.
If you go write a charm please comment here and let me know!

Read more
Mark Baker

Hardened sysadmins and operators often spurn graphical user interfaces (GUIs) as being slow, cumbersome, unscriptable and inflexible. GUIs are for wimps, right?

Well, I’m not going to argue – and certainly, command line interfaces (CLIs) have their benefits, for those comfortable using them. But we are seeing a pronounced change in the industry, as developers start to take a much greater interest in the deployment and operation of flexible, elastic services in scale out or cloud environments. Whilst many of these new ‘devops’ are happy with a CLI, others want to be able to visualise their environment. In the same way that IDEs are popular, being able to see a representation of the services that are running and how they are related can prove extremely valuable. The same goes for launching new services or removing existing ones.

This is why, last week, as part of the new Ubuntu 12.10 release, we announced a GUI for Juju, the Ubuntu service orchestration tool for server and cloud.
The new Juju GUI does all these things and more. For those of you unfamiliar with it, Juju uses a service definition file known as a ‘charm’. Much of the magic in Juju comes from the collective expertise that has gone into developing the charm. It enables you to deploy complex services without intimate knowledge of the best practice associated with that service. Instead, all that deployment expertise is encapsulated in the charm.
Now, with the Juju GUI, it gets even easier. You can select services from a library of nearly 100 charms, covering applications from node.js to Hadoop. And you can deploy them live on any of the providers that Juju supports – OpenStack, HP Cloud, Amazon Web Services and Ubuntu’s Metal-as-a-Service. You can add relations between services while they are running, explore the load on them, upgrade them or destroy them. At the OpenStack Summit in San Diego this year, Mark Shuttleworth even used it to upgrade a running* OpenStack Cloud from Essex to Folsom.
Since the Juju GUI was first shown, the interest and feedback has been tremendous. It certainly seems to make the magic of Juju – and what it can do for people – easier to see. If you haven’t seen it already, check out the screen shots below or visit http://uistage.jujucharms.com:8080/

Because as we’ve always known, a picture really is worth a thousand words.

 

The Juju GUI

*Running on Ubuntu Server, obviously.

Read more
Robert Ayres

Juju Java Cluster – Part 3

In my previous post, we added Memcached to our cluster.  In this post, I’ll write a bit more about the Tomcat configuration options that are available including JMX monitoring.  I’ll also show how easy it is to enable session clustering.

Java cluster with JMX and session clustering

Configuration and Monitoring

All charms come with many options available for configuration.  Each is selected to allow the same tuning you would typically perform on a manually deployed machine.  Configuration options are shown per charm when browsing the Charm Store (jujucharms.com/charms/precise).  The Tomcat charm provides numerous options.  For example, to tweak the JVM options of a running service:

juju set tomcat "java_opts=-Xms768M -Xmx1024M"

This sets the Java heap to a minimum and maximum of 768MB and 1024MB respectively.  If you are debugging an application, you may also set:

juju set tomcat "java_opts=-Xms768M -Xmx1024M -XX:-HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=webapps"

This creates a ‘.hprof’ Java heap dump, which you can inspect with VisualVM or jhat, each time an OutOfMemoryError occurs.
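
For example, after copying a dump off the unit, you could browse it locally with jhat (the filename here is hypothetical):

jhat -port 7000 java_pid1234.hprof

Then point a browser at http://localhost:7000 to explore the heap.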

To open a remote debugger:

juju set tomcat debug_enabled=True

This will open a JDWP debugger on port 8000 that you can use to step through code from Eclipse, Netbeans etc.  (Note: the debugger is never exposed to the Internet, so you need to access it through an ssh tunnel – ‘ssh -L 8000:localhost:8000 ubuntu@xxx.compute.amazonaws.com’, then connect your IDE to localhost port 8000).

A useful part of the JVM is JMX monitoring.  To enable JMX:

juju set tomcat jmx_enabled=True
juju set tomcat "jmx_control_password=<password>"
juju set tomcat "jmx_monitor_password=<password>"

This will start a remote JMX listener on ports 10001 and 10002 and set passwords for the ‘monitorRole’ and ‘controlRole’ users (not setting a password disables that account).  You can now open VisualVM or JConsole to connect to the remote JMX instance (screenshot below).  (Note: JMX is never exposed to the Internet, so you need to access it through an ssh tunnel – ‘ssh -L 10001:localhost:10001 -L 10002:localhost:10002 ubuntu@xxx.compute.amazonaws.com’, then connect your JMX client to port 10001).  You can easily expose your own application specific MBeans for monitoring by adding them to the platform MBeanServer.

Juju JMX monitoring

Options are applied to services and to all units under a service.  It isn’t possible to apply options to a specific unit.  So if you enable debugging, you enable it for all Tomcat units.  Same with Java options.

Options may also be applied at deployment time.  For example, to use Tomcat 6 (rather than the default Tomcat 7), create a ‘config.yaml’ file containing the following:

tomcat:
  tomcat_version: tomcat6

Then deploy:

juju deploy --config config.yaml cs:~robert-ayres/precise/tomcat

All units added via ‘add-unit’ will also be Tomcat 6.

Session Clustering

Previously, we set up a Juju cluster consisting of two Tomcat units behind HAProxy.  In this configuration, HTTP sessions exist only on individual Tomcat units.  For many production setups, load balancer sticky sessions with non-replicated sessions are the most performant choice, where HTTP sessions are either not required or are expendable in the event of unit failure.  For setups concerned about the availability of sessions, you can enable Tomcat session clustering on your Juju service, which will replicate session data between all units in the service.  Should a unit fail, any of the remaining units can pick up subsequent requests with the previous session state.  To enable session clustering:

juju set tomcat cluster_enabled=True

We have two choices of how the cluster manages membership.  The preferred choice is using multicast traffic, but as EC2 doesn’t allow this, we must use static configuration.  This is the default, but you can switch between either method by changing the value of the ‘multicast’ option.  Like everything else Juju deployed, any new units added or removed via ‘add-unit’ or ‘remove-unit’ are automatically included/excluded from the cluster membership.  This easily allows you to toggle clustering so that you can benchmark precisely what latency/throughput cost you have by using replicated sessions.
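
For example, to switch to multicast membership in an environment that supports it and then grow the cluster, something like the following should work (booleans on the command line were buggy in some juju releases, so a config.yaml may be needed, as noted in part 1):

juju set tomcat multicast=True
juju add-unit tomcat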

In summary, I’ve shown how you can tweak Tomcat configuration including enabling JMX monitoring.  We’ve also seen how to enable session clustering.  In my final post of the series, I shall show how you can add Solr indexing to your application.

Read more
Robert Ayres

Juju Java Cluster – Part 2

In my previous post, I demonstrated deploying a Juju cluster with a sample Grails application.  Let’s expand our cluster by adding Memcached (see diagram below).

Java memcached cluster

Deploy a Memcached service:

juju deploy memcached

Configure Tomcat to map Memcached location under a JNDI name:

juju set tomcat "jndi_memcached_config=param/Memcached:memcached"

This will map the ‘memcached’ service under the JNDI name ‘param/Memcached’.  Whilst Memcached is deploying, you can add the relation ahead of time:

juju add-relation tomcat memcached

We will use the excellent Java Memcached library Spy Memcached (code.google.com/p/spymemcached/) in our application.  Download the ‘spymemcached-x.x.x.jar’ and copy it to ‘juju-example/lib’.
Now edit ‘juju-example/grails-app/conf/spring/resources.groovy’ so it contains the following:

import net.spy.memcached.spring.MemcachedClientFactoryBean
import org.springframework.jndi.JndiObjectFactoryBean

beans = {

    memcachedClient(MemcachedClientFactoryBean) {
        servers = { JndiObjectFactoryBean jndi ->
            jndiName = 'param/Memcached'
            resourceRef = true
        }
    }
}

To make use of our Memcached client, let’s create a simple page counter:

(within 'juju-example' directory)
grails create-controller memcached-count

This will create ‘juju-example/grails-app/controllers/juju/example/MemcachedCountController.groovy’.  Edit it so it contains the following:

package juju.example

class MemcachedCountController {

    def memcachedClient

    def index() {
        def count = memcachedClient.incr('juju-example-count', 1, 1)
        render count
    }
}

When Memcached is deployed and associated with Tomcat, redeploy our application:

(within juju-example directory)
grails clean
grails war

(within parent directory)
cp juju-example/target/juju-example-0.1.war precise/j2ee-deployer/deploy
juju upgrade-charm --repository . juju-example

Once redeployed, you should be able to open http://xxx.compute.amazonaws.com/juju/memcachedCount and refresh the page to see an incrementing counter, stored in Memcached.

As with our datasource connection, we utilise a JNDI lookup to instantiate our Memcached client using runtime configuration provided by Juju (a space separated list of Memcached units, provided as a JNDI environment parameter).  With this structure, the developer has total control over integrating external services into their application.  If they want to use a different Memcached library, they can use the Juju configuration to instantiate a different class.

If we want to increase our cache capacity, we can add more units:

juju add-unit -n 2 memcached

This will deploy another 2 Memcached units.  Our Tomcats will update to reflect the new units and restart.
(Note: As you add Memcached units, our example counter may appear to reset as its Memcached key is hashed to another server).

We’ve added Memcached to our Juju cluster and seen how you can integrate external services within your application using JNDI values.
In my next post, I’ll write about how we can enable features of our existing cluster like JMX and utilise Tomcat session clustering.

Read more
Robert Ayres

Juju Java Cluster – Part 1

In my previous post I gave an introduction to Juju, the new deployment tool in Ubuntu 12.04 Precise Pangolin.  This post is the first of four demonstrating how you can deploy a typical Java web application into your own Juju cluster.  I’ll start the series by deploying an initial cluster of HAProxy, Tomcat and MySQL to Amazon EC2, shown in the diagram below.  You can always deploy to a different environment than EC2 such as MAAS or locally using LXC.  The Juju commands are equivalent.

Java web application cluster

For this demo I’ll build a sample application using the excellent Grails framework (grails.org).  You can of course use traditional tools of Maven, Ant, etc. to produce your final WAR file.  If you want to try the demo yourself, you’ll need to install Grails and Bazaar.

Firstly let’s demonstrate how to deploy Tomcat using Juju (jujucharms.com/~robert-ayres/precise/tomcat).

Open a terminal on any Ubuntu Precise machine and follow the instructions for bootstrapping a Juju cluster – juju.ubuntu.com/docs/getting-started.html.

With a bootstrapped cluster, let’s deploy a Tomcat service:

juju deploy cs:~robert-ayres/precise/tomcat

This will deploy a Tomcat unit under the service name ‘tomcat’.  Like the bootstrap instance, it will take a short time to launch a new instance, install Tomcat, configure defaults and start.  You can check the progress with ‘juju status’.  When deployed you should see the following output (‘machines’ information purposely removed):

services:
  tomcat:
    charm: cs:~robert-ayres/precise/tomcat-1
    relations:
      cluster:
      - tomcat
    units:
      tomcat/0:
        agent-state: started
        machine: 1
        public-address: xxx.compute.amazonaws.com

Should you wish to investigate the details of any unit you can ssh in – ‘ssh ubuntu@xxx.compute.amazonaws.com’ (Juju will have transferred your public key).

The Tomcat manager applications are installed and secured by default, requiring an admin password to be set.  We can apply configuration to Juju services using ‘juju set <service> “<key>=<value>” …’.  To set the ‘admin’ user password on our Tomcat unit:

juju set tomcat "admin_password=<password>"

Our Tomcat unit isn’t initially exposed to the Internet; we can only access it over an ssh tunnel (see the ssh ‘-L’ option).  To expose our Tomcat unit to the Internet:

juju expose tomcat

Now you should be able to open your web browser at http://xxx.compute.amazonaws.com:8080/manager and log in to Tomcat’s manager using the credentials we just set.
If we prefer our unit to run on a more traditional web port:

juju set tomcat http_port=80

After a short time of configuration you should be able to access http://xxx.compute.amazonaws.com/manager with the same credentials.
Over HTTP, our credentials aren’t transmitted securely, so let’s enable HTTPS:

juju set tomcat https_enabled=True https_port=443 [1]

Our Tomcat unit will listen for HTTPS connections on the traditional 443 port using a generated self-signed certificate (to use CA signed certificates, see the Tomcat charm README).  Now we can securely access our manager application at https://xxx.compute.amazonaws.com/manager (you need to ignore any browser warning about a self-signed certificate).  We now have a deployed Tomcat optimised and secured for production use!
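
A quick sanity check from your own machine might look like this (the -k flag tells curl to accept the self-signed certificate; substitute your unit’s address and enter the admin password when prompted):

curl -k -u admin https://xxx.compute.amazonaws.com/manager/html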

Now let’s turn our attention to evolving a simple Grails application to demonstrate further Juju abilities.

With a working Grails installation, create ‘juju-example’ application:

grails create-app juju-example

This will create your application in a directory ‘juju-example’.  Inside is a shell of a Grails application, enough for demonstration purposes.

To suit the directory layout of our deployed Tomcat, we should adjust our application to store stacktrace logs in a designated, writable directory.  Edit ‘juju-example/grails-app/conf/Config.groovy’ and inside the ‘log4j’ block add the following ‘appenders’ block:

log4j = {
    ...
    appenders {
        rollingFile name: "stacktrace", maxFileSize: 1024,
                    file: "logs/juju-example-stacktrace.log"
    }
    ...
}

To build a WAR file run:

(within 'juju-example' directory)
grails dev war

This will build a deployable WAR file ‘juju-example/target/juju-example-0.1.war’.

You have secure access to deploy WAR files directly using the Tomcat manager, but there is a better way – using the J2EE Deployer charm.

The J2EE Deployer charm is a subordinate charm that essentially provides a Juju controlled wrapper around deploying your WAR file into a Juju cluster.  This has the distinct advantage of allowing you to upgrade multiple units using a single command as is shown later.  To use the J2EE Deployer, first download a copy of the wrapper for our example application using bzr:

mkdir precise
bzr export precise/j2ee-deployer lp:~robert-ayres/charms/precise/j2ee-deployer/trunk

This will create a local copy of the wrapper under a directory ‘precise/j2ee-deployer’.  The ‘precise’ parent directory is necessary for Juju when using locally deployed charms.
Copy our war file to the ‘deploy’ directory within:

cp juju-example/target/juju-example-0.1.war precise/j2ee-deployer/deploy

Now deploy our application into Juju:

juju deploy --repository . local:j2ee-deployer juju-example

As with other charms, this will securely upload our application into S3 storage for use by any of our Juju services.  Once the deploy command returns, our application should be available within the cluster under the service name ‘juju-example’.  To deploy to Tomcat, we relate the services:

juju add-relation tomcat juju-example

Our Tomcat unit will download our application locally, stop Tomcat, deploy the application and then start Tomcat.
Issue ‘juju status’ commands to check progress.  Once deployment is complete, you can access http://xxx.compute.amazonaws.com/juju-example-0.1/ and see the default Grails welcome page (screenshot below).

juju-example application

We can use ‘juju set’ to change configuration of our application as we did with the Tomcat service.  For example, to change the deployed path to something simpler:

juju set juju-example war-config=juju-example-0.1:/

Our application will now be redeployed and Tomcat restarted, so we can access our application at http://xxx.compute.amazonaws.com/.  Now we have a deployed application!

A web application typically requires access to a RDBMS, so let’s demonstrate how we can connect our application to MySQL.
Firstly, deploy a MySQL service:

juju deploy mysql

Whilst this is deploying, we can set the configuration of the imminent relation between Tomcat and MySQL:

juju set tomcat "jndi_db_config=jdbc/JujuDB:mysql:juju:initialSize=20;maxActive=20;maxIdle=20"

This is a colon separated value that maps the requested database ‘juju’ of the ‘mysql’ service under a JNDI name of ‘jdbc/JujuDB’.  The set of values after the final colon sets DBCP connection pooling options.  Here we specify a dedicated pool of 20 connections.
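
These pooling values can be tuned like any other option; for example, a larger pool with some idle headroom might look like this (the numbers are illustrative only):

juju set tomcat "jndi_db_config=jdbc/JujuDB:mysql:juju:initialSize=5;maxActive=50;maxIdle=10"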
Once our MySQL unit is deployed, we relate our Tomcat service:

juju add-relation tomcat mysql

During this process, our Tomcat unit will request the use of database juju.  Our MySQL unit will create the database and return a set of generated credentials for Tomcat to use.  Once complete, our pooled datasource connection is available to our Tomcat application under JNDI – ‘java:comp/env/jdbc/JujuDB’.  To demonstrate its use within our application, firstly configure Grails to use JNDI for its datasource connection.  Within ‘juju-example/grails-app/conf/DataSource.groovy’, inside the ‘production’/’dataSource’ block, add ‘jndiName = “java:comp/env/jdbc/JujuDB”‘ so it reads as follows:

production {
    dataSource {
        dbCreate = "update"
        jndiName = "java:comp/env/jdbc/JujuDB"
    }
}

Next create a domain class which will serve as an example database object:

(within 'juju-example' directory)
grails create-domain-class Book

Edit ‘juju-example/grails-app/domain/juju/example/Book.groovy’ so it contains the following:

package juju.example

class Book {

    static constraints = {
    }

    String author
    String isbn
    Integer pages
    Date published
    String title
}

Now we can use Grails ‘scaffolding’ to generate pages that allow us to insert Books into our database:

grails generate-all Book

Recompile our application to produce a new WAR file:

grails clean
grails war
(Note: 'grails war' now, no 'dev' option)

Now upgrade our application in Juju:

# copy across new war file
cp juju-example/target/juju-example-0.1.war precise/j2ee-deployer/deploy
# upgrade Juju deployment
juju upgrade-charm --repository . juju-example

This will upload our revised application into S3 again and then deploy to all related services, restarting them in the process.
With our newly deployed application utilising its local JNDI datasource, we can now open our web browser at http://xxx.compute.amazonaws.com/juju/book/list and use the generated page to perform CRUD operations on our Book objects, all persisted to our MySQL database.

A key point to be made is how you should develop your application to be cloud deployable.  If the application is developed to utilise external resources via runtime lookups, the application may be deployed to any number of Juju clusters.  You can observe this yourself by adding a relation between your application and any other Tomcat services.

For this post’s finale, let’s show how we can scale Tomcat.
First, deploy the HAProxy load balancer:

juju deploy haproxy

And associate with Tomcat:

juju add-relation haproxy tomcat

Unexpose Tomcat and expose HAProxy:

juju unexpose tomcat
juju expose haproxy

We can now use the public address of HAProxy to access our application.
Now that we’re behind a load balancer, it’s simple to bolster our web traffic capacity by adding a further Tomcat unit:

juju add-unit tomcat

A second Tomcat unit will be deployed and configured as the first.  Same open ports, same MySQL connection, same web application.  Once deployed, HAProxy will serve traffic to both instances in round robin fashion.  Any future application upgrades will occur on both Tomcat units.  If we want to remove a unit:

juju remove-unit tomcat/<n>

where ‘<n>’ is the unit number (shown in status output).
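
For example, to remove the second Tomcat unit added above (assuming the original unit was tomcat/0):

juju remove-unit tomcat/1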

That’s the end of the demo.  Should you wish to destroy your cluster, run:

juju destroy-environment

This will terminate all EC2 instances including the bootstrap instance.

To summarise, I’ve shown how you can create a Juju cluster containing a load balanced Tomcat with MySQL, serving your web application.  We’ve seen how important it is for the application to be cloud deployable allowing it to utilise managed relations.  I’ve also demonstrated how you can upgrade your application once deployed.

In my next post I shall write about adding Memcached to our cluster.


[1] Due to a current Juju bug (https://bugs.launchpad.net/juju/+bug/979859) with command line boolean variables, you may need to create a separate ‘config.yaml’ file containing the contents:

tomcat:
  https_enabled: True
  https_port: 443

and then use:

juju set --config config.yaml tomcat

Read more
Robert Ayres

Java meet Juju

Take a look at the architecture diagram below.

Java based cluster

How would you go about automating deployment of this Java based cluster to EC2?  Utilise Puppet or Chef?  Write your own scripts?  How would you adapt your solution to add or remove servers to scale on demand?  Can your solution support deployment to your own equipment?  If the solutions that come to mind require a lot of initial time investment, you may be interested in Juju (juju.ubuntu.com).

In upcoming posts, I’ll show how you can use Juju to deploy this cluster.  But for this post, I’ll give a brief Juju introduction.

Juju is a new Open Source command line deployment tool in Ubuntu 12.04 Precise Pangolin.  It allows you to quickly and painlessly deploy your own cluster of applications to a cloud provider like EC2, on your own equipment in combination with Ubuntu MAAS (Metal as a Service – wiki.ubuntu.com/ServerTeam/MAAS), or even on your own computer using LXC (Linux Containers).  Juju deploys ‘charms’, scripts written to deploy and configure an application on an Ubuntu Server.
The real automated magic happens through charm relations.  Relations allow charms to associate to perform combined functionality.  This behaviour is predetermined by the charm author through the use of programmable callbacks.  For example, a database will be created and credentials generated when associating with a MySQL charm.  Charms utilise relations to provide the user with traditional functionality that requires no knowledge of underlying networks or configuration files.  And as the focus isn’t on individual machines, Juju allows you to add or remove further servers easily to scale up or down on demand.
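
As a flavour of what that looks like on the command line (the exact commands are walked through in the upcoming posts):

juju deploy mysql
juju deploy cs:~robert-ayres/precise/tomcat
juju add-relation tomcat mysql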

Sound interesting? In my next post I’ll demonstrate deploying a web application to Tomcat and connecting it to MySQL.

Read more
Michael Hall

Sweet Chorus

Juju is revolutionizing the way web services are deployed in the cloud, taking what was either a labor-intensive manual task or a very labor-intensive re-invention of the wheel (the wheel here being deployment automation), and distilling it into a collection of reusable components called “Charms” that let anybody deploy multiple inter-connected services in the cloud with ease.

There are currently 84 Juju charms written for everything from game backends to WordPress sites, with databases and cache servers that work with them.  Charms are great when you can deploy the same service the same way, regardless of its intended use.  WordPress is a good use case, since the process of deploying WordPress is going to be the same from one blog to the next.

Django’s Blues

But when you go a little lower in the stack, to web frameworks, it’s not quite so simple.  Take Django, for instance.  While much of the process of deploying a Django service will be the same, there is going to be a lot that is specific to the project.  A Django site can have any number of dependencies, both common additions like South and Celery, as well as many custom modules.  It might use MySQL, or PostgreSQL, or Oracle (even SQLite for development and testing).  Still more things will depend on the development process: while WordPress is available in a DEB package or a tarball from the upstream site, a Django project may be anywhere, most frequently in a source control branch specific to that project.  All of this makes writing a single Django charm nearly impossible.

There have been some attempts at making a generic, reusable Django charm.  Michael Nelson made one that uses Puppet and a custom config.yaml for each project.  While this works, it has two drawbacks: 1) It requires Puppet, which isn’t natural for a Python project, and 2) It required so many options in the config.yaml that you still had to do a lot by hand to make it work.  The first of these was done because ISD (where Michael was at the time) was using Puppet to deploy and configure their Django services, and could easily have been done another way.  The second, however, is the necessary consequence of trying to make a reusable Django charm.

Just for Fun

Given the problems detailed above, and not liking the idea of making config options for every possible variation of a Django project, I recently took a different approach.  Instead of making one Django Charm to rule them all, I wrote a small Django App that would generate a customized Charm for any given project.  My goal is to gather enough information from the project and its environment to produce a charm that is very nearly complete for that project.  I named this charming code “Naguine” after Django Reinhardt’s second wife, Sophie “Naguine” Ziegler.  It seemed fitting, since this project would be charming Django webapps.

Naguine is very much a JFDI project, so it’s not highly architected or even internally consistent at this point, but with a little bit of hacking I was able to get a significant return. For starters, using Naguine is about as simple as can be: you simply install it on your PYTHONPATH and run:

python manage.py charm --settings naguine

The --settings naguine option will inject the naguine django app into your INSTALLED_APPS, which makes the charm command available.

This Kind of Friend

The charm command makes use of your Django settings to learn about your other INSTALLED_APPS as well as your database settings.  It will also look for a requirements.txt and setup.py, inspecting each to learn more about your project’s dependencies.  From there it will try to locate system packages that will provide those dependencies and add them to the install hook in the Juju  charm.

The charm command also looks to see if your project is currently in a bzr branch, and if it is, it will use the remote branch to pull down your project’s code during the install.  In the future I hope to also support git and hg deployments.

Finally the command will write hooks for linking to a database instance on another server, including running syncdb to create the tables for your models, adding a superuser account with a randomly generated password and, if you are using South, running any migration scripts as well. It also writes some metadata about your charm and a short README explaining how to use it.

All that is left for you to do is review the generated charm, manually add any dependencies Naguine couldn’t find a matching package for, and manually add any install or database initialization that is specific to your project.  The amount of custom work needed to get a charm working is extremely minor, even for moderately complex projects.

Are you in the Mood

To try Naguine with your Django project, use the following steps:

  1. cd to your django project root (where your manage.py is)
  2. bzr branch lp:naguine
  3. python manage.py charm --settings naguine

That’s all you need.  If your django project lives in a bzr branch, and if it normally uses settings.py, you should have a directory called ./charms/precise/ that contains an almost working Juju charm for your project.

I’ve only tested this on a few Django projects, all of which followed the same general conventions when it came to development, so don’t be surprised if you run into problems.  This is still a very early-stage project after all.  But you already have the code (if you followed step #2 above), so you can poke around and try to get it working or working better for your project.  Then submit your changes back to me on Launchpad, and I’ll merge them in.  You can also find me on IRC (mhall119 on freenode) if you get stuck and I will help you get it working.

(For those who are interested, each of the headers in this post is the name of a Django Reinhardt song)

Read more

Check out Why I don’t host my own blog anymore.

I mentioned it to a friend and he immediately piped in “Oh that guy did it wrong, he shouldn’t care about KeepAlive, he needs FastCGI”.

Ok so the guy “messed up” and misconfigured his blog. Zigged instead of zagged. Bummer.

But it doesn’t have to be this way. Right now we offer WordPress as a juju charm. This lets us deploy WordPress with MySQL in four commands, shown below.
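
Those four commands, roughly:

juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql
juju expose wordpress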

However, if you look at the db-relation hook, we don’t do anything special: we create an Apache vhost and set it up for you. While this is simple, there’s no reason we can’t make this charm be a turbo charged deployment of WordPress. Let’s look at some of the recommendations we see on his blog and on HN:

  • A simple caching plugin would have quickly fixed this for you.
  • In my stacks I always use nginx in conjunction with Apache to handle as much of the static content load as is possible and that lifts a huge weight from Apache. Next up is to always use a bytecode cache like Xcache or APC, these help give a huge boost in performance.
  • But then you hit a wall, next up are limitations in WP SQL and MySQL, these can be helped by messing with the queries and using Memcached also helps to significantly boost the DB performance here.
  • I had similar nightmares to you for a long time with Apache/PHP/WP, then finally put Varnish cache in front of the whole thing.
  • And someone recommends just shoving the thing in Jekyll and serving that.

I’m sure everyone will have an opinion on how to deploy Wordpress. From an Ubuntu perspective, we ship the wordpress and mysql packages, but that only gets you so far. It’s still up to you to configure it, and as this guy proves, you can mess something up. Wouldn’t it be nice if we could collect all the experience from people who are Wordpress deployment experts, put that in our charms and just give people that out of the box?

We could use nginx in the WordPress charm with FastCGI, and we can certainly add relations so that varnish and memcached know what to do when they’re related to wordpress. And/or just “juju add-relation jekyll wordpress” and have that Just Work.
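
If the charm grew those relations, day-to-day usage could stay just as simple; something like this (hypothetical: these relations don’t exist in the wordpress charm today):

juju deploy varnish
juju add-relation varnish wordpress
juju deploy memcached
juju add-relation memcached wordpress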

These are the kinds of problems we’re trying to tackle with juju. Will it be totally perfect for everyone’s deployment? Of course not, that’s impossible, but we can certainly make Patrick’s experience more uncommon. People will always argue about the nitnoid implementation details, but we can make those config options; the point is that we can share deployment and service maintenance as a whole instead of hoping people put the lego blocks together in the right order.

Interested in turning a plain boring charm into something sexy? I’ve filed the bug here, let us know if you’re interested.

Read more

I can’t wait to see some people I haven’t seen in years at SCALE, and meet a bunch of new people!

Come find me and Clint, we’ll be doing talks about juju and Ubuntu Cloud all weekend, as well as answering questions the entire time. I’m easy to find, look for a Red Wings hat and an Ubuntu shirt.

Here’s our post about our talks.

Read more

Calling all devops!

We’re holding a Charm School on IRC.

juju Charm School is a virtual event where a juju expert is available to answer questions about writing your own juju charms. The intended audience are people who deploy software and want to contribute charms to the wider devops community to make deploying in the public and private cloud easy.

Attendees are more than welcome to:

  • Ask questions about juju and charms
  • Ask for help modifying existing scripts and make charms out of them
  • Ask for peer review on existing charms you might be working on.

Though not required, we recommend that you have juju installed and configured if you want to get deep into the event.
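
If you want to follow along, a minimal setup is something like this (assuming the juju package in your Ubuntu release and an environment already configured for your cloud of choice):

sudo apt-get install juju
juju bootstrap
juju status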

Read more
Michael

After experimenting with juju and puppet the other week, I wanted to see if it was possible to create a generic juju charm for deploying any Django apps using Apache+mod_wsgi together with puppet manifests wherever possible. The resulting apache-django-wsgi charm is ready to demo (thanks to lots of support from the #juju team), but still needs a few more configuration options. The charm currently:

  1. Enables the user to specify a branch of a Python package containing the Django app/project for deploy. This python package will be `python setup.py install`’d on the instance, but it also
  2. Enables you to configure extra debian packages to be installed first so that your requirements can be installed in a more reliable/trusted manner, along with the standard required packages (apache2, libapache2-mod-wsgi etc.). Here’s the example charm config used for apps.ubuntu.com,
  3. Creates a django.wsgi and httpd.conf ready to serve your app, automatically collecting all the static content of your installed Django apps to be served separately from the same Apache virtual host,
  4. When it receives a database relation change, it creates some local settings, overriding the database settings of your branch, syncs and migrates the database (a noop if it’s the second unit) and restarts apache (see the database_settings.pp manifest for more details).

Here’s a quick demo which puts up a postgresql unit and two app servers with these commands:

$ juju deploy --repository ~/charms local:postgresql
$ juju deploy --config ubuntu-app-dir.yaml --repository ~/apache-django-wsgi/ local:apache-django-wsgi
$ juju add-relation postgresql:db apache-django-wsgi
$ juju add-unit apache-django-wsgi

Things that I think need to be improved or I’m uncertain about:

  1. `gem install puppet-module` is included in the install hook (a 3rd way of installing something on the system :/). I wanted to use the vcsrepo puppet module to define bzr resource types and puppet-module-tool seems to be the way to install 3rd-party puppet modules. Using this resource-type enables a simple initial_state.pp manifest. Of course, it’d be great to have ‘necessary’ tools like that in the archive instead.
  2. The initial_state.pp manifest pulls the django app package to /home/ubuntu/django-app-branch and then pip installs it on the system. Requiring the app to be a valid python package seemed sensible (in terms of ensuring it is correctly installed with its requirements satisfied) while still allowing the user to go one step further if they like and provide a debian package instead of a python package in a branch (which I assume we would do ultimately for production deploys?)
  3. Currently it’s just a very simple apache setup. I think ideally the static file serving should be done by a separate unit in the charm (ie. an instance running a stripped down apache2 or lighttpd). Also, I would have liked to have used an ‘official’ or ‘blessed’ puppet apache module to benefit from someone else’s experience, but I couldn’t see one that stood out as such.
  4. Currently the charm assumes that your project contains the configuration info (ie. a settings.py, urls.py etc.), of which the database settings can be simply overridden for deploy. There should be an additional option to specify a configuration branch (and it shouldn’t assume that you’re using django-configglue), as well as other options like django_debug, static_url etc.
  5. The charm should also export an interface (?) that can be used by a load balancer charm.

Filed under: django, juju

Read more