Canonical Voices

Posts tagged with 'openstack'

Mark Baker

When it comes to using Linux on an enterprise server, Ubuntu is generally seen as the new challenger in a market dominated by established vendors specifically targeting enterprises. However, we are seeing signs that this is changing. The W3Techs data showing Ubuntu’s continued growth as a platform for online scale-out infrastructure is becoming well known, but a more recent highlight is a review published by Network World of five commercial Linux-based servers (note registration required to read the whole article).

The title of the review “Ubuntu impresses in Linux enterprise test” is encouraging right from the start, but what may surprise some readers are the areas in which the reviewers rated Ubuntu highly:

 

1. Transparency (Free and commercially supported versions are the same.)

This has long been a key part of Ubuntu and we are pleased that its value is gaining broader recognition. From an end user perspective this model has many benefits, primarily the zero migration cost of moving between an unsupported environment (say, in development) and a supported one (in production). With many organisations moving towards models of continuous deployment this can be extremely valuable.

2. Management tools

The reviewers seemed particularly impressed with the management tools that come with Ubuntu, supported with Ubuntu Advantage: Metal as a Service (MAAS), for rapid bare metal provisioning; Juju for service deployment and orchestration; and Landscape for monitoring, security and maintenance management. At Canonical we have invested significantly in these tools over the last few years, so it is good to know that the results have been well received.

Landscape Cloud Support

3. Cloud capability

The availability of cloud images that run on public clouds is called out as being valuable, as is the inclusion of OpenStack to be able to create an OpenStack Cloud. Cloud has been a key part of Ubuntu’s focus since 2008, when we started to create and publish images onto EC2. With the huge growth of Amazon and the more recent rapid adoption of OpenStack, having cloud support baked into Ubuntu and instantly available to end users is valuable.

4. Virtualisation support

It is sometimes thought that Ubuntu is not a great virtualisation platform, mainly because it is not really marketed as one. The reality, as recognised by the Network World reviewers, is that Ubuntu has great hypervisor support. Like some other vendors we default to KVM for general server virtualisation, but when it comes to hypervisor support for Infrastructure as a Service (IaaS), Ubuntu is far more hypervisor agnostic than many others, supporting not only KVM but also VMware ESXi and Xen. Choice is a good thing.

Of course there are areas of Ubuntu that the reviewers believed to be weak – installation being the primary one. We’ll take this on board and are confident that future releases will deliver an improved installation experience. There are areas you could argue are important to an enterprise that are not covered in the review – commercial application support being one – but the fact remains that viewed as a platform in its own right, with a vast array of open source applications available via Juju, Ubuntu seems to be on the right path. If it continues this way, it could well soon cease to be the challenger and become the leader.

Read more
Robbie


So I’m partially kidding…the Ubuntu Edge is quickly becoming a crowdfunding phenomenon, and everyone should support it if they can.  If we succeed, it will be a historic moment for Ubuntu, crowdfunding, and the global phone industry as well.

But I Don’t Wanna Talk About That Right Now

While I’m definitely a fan of the phone stuff, I’m a cloud and server guy at heart, and what’s gotten me really excited this past month is two significant (and freaking awesome) announcements.

#1 The Juju Charm Championship


First off, if you still don’t know about Juju, it’s essentially our attempt at making Cloud Computing for Human Beings.  Juju allows you to deploy, connect, manage, and scale web services and applications quickly and easily…again…and again…AND AGAIN!  These services are captured in what we call charms, which contain the knowledge of how to properly deploy, configure, connect, and scale the services and applications you will want to deploy in the cloud.  We have hundreds of charms for every popular and well-known web service and application in use in the cloud today.  They’ve been authored and maintained by the experts, so you don’t have to waste your time trying to become one.  Just as Ubuntu depends on a community of packagers and developers, so does Juju.  Juju goes only as far as our Charm Community will take us, and this is why the Charm Championship is so important to us.
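If you’ve never seen that workflow, it really is just a handful of commands; here’s a minimal sketch using charms from the store:

juju bootstrap                      # stand up the environment
juju deploy mysql                   # deploy services from their charms
juju deploy wordpress
juju add-relation wordpress mysql   # connect them
juju expose wordpress               # open it up to the world
juju add-unit wordpress             # ...and AGAIN! (scale out)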

So…what is this Charm Championship all about?  We took notice of the fantastic response to the Cloud Prize contest run by our good friends (and Ubuntu Server users) over at Netflix.  So we thought we could do something similar to boost the number of full service solutions deployable by Juju, i.e. Charm Bundles.  If charms are the APT packages of the cloud, bundles are effectively the package seeds, allowing you to deploy groups of services, configured and interconnected, all at once.  We’ve chosen this approach to increase our bundle count because we know from our experience with Ubuntu that the best approach for growth is harvesting and cultivating the expertise and experience of the experts regularly developing and deploying these solutions.  For example, we at Canonical maintain and regularly deploy an OpenStack bundle to quickly get our clouds up for both internal use and for our Ubuntu Advantage customers.  We have master-level expertise in OpenStack cloud deployments, and have codified this into our charms so that others can benefit.  The Charm Championship is our attempt to replicate this sharing of master-level expertise across more service/application bundles…BY OFFERING $30,000 USD IN PRIZE MONEY! Think of how many Ubuntu Edge phones that could buy you…well, unless you desperately need to have one of the first 50 :-).

#2 JujuCharms.com

From the very moment we began thinking about creating Juju years ago…we always envisioned eventually creating an interface that gives solution architects the ability to create, deploy, and interact with services visually…replicating the whiteboard planning commonly employed in the planning phase of such solutions.

The new Juju GUI now integrated into JujuCharms.com is the realization of our vision, and I’m excited as hell at the possibilities opened and the technical roadblocks removed by the release of this tool.  We’ve even charmed it, so you can ‘juju deploy juju-gui’ into any supported cloud, bare metal (MAAS), or local workstation (via LXC) environment.  Below is a video of deploying OpenStack via our new GUI, and a perfect example of the possibilities that are opened up now that we’ve released this innovative and f*cking awesome tool:

The best part here is that you can play with the new GUI RIGHT NOW by selecting the “Build” option on jujucharms.com…so go ahead and give it a try!
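Or stand up your own copy; since the GUI is itself a charm, a minimal sketch looks like:

juju deploy juju-gui
juju expose juju-gui    # then browse to the unit's public address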

Join the Championship…Play with the GUI…then Buy the Phone

Cause I will definitely admit…it’s a damn sexy piece of hardware. ;)


Read more
Dustin Kirkland

UPDATE: I wrote this charm and blog post before I saw the unfortunate news that UbuntuForums.org and Apple's Developer websites had recently been compromised and their user databases stolen.  Given that such events have sadly become commonplace, the instructions below can actually be used by proactive administrators to identify and disable weak user passwords, expecting that the bad guys are already doing the same.

It's been about 2 years since I've written a Juju charm.  And even those that I wrote were not scale-out applications.

I've been back at Canonical for two weeks now, and I've been spending some time bringing myself up to speed on the cloud projects that form the basis for the Cloud Solution products, for which I'm responsible. First, I deployed MAAS, and then brought up a small Ubuntu OpenStack cluster.  Finally, I decided to tackle Juju and rather than deploying one of the existing charms, I wanted to write my own.

Installing Juju

Juju was originally written in Python, but has since been ported to Golang over the last 2+ years.  My previous experience was exclusively with the Python version of Juju, but all new development is now focused on the Golang version of Juju, also known as juju-core.  So at this point, I decided to install juju-core from the 13.04 (raring) archive.

sudo apt-get install juju-core

I immediately hit a couple of bugs in the version of juju-core in 13.04 (1.10.0.1-0ubuntu1~ubuntu13.04.1), particularly Bug #1172973.  Life is more fun on the edge anyway, so I upgraded to a daily snapshot from the PPA.

sudo apt-add-repository ppa:juju/devel
sudo apt-get update
sudo apt-get install juju-core

Now I'm running juju-core 1.11.2-3~1414~raring1, and it's currently working.

Configuring Juju

Juju can be configured to use a number of different cloud backends as "providers", notably, Amazon EC2, OpenStack, MAAS, and HP Cloud.

For my development, I'm using Canonical's internal deployment of OpenStack, and so I configured my environment accordingly in ~/.juju/environments.yaml:

default: openstack
environments:
  openstack:
    type: openstack
    admin-secret: any-secret-you-choose-randomly
    control-bucket: any-bucket-name-you-choose-randomly
    default-series: precise
    auth-mode: userpass

Using OpenStack (or even AWS for that matter) also requires defining a number of environment variables in an rc-file.  Basically, you need to be able to launch instances using euca2ools or ec2-api-tools.  That's outside of the scope of this post, and expected as a prerequisite.
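For reference, such an rc-file just exports your credentials; here is a sketch with placeholder values (these are the standard OpenStack and EC2 variable names, but the path and endpoints below are hypothetical):

# e.g. ~/.novarc -- all values below are placeholders
export OS_USERNAME=myuser
export OS_PASSWORD=mypassword
export OS_TENANT_NAME=myproject
export OS_AUTH_URL=https://keystone.example.com:5000/v2.0/
export OS_REGION_NAME=myregion
# equivalents for euca2ools / ec2-api-tools
export EC2_URL=https://nova.example.com:8773/services/Cloud
export EC2_ACCESS_KEY=myaccesskey
export EC2_SECRET_KEY=mysecretkey

Source it with ". ~/.novarc" before launching anything.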

The official documentation for configuring your Juju environment can be found here.

Choosing a Charm-able Application

I have previously charmed two small (but useful!) webapps that I've written and continue to maintain -- Pictor and Musica.  These are both standalone web applications that allow you to organize, serve, share, and stream your picture archive and music collection.  But neither of these "scale out", really.  They certainly could, perhaps, use a caching proxy on the front end, and shared storage on the back end.  But, as I originally wrote them, they do not.  Maybe I'll update that, but I don't know of anyone using either of those charms.

In any case, for this experiment, I wanted to write a charm that would "scale out", with Juju's add-unit command.  I wanted to ensure that adding more units to a deployment would result in a bigger and better application.
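That is, once the service is deployed, growing the cluster should be a one-liner (the service name here is a placeholder):

juju add-unit myservice    # one more worker joins the pool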

For these reasons, I chose the program known as John-the-Ripper, or just john.  You can trivially install it on any Ubuntu system, with:

sudo apt-get install john

John has been used by Linux system administrators for over a decade to test the quality of their users' passwords.  A root user can view the hashes that protect user passwords in files like /etc/shadow, or even application-level password hashes in a database.  Effectively, it can be used to "crack" weak passwords.  There are almost certainly evil people using programs like john to do malicious things.  But as long as the good guys have access to a program like john too, they can ensure that their own passwords are impossible to crack.
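A typical administrator invocation is a short sketch like this (using the unshadow helper shipped with the john package):

sudo unshadow /etc/passwd /etc/shadow > mypasswd
john mypasswd          # try to crack the combined file
john -show mypasswd    # list any passwords cracked so far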

John can work in a number of different "modes".  It can use a dictionary of words, and simply hash each of those words looking for a match.  The john-data package ships a word list in /usr/share/john/password.lst that contains 3,000+ words.  You can find much bigger wordlists online as well, such as this one, which contains over 2 million words.

John can also generate "twists" on these words according to some rules (like changing E's to 3's, and so on).  And it can also work in a complete brute force mode, generating every possible password from various character sets.  This, of course, will take exponentially longer run times, depending on the length of the password.

Fortunately, John can run in parallel, with as many workers as you have at your disposal.  You can run multiple processes on the same system, or you can scale it out across many systems.  There are many different approaches to parallelizing John, using OpenMP, MPI, and others.

I took a very simple approach, explained in the manpage and configuration file, called "External".  Basically, in the /etc/john/john.conf configuration file, you tell each node how many total nodes exist, and which particular node it is.  Each node uses the same wordlist or sequential generation algorithm, and indexes the candidates.  Each node takes the current index modulo the total number of nodes, and tries only the candidate passwords that match its own id.  Dead simple :-)  I like it.

# Trivial parallel processing example
[List.External:Parallel]
/*
 * This word filter makes John process some of the words only, for running
 * multiple instances on different CPUs. It can be used with any cracking
 * mode except for "single crack". Note: this is not a good solution, but
 * is just an example of what can be done with word filters.
 */
int node, total;  // This node's number, and node count
int number;       // Current word number

void init()
{
    node = 1; total = 2;  // Node 1 of 2, change as appropriate
    number = node - 1;    // Speedup the filter a bit
}

void filter()
{
    if (number++ % total)  // Word for a different node?
        word = 0;          // Yes, skip it
}

This does, however, require some way of sharing the inputs, logs, and results across all nodes.  Basically, I need a shared filesystem.  The Juju charm collection has a number of shared filesystem charms already implemented.  I chose to use NFS in my deployment, though I could have just as easily used Ceph, Hadoop, or others.

Writing a Charm

The official documentation on writing charms can be found here.  That's certainly a good starting point, and I read all of that before I set out.  I also spent considerable time in the #juju IRC channel on irc.freenode.net, talking to Jorge and Marco.  Thanks, guys!

The base template of the charm is pretty simple.  The convention is to create a charm directory like this, and put it under revision control.

mkdir -p precise/john
cd precise/john
bzr init .

I first needed to create the metadata that will describe my charm to Juju.  My charm is named john, which is an application known as "John the Ripper", which can test the quality of your passwords.  I list myself as the maintainer.  This charm requires a shared filesystem that implements the mount interface, as my charm will call some hooks that make use of that mount interface.  Most importantly, this charm may have other peers, which I arbitrarily called workers.  They have a dummy interface (not used) called john.  Here's the metadata.yaml:

name: john
summary: "john the ripper"
description: |
  John the Ripper tests the quality of system passwords
maintainer: "Dustin Kirkland"
requires:
  shared-fs:
    interface: mount
peers:
  workers:
    interface: john

I also have one optional configuration parameter, called target_hashes.  This configuration string will include the input data that john will work on, trying to break.  This can be one to many different password hashes to crack.  If this isn't specified, this charm actually generates some random ones, and then tries to break those.  I thought that would be nice, so that it's immediately useful out of the box.  Here's config.yaml:

options:
  target_hashes:
    type: string
    description: input password hashes

There's a couple of other simple files to create, such as copyright:

Format: http://dep.debian.net/deps/dep5/

Files: *
Copyright: Copyright 2013, Dustin Kirkland, All Rights Reserved.
License: GPL-3
 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation, either version 3 of the License, or
 (at your option) any later version.
 .
 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 GNU General Public License for more details.
 .
 You should have received a copy of the GNU General Public License
 along with this program. If not, see <http://www.gnu.org/licenses/>.

README and revision are also required.
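Both are trivial to create; a sketch (the README text is placeholder, and the revision file simply holds an integer that you bump whenever the charm changes):

echo "john: a scale-out John the Ripper charm for Juju" > README
echo 1 > revision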

And the Magic -- Hooks!

The real magic happens in a set of very specifically named hooks.  These are specially named executables, which can be written in any language.  For my purposes, shell scripts are more than sufficient.

The Install Hook

The install hook is what is run at installation time on each worker node.  I need to install the john and john-data packages, as well as the nfs-common client binaries.  I also make use of the mkpasswd utility provided by the whois package.  And I will also use the keep-one-running tool provided by the run-one package.  Finally, I need to tweak the configuration file, /etc/john/john.conf, on each node, to use all of the CPU, save results every 10 seconds (instead of every 10 minutes), and to use the much bigger wordlist that we're going to fetch.  Here's hooks/install:

#!/bin/bash
set -eu
juju-log "Installing all components"
apt-get update
apt-get install -qqy nfs-common john john-data whois run-one
DIR=/var/lib/john
mkdir -p $DIR
ln -sf $DIR /root/.john
sed -i -e "s/^Idle = .*/Idle = N/" /etc/john/john.conf
sed -i -e "s/^Save = .*/Save = 10/" /etc/john/john.conf
sed -i -e "s:^Wordlist = .*:Wordlist = $DIR\/passwords.txt:" /etc/john/john.conf
juju-log "Installed packages"

The Start Hook

The start hook defines how to start this application.  Ideally, the john package would provide an init script or upstart job that cleanly daemonizes its workers, but it currently doesn't.  As a poor man's daemonizer, though, I love the keep-one-running utility (written by yours truly).  I'm going to start two copies of the john utility: one that runs in wordlist mode, trying every one of the 2 million words in my wordlist, and a second that tries every combination of characters in an incremental, brute-force mode.  These binaries will operate entirely in the shared /var/lib/john NFS mount point.  Each copy on each worker node needs its own session file.  Here's hooks/start:

#!/bin/bash
set -eu
juju-log "Starting john"
DIR=/var/lib/john
keep-one-running john -incremental -session:$DIR/session-incremental-$(hostname) -external:Parallel $DIR/target_hashes &
keep-one-running john -wordlist:$DIR/passwords.txt -session:$DIR/session-wordlist-$(hostname) -external:Parallel $DIR/target_hashes &

The Stop Hook

The stop hook defines how to stop the application.  Here, I'll need to kill the keep-one-running processes which wrap john, since we don't have an upstart job or init script.  This is perhaps a little sloppy, but perfectly functional.  Here's hooks/stop:

#!/bin/bash
set -eu
juju-log "Stopping john"
killall keep-one-running || true

The Workers Relation Changed Hook

This hook defines the actions that need to be taken each time another john worker unit is added to the service.  Basically, each worker needs to recount how many total workers there are (using the relation-list command), determine their own id (from $JUJU_UNIT_NAME), update their /etc/john/john.conf (using sed), and then restart their john worker processes.  The last part is easy since we're using keep-one-running; we simply need to killall john processes, and keep-one-running will automatically respawn new processes that will read the updated configuration file.  This is hooks/workers-relation-changed:

#!/bin/bash
set -eu
DIR="/var/lib/john"

update_unit_count() {
    node=$(echo $JUJU_UNIT_NAME | awk -F/ '{print $2}')
    node=$((node+1))
    total=$(relation-list | wc -l)
    total=$((total+1))
    sed -i -e "s/^\s\+node = .*; total = .*;.*$/  node = $node; total = $total;/" /etc/john/john.conf
}

restart_john() {
    killall john || true
    # It'll restart itself via keep-one-running, if we kill it
}

update_unit_count
restart_john

The Configuration Changed Hook

All john worker nodes will operate on a file in the shared filesystem called /var/lib/john/target_hashes.  I'd like the administrator who deployed this service to be able to dynamically update that file and signal all of her worker nodes to restart their john processes.  Here, I used the config-get juju command, and again restart by simply killing the john processes and letting keep-one-running sort out the restart.  This is handled here in hooks/config-changed:

#!/bin/bash
set -e
DIR=/var/lib/john
target_hashes=$(config-get target_hashes)
if [ -n "$target_hashes" ]; then
    # Install the user's supplied hashes
    echo "$target_hashes" > $DIR/target_hashes
    # Restart john
    killall john || true
fi

The Shared Filesystem Relation Changed Hook

By far, the most complicated logic is in hooks/shared-fs-relation-changed.  There's quite a bit of work we need to do here, as soon as we can be assured that this node has successfully mounted its shared filesystem.  There's a bit of boilerplate mount work that I borrowed from the owncloud charm.  Beyond that, there's a bit of john-specific work.  I'm downloading the aforementioned larger wordlist.  I install the target hash, if specified in the configuration; otherwise, we just generate 10 random target passwords to try and crack.  We also symlink a bunch of john's runtime shared data into the NFS directory.  For no good reason, john expects a bunch of stuff to be in the same directory.  Of course, this code could really use some cleanup.  Here it is, imperfect but functional, hooks/shared-fs-relation-changed:
#!/bin/bash
set -eu

remote_host=`relation-get private-address`
export_path=`relation-get mountpoint`
mount_options=`relation-get options`
fstype=`relation-get fstype`
DIR="/var/lib/john"

if [ -z "${export_path}" ]; then
    juju-log "remote host not ready"
    exit 0
fi

local_mountpoint="$DIR"

create_local_mountpoint() {
    juju-log "creating local mountpoint"
    umask 022
    mkdir -p $local_mountpoint
    chown -R ubuntu:ubuntu $local_mountpoint
}
[ -d "${local_mountpoint}" ] || create_local_mountpoint

share_already_mounted() {
    mount | grep -q $local_mountpoint
}

mount_share() {
    for try in {1..3}; do
        juju-log "mounting share"
        [ ! -z "${mount_options}" ] && options="-o ${mount_options}" || options=""
        mount -t $fstype $options $remote_host:$export_path $local_mountpoint \
            && break
        juju-log "mount failed: ${local_mountpoint}"
        sleep 10
    done
}

download_passwords() {
    if [ ! -s $DIR/passwords.txt ]; then
        # Grab a giant dictionary of passwords, 20MB, 2M passwords
        juju-log "Downloading password dictionary"
        cd $DIR
        # http://www.breakthesecurity.com/2011/12/large-password-list-free-download.html
        wget http://dazzlepod.com/site_media/txt/passwords.txt
        juju-log "Done downloading password dictionary"
    fi
}

install_target_hashes() {
    if [ ! -s $DIR/target_hashes ]; then
        target_hashes=$(config-get target_hashes)
        if [ -n "$target_hashes" ]; then
            # Install the user's supplied hashes
            echo "$target_hashes" > $DIR/target_hashes
        else
            # Otherwise, grab some random ones
            i=0
            for p in $(shuf -n 10 $DIR/passwords.txt); do
                # http://openwall.info/wiki/john/Generating-test-hashes
                printf "user${i}:%s\n" $(mkpasswd -m md5 $p) >> $DIR/target_hashes
                i=$((i+1))
            done
        fi
    fi
    for i in /usr/share/john/*; do
        ln -sf $i /var/lib/john
    done
}

apt-get -qqy install rpcbind nfs-common
share_already_mounted || mount_share
download_passwords
install_target_hashes

Deploying the Service

If you're still with me, we're ready to deploy this service and try cracking some passwords!  We need to bootstrap our environment and deploy the stock nfs charm.  Next, branch my charm's source code and deploy it.  I deployed it here across a whopping 18 units!  I currently have a quota of 20 small instances on our private OpenStack.  Two of those instances are used by the Juju bootstrap node and by the NFS server, so the other 18 will be NFS clients running john processes.

juju bootstrap
juju deploy nfs
bzr branch lp:~kirkland/+junk/john precise
juju deploy -n 18 --repository=precise local:precise/john
juju add-relation john nfs
juju status

Once everything is up and ready, running and functional, my status looks like this:

machines:
  "0":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.230
    instance-id: 98090098-2e08-4326-bc73-22c7c6879b95
    series: precise
  "1":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.7
    instance-id: 449c6c8c-b503-487b-b370-bb9ac7800225
    series: precise
  "2":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.193
    instance-id: 576ffd6f-ddfa-4507-960f-3ac2e11ea669
    series: precise
  "3":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.215
    instance-id: 70bfe985-9e3f-4159-8923-60ab6d9f7d43
    series: precise
  "4":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.221
    instance-id: f48364a9-03c0-496f-9287-0fb294bfaf24
    series: precise
  "5":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.223
    instance-id: 62cc52c4-df7e-448a-81b1-5a3a06af6324
    series: precise
  "6":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.231
    instance-id: f20dee5d-762f-4462-a9ef-96f3c7ab864f
    series: precise
  "7":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.239
    instance-id: 27c6c45d-18cb-4b64-8c6d-b046e6e01f61
    series: precise
  "8":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.240
    instance-id: 63cb9c91-a394-4c23-81bd-c400c8ec4f93
    series: precise
  "9":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.242
    instance-id: b2239923-b642-442d-9008-7d7e725a4c32
    series: precise
  "10":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.249
    instance-id: 90ab019c-a22c-41d3-acd2-d5d7c507c445
    series: precise
  "11":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.252
    instance-id: e7abe8e1-1cdf-4e08-8771-4b816f680048
    series: precise
  "12":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.254
    instance-id: ff2b6ba5-3405-4c80-ae9b-b087bedef882
    series: precise
  "13":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.60.255
    instance-id: 2b019616-75bc-4227-8b8b-78fd23d6b8fd
    series: precise
  "14":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.1
    instance-id: ecac6e11-c89e-4371-a4c0-5afee41da353
    series: precise
  "15":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.3
    instance-id: 969f3d1c-abfb-4142-8cc6-fc5c45d6cb2c
    series: precise
  "16":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.4
    instance-id: 6bb24a01-d346-4de5-ab0b-03f51271e8bb
    series: precise
  "17":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.5
    instance-id: 924804d6-0893-4e56-aef2-64e089cda1be
    series: precise
  "18":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.11
    instance-id: 5c96faca-c6c0-4be4-903e-a6233325caec
    series: precise
  "19":
    agent-state: started
    agent-version: 1.11.0
    dns-name: 10.99.61.15
    instance-id: 62b48da2-60ea-4c75-b5ed-ffbb2f8982b5
    series: precise
services:
  john:
    charm: local:precise/john-3
    exposed: false
    relations:
      shared-fs:
      - nfs
      workers:
      - john
    units:
      john/0:
        agent-state: started
        agent-version: 1.11.0
        machine: "2"
        public-address: 10.99.60.193
      john/1:
        agent-state: started
        agent-version: 1.11.0
        machine: "3"
        public-address: 10.99.60.215
      john/2:
        agent-state: started
        agent-version: 1.11.0
        machine: "4"
        public-address: 10.99.60.221
      john/3:
        agent-state: started
        agent-version: 1.11.0
        machine: "5"
        public-address: 10.99.60.223
      john/4:
        agent-state: started
        agent-version: 1.11.0
        machine: "6"
        public-address: 10.99.60.231
      john/5:
        agent-state: started
        agent-version: 1.11.0
        machine: "7"
        public-address: 10.99.60.239
      john/6:
        agent-state: started
        agent-version: 1.11.0
        machine: "8"
        public-address: 10.99.60.240
      john/7:
        agent-state: started
        agent-version: 1.11.0
        machine: "9"
        public-address: 10.99.60.242
      john/8:
        agent-state: started
        agent-version: 1.11.0
        machine: "10"
        public-address: 10.99.60.249
      john/9:
        agent-state: started
        agent-version: 1.11.0
        machine: "11"
        public-address: 10.99.60.252
      john/10:
        agent-state: started
        agent-version: 1.11.0
        machine: "12"
        public-address: 10.99.60.254
      john/11:
        agent-state: started
        agent-version: 1.11.0
        machine: "13"
        public-address: 10.99.60.255
      john/12:
        agent-state: started
        agent-version: 1.11.0
        machine: "14"
        public-address: 10.99.61.1
      john/13:
        agent-state: started
        agent-version: 1.11.0
        machine: "15"
        public-address: 10.99.61.3
      john/14:
        agent-state: started
        agent-version: 1.11.0
        machine: "16"
        public-address: 10.99.61.4
      john/15:
        agent-state: started
        agent-version: 1.11.0
        machine: "17"
        public-address: 10.99.61.5
      john/16:
        agent-state: started
        agent-version: 1.11.0
        machine: "18"
        public-address: 10.99.61.11
      john/17:
        agent-state: started
        agent-version: 1.11.0
        machine: "19"
        public-address: 10.99.61.15
  nfs:
    charm: cs:precise/nfs-3
    exposed: false
    relations:
      nfs:
      - john
    units:
      nfs/0:
        agent-state: started
        agent-version: 1.11.0
        machine: "1"
        public-address: 10.99.60.7

Obtaining the Results

And now, let's monitor the results.  To do this, I'll ssh to any of the john worker nodes, move over to the shared NFS directory, and use the john -show command in a watch loop.

keep-one-running juju ssh john/0
sudo su -
cd /var/lib/john
watch john -show target_hashes

And the results...
Every 2.0s: john -show target_hashes

user0:260775
user1:73832100
user2:829171kzh
user3:pf1vd4nb
user4:7788521312229
user5:saksak
user6:rongjun2010
user7:2312010
user8:davied
user9:elektrohobbi

10 password hashes cracked, 0 left

Within a few seconds, this 18-node cluster has cracked all 10 of the randomly chosen passwords from the dictionary.  That's only mildly interesting, as my laptop can do the same in a few minutes if the passwords are already in the wordlist.  What's far more interesting is randomly generating a password, passing it as a new configuration to our running cluster, and letting it crack that instead.

Modifying the Configuration Target Hash

Let's generate a random password using apg.  We'll then need to hash this and create a string in the form of username:pwhash that john can understand.  Finally, we'll pass this to our cluster using Juju's set action.

passwd=$(apg -a 0 -n 1 -m 6 -x 6)
target=$(printf "user0:%s\n" $(mkpasswd -m md5 $passwd))
juju set john target_hashes="$target"

This was a 6-character password drawn from a set of 52 characters (a-z, A-Z), almost certainly not in our dictionary.  52^6 = 19,770,609,664, or about 19 billion letter combinations we need to test.  According to the john -test command, a single one of my instances can test about 12,500 MD5 hashes per second.  So with a single instance, this would take a maximum of 52^6 / 12,500 / 60 / 60 ≈ 439 hours. Or 18 days :-) Well, I happen to have exactly 18 instances, so we should be able to test the entire wordspace in about 24 hours.
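A quick shell sanity check of that arithmetic:

echo $(( 52**6 ))                      # 19770609664 candidate passwords
echo $(( 52**6 / 12500 / 3600 ))       # ~439 hours on one instance
echo $(( 52**6 / 12500 / 3600 / 18 ))  # ~24 hours across 18 instances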

So I threw all 18 instances at this very problem and let it run over a weekend. And voila, we got a little lucky, and cracked the password, Uvneow, in 16 hours!

In Conclusion

I don't know if this charm will ever land in the official charm store.  That really wasn't the goal of this exercise for me.  I simply wanted to bring myself back up to speed on Juju, play with the port to Golang, experiment with OpenStack as a provider for Juju, and most importantly, write a scalable Juju charm.

This particular application, john, is actually just one of a huge class of MPI-compatible, parallelizable applications that could be charmed for Juju.  The general design, I think, should be very reusable, if you're interested.  Between the shared filesystem and the keep-one-running approach, I bet you could charm any number of scalable applications.  While I'm not eligible, perhaps you might consider competing for cash prizes in the Juju Charm Championship.

Happy charming,
:-Dustin

Read more
Mark Baker

Juju, the leading tool for continuous deployment, continuous integration (CI/CD), and cloud-neutral orchestration, now has a refreshed GUI with smoother workflows for integration professionals spinning up many services across clouds like Amazon EC2 and a range of public OpenStack providers. The new GUI speeds up service design – conceptual modelling of service relationships – as well as actual deployment, providing a visual map of the relationships between services.

“The GUI is now a first-class part of the Juju experience,” said Gary Poster, whose team led the work, “with an emphasis on rapid access to the collection of service charms and better visualisation of the deployment in question.” In this milestone the Juju GUI can act as a whiteboard, so a user can mock up the service orchestration they intend to create using the same Juju GUI that they will use to manage their real, live deployments. Users can experience the new interface for themselves at jujucharms.com with no need to set up software in advance.

Juju is used by organisations that are constantly deploying and redeploying collections of services. Companies focused on media, professional services, and systems integration are the heaviest users, who benefit from having repeatable best-practice deployments across a range of cloud environments.

Juju uniquely enables the reuse of shared components called ‘charms’ for common parts of a complex service. A large portfolio of existing open source components is available from a public Charm collection, and browsing that collection is built into the new GUI. Charms are easy to find and review in the GUI, with full documentation instantly accessible. Featured, recommended and popular charms are highlighted for easy discovery. Each Charm now has more detailed information including test results from all supported providers, download count, related Charms, and a Charm code quality rating. The Charm collection includes both certified, supported Charms, and a wider range of ad-hoc Charms that are published by a large community of contributors.

The simple browser-based interface makes it easy to find reusable open source charms that define popular services like Hadoop, Storm, Ceph, OpenStack, MySQL, RabbitMQ, MongoDB, Cassandra, Mediawiki and WordPress. Information about each service, such as configuration options, is immediately available, and the charms can then be dragged and dropped directly on a canvas where they can be connected to other services, deployed and scaled. It’s also possible to export these service topologies into a human-readable and -editable format that can be shared within a team or published as a reference architecture for that deployment.

Recent additions to the public Charm collection include OpenVPN AS, Liferay, Storm and Varnish. For developers the new GUI and Charm Browser mean that their Charms are now much more discoverable. For those taking part in the Charm Championship, it’s easier to upload their Charms and use the GUI to connect them into a full solution for entry into the competition. Submit your best Charmed solution for the possibility of winning $10,000.

The management interface for Charm authors has also been enhanced and is available at  http://manage.jujucharms.com/ immediately.

See how you can use Juju to deploy OpenStack:

The current version of Juju supports Amazon EC2, HP Cloud and many other OpenStack clouds, as well as in-memory deployment for test and dev scenarios. Juju is on track for a 1.12 release in time for Ubuntu 13.10 that will enhance scalability for very large deployments, and a 2.0 release in time for Ubuntu 14.04 LTS.

See it demoed: We’ll be showing off the new Juju GUI and charm browser at OSCON on Tuesday 23rd at 9:00AM in the Service Orchestration In the Cloud with Juju workshop.

Read more
Mark Baker

We are pleased to announce a seriously good addition to our product team: Ratnadeep (Deep) Bhattacharjee. Deep joins Canonical as Director of Cloud Product Management from VMware, where he led its Cloud Infrastructure Platform effort, and he brings a solid understanding of customer needs as they continue to move to virtual and cloud infrastructure.

Ubuntu has fast become the operating system of choice for cloud computing, and Ubuntu is the most popular platform for OpenStack. With Deep’s direction, we plan to continue to lead Ubuntu OpenStack into enterprises, carriers and service providers looking for new ways to deliver next-generation infrastructure without the ‘enterprise’ price tag and lock-in. He will also be key in building out our great integration story with VMware to help customers who run heterogeneous environments. Welcome Deep!

Read more
Mark Baker

In April at the OpenStack Summit, Canonical founder Mark Shuttleworth quipped “My OpenStack, how you’ve grown” in reference to the thousands of people in the room. OpenStack is indeed growing up, and it seems incredible that this Friday we celebrate OpenStack’s 3rd birthday.

Incredible – it seems like only yesterday OpenStack was a twinkle in the eyes of a few engineers getting together in Austin. Incredible that OpenStack has come so far in such a short time. Ubuntu has been with OpenStack every day of the three-year journey so far, which is why the majority of OpenStack clouds are built on Ubuntu Server and Ubuntu OpenStack continues to be one of the most popular OpenStack distributions available.

It is also why we are proud to host the London OpenStack 3rd Birthday Party at our HQ in London. We’d love to see you using OpenStack with Ubuntu, but even if you don’t, you should come and celebrate OpenStack with us on Friday, July 19th.

http://www.meetup.com/Openstack-London/

Read more
Mark Baker

Ubuntu developer contest offers $10,000 for the most innovative charms

Developers around the world are already saving time and money thanks to Juju, and now they have the opportunity to win money too. Today marks the opening of the Juju Charm Championship, in which developers can reap big rewards for getting creative with Juju charms.

If you haven’t met Juju yet, now’s the ideal time to dive in. Juju is a service orchestration tool: a simple way to build entire cloud environments, then deploy, scale and manage complex workloads using only a few commands. It takes all the knowledge of an application and wraps it up into a re-usable Juju charm, ready to be quickly deployed anywhere. And you can modify and combine charms to create a custom deployment that meets your needs.

Juju is a powerful tool, and its flexibility means it’s capable of things we haven’t even imagined yet. So we’re kicking off the Charm Championship to discover what happens when the best developers bring Juju into their clouds — with big rewards on offer.

The prizes

As well as showing off the best achievements to the community, our panel of judges will award $10,000 cash prizes to the best charmed solutions in a range of categories.

That’s not all. Qualifying participants will be eligible for a joint marketing programme with Canonical, including featured application slots on ubuntu.com,  joint webinars and more. Win the Charm Championship and your app will reach a whole new audience.

Get started today

If you’re a Juju wizard, we want to see what magic you’re already creating. If you’re not, now’s a great time to start — it only takes five minutes to get going with Juju.

The Charm Championship runs until 1 October 2013, and it’s open to individuals, teams, companies and organisations. For more details and full competition rules, visit the Charm Championship page.

Read more
Mark Baker

“May you live in interesting times.” This Chinese proverb probably resonates well with teams running OpenStack in production over the last 18 months. But, at the OpenStack Summit in Portland, Ubuntu and Canonical founder Mark Shuttleworth demonstrated that life is going to get much less ‘interesting’ for people running OpenStack and that is a good thing.

OpenStack has come a long way in a short time. The OpenStack Summit event in April attracted 3000 attendees with pretty much every significant technology company represented.

Only 12 months ago, being able to install OpenStack in under a few hours was deemed to be an extraordinary feat. Since then deployment tools such as Juju have simplified the process and today very large companies such as AT&T, HP and Deutsche Telekom have been able to rapidly push OpenStack Clouds into production. This means the community has had to look into solving the next wave of problems – managing the cloud in production, upgrading OpenStack, upgrading the underlying infrastructure and applying security fixes – all without disrupting services running in the cloud.

With the majority of OpenStack clouds running on Ubuntu, Canonical has been uniquely positioned to work on this. We have spent 18 months building out Juju and Landscape, our service orchestration and systems management tools, to solve these problems, and at the Summit, Mark Shuttleworth demonstrated just how far they have come. During a 30-minute session, Mark performed kernel upgrades on a live running system without service interruption. He talked about the integrations and partnerships in place with VMware, Microsoft and Inktank that mean these technologies can be incorporated into an OpenStack Cloud on Ubuntu with ease. This is the kind of practicality that OpenStack users need and represents how OpenStack is growing up. It also makes OpenStack less “interesting” and far more adoptable by a typical user, which is what OpenStack needs in order to continue its incredible growth. We at Canonical aim to be with it every step of the way.

Read more
roaksoax

For a while, I have been wanting to write about MAAS and how easily it can deploy workloads (especially OpenStack) with Juju, and the time has finally come. This will be the first of a series of posts in which I’ll provide an overview of how to quickly get started with MAAS and Juju.

What is MAAS?

MAAS probably needs no introduction, but if you do need one, this awesome video provides a far better explanation than I can give in this blog post.

http://youtu.be/J1XH0SQARgo

 

Components and Architecture

MAAS has been designed so that it can be deployed in different architectures and network environments, as either a single-node or a multi-node system. This allows MAAS to scale to meet your needs. It has two basic components: the MAAS Region Controller and the MAAS Cluster Controller.

MAAS Architectures

Region Controller

The MAAS Region Controller is the component users interact with, and it controls the Cluster Controllers. It hosts the WebUI and API, the MAAS meta-data server for cloud-init, and the DNS server. The Region Controller also runs an rsyslogd server to log the installation process, and a proxy (squid-deb-proxy) that caches Debian packages. The preseeds used for the different stages of the process are stored here as well.

Cluster Controller

The MAAS Cluster Controller interfaces only with the Region Controller and is in charge of provisioning in general. It hosts the TFTP and DHCP server(s), and stores both the PXE files and the ephemeral images. It is also the Cluster Controller’s job to power the managed nodes on and off (if configured).

The Architecture

As you can see in the image above, MAAS can be deployed on either a single node or multiple nodes. The way MAAS has been designed makes it highly scalable: you can add more Cluster Controllers, each managing a different pool of machines, and a single-node scenario becomes a multi-node scenario by simply adding more Cluster Controllers. Each Cluster Controller has to register with the Region Controller, and each can be configured to manage a different network. The intention is for each Cluster Controller to manage a different pool of machines on a different (provisioning) network, allowing MAAS to manage hundreds of machines. This is completely transparent to users, because MAAS presents all machines as a single pool, all of which can be used for deploying and orchestrating your services with juju.

How Does It Work?

MAAS has 3 basic stages: Enlistment, Commissioning and Deployment. These are explained below:

MAAS Process

Enlistment

The enlistment process is the process by which a new machine is registered with MAAS. When a new machine is started, it obtains an IP address and PXE boots from the MAAS Cluster Controller. The PXE boot process instructs the machine to load an ephemeral image that runs an initial discovery process (via a preseed fed to cloud-init). This discovery process gathers basic information such as network interfaces, MAC addresses and the machine’s architecture. Once this information is gathered, a request to register the machine is made to the MAAS Region Controller, and the machine then appears in MAAS in the Declared state.

Commissioning

The commissioning process is the process by which MAAS collects hardware information, such as the number of CPU cores, the amount of RAM, disk size, etc., which can later be used as constraints. Once the machine has been enlisted (Declared state), it must be accepted into MAAS for the commissioning process to begin and for it to become ready for deployment; in the WebUI, an “Accept & Commission” button is provided for this. Once the machine is accepted into MAAS, it PXE boots from the MAAS Cluster Controller again and is instructed to run the same ephemeral image. This time, however, the commissioning process gathers more detailed information about the machine, which is sent back to the MAAS Region Controller (via cloud-init, from the MAAS meta-data server). Once this process has finished, the machine’s information is updated and it changes to the Ready state, meaning it is ready for deployment.
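The acceptance step can also be scripted; a sketch using the maas-cli (this assumes you have already logged in with a profile named “maas”, and the exact sub-command may vary between MAAS versions):

maas-cli maas nodes accept-all    # accept every Declared node so commissioning can begin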

Deployment

Once machines are in the Ready state, they can be used for deployment. Deployment can happen with either juju or the maas-cli (or even the WebUI). The maas-cli will only let you install Ubuntu on the machine, while juju will not only deploy Ubuntu but also orchestrate services on top of it. When a machine has been deployed, its state changes to Allocated to <user>. This state means that the machine is in use by the user who requested its deployment.
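For example, deployment through juju might look like this sketch (the constraints syntax shown is illustrative and depends on your juju version; the commissioning data gathered above is what makes the mem constraint possible):

juju bootstrap                                # uses the MAAS provider in environments.yaml
juju deploy mysql                             # a Ready machine becomes Allocated
juju deploy --constraints "mem=4G" wordpress  # choose hardware using commissioning data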

Releasing Machines

Once a user no longer needs the machine, it can be released, and its status changes from Allocated to <user> back to Ready. The machine is turned off and made available for later use.

But… How do Machines Turn On/Off?

Now, you might be wondering how the machines are turned on and off, and who is in charge of that. MAAS can manage power devices, such as IPMI/iLO, Sentry Switch CDUs, or even virsh. By default, we expect all machines controlled by MAAS to have IPMI/iLO cards; if yours do, MAAS will attempt to auto-detect and auto-configure the cards during the Enlistment and Commissioning processes. Once machines are accepted into MAAS (after enlistment), they are turned on automatically and commissioned (provided IPMI was discovered and configured correctly). This also means that every time a machine is deployed, it is turned on automatically.

Note that MAAS not only handles physical machines; it can also handle virtual machines, hence the virsh power management type. However, you will have to configure the details manually in order for MAAS to manage these virtual machines and turn them on/off automatically.

Read more
Mark Baker

If you are interested in either OpenStack or MySQL (or both) then you need to know about 2 meetups running the evening of May 23rd in London.

The London OpenStack meetup.

This is the 3rd meeting to take place and promises to be a good one with 3 talks planned so far:

* Software defined networking and OpenStack – VMware Nicira’s Andrew Kennedy
* OpenStack Summit Overview – Rackspace’s Kevin Jackson
* An introduction to the Heat API – Red Hat’s Steven Hardy

For a 4th talk we are looking at a customer example – watch this space.

To come along please register at:

http://www.meetup.com/Openstack-London/

The MySQL Meetup.

This group hasn’t met for quite some time but MySQL remains as popular as ever, and new developments with MariaDB mean the group has plenty to catch up on. There are two talks planned so far:

* HP’s database as a service – HP’s Andrew Hutchings

* ‘Whatever he wants to talk about’ – MySQL and MariaDB founder Monty Widenius.

 

With David Axmark also in attendance it could be one of the most significant MySQL meetings in London ever. Not one to miss if you are interested in MySQL, MariaDB or related technologies.

MySQL meetups are managed on Facebook – please register to attend here:

http://www.meetup.com/The-London-MySQL-Meetup-Group/events/110243482/

 

Of course given the events are running in rooms next to each other you are welcome to register for both and switch between them based on the schedule. We hope to see you there!

Read more
anthony-c-beckley

From our Cloud partner Inktank…

Today marks another milestone for Ceph with the release of Cuttlefish (v0.61), the third stable release of Ceph. Inktank’s development efforts for the Cuttlefish release have focused on Red Hat support and on making Ceph easier to install and configure, while improving the operational ease of integrating with 3rd-party tools, such as provisioning and billing systems. As ever, there are also a ton of new features added to the object and block capabilities of Ceph, as well as to the underlying storage cluster (RADOS), alongside some great contributions from the community.

So what’s new for Ceph users in Cuttlefish?

Ease of installation:

  • Ceph-deploy: a new deployment tool which requires no other tools and allows a user to start running a multi-node Ceph cluster in minutes. Ideal for users who want to do quick proof of concepts with Ceph.
  • Chef recipes: a new set of reference Chef recipes for deploying a Ceph storage cluster, which Inktank will keep authoritative as new features emerge in Ceph. These are in addition to the Puppet scripts contributed by eNovance and Bloomberg, the Crowbar Barclamps developed with Dell, and the Juju charms produced in conjunction by Canonical, ensuring customers can install Ceph using most popular tools.
  • Fully tested RPM packages for Red Hat Enterprise Linux and derivatives, available on both the ceph.com repo and in EPEL (Extra Packages for Enterprise Linux).

Administrative functionality:

  • Admins can now create, delete or modify users and their access keys as well as manipulate and audit users’ bucket and object data using the RESTful API of the Ceph Object Gateway. This makes it easy to hook Ceph into provisioning or billing systems.
  • Administrators can now quickly and easily set a quota for a RADOS pool. This helps with capacity planning and management, as well as preventing specific Ceph clients from consuming all available capacity at the expense of other users.
  • In addition to the pool quotas, administrators can now quickly see the total used and available capacity of a cluster using the ceph df command, very similar to how the generic UNIX df command works with other local file systems. A short sketch of both follows this list.
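For example (the pool name here is hypothetical):

ceph osd pool set-quota volumes max_bytes 10737418240   # cap the pool at ~10 GiB
ceph df                                                 # total used/available capacity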

Ceph Block Device (RBD) Incremental Snapshots

It is now possible to take a snapshot of just the recent changes to a Ceph block image. Not only does this reduce the amount of space needed to store snapshots on a cluster, it also forms the foundation for delivering disaster recovery options for volumes in popular cloud platforms such as OpenStack and CloudStack.

The complete list of features is in the release notes, available at http://ceph.com/docs/master/release-notes/. You can also check out our roadmap page for more information on what’s coming up in future releases of Ceph. If you would like to contribute towards Ceph, visit Ceph.com for more information on how to get started, and we invite you to join our online Ceph Development Summit on Tuesday May 7th; more details are available at http://wiki.ceph.com.

Read more
Darryl Weaver

Introduction

In this article I will show you how to set up a new WordPress blog on Amazon EC2 public cloud and then migrate it to HP Public Cloud using Juju Jitsu, from Canonical, the company behind Ubuntu.

Prerequisites

  • Amazon EC2 Account
  • HP Public Cloud Account
  • Ubuntu Desktop or Server 12.04 or above with root or sudo access

Juju Environment Setup

First of all we need to install Juju and Jitsu from the PPA archive to make them available for use, so add the PPA to the installation sources:

sudo apt-get -y install python-software-properties
sudo add-apt-repository ppa:juju/pkgs

Now update apt and install juju, charm-tools and juju-jitsu

sudo apt-get update
sudo apt-get install juju charm-tools juju-jitsu

You will now need to set up your ~/.juju/environments.yaml file for Amazon EC2, see here: https://juju.ubuntu.com/get-started/amazon/

and then for HP cloud also, so see here:

https://juju.ubuntu.com/get-started/hp-cloud/

So you should end up with an environments.yaml file that will look something like this:

default: amazon
environments:
  amazon:
    type: ec2
    control-bucket: juju-b1bb8e0313d14bf1accb8a198a389eed
    admin-secret: [any-unique-string-shared-among-admins-u-like]
    access-key: [PUT YOUR ACCESS KEY HERE]
    secret-key: [PUT YOUR SECRET KEY HERE]
    default-series: precise
    juju-origin: ppa
    ssl-hostname-verification: true
  hpcloud:
    juju-origin: ppa
    control-bucket: juju-hpc-az1-cb
    admin-secret: [any-unique-string-shared-among-admins-u-like]
    default-image-id: [8419]
    region: az-1.region-a.geo-1
    project-name: [your@hp-cloud.com-tenant-name]
    default-instance-type: standard.small
    auth-url: https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/
    auth-mode: keypair
    type: openstack
    default-series: precise
    access-key: [PUT YOUR ACCESS KEY HERE]
    secret-key: [PUT YOUR SECRET KEY HERE]

Deploying WordPress to Amazon EC2

Now we need to bootstrap the Amazon EC2 environment.

juju bootstrap -e amazon

Check it finishes bootstrapping correctly after a few minutes using:

juju status -e amazon

Which should output something like this:

machines:
  0:
    agent-state: running
    dns-name: ec2-50-17-169-153.compute-1.amazonaws.com
    instance-id: i-78d4781b
    instance-state: running
services: {}

To give a good view of what is going on and to also allow modification from a web control panel we can deploy juju-gui to the bootstrap node, using juju-jitsu:

jitsu deploy-to 0 juju-gui -e amazon

juju expose juju-gui -e amazon

This will take a few minutes to deploy.
Once complete, the output of “juju status -e amazon” should look something like this:

machines:
  0:
    agent-state: running
    dns-name: ec2-50-17-169-153.compute-1.amazonaws.com
    instance-id: i-78d4781b
    instance-state: running
services:
  juju-gui:
    charm: cs:precise/juju-gui-3
    exposed: true
    relations: {}
    units:
      juju-gui/0:
        agent-state: started
        machine: 0
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: ec2-50-17-169-153.compute-1.amazonaws.com

Then use the “public-address” entry in your web browser to connect to juju-gui and see what is going on visually.

Juju-gui currently works best on Google Chrome or Chromium. It uses a self-signed SSL certificate, so you will be shown a security warning when connecting, which you can safely ignore and proceed.

Initially you should see the login page, with the username already filled in as “admin” and the password is the same as your password for the admin-secret in your ~/.juju/environments.yaml file.

Once logged in you should see a page that looks like this showing that only juju-gui is deployed to your environment, so far:

Juju-gui screenshot

First login

First we need to deploy a MySQL Database to store your blog’s data:

juju deploy mysql -e amazon

This will take a few minutes to deploy, so go ahead and also deploy a wordpress application server:

juju deploy wordpress -e amazon

While deployment continues you should see them appear in Juju-gui too

Juju gui with wordpress and mysql deployed

Showing MySQL and WordPress deployed


Once deployment is complete you can check the name of the new servers with:

juju status -e amazon

Which should output something like this:

machines:
  0:
    agent-state: running
    dns-name: ec2-50-17-169-153.compute-1.amazonaws.com
    instance-id: i-78d4781b
    instance-state: running
  1:
    agent-state: running
    dns-name: ec2-23-22-68-159.compute-1.amazonaws.com
    instance-id: i-3a9bd554
    instance-state: running
  2:
    agent-state: running
    dns-name: ec2-54-234-249-131.compute-1.amazonaws.com
    instance-id: i-f9e56696
    instance-state: running
services:
  juju-gui:
    charm: cs:precise/juju-gui-3
    exposed: true
    relations: {}
    units:
      juju-gui/0:
        agent-state: started
        machine: 0
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: ec2-50-17-169-153.compute-1.amazonaws.com
  mysql:
    charm: cs:precise/mysql-16
    relations: {}
    units:
      mysql/0:
        agent-state: started
        machine: 1
        public-address: ec2-23-22-68-159.compute-1.amazonaws.com
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: false
    relations:
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 2
        public-address: ec2-54-234-249-131.compute-1.amazonaws.com

Now we need to add a relationship between the wordpress application server and the MySQL database server. This will set up the SQL backend database for your blog and configure the usernames and passwords and database tables needed, all automatically.

juju add-relation wordpress mysql -e amazon

Finally, we need to expose the wordpress instance so you can connect to it using your web browser:

juju expose wordpress -e amazon

Now your Juju gui should look like this:
Juju Gui showing relations

Setting up WordPress and adding your first post

Then connect to the wordpress server using your web browser, via the public-address from the status output above, e.g. http://ec2-54-234-249-131.compute-1.amazonaws.com/
This will then show you the initial set up page for your wordpress blog, like this:

You will need to enter some configuration details such as a site name and password:

After you have saved the new details you will get a confirmation page:

Confirmation Page

So, click on Login to log in to your new blog on Amazon EC2.

Now, to make sure we are testing a live blog, we need to enter some data. So, let’s post a blog entry.
First click on New Post in the top left menu:

Now, type in the details of your new blog post and click on Publish on the top right:

Now you have a new blog on Amazon EC2 with your first blog entry posted.

Migrating from Amazon EC2 to HP Cloud

So, now that we have a live blog running on Amazon EC2, it is time to migrate it to HP Cloud.

We could simply re-run the commands above with the “-e hpcloud” switch to deploy the services to HP Cloud, and then migrate the data.
But a more satisfying way is to use Juju-jitsu again to export the current layout from the Amazon EC2 environment and then replicate it on HP Cloud.

So, we can use:

jitsu export -e amazon > wordpress-deployment.json

This will save a file in JSON format detailing the deployed services and their relationships.
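
Since the import step below depends on this file, it is worth a quick sanity check that the export produced valid JSON; one easy way, using Python’s built-in json.tool module, is:

python -m json.tool wordpress-deployment.json > /dev/null && echo "valid JSON"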

Next we need to bootstrap our HP Cloud environment:

juju bootstrap -e hpcloud

This will take a few minutes to deploy a new instance and install the Juju bootstrap node.
Once the bootstrap is complete you should be able to check the status by using:

juju status -e hpcloud

The output should be something like this:

machines:
  0:
    agent-state: running
    dns-name: 15.185.102.93
    instance-id: 1064649
    instance-state: running
services: {}

So, let us now deploy the replica of the environment on Amazon to HP:

jitsu import -e hpcloud wordpress-deployment.json

This will then deploy the replicated environment from Amazon EC2. You can check progress with:

juju status -e hpcloud

When complete, the status output should mirror the layout on Amazon EC2, with the juju-gui, mysql and wordpress services all started.

So we now have a replica of the Amazon EC2 environment on HP Cloud, but no data yet.
We now need to copy the SQL data from the existing Amazon EC2 MySQL database to the HP Cloud MySQL database, to bring all your live blog data across to the new environment.
Let’s log in to the MySQL DB node on Amazon EC2:

juju ssh mysql/0 -e amazon

Now that we are logged in, we can get the root password for the database:

sudo cat /var/lib/juju/mysql.passwd

This will output the root password for the MySQL DB so you can take a copy of the data with:

sudo mysqldump -p wordpress > wordpress.sql

When prompted, copy and paste the password that you recovered in the previous step.
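
If you would rather skip the interactive prompt, one way is to feed the password straight in via command substitution (note there must be no space after -p, and the password will end up in your shell history):

sudo mysqldump -p"$(sudo cat /var/lib/juju/mysql.passwd)" wordpress > wordpress.sql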

Now exit the login using:

exit

Now copy the SQL backup file from Amazon EC2 to your local machine:

juju scp mysql/0:wordpress.sql ./ -e amazon

This will download the wordpress.sql file.
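
Before going any further it is worth a quick sanity check that the dump actually contains your data, for example:

head -n 5 wordpress.sql
grep -c 'INSERT INTO' wordpress.sql
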
You will now need to know your new wordpress server IP address for HP Cloud.
You can find this from juju status:

juju status wordpress -e hpcloud

The output should look like this:

machines:
  3:
    agent-state: running
    dns-name: 15.185.102.121
    instance-id: 1064677
    instance-state: running
services:
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: false
    relations:
      db:
      - mysql
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 3
        public-address: 15.185.102.121

In order to fix your WordPress server name you will have to replace your Amazon EC2 WordPress public-address with your HP Cloud WordPress server public-address.
So, you will need to do a find and replace in the wordpress.sql file as follows:

sed -e 's/ec2-54-234-249-131.compute-1.amazonaws.com/15.185.102.121/g' wordpress.sql > wordpress-hp.sql

Obviously you will need to customise the command above with your own server addresses from Amazon and HP Cloud.
NB: This step can be problematic. If you need more detailed instructions on changing the server name of a WordPress installation and moving between servers, see:
http://codex.wordpress.org/Moving_WordPress
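
As an alternative to rewriting the whole dump with sed, you could upload and import the dump unmodified (following the steps below) and then update just the two WordPress options that store the site address. Assuming the charm uses the default wp_ table prefix, something like this, run on the HP Cloud MySQL unit, would do it (any URLs embedded in post content would still need the sed treatment):

sudo mysql -p wordpress -e "UPDATE wp_options SET option_value='http://15.185.102.121' WHERE option_name IN ('siteurl','home');"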

Now upload the database backup file, fixed with the new server public-address, to your new HP Cloud MySQL server:

juju scp wordpress-hp.sql mysql/0: -e hpcloud

Now let’s import the data into your wordpress database on HP Cloud.
First we need to log in to the database server, as before:

juju ssh mysql/0 -e hpcloud

Now let’s get the root password for the Database:

sudo cat /var/lib/juju/mysql.passwd

Now we can import the data using:

sudo mysql -p wordpress < wordpress-hp.sql

And when you are prompted for the password enter the password you retrieved in the previous step, and then exit.

Finally you will still need to expose the wordpress instance on HP Cloud to the outside world using:

juju expose wordpress -e hpcloud

Now connect to your new wordpress blog migrated to HP Cloud from Amazon by connecting to the public-address of the wordpress node.
You can find the address from the output of juju status as follows:

juju status wordpress -e hpcloud

The output should look like this:

machines:
  3:
    agent-state: running
    dns-name: 15.185.102.121
    instance-id: 1064677
    instance-state: running
services:
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: true
    relations:
      db:
      - mysql
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 3
        open-ports:
        - 80/tcp
        public-address: 15.185.102.121

Now connect to http://15.185.102.121/ and you will see your blog, hosted on HP Cloud.

Read more
David Duffey

Today we announced a collaborative support and engineering agreement with Dell.  As part of this agreement Canonical will add Dell 11G & 12G PowerEdge models to the Ubuntu Server 12.04 LTS Certification List and Dell will add Ubuntu Server to its Linux OS Support Matrix.

In May 2012, Dell launched the OpenStack Cloud Reference Architecture using Ubuntu 12.04 LTS on select PowerEdge-C series servers. Today’s announcement expands upon that offering by combining the benefits of Ubuntu Server Certification, Ubuntu Advantage enterprise support, and Dell Hardware ProSupport across the PowerEdge line.

Dell customers can now deploy with confidence when purchasing Dell PowerEdge servers with Dell Hardware ProSupport and Ubuntu Advantage.  When these customers call into Dell, their service tag numbers will be entitled with ProSupport and Ubuntu Advantage, which will create a seamless support experience via the collaborative Dell and Canonical support and engineering relationship.

In preparation for this announcement, Canonical engineers worked with Dell to enable and validate Ubuntu Server running on Dell PowerEdge Servers.  This work resulted in improved Ubuntu Server on Dell PowerEdge support for PCIe SSD (solid state drives), 4K-block drives, EFI booting, Web Services Management, consistent network device naming, and PERC (PowerEdge RAID Controllers).

Dell hardware systems management can be done out-of-band via IPMI, iDRAC, and the Lifecycle Controller. Dell OMSA Ubuntu packages are also available, but we recommend using the supported out-of-band systems management tools. Dell TechCenter is a good resource for additional technical information about running Ubuntu Server on Dell PowerEdge servers.
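
To give a flavour of what out-of-band management looks like from an Ubuntu admin box, a typical ipmitool query against a server’s iDRAC/BMC address might be as follows (the address and credentials below are placeholders):

sudo apt-get install ipmitool
ipmitool -I lanplus -H 10.0.0.120 -U root -P yourpassword chassis status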

If you are interested in purchasing Ubuntu Advantage for your Dell PowerEdge servers, please contact the Dell Solutions team at Canonical.  If your business is already using or thinking about using a supported Ubuntu Server infrastructure in your data-center then be sure to fill out the annual Ubuntu Server and Cloud Survey to provide additional feedback.

Read more
anthony-c-beckley

We are exhibiting at this year’s CeBIT event on March 5-9th, 2013 in Hannover, Germany, in conjunction with our partner in the region, Teuto.net, and we’re giving away a number of free tickets to selected customers and partners. If you are interested in one of these tickets, please contact me at anthony.beckley@canonical.com for more information.

The Canonical/Teuto.net stand will be in the Open Source Arena (Hall 6, Stand F16 (030)) and we will be showcasing two enterprise technology areas:

  • The Ubuntu Cloud Stack – demonstrating end user access to applications via an OpenStack cloud, powered by Ubuntu,
  • Ubuntu Landscape Systems Management – demonstrating ease of management of desktop, server and cloud nodes.

We will be running hourly demonstrations on our stand, and attendees have the chance to win a Google Nexus 7 tablet! Simply come to our stand and watch a short demo for your chance to win. If you would like to pre-register for a demonstration, email me at anthony.beckley@canonical.com

We look forward to seeing you at the show!

CeBIT draws a live audience of more than 3,000 people from over 100 different countries. In just five days the show delivers a panoramic view of the digital world’s mainstay markets: ICT and Telecommunications, Digital Media and Consumer Electronics.
To learn more about CeBIT click here.

Read more
Mark Baker

As clouds for IT infrastructure become commonplace, admins and devops need quick, easy ways of deploying and orchestrating cloud services.  As we mentioned in October, Ubuntu now has a GUI for Juju, the service orchestration tool for server and cloud. In this post we wanted to expand a bit more on how Juju makes it even easier to visualise and keep track of complex cloud environments.

Juju provides the ability to rapidly deploy cloud services on OpenStack, HP Cloud, AWS and other platforms using a library of 100 ‘charms’ which cover applications from node.js to Hadoop. Juju GUI makes the Juju command line interface even easier, giving the ability to deploy, manage and track progress visually as your cloud grows (or shrinks).

Juju GUI is easy and totally intuitive. To start, you simply search for the service you want in the Juju GUI charm search bar (top right on the screen). In this case I want to deploy WordPress to host my blog site. I have the chance to alter the WordPress settings, and with a few clicks the service is ready. It’s displayed as an icon on the GUI.

I then want a MySQL service to go alongside. Again I search for the charm, set the parameters (or accept the defaults) and away we go.

It’s even easier to build the relations between these services with point and click. Juju knows that the relationship needs a suitable database link.

I can expose WordPress to users by setting the expose flag, at the bottom of the settings screen, to on. To scale up WordPress I can add more units, creating identical copies of the WordPress deployment, including any relationships. I have selected ten in total, and this shows in the centre of the wordpress icon.
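
For comparison, the command-line equivalent of the point-and-click flow above is roughly the following (on older Juju releases, run add-unit once per extra copy):

juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql
juju expose wordpress
juju add-unit wordpress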

And that’s it.

For a simple cloud, the Juju command line or other tools might be sufficient. But as your cloud grows, Juju GUI will be a wonderful way not only to provision and orchestrate services, but more importantly to validate and check that you have the correct links and relationships. It’s an ideal way to replicate and scale cloud services as you need.

For more details of Juju, go to juju.ubuntu.com.  To try Juju GUI for yourself, go to http://uistage.jujucharms.com:8080/

Read more
Mark Murphy

Ubuntu has long been a favourite with developers – especially in the worlds of web and cloud development. We’re excited that, from today, serious (and not-so-serious) developers will be able to get their hands on the super-sleek Dell XPS 13 Developer Edition, preloaded with and fully optimised for Ubuntu.

The Dell XPS 13 is a top spec, high-end ultramobile laptop, offering developers a complete client-to-cloud experience. It is the result of Dell’s bold Sputnik initiative, which embraced the community and received a terrific response from developers around the world. The community has spoken – and they said, “give us power, give us storage, give us a really ‘meaty’ machine – that also looks GREAT.” And Dell has delivered.

The XPS 13 with Ubuntu allows developers to create ‘microclouds’ on the local drive, simulating a proper, at-scale environment, before deploying seamlessly to the cloud using Juju, Ubuntu’s service orchestration tool. That’s something you simply can’t do with a standard installation of any other OS.
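
For the curious, such a local ‘microcloud’ is just another entry in ~/.juju/environments.yaml; a minimal sketch (exact keys can vary between Juju releases) looks something like this, after which “juju bootstrap -e local” brings the environment up on the laptop itself:

environments:
  local:
    type: local
    data-dir: /home/ubuntu/.juju/local
    admin-secret: [any-unique-string-u-like]
    default-series: precise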

With Juju now supporting 103 charms and counting, it covers the world’s most popular open source cloud services, all from the Ubuntu desktop.

I’d like to call out the drive and energy of Barton George and Michael Cote at Dell for making the XPS 13 launch possible. And of course, the team within Canonical for the fine tuning of this great product (mine ‘cold’ boots to desktop in under 11 seconds!). I’d also like to thank the dev community for their incredible support, helping us get this from drawing board to factory ship – get buying!

Combining Ubuntu with the power of Dell hardware gives developers the perfect environment for productive software development, whatever their sector. The Dell XPS 13 Developer Edition is available from http://www.dell.com/us/soho/p/xps-13-linux/pd in America and Canada today.

Read more
Sonia Ouarti

You have critical decisions ahead as you take your first steps into cloud computing.

One of them will be whether to build a private cloud infrastructure in your own data centre, make use of one of the public cloud services offered by vendors like Amazon, Rackspace and HP, or combine the two in a ‘hybrid cloud’ approach.

You can get closer to the right decision by considering the right questions now:

  • Budget - How much do you have (or how much don’t you have) to support your cloud strategy?
  • Speed - When do you need this done? Tomorrow, next year, yesterday…
  • Demand - How many users will you need to support? And will they all come at once?
  • Resources - What kind of resources do you have in-house? And how many can you realistically get your hands on?
  • Privacy - How sensitive is your data? Where are you doing business?

This short, sharp checklist takes you through the process that points you in the right direction and ensures your investments pay off from the start. Download it today.

 

Read more
Sonia Ouarti

OpenStack, your foundation for Cloud computing

14 November 2012 at 4pm GMT

 

The open cloud, based on OpenStack, is fast becoming one of the most popular cloud platforms. OpenStack delivers open standards, modularity and scalability, and avoids vendor lock-in.

Join this webinar to find out why OpenStack is surging ahead, and learn about the OpenStack technical architecture and the new features of the recent Folsom release. Find out why, to date, public cloud providers using OpenStack, such as DreamHost and HP, are deploying it on Ubuntu.

You will also learn about investments that Canonical has made in OpenStack, such as our Continuous Integration efforts, the Ubuntu Cloud Archive and Ceilometer.

Register now

Read more
Mark Baker

Hardened sysadmins and operators often spurn graphical user interfaces (GUIs) as being slow, cumbersome, unscriptable and inflexible. GUIs are for wimps, right?

Well, I’m not going to argue – and certainly, command line interfaces (CLIs) have their benefits, for those comfortable using them. But we are seeing a pronounced change in the industry, as developers start to take a much greater interest in the deployment and operation of flexible, elastic services in scale out or cloud environments. Whilst many of these new ‘devops’ are happy with a CLI, others want to be able to visualise their environment. In the same way that IDEs are popular, being able to see a representation of the services that are running and how they are related can prove extremely valuable. The same goes for launching new services or removing existing ones.

This is why, last week, as part of the new Ubuntu 12.10 release, we announced a GUI for Juju, the Ubuntu service orchestration tool for server and cloud.
The new Juju GUI does all these things and more. For those of you unfamiliar with it, Juju uses a service definition file known as a ‘charm’. Much of the magic in Juju comes from the collective expertise that has gone into developing the charm: it enables you to deploy complex services without intimate knowledge of the best practice associated with that service. Instead, all that deployment expertise is encapsulated in the charm.
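
To give a feel for what a charm declares, here is a trimmed sketch of the kind of metadata.yaml the wordpress charm carries (fields abbreviated; the deployment logic itself lives in the charm’s hooks):

name: wordpress
summary: WordPress blogging engine
description: Installs and configures the WordPress application.
requires:
  db:
    interface: mysql
provides:
  website:
    interface: http
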
Now, with the Juju GUI, it gets even easier. You can select services from a library of nearly 100 charms, covering applications from node.js to Hadoop. And you can deploy them live on any of the providers that Juju supports – OpenStack, HP Cloud, Amazon Web Services and Ubuntu’s Metal-as-a-Service. You can add relations between services while they are running, explore the load on them, upgrade them or destroy them. At the OpenStack Summit in San Diego this year, Mark Shuttleworth even used it to upgrade a running* OpenStack Cloud from Essex to Folsom.
Since the Juju GUI was first shown, the interest and feedback has been tremendous. It certainly seems to make the magic of Juju – and what it can do for people – easier to see. If you haven’t seen it already, check out the screen shots below or visit http://uistage.jujucharms.com:8080/

Because as we’ve always known, a picture really is worth a thousand words.

 

Juju Gui Image

The Juju GUI

 

 

*Running on Ubuntu Server, obviously.

Read more