Canonical Voices

Posts tagged with 'juju'

Darryl Weaver


In this article I will show you how to set up a new WordPress blog on Amazon EC2 public cloud and then migrate it to HP Public Cloud using Juju Jitsu, from Canonical, the company behind Ubuntu.


Requirements:

  • Amazon EC2 Account
  • HP Public Cloud Account
  • Ubuntu Desktop or Server 12.04 or above with root or sudo access

Juju Environment Setup

First we need to install Juju and Jitsu from the PPA archive, so add the PPA to the installation sources:

sudo apt-get -y install python-software-properties
sudo add-apt-repository ppa:juju/pkgs

Now update apt and install juju, charm-tools and juju-jitsu:

sudo apt-get update
sudo apt-get install juju charm-tools juju-jitsu

You will now need to set up your ~/.juju/environments.yaml file with entries for both Amazon EC2 and HP Cloud; see the Juju documentation for each provider for the credentials required.

So you should end up with an environments.yaml file that will look something like this:

default: amazon
environments:
  amazon:
    type: ec2
    control-bucket: juju-b1bb8e0313d14bf1accb8a198a389eed
    default-series: precise
    juju-origin: ppa
    ssl-hostname-verification: true
  hpcloud:
    type: openstack
    juju-origin: ppa
    control-bucket: juju-hpc-az1-cb
    admin-secret: [any-unique-string-shared-among-admins-u-like]
    default-image-id: [8419]
    region: az-1.region-a.geo-1
    project-name: []
    default-instance-type: standard.small
    auth-mode: keypair
    default-series: precise

Deploying WordPress to Amazon EC2

Now we need to bootstrap the Amazon EC2 environment.

juju bootstrap -e amazon

Check it finishes bootstrapping correctly after a few minutes using:

juju status -e amazon

Which should output something like this:

machines:
  0:
    agent-state: running
    instance-id: i-78d4781b
    instance-state: running
services: {}

To get a good view of what is going on, and to allow changes from a web control panel, we can deploy juju-gui to the bootstrap node using juju-jitsu:

jitsu deploy-to 0 juju-gui -e amazon

juju expose juju-gui -e amazon

This will take a few minutes to deploy.
Once complete, “juju status -e amazon” should output something like:

machines:
  0:
    agent-state: running
    instance-id: i-78d4781b
    instance-state: running
services:
  juju-gui:
    charm: cs:precise/juju-gui-3
    exposed: true
    relations: {}
    units:
      juju-gui/0:
        agent-state: started
        machine: 0
        open-ports:
        - 80/tcp
        - 443/tcp
Then use the “public-address” entry in your web browser to connect to juju-gui and see what is going on visually.

Juju-gui currently works best in Google Chrome or Chromium. It uses a self-signed SSL certificate, so when you connect you will be given a security warning, which you can safely ignore and proceed.

Initially you should see the login page, with the username already filled in as “admin” and the password is the same as your password for the admin-secret in your ~/.juju/environments.yaml file.
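If you need to check that password later, it can be read back out of the environments file with standard tools. A minimal sketch using a throwaway sample file (the filename and secret here are made up; your real file is ~/.juju/environments.yaml):

```shell
# Create a sample file (illustrative values only)
cat > demo-environments.yaml <<'EOF'
default: amazon
environments:
  amazon:
    type: ec2
    admin-secret: example-admin-secret
EOF

# Extract the admin-secret value
grep 'admin-secret:' demo-environments.yaml | awk '{print $2}'
# -> example-admin-secret
```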

Once logged in you should see a page that looks like this showing that only juju-gui is deployed to your environment, so far:

Juju-gui screenshot

First login

First we need to deploy a MySQL Database to store your blog’s data:

juju deploy mysql -e amazon

This will take a few minutes to deploy, so go ahead and also deploy a wordpress application server:

juju deploy wordpress -e amazon

While deployment continues you should see both services appear in Juju-gui too.

Juju gui with wordpress and mysql deployed

Showing MySQL and WordPress deployed


Once deployment is complete you can check the name of the new servers with:

juju status -e amazon

Which should output something like this:

machines:
  0:
    agent-state: running
    instance-id: i-78d4781b
    instance-state: running
  1:
    agent-state: running
    instance-id: i-3a9bd554
    instance-state: running
  2:
    agent-state: running
    instance-id: i-f9e56696
    instance-state: running
services:
  juju-gui:
    charm: cs:precise/juju-gui-3
    exposed: true
    relations: {}
    units:
      juju-gui/0:
        agent-state: started
        machine: 0
        open-ports:
        - 80/tcp
        - 443/tcp
  mysql:
    charm: cs:precise/mysql-16
    relations: {}
    units:
      mysql/0:
        agent-state: started
        machine: 1
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: false
    relations:
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 2

Now we need to add a relationship between the wordpress application server and the MySQL database server. This will set up the SQL backend database for your blog and configure the usernames and passwords and database tables needed, all automatically.

juju add-relation wordpress mysql -e amazon

Finally, we need to expose the wordpress instance so you can connect to it using your web browser:

juju expose wordpress -e amazon

Now your Juju gui should look like this:
Juju Gui showing relations

Setting up WordPress and adding your first post

Then connect to the wordpress server using your web browser, using the public-address from the status output above.
This will show you the initial set-up page for your wordpress blog, like this:

You will need to enter some configuration details such as a site name and password:

After you have saved the new details you will get a confirmation page:

Confirmation Page

So, click on Login to login to your new blog on Amazon EC2.

Now in order to make sure we are testing a live blog we need to enter some data. So, let’s post a blog entry.
First click on New Post on the top left menu:

Now, type in the details of your new blog post and click on Publish on the top right:

Now you have a new blog on Amazon EC2 with your first blog entry posted.

Migrating from Amazon EC2 to HP Cloud

So, now that we have a live blog running on Amazon EC2, it is time to migrate it to HP Cloud.

We could just run the commands above with the option “-e hpcloud” to deploy the services to HP Cloud and then migrate the data.
But a more satisfying way is to use Juju-jitsu again to export the current layout from the Amazon EC2 environment and then replicate it on HP Cloud.

So, we can use:

jitsu export -e amazon > wordpress-deployment.json

This will save a file in JSON format detailing the deployed services and their relationships.
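The exact schema is jitsu's own, but the file is ordinary JSON, so you can inspect it with standard tools. A sketch using a hypothetical, simplified structure (the real export contains more detail per service):

```shell
# Hypothetical, simplified export for illustration only
cat > wordpress-deployment.json <<'EOF'
{
  "services": {
    "juju-gui": {"charm": "cs:precise/juju-gui-3"},
    "mysql": {"charm": "cs:precise/mysql-16"},
    "wordpress": {"charm": "cs:precise/wordpress-11"}
  },
  "relations": [["wordpress", "mysql"]]
}
EOF

# List the exported service names
python3 -c "import json; print(sorted(json.load(open('wordpress-deployment.json'))['services']))"
# -> ['juju-gui', 'mysql', 'wordpress']
```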

First we need to bootstrap our HP Cloud environment:

juju bootstrap -e hpcloud

This will take a few minutes to deploy a new instance and install the Juju bootstrap node.
Once the bootstrap is complete you should be able to check the status by using:

juju status -e hpcloud

The output should be something like this:

machines:
  0:
    agent-state: running
    instance-id: 1064649
    instance-state: running
services: {}

So, let us now deploy the replica of the environment on Amazon to HP:

jitsu import -e hpcloud wordpress-deployment.json

This will then deploy the replicated environment from Amazon EC2. You can check progress with:

juju status -e hpcloud

When the import completes, the output of “juju status -e hpcloud” should show the same services, units and relations as on Amazon EC2.

So we now have a replica of the environment from Amazon EC2 on HP Cloud, but we have no data, yet.
We also need to copy the SQL data from the existing Amazon EC2 MySQL database to the HP Cloud MySQL database to get all your live blog data across to the new environment.
Let’s login to the MySQL DB node on Amazon EC2:

juju ssh mysql/0 -e amazon

Now we are logged in we can get the root password for the Database:

sudo cat /var/lib/juju/mysql.passwd

This will output the root password for the MySQL DB so you can take a copy of the data with:

sudo mysqldump -p wordpress > wordpress.sql

When prompted, copy and paste the password that you recovered in the previous step.

Now exit the session:

exit

Now copy the SQL backup file from Amazon EC2 to your local machine:

juju scp mysql/0:wordpress.sql ./ -e amazon

This will download the wordpress.sql file.
You will now need the public-address of your new wordpress server on HP Cloud.
You can find this from juju status:

juju status wordpress -e hpcloud

The output should look like this:

machines:
  3:
    agent-state: running
    instance-id: 1064677
    instance-state: running
services:
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: false
    relations:
      db:
      - mysql
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 3

In order to fix your WordPress server name you will have to replace your Amazon EC2 WordPress public-address with your HP Cloud WordPress public-address wherever it appears in the dump.
So, you will need to do a find and replace in the wordpress.sql file as follows:

sed -e 's/[amazon-public-address]/[hpcloud-public-address]/g' wordpress.sql > wordpress-hp.sql

Substitute your actual server addresses from Amazon and HP Cloud in the command above.
NB: This step can be problematic; if you need more detailed information on changing the server name of a WordPress installation and moving servers, see the WordPress documentation on moving WordPress.
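To see the substitution in action without touching a real database dump, here is a self-contained sketch with made-up addresses (substitute your own):

```shell
# Made-up addresses for illustration; use your real ones
OLD="ec2-12-34-56-78.compute-1.amazonaws.com"
NEW="15-185-100-100.hpcloud.example"

# Fake one line of a WordPress dump
echo "('siteurl','http://${OLD}/')" > wordpress.sql

# Replace every occurrence of the old address
sed -e "s/${OLD}/${NEW}/g" wordpress.sql > wordpress-hp.sql
cat wordpress-hp.sql
# -> ('siteurl','http://15-185-100-100.hpcloud.example/')
```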

Now upload to your new HP Cloud MySQL server the database backup file, fixed with the new server public-address:

juju scp wordpress-hp.sql mysql/0: -e hpcloud

Now let’s import the data into your wordpress database on HP Cloud.
First we need to log in to the database server, as before:

juju ssh mysql/0 -e hpcloud

Now let’s get the root password for the Database:

sudo cat /var/lib/juju/mysql.passwd

Now we can import the data using:

sudo mysql -p wordpress < wordpress-hp.sql

And when you are prompted for the password enter the password you retrieved in the previous step, and then exit.

Finally you will still need to expose the wordpress instance on HP Cloud to the outside world using:

juju expose wordpress -e hpcloud

Now connect to your new wordpress blog migrated to HP Cloud from Amazon by connecting to the public-address of the wordpress node.
You can find the address from the output of juju status as follows:

juju status wordpress -e hpcloud

The output should look like this:

machines:
  3:
    agent-state: running
    instance-id: 1064677
    instance-state: running
services:
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: true
    relations:
      db:
      - mysql
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 3
        open-ports:
        - 80/tcp

Now connect to that public-address, and your blog is hosted on HP Cloud.

Mark Baker

As clouds for IT infrastructure become commonplace, admins and devops need quick, easy ways of deploying and orchestrating cloud services.  As we mentioned in October, Ubuntu now has a GUI for Juju, the service orchestration tool for server and cloud. In this post we wanted to expand a bit more on how Juju makes it even easier to visualise and keep track of complex cloud environments.

Juju provides the ability to rapidly deploy cloud services on OpenStack, HP Cloud, AWS and other platforms using a library of 100 ‘charms’ which cover applications from node.js to Hadoop. Juju GUI makes the Juju command line interface even easier, giving the ability to deploy, manage and track progress visually as your cloud grows (or shrinks).

Juju GUI is easy and totally intuitive. To start, you simply search for the service you want on the Juju GUI charm search bar (top right on the screen). In this case I want to deploy WordPress to host my blog site. I have the chance to alter the WordPress settings, and with a few clicks the service is ready. It’s displayed as an icon on the GUI.

I then want a mysql service to go alongside. Again I search for the charm, set the parameters (or accept the defaults) and away we go.

It’s even easier to build the relations between these services by pointing and clicking. Juju knows that the relationship needs a suitable database link.

I can expose WordPress to users by setting the expose flag, at the bottom of the settings screen, to on. To scale up WordPress I can add more units, creating identical copies of the WordPress deployment, including any relationships. I have selected ten in total, and this shows in the centre of the wordpress icon.

And that’s it.

For a simple cloud, the Juju command line or other tools might be sufficient. But as your cloud grows, Juju GUI is a wonderful way not only to provision and orchestrate services, but more importantly to validate and check that you have the correct links and relationships. It’s an ideal way to replicate and scale cloud services as you need.

For more details of Juju, or to try Juju GUI for yourself, visit the Juju website.

Matt Fischer

Getting Juju With It

At the UDS in Copenhagen I finally had time to attend a session on Juju Charms. I knew the theory of Juju, which is that it allows you to easily deploy and link services on public clouds, locally, or even on bare metal, but I had never had time to try it out. The Charm School (registration required) session in Copenhagen really showed me the power of what Juju can give you. For example, when I first set up my blog, I had to find a webhost, get an ssh account, download WordPress, install it and its dependencies, set up mysql, configure WordPress, debug why they weren’t communicating, etc. It was super annoying and took way too long. Now, imagine you want to set up ten blogs, or ten instances of couchdb, or one hundred, or one thousand, and it quickly becomes untenable. With juju, setting up a blog is as simple as:

  • juju deploy wordpress
  • juju deploy mysql
  • juju add-relation wordpress mysql
  • juju expose wordpress

A few minutes later, and I have a functioning WordPress install. For more complex setups and installs Juju helps to manage the relationships between charms and sends events that the charms react to. This makes it easy to add and remove services like haproxy and memcached to an existing webapp. This interaction between charms implies that the more charms that are available the more useful they all become; the network effect applies to charms!

So after I got home, Charm School had left me energized and ready to write a charm, but I didn’t have any great ideas, until I remembered an app that I’ve used before called Tracks. Tracks is a GTD app, in other words, a fancy todo list. I’d used it hosted before, but my free host went offline and I lost all my to do items. Hosting my own would be much safer. So I started working on a Tracks charm.

If you need an idea for a charm, think about what tools you use that you have to setup, what software have you installed and configured recently? If you need an idea and nothing stands out, you can check out the list of “Charm Needed” bugs. Actually you should check that list regardless to make sure nobody else is already writing the same one.

With an idea in hand, I sat down to write my charm. Step one is the documentation, most of which is contained on the “Writing a Charm” page. I fully expected to spend three weeks learning a new programming language with arcane black-magic commands, but I was pleasantly surprised to learn that you can write a charm in any language you want. Most charms seem to be shell scripts or Python, and my charm was simple enough that I wrote it in bash.
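A charm is just a directory of metadata plus executable hook scripts, so the simplest skeleton really can be plain shell. A hypothetical, minimal sketch (not the real Tracks charm; the metadata fields are abbreviated):

```shell
mkdir -p tracks-charm/hooks

# Minimal metadata describing the charm (illustrative only)
cat > tracks-charm/metadata.yaml <<'EOF'
name: tracks
summary: Tracks GTD application
description: Hypothetical skeleton, not the published charm.
EOF

# The install hook is an ordinary executable script
cat > tracks-charm/hooks/install <<'EOF'
#!/bin/sh
set -e
echo "installing tracks dependencies"
EOF
chmod +x tracks-charm/hooks/install

# Juju runs this hook on deploy; we can run it locally while developing
./tracks-charm/hooks/install
# -> installing tracks dependencies
```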

During the process of charm writing you may have some questions, and there’s plenty of help to be had. First, the examples that are contained in the juju trunk are OLD and I wouldn’t recommend you follow them. They are missing things like README files and don’t expose http interfaces, which was requested for my charm. Instead I’d recommend you pull the wordpress, mysql, and drupal charms from the charm store. If the examples aren’t enough, you can always ask in #juju on freenode. Once your charm works, you can submit it for review. You’ll probably learn a lot during the review; every person I’ve talked to has.

Finally after a bit of work off and on, my charm was done! I submitted it for review, made a few fixes and it made it into the store.

I can now have a Tracks instance up and running in just a few minutes.

I’ve barely scratched the surface here with this post, but I hope someone will be energized to go investigate charms and write one. Charms do not use black magic and you don’t need to learn a new language to write one. Help is available if you need it and we’d love to have your contributions.
If you go write a charm please comment here and let me know!

Mark Baker

Hardened sysadmins and operators often spurn graphical user interfaces (GUIs) as being slow, cumbersome, unscriptable and inflexible. GUIs are for wimps, right?

Well, I’m not going to argue – and certainly, command line interfaces (CLIs) have their benefits, for those comfortable using them. But we are seeing a pronounced change in the industry, as developers start to take a much greater interest in the deployment and operation of flexible, elastic services in scale out or cloud environments. Whilst many of these new ‘devops’ are happy with a CLI, others want to be able to visualise their environment. In the same way that IDEs are popular, being able to see a representation of the services that are running and how they are related can prove extremely valuable. The same goes for launching new services or removing existing ones.

This is why, last week, as part of the new Ubuntu 12.10 release, we announced a GUI for Juju, the Ubuntu service orchestration tool for server and cloud.
The new Juju GUI does all these things and more. For those of you unfamiliar with it, Juju uses a service definition file known as a ‘charm’. Much of the magic in Juju comes from the collective expertise that has gone into developing the charm. It enables you to deploy complex services without intimate knowledge of the best practice associated with that service. Instead, all that deployment expertise is encapsulated in the charm.
Now, with the Juju GUI, it gets even easier. You can select services from a library of nearly 100 charms, covering applications from node.js to Hadoop. And you can deploy them live on any of the providers that Juju supports – OpenStack, HP Cloud, Amazon Web Services and Ubuntu’s Metal-as-a-Service. You can add relations between services while they are running, explore the load on them, upgrade them or destroy them. At the OpenStack Summit in San Diego this year, Mark Shuttleworth even used it to upgrade a running* OpenStack Cloud from Essex to Folsom.
Since the Juju GUI was first shown, the interest and feedback have been tremendous. It certainly seems to make the magic of Juju – and what it can do for people – easier to see. If you haven’t seen it already, check out the screenshots below or visit the Juju website.

Because, as we’ve always known, a picture really is worth a thousand words.


Juju Gui Image

The Juju GUI



*Running on Ubuntu Server, obviously.

Robert Ayres

Juju Java Cluster – Part 3

In my previous post, we added Memcached to our cluster.  In this post, I’ll write a bit more about the Tomcat configuration options that are available including JMX monitoring.  I’ll also show how easy it is to enable session clustering.

Java cluster with JMX and session clustering

Configuration and Monitoring

All charms come with many options available for configuration.  Each is selected to allow the same tuning you would typically perform on a manually deployed machine.  Configuration options are shown per charm when browsing the Charm Store.  The Tomcat charm provides numerous options.  For example, to tweak the JVM options of a running service:

juju set tomcat "java_opts=-Xms768M -Xmx1024M"

This sets the Java heap to a minimum and maximum of 768 MB and 1024 MB respectively.  If you are debugging an application, you may also set:

juju set tomcat "java_opts=-Xms768M -Xmx1024M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=webapps"

This creates a ‘.hprof’ Java heap dump each time an OutOfMemoryError occurs, which you can inspect with VisualVM or jhat.

To open a remote debugger:

juju set tomcat debug_enabled=True

This will open a JDWP debugger on port 8000 that you can use to step through code from Eclipse, NetBeans etc.  (Note: The debugger is never exposed to the Internet, so you need to access it through an ssh tunnel – ‘ssh -L 8000:localhost:8000 ubuntu@<tomcat-unit-address>’ – then connect your IDE to localhost port 8000).

A useful part of the JVM is JMX monitoring.  To enable JMX:

juju set tomcat jmx_enabled=True
juju set tomcat "jmx_control_password=<password>"
juju set tomcat "jmx_monitor_password=<password>"

This will start a remote JMX listener on ports 10001, 10002 and set passwords for the ‘monitorRole’ and ‘controlRole’ users (not setting a password disables that account).  You can now open VisualVM or JConsole to connect to the remote JMX instance (screenshot below).  (Note: JMX is never exposed to the Internet, so you need to access it through an ssh tunnel – ‘ssh -L 10001:localhost:10001 -L 10002:localhost:10002 ubuntu@<tomcat-unit-address>’ – then connect your JMX client to port 10001).  You can easily expose your own application-specific MBeans for monitoring by adding them to the platform MBeanServer.

Juju JMX monitoring

Options are applied to services and to all units under a service.  It isn’t possible to apply options to a specific unit.  So if you enable debugging, you enable it for all Tomcat units.  The same goes for Java options.

Options may also be applied at deployment time.  For example, to use Tomcat 6 (rather than the default Tomcat 7), create a ‘config.yaml’ file containing the following:

tomcat:
  tomcat_version: tomcat6

Then deploy:

juju deploy --config config.yaml cs:~robert-ayres/precise/tomcat

All units added via ‘add-unit’ will also be Tomcat 6.

Session Clustering

Previously, we set up a Juju cluster consisting of two Tomcat units behind HAProxy.  In this configuration, HTTP sessions exist only on individual Tomcat units.  For many production setups, the use of load-balancer sticky sessions and a non-replicated session is the most performant, where HTTP sessions are either not required or expendable in the event of unit failure.  For setups concerned about availability of sessions, you can enable Tomcat session clustering on your Juju service, which will replicate session data between all units in the service.  Should a unit fail, any of the remaining units can pick up the subsequent requests with the previous session state.  To enable session clustering:

juju set tomcat cluster_enabled=True

We have two choices of how the cluster manages membership.  The preferred choice is using multicast traffic, but as EC2 doesn’t allow this, we must use static configuration.  This is the default, but you can switch between either method by changing the value of the ‘multicast’ option.  Like everything else deployed with Juju, any new units added or removed via ‘add-unit’ or ‘remove-unit’ are automatically included in or excluded from the cluster membership.  This easily allows you to toggle clustering so that you can benchmark precisely what latency/throughput cost you incur by using replicated sessions.

In summary, I’ve shown how you can tweak Tomcat configuration including enabling JMX monitoring.  We’ve also seen how to enable session clustering.  In my final post of the series, I shall show how you can add Solr indexing to your application.

Robert Ayres

Juju Java Cluster – Part 2

In my previous post, I demonstrated deploying a Juju cluster with a sample Grails application.  Let’s expand our cluster by adding Memcached (see diagram below).

Java memcached cluster

Deploy a Memcached service:

juju deploy memcached

Configure Tomcat to map Memcached location under a JNDI name:

juju set tomcat "jndi_memcached_config=param/Memcached:memcached"

This will map the ‘memcached’ service under the JNDI name ‘param/Memcached’.  Whilst Memcached is deploying, you can add the relation ahead of time:

juju add-relation tomcat memcached

We will use the excellent Java Memcached library Spy Memcached in our application.  Download the ‘spymemcached-x.x.x.jar’ and copy it to ‘juju-example/lib’.
Now edit ‘juju-example/grails-app/conf/spring/resources.groovy’ so it contains the following:

import net.spy.memcached.spring.MemcachedClientFactoryBean
import org.springframework.jndi.JndiObjectFactoryBean

beans = {

    memcachedClient(MemcachedClientFactoryBean) {
        servers = { JndiObjectFactoryBean jndi ->
            jndiName = 'param/Memcached'
            resourceRef = true
        }
    }
}

To make use of our Memcached client, let’s create a simple page counter:

(within 'juju-example' directory)
grails create-controller memcached-count

This will create ‘juju-example/grails-app/controllers/juju/example/MemcachedCountController.groovy’.  Edit it so it contains the following:

package juju.example

class MemcachedCountController {

    def memcachedClient

    def index() {
        def count = memcachedClient.incr('juju-example-count', 1, 1)
        render count
    }
}

When Memcached is deployed and associated with Tomcat, redeploy our application:

(within juju-example directory)
grails clean
grails war

(within parent directory)
cp juju-example/target/juju-example-0.1.war precise/j2ee-deployer/deploy
juju upgrade-charm --repository . juju-example

Once redeployed, you should be able to open the memcachedCount page of the application in your browser and refresh it to see an incrementing counter, stored in Memcached.

As with our datasource connection, we utilise a JNDI lookup to instantiate our Memcached client using runtime configuration provided by Juju (a space separated list of Memcached units, provided as a JNDI environment parameter).  With this structure, the developer has total control over integrating external services into their application.  If they want to use a different Memcached library, they can use the Juju configuration to instantiate a different class.

If we want to increase our cache capacity, we can add more units:

juju add-unit -n 2 memcached

This will deploy another 2 Memcached units.  Our Tomcats will update to reflect the new units and restart.
(Note: As you add Memcached units, our example counter may appear to reset as its Memcached key is hashed to another server).
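The apparent reset happens because client libraries pick a server by hashing each key against the current server list; change the size of the list and many keys map to a different server.  A simplified illustration of the principle (real clients such as Spy Memcached use their own hashing scheme, so this is only a sketch):

```shell
python3 - <<'EOF'
import zlib

key = b'juju-example-count'
# Which server index the key lands on as the pool grows
for n in (2, 3, 4):
    print(f"{n} servers -> server {zlib.crc32(key) % n}")
EOF
```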

We’ve added Memcached to our Juju cluster and seen how you can integrate external services within your application using JNDI values.
In my next post, I’ll write about how we can enable features of our existing cluster like JMX and utilise Tomcat session clustering.

Robert Ayres

Juju Java Cluster – Part 1

In my previous post I gave an introduction to Juju, the new deployment tool in Ubuntu 12.04 Precise Pangolin.  This post is the first of four demonstrating how you can deploy a typical Java web application into your own Juju cluster.  I’ll start the series by deploying an initial cluster of HAProxy, Tomcat and MySQL to Amazon EC2, shown in the diagram below.  You can always deploy to a different environment than EC2 such as MAAS or locally using LXC.  The Juju commands are equivalent.

Java web application cluster

For this demo I’ll build a sample application using the excellent Grails framework.  You can of course use traditional tools such as Maven, Ant, etc. to produce your final WAR file.  If you want to try the demo yourself, you’ll need to install Grails and Bazaar.

Firstly let’s demonstrate how to deploy Tomcat using Juju.

Open a terminal on any Ubuntu Precise machine and follow the instructions for bootstrapping a Juju cluster.

With a bootstrapped cluster, let’s deploy a Tomcat service:

juju deploy cs:~robert-ayres/precise/tomcat

This will deploy a Tomcat unit under the service name ‘tomcat’.  Like the bootstrap instance, it will take a short time to launch a new instance, install Tomcat, configure defaults and start.  You can check the progress with ‘juju status’.  When deployed you should see the following output (‘machines’ information purposely removed):

services:
  tomcat:
    charm: cs:~robert-ayres/precise/tomcat-1
    relations:
      cluster:
      - tomcat
    units:
      tomcat/0:
        agent-state: started
        machine: 1

Should you wish to investigate the details of any unit, you can ssh in with ‘juju ssh tomcat/0’ (Juju will have transferred your public key).

The Tomcat manager applications are installed and secured by default, requiring an admin password to be set.  We can apply configuration to Juju services using ‘juju set <service> “<key>=<value>” …’.  To set the ‘admin’ user password on our Tomcat unit:

juju set tomcat "admin_password=<password>"

Our Tomcat unit isn’t initially exposed to the Internet; we can only access it over an ssh tunnel (see the ssh ‘-L’ option).  To expose our Tomcat unit to the Internet:

juju expose tomcat

Now you should be able to open your web browser at the Tomcat unit’s public-address (port 8080 by default) and login to Tomcat’s manager using the credentials we just set.
If we prefer our unit to run on a more traditional web port:

juju set tomcat http_port=80

After a short reconfiguration you should be able to access the manager on port 80 with the same credentials.
Over HTTP, our credentials aren’t transmitted securely, so let’s enable HTTPS:

juju set tomcat https_enabled=True https_port=443

Our Tomcat unit will listen for HTTPS connections on the traditional port 443 using a generated self-signed certificate (to use CA-signed certificates, see the Tomcat charm README).  Now we can securely access our manager application over HTTPS (you will need to ignore any browser warning about the self-signed certificate).  We now have a deployed Tomcat optimised and secured for production use!
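For reference, a self-signed certificate of the kind the charm generates can be produced with openssl.  A hypothetical example (the charm’s actual subject, key size and file paths may differ):

```shell
# Generate a throwaway self-signed certificate (illustrative values)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=tomcat.example.com" \
  -keyout tomcat.key -out tomcat.crt

# Inspect the subject of the generated certificate
openssl x509 -in tomcat.crt -noout -subject
```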

Now let’s turn our attention to evolving a simple Grails application to demonstrate further Juju abilities.

With a working Grails installation, create ‘juju-example’ application:

grails create-app juju-example

This will create your application in a directory ‘juju-example’.  Inside is a shell of a Grails application, enough for demonstration purposes.

To suit the directory layout of our deployed Tomcat, we should adjust our application to store stacktrace logs in a designated, writable directory.  Edit ‘juju-example/grails-app/conf/Config.groovy’ and inside the ‘log4j’ block add the following ‘appenders’ block:

log4j = {
    appenders {
        rollingFile name: "stacktrace", maxFileSize: 1024,
                    file: "logs/juju-example-stacktrace.log"
    }
}

To build a WAR file run:

(within 'juju-example' directory)
grails dev war

This will build a deployable WAR file ‘juju-example/target/juju-example-0.1.war’.

You have secure access to deploy WAR files directly using the Tomcat manager, but there is a better way – using the J2EE Deployer charm.

The J2EE Deployer charm is a subordinate charm that essentially provides a Juju controlled wrapper around deploying your WAR file into a Juju cluster.  This has the distinct advantage of allowing you to upgrade multiple units using a single command as is shown later.  To use the J2EE Deployer, first download a copy of the wrapper for our example application using bzr:

mkdir precise
bzr export precise/j2ee-deployer lp:~robert-ayres/charms/precise/j2ee-deployer/trunk

This will create a local copy of the wrapper under a directory ‘precise/j2ee-deployer’.  The ‘precise’ parent directory is necessary for Juju when using locally deployed charms.
Copy our war file to the ‘deploy’ directory within:

cp juju-example/target/juju-example-0.1.war precise/j2ee-deployer/deploy

Now deploy our application into Juju:

juju deploy --repository . local:j2ee-deployer juju-example

As with other charms, this will securely upload our application into S3 storage for use by any of our Juju services.  Once the deploy command returns, our application should be available within the cluster under the service name ‘juju-example’.  To deploy to Tomcat, we relate the services:

juju add-relation tomcat juju-example

Our Tomcat unit will download our application locally, stop Tomcat, deploy the application and then start Tomcat.
Issue ‘juju status’ commands to check progress.  Once deployment is complete, you can access the application at the Tomcat unit’s public-address and see the default Grails welcome page (screenshot below).

juju-example application

We can use ‘juju set’ to change configuration of our application as we did with the Tomcat service.  For example, to change the deployed path to something simpler:

juju set juju-example war-config=juju-example-0.1:/

Our application will now be redeployed and Tomcat restarted, so we can access our application at the root path.  Now we have a deployed application!

A web application typically requires access to a RDBMS, so let’s demonstrate how we can connect our application to MySQL.
Firstly, deploy a MySQL service:

juju deploy mysql

Whilst this is deploying, we can set the configuration of the imminent relation between Tomcat and MySQL:

juju set tomcat "jndi_db_config=jdbc/JujuDB:mysql:juju:initialSize=20;maxActive=20;maxIdle=20"

This is a colon separated value that maps the requested database ‘juju’ of the ‘mysql’ service under a JNDI name of ‘jdbc/JujuDB’.  The set of values after the final colon set DBCP connection pooling options.  Here we specify a dedicated pool of 20 connections.
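To make the format concrete, here is a hypothetical parser (for illustration only, not part of the charm) that breaks the value into its parts:

```python
# jndi_db_config format:
#   <jndi-name>:<service>:<database>:<dbcp-option>;<dbcp-option>;...
def parse_jndi_db_config(value):
    jndi_name, service, database, options = value.split(":", 3)
    # The trailing section is a semicolon-separated list of DBCP pool options.
    pool = dict(opt.split("=", 1) for opt in options.split(";"))
    return jndi_name, service, database, pool

name, service, database, pool = parse_jndi_db_config(
    "jdbc/JujuDB:mysql:juju:initialSize=20;maxActive=20;maxIdle=20")
```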
Once our MySQL unit is deployed, we relate our Tomcat service:

juju add-relation tomcat mysql

During this process, our Tomcat unit will request the use of database juju.  Our MySQL unit will create the database and return a set of generated credentials for Tomcat to use.  Once complete, our pooled datasource connection is available to our Tomcat application under JNDI – ‘java:comp/env/jdbc/JujuDB’.  To demonstrate its use within our application, firstly configure Grails to use JNDI for its datasource connection.  Within ‘juju-example/grails-app/conf/DataSource.groovy’, inside the ‘production’/’dataSource’ block, add ‘jndiName = “java:comp/env/jdbc/JujuDB”‘ so it reads as follows:

production {
    dataSource {
        dbCreate = "update"
        jndiName = "java:comp/env/jdbc/JujuDB"
    }
}
Next create a domain class which will serve as an example database object:

(within 'juju-example' directory)
grails create-domain-class Book

Edit ‘juju-example/grails-app/domain/juju/example/Book.groovy’ so it contains the following:

package juju.example

class Book {

    static constraints = {
    }

    String author
    String isbn
    Integer pages
    Date published
    String title
}

Now we can use Grails ‘scaffolding’ to generate pages that allow us to insert Books into our database:

grails generate-all Book

Recompile our application to produce a new WAR file:

grails clean
grails war
(Note: 'grails war' now, no 'dev' option)

Now upgrade our application in Juju:

# copy across new war file
cp juju-example/target/juju-example-0.1.war precise/j2ee-deployer/deploy
# upgrade Juju deployment
juju upgrade-charm --repository . juju-example

This will upload our revised application into S3 again and then deploy to all related services, restarting them in the process.
With our newly deployed application utilising its local JNDI datasource, we can now open our web browser at the application’s address and use the generated page to perform CRUD operations on our Book objects, all persisted to our MySQL database.

A key point to be made is how you should develop your application to be cloud deployable.  If the application is developed to utilise external resources via runtime lookups, the application may be deployed to any number of Juju clusters.  You can observe this yourself by adding a relation between your application and any other Tomcat services.

For this post’s finale, let’s show how we can scale Tomcat.
First, deploy the HAProxy load balancer:

juju deploy haproxy

And associate with Tomcat:

juju add-relation haproxy tomcat

Unexpose Tomcat and expose HAProxy:

juju unexpose tomcat
juju expose haproxy

We can now use the public address of HAProxy to access our application.
Now that we’re behind a load balancer, it’s simple to bolster our web traffic capacity by adding a further Tomcat unit:

juju add-unit tomcat

A second Tomcat unit will be deployed and configured as the first.  Same open ports, same MySQL connection, same web application.  Once deployed, HAProxy will serve traffic to both instances in round robin fashion.  Any future application upgrades will occur on both Tomcat units.  If we want to remove a unit:
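The round-robin behaviour described above can be sketched in a few lines (the unit names follow Juju's status output convention; this is an illustration, not HAProxy code):

```python
from itertools import cycle

# With two Tomcat units behind HAProxy's default round-robin balancing,
# successive requests alternate between the units.
backends = cycle(["tomcat/0", "tomcat/1"])
served_by = [next(backends) for _ in range(4)]
```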

juju remove-unit tomcat/<n>

where ‘<n>’ is the unit number (shown in status output).

That’s the end of the demo.  Should you wish to destroy your cluster, run:

juju destroy-environment

This will terminate all EC2 instances including the bootstrap instance.

To summarise, I’ve shown how you can create a Juju cluster containing a load balanced Tomcat with MySQL, serving your web application.  We’ve seen how important it is for the application to be cloud deployable allowing it to utilise managed relations.  I’ve also demonstrated how you can upgrade your application once deployed.

In my next post I shall write about adding Memcached to our cluster.

[1] Due to a current Juju bug with command-line boolean variables, you may need to create a separate ‘config.yaml’ file containing the boolean option and then use:

juju set --config config.yaml tomcat

Read more
Robert Ayres

Java meet Juju

Take a look at the architecture diagram below.

Java based cluster

How would you go about automating deployment of this Java based cluster to EC2?  Utilise Puppet or Chef?  Write your own scripts?  How would you adapt your solution to add or remove servers to scale on demand?  Can your solution support deployment to your own equipment?  If the solutions that come to mind require a lot of initial time investment, you may be interested in Juju.

In upcoming posts, I’ll show how you can use Juju to deploy this cluster.  But for this post, I’ll give a brief Juju introduction.

Juju is a new Open Source command line deployment tool in Ubuntu 12.04 Precise Pangolin.  It allows you to quickly and painlessly deploy your own cluster of applications to a cloud provider like EC2, on your own equipment in combination with Ubuntu MAAS (Metal as a Service), or even on your own computer using LXC (Linux Containers).  Juju deploys ‘charms’, scripts written to deploy and configure an application on an Ubuntu Server.
The real automated magic happens through charm relations.  Relations allow charms to associate to perform combined functionality.  This behaviour is predetermined by the charm author through the use of programmable callbacks.  For example, a database will be created and credentials generated when associating with a MySQL charm.  Charms utilise relations to provide the user with traditional functionality that requires no knowledge of underlying networks or configuration files.  And as the focus isn’t on individual machines, Juju allows you to add or remove further servers easily to scale up or down on demand.
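As a toy illustration of the relation idea (not real charm code; the function name and returned keys are invented), the MySQL example above amounts to a callback that creates the requested database and hands generated credentials back over the relation:

```python
import secrets

# Conceptual sketch of what a MySQL charm's relation hook does: when a
# service relates to it, create the requested database and return a set
# of generated credentials for the other side to use.
def db_relation_joined(requested_db):
    return {
        "database": requested_db,
        "user": requested_db + "_user",
        "password": secrets.token_hex(8),  # 16 hex characters
    }

creds = db_relation_joined("juju")
```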

Sound interesting? In my next post I’ll demonstrate deploying a web application to Tomcat and connecting it to MySQL.

Read more
Michael Hall

Sweet Chorus

Juju is revolutionizing the way web services are deployed in the cloud, taking what was either a labor-intensive manual task, or a very labor-intensive re-invention of the wheel (or deployment automation in this case), and distilling it into a collection of reusable components called “Charms” that let anybody deploy multiple inter-connected services in the cloud with ease.

There are currently 84 Juju charms written for everything from game backends to WordPress sites, with databases and cache servers that work with them.  Charms are great when you can deploy the same service the same way, regardless of its intended use.  WordPress is a good use case, since the process of deploying WordPress is going to be the same from one blog to the next.

Django’s Blues

But when you go a little lower in the stack, to web frameworks, it’s not quite so simple.  Take Django, for instance.  While much of the process of deploying a Django service will be the same, there is going to be a lot that is specific to the project.  A Django site can have any number of dependencies, both common additions like South and Celery, as well as many custom modules.  It might use MySQL, or PostgreSQL, or Oracle (or even SQLite for development and testing).  Still more will depend on the development process: while WordPress is available as a DEB package or a tarball from the upstream site, a Django project may live anywhere, most frequently in a source-control branch specific to that project.  All of this makes writing a single Django charm nearly impossible.

There have been some attempts at making a generic, reusable Django charm.  Michael Nelson made one that uses Puppet and a custom config.yaml for each project.  While this works, it has two drawbacks: 1) It requires Puppet, which isn’t natural for a Python project, and 2) It required so many options in the config.yaml that you still had to do a lot by hand to make it work.  The first of these was done because ISD (where Michael was at the time) was using Puppet to deploy and configure their Django services, and could easily have been done another way.  The second, however, is the necessary consequence of trying to make a reusable Django charm.

Just for Fun

Given the problems detailed above, and not liking the idea of making config options for every possible variation of a Django project, I recently took a different approach.  Instead of making one Django Charm to rule them all, I wrote a small Django App that would generate a customized Charm for any given project.  My goal is to gather enough information from the project and its environment to produce a charm that is very nearly complete for that project.  I named this charming code “Naguine” after Django Reinhardt’s second wife, Sophie “Naguine” Ziegler.  It seemed fitting, since this project would be charming Django webapps.

Naguine is very much a JFDI project, so it’s not highly architected or even internally consistent at this point, but with a little bit of hacking I was able to get a significant return. For starters, using Naguine is about as simple as can be, you simply install it on your PYTHONPATH and run:

python manage.py charm --settings naguine

The --settings naguine option injects the naguine Django app into your INSTALLED_APPS, which makes the charm command available.

This Kind of Friend

The charm command makes use of your Django settings to learn about your other INSTALLED_APPS as well as your database settings.  It will also look for a requirements.txt and setup.py, inspecting each to learn more about your project’s dependencies.  From there it will try to locate system packages that will provide those dependencies and add them to the install hook in the Juju charm.
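The dependency-mapping step might look something like this sketch (the python-&lt;name&gt; package naming convention is an assumption for illustration; Naguine's actual logic may differ):

```python
# Hypothetical sketch: turn requirements.txt entries into candidate Debian
# package names for the generated charm's install hook.
def requirement_to_package(requirement):
    # Strip any version pin and normalise to the common python-<name> form.
    name = requirement.split("==")[0].strip().lower()
    return "python-" + name

packages = [requirement_to_package(r) for r in ["Django==1.3", "South"]]
```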

The charm command also looks to see if your project is currently in a bzr branch; if it is, it will use the remote branch to pull down your project’s code during the install.  In the future I hope to also support git and hg deployments.

Finally the command will write hooks for linking to a database instance on another server, including running syncdb to create the tables for your models, adding a superuser account with a randomly generated password and, if you are using South, running any migration scripts as well. It also writes some metadata about your charm and a short README explaining how to use it.

All that is left for you to do is review the generated charm, manually add any dependencies Naguine couldn’t find a matching package for, and manually add any install or database initialization that is specific to your project.  The amount of custom work needed to get a charm working is extremely minor, even for moderately complex projects.

Are you in the Mood

To try Naguine with your Django project, use the following steps:

  1. cd to your django project root (where your manage.py is)
  2. bzr branch lp:naguine
  3. python manage.py charm --settings naguine

That’s all you need.  If your Django project lives in a bzr branch, you should have a directory called ./charms/precise/ that contains an almost working Juju charm for your project.

I’ve only tested this on a few Django projects, all of which followed the same general conventions when it came to development, so don’t be surprised if you run into problems.  This is still a very early-stage project after all.  But you already have the code (if you followed step #2 above), so you can poke around and try to get it working or working better for your project.  Then submit your changes back to me on Launchpad, and I’ll merge them in.  You can also find me on IRC (mhall119 on freenode) if you get stuck and I will help you get it working.

(For those who are interested, each of the headers in this post is the name of a Django Reinhardt song)

Read more

Check out Why I don’t host my own blog anymore.

I mentioned it to a friend and he immediately piped in “Oh that guy did it wrong, he shouldn’t care about KeepAlive, he needs FastCGI”.

Ok so the guy “messed up” and misconfigured his blog. Zigged instead of zagged. Bummer.

But it doesn’t have to be this way. Right now we offer WordPress as a juju charm. This lets us deploy WordPress with MySQL in four commands.

However, if you look at the db-relation hook, we don’t do anything special: we create an Apache vhost and set it up for you. While this is simple, there’s no reason we can’t make this charm a turbo-charged deployment of WordPress. Let’s look at some of the recommendations we see on his blog and on HN:

  • A simple caching plugin would have quickly fixed this for you.
  • In my stacks I always use nginx in conjunction with Apache to handle as much of the static content load as is possible and that lifts a huge weight from Apache. Next up is to always use a bytecode cache like Xcache or APC, these help give a huge boost in performance.
  • But then you hit a wall, next up are limitations in WP SQL and MySQL, these can be helped by messing with the queries and using Memcached also helps to significantly boost the DB performance here.
  • I had similar nightmares to you for a long time with Apache/PHP/WP, then finally put Varnish cache in front of the whole thing.
  • And someone recommends just shoving the thing in Jekyll and serving that.

I’m sure everyone will have an opinion on how to deploy WordPress. From an Ubuntu perspective, we ship the wordpress and mysql packages, but that only gets you so far. It’s still up to you to configure it, and as this guy proves, you can mess something up. Wouldn’t it be nice if we could collect all the experience from people who are WordPress deployment experts, put that in our charms and just give people that out of the box?

We could use nginx in the WordPress charm, with FastCGI, and we can certainly add relations to make varnish and memcached know what to do when they’re related to wordpress. And/or just “juju add-relation jekyll wordpress” and have that Just Work.

These are the kinds of problems we’re trying to tackle with juju. Will it be totally perfect for everyone’s deployment? Of course not, that’s impossible, but we can certainly make Patrick’s experience more uncommon. People will always argue about the nitnoid implementation details, but we can make those config options; the point is that we can share deployment and service maintenance as a whole instead of hoping people put the lego blocks together in the right order.

Interested in turning a plain boring charm into something sexy? I’ve filed a bug; let us know if you’re interested.

Read more

I can’t wait to see some people I haven’t seen in years at SCALE, and meet a bunch of new people!

Come find me and Clint, we’ll be doing talks about juju and Ubuntu Cloud all weekend, as well as answering questions the entire time. I’m easy to find, look for a Red Wings hat and an Ubuntu shirt.

Here’s our post about our talks.

Read more

Calling all devops!

We’re holding a Charm School on IRC.

juju Charm School is a virtual event where a juju expert is available to answer questions about writing your own juju charms. The intended audience are people who deploy software and want to contribute charms to the wider devops community to make deploying in the public and private cloud easy.

Attendees are more than welcome to:

  • Ask questions about juju and charms
  • Ask for help modifying existing scripts and make charms out of them
  • Ask for peer review on existing charms you might be working on.

Though not required, we recommend that you have juju installed and configured if you want to get deep into the event.

Read more

After experimenting with juju and puppet the other week, I wanted to see if it was possible to create a generic juju charm for deploying any Django apps using Apache+mod_wsgi together with puppet manifests wherever possible. The resulting apache-django-wsgi charm is ready to demo (thanks to lots of support from the #juju team), but still needs a few more configuration options. The charm currently:

  1. Enables the user to specify a branch of a Python package containing the Django app/project to deploy. This Python package will be `python setup.py install`’d on the instance, but it also
  2. Enables you to configure extra Debian packages to be installed first so that your requirements can be installed in a more reliable/trusted manner, along with the standard required packages (apache2, libapache2-mod-wsgi etc.). Here’s the example charm config used for the demo below,
  3. Creates a django.wsgi and httpd.conf ready to serve your app, automatically collecting all the static content of your installed Django apps to be served separately from the same Apache virtual host,
  4. When it receives a database relation change, it creates some local settings, overriding the database settings of your branch, syncs and migrates the database (a noop if it’s the second unit) and restarts apache (see the database_settings.pp manifest for more details).
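Conceptually, the database-settings override amounts to something like this sketch (the relation keys and engine are assumptions for illustration; the real charm does this via the database_settings.pp Puppet manifest):

```python
# Sketch: build a Django DATABASES entry from the data a postgresql unit
# hands over on a database relation change, overriding the branch's own
# database settings.
def database_settings(relation):
    return {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": relation["database"],
        "USER": relation["user"],
        "PASSWORD": relation["password"],
        "HOST": relation["host"],
    }

settings = database_settings(
    {"database": "app", "user": "app", "password": "s3cret", "host": "10.0.0.2"})
```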

Here’s a quick demo which puts up a postgresql unit and two app servers with these commands:

$ juju deploy --repository ~/charms local:postgresql
$ juju deploy --config ubuntu-app-dir.yaml --repository ~/apache-django-wsgi/ local:apache-django-wsgi
$ juju add-relation postgresql:db apache-django-wsgi
$ juju add-unit apache-django-wsgi

Things that I think need to be improved or I’m uncertain about:

  1. `gem install puppet-module` is included in the install hook (a 3rd way of installing something on the system :/). I wanted to use the vcsrepo puppet module to define bzr resource types and puppet-module-tool seems to be the way to install 3rd-party puppet modules. Using this resource-type enables a simple initial_state.pp manifest. Of course, it’d be great to have ‘necessary’ tools like that in the archive instead.
  2. The initial_state.pp manifest pulls the django app package to /home/ubuntu/django-app-branch and then pip installs it on the system. Requiring the app to be a valid python package seemed sensible (in terms of ensuring it is correctly installed with its requirements satisfied) while still allowing the user to go one step further if they like and provide a debian package instead of a python package in a branch (which I assume we would do ultimately for production deploys?)
  3. Currently it’s just a very simple apache setup. I think ideally the static file serving should be done by a separate unit in the charm (ie. an instance running a stripped down apache2 or lighttpd). Also, I would have liked to have used an ‘official’ or ‘blessed’ puppet apache module to benefit from someone else’s experience, but I couldn’t see one that stood out as such.
  4. Currently the charm assumes that your project contains the configuration info (i.e. a settings.py, etc.), of which the database settings can be simply overridden for deploy. There should be an additional option to specify a configuration branch (and it shouldn’t assume that you’re using django-configglue), as well as other options like django_debug, static_url etc.
  5. The charm should also export an interface (?) that can be used by a load balancer charm.

Filed under: django, juju

Read more

I’ve been playing with juju for a few months now in different contexts and I’ve really enjoyed the ease with which it allows me to think about services rather than resources.

More recently I’ve started thinking about best-practices for deploying services using juju, while still using puppet to setup individual units. As a simple experiment, I wrote a juju charm to deploy an irssi service [1] to dig around. Here’s what I’ve found so far [2]. The first is kind of obvious, but worth mentioning:

Install hooks can be trivial:

sudo apt-get -y install puppet

juju-log "Initialising machine state."
puppet apply $PWD/hooks/initial_state.pp

Normally the corresponding manifest (see initial_state.pp) would be a little more complicated, but in this example it’s hardly worth mentioning.

Juju config changes can utilise Puppet’s Facter infrastructure:

This enables juju config options to be passed through to puppet, so that config-changed hooks can be equally simple:

juju-log "Getting config options"
username=`config-get username`
public_key=`config-get public_key`

juju-log "Configuring irssi for user"
# We specify custom facts so that they're accessible in the manifest.
FACTER_username=$username FACTER_public_key=$public_key puppet apply $PWD/hooks/configured_state.pp

In this example, it is the configured state manifest that is more interesting (see configured_state.pp). It adds the user to the system, sets up byobu with an irssi window ready to go, and adds the given public ssh key enabling the user to login.

The same would go for other juju hooks (db-relation-changed etc.), which is quite neat – getting the best of both worlds: the charm user can still think in terms of deploying services, while the charm author can use puppets declarative syntax to define the machine states.
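The FACTER_ convention used in the hook above can be sketched as building an environment for ‘puppet apply’ (illustrative only):

```python
import os

# Sketch: juju config options become Facter facts by exporting them as
# FACTER_-prefixed environment variables before 'puppet apply' runs, so
# the manifest can read them as $username, $public_key, etc.
def facter_environment(config):
    env = dict(os.environ)
    env.update({"FACTER_" + key: value for key, value in config.items()})
    return env

env = facter_environment({"username": "michael", "public_key": "AAAA..."})
```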

Next up: I hope to experiment with an optional puppet master for a real project (something simple like the Ubuntu App directory), so that

  1. a project can be deployed without the (probably private) puppet-master to create a close-to-production environment, while
  2. configuring a puppet-master in the juju config would enable production deploys (or deploys of exact replicas of production to a separate environment for testing).

If you’re interested in seeing the simple irssi charm, the following 2min video demos:

# Deploy an irssi service
$ juju deploy --repository=/home/ubuntu/mycharms local:oneiric/irssi
# Configure it so a user can login
$ juju set irssi username=michael public_key=AAAA...
# Login to find irssi already up and running in a byobu window
$ ssh michael@new.ip.address

and the code is on Launchpad.

[1] Yes, irssi is not particularly useful as a juju service (as I don’t want multiple units, or relating it to other services etc.), but it suited my purposes for a simple experiment that also automates something I can use for working in the cloud.

[2] I’m not a puppet or juju expert, so if you’ve got any comments or improvements, don’t hesitate.

Filed under: juju, puppet, ubuntu

Read more
Dustin Kirkland

Servers in Concert!

Ubuntu Orchestra is one of the most exciting features of the Ubuntu 11.10 Server release, and we're already improving upon it for the big 12.04 LTS!

I've previously given an architectural introduction to the design of Orchestra.  Now, let's take a practical look at it in this how-to guide.


To follow this particular guide, you'll need at least two physical systems and administrative access rights on your local DHCP server (perhaps on your network's router).  With a little ingenuity, you can probably use two virtual machines and work around the router configuration.  I'll follow this guide with another one using entirely virtual machines.

To build this demonstration, I'm using two older ASUS (P1AH2) desktop systems.  They're both dual-core 2.4GHz AMD processors and 2GB of RAM each.  I'm also using a Linksys WRT310n router flashed with DD-WRT.  Most importantly, at least one of the systems must be able to boot over the network using PXE.

Orchestra Installation

You will need to manually install Ubuntu 11.10 Server on one of the systems, using an ISO or a USB flash disk.  I used the 64-bit Ubuntu 11.10 Server ISO, and my no-questions-asked uquick installation method.  This took me a little less than 10 minutes.

After this system reboots, update and upgrade all packages on the system, and then install the ubuntu-orchestra-server package.

sudo apt-get update
sudo apt-get dist-upgrade -y
sudo apt-get install -y ubuntu-orchestra-server

You'll be prompted to enter a couple of configuration parameters, such as setting the cobbler user's password.  It's important to read and understand each question.  The default values are probably acceptable, except for one, which you'll want to be very careful about...the one that asks about DHCP/DNS management.

In this post, I selected "No", as I want my DD-WRT router to continue handling DHCP/DNS.  However, in a production environment (and if you want to use Orchestra with Juju), you might need to select "Yes" here.

And about five minutes later, you should have an Ubuntu Orchestra Server up and running!

Target System Setup

Once your Orchestra Server is installed, you're ready to prepare your target system for installation.  You will need to enter your target system's BIOS settings, and ensure that the system is set to first boot from PXE (netboot), and then to local disk (hdd).  Orchestra uses Cobbler (a project maintained by our friends at Fedora) to prepare the network installation using PXE and TFTP, and thus your machine needs to boot from the network.  While you're in your BIOS configuration, you might also ensure that Wake on LAN (WoL) is also enabled.

Next, you'll need to obtain the MAC address of the network card in your target system.  One of many ways to obtain this is by booting that Ubuntu ISO, pressing ctrl-alt-F2, and running ip addr show.

Now, you should add the system to Cobbler.  Ubuntu 11.10 ships a feature called cobbler-enlist that automates this; however, for this guide, we'll use the Cobbler web interface.  Give the system a hostname (e.g., asus1), select its profile (e.g., oneiric-x86_64), and enter its IP address and MAC address (e.g., 00:1a:92:88:b7:d9).  In the case of this system, I needed to tweak the Kernel Options, since this machine has more than one attached hard drive, and I want to ensure that Ubuntu installs onto /dev/sdc, so I set the Kernel Options to partman-auto/disk=/dev/sdc.  You might have other tweaks on a system-by-system basis that you need or want to adjust here (like IPMI configuration).

Finally, I adjusted my DD-WRT router to add a static lease for my target system, and point dnsmasq to PXE boot against the Orchestra Server.  You'll need to do something similar-but-different here, depending on how your network handles DHCP.

NOTE: As of October 27, 2011, Bug #882726 must be manually worked around, though this should be fixed in oneiric-updates any day now.  To work around this bug, login to the Orchestra Server and run:

RELEASES=$(distro-info --supported)
ARCHES="x86_64 i386"
for r in $RELEASES; do
  for a in $ARCHES; do
    # (the remaining 'cobbler profile edit' options are elided)
    sudo cobbler profile edit --name="$r-$a"
  done
done

Target Installation

All set!  Now, let's trigger the installation.  In the web interface, enable the machine for netbooting.

If you have WoL working for this system, you can even use the web interface to power the system on.  If not, you'll need to press the power button yourself.

Now, we can watch the installation remotely, from an SSH session into our Orchestra Server!  For extra bling, install these two packages:

sudo apt-get install -y tmux ccze

Now launch byobu-tmux (which handles splits much better than byobu-screen).  In the current window, run:

tail -f /var/log/syslog | ccze

Now, split the screen vertically with ctrl-F2.  In the new split, run:

sudo tail -f /var/log/squid/access.log | ccze

Move back and forth between splits with shift-F3 and shift-F4.  The ccze command colorizes log files.

In the left split, you'll see the syslog progress of your installation scrolling by.  In the right split, you'll see your squid logs, as your Orchestra server caches the binary deb files it downloads.  On your first installation, you'll see a lot of TCP_MISS messages.  But if you try this installation a second time, subsequent installs will roll along much faster and you should see lots of TCP_HIT messages.

It takes me about 5 minutes to install these machines with a warm squid cache (and maybe 10 minutes to do that first installation downloading all of those debs over the Internet).  More importantly, I have installed as many as 30 machines simultaneously in a little over 5 minutes with a warm cache!  I'd love to try more, but that's as much hardware as I've had concurrent access to, at this point.

Post Installation

Most of what you've seen above is the provisioning aspect of Orchestra -- how to get the Ubuntu Server installed to bare metal, over the network, and at scale.  Cobbler does much of the hard work there, but remarkably, that's only the first pillar of Orchestra.

What you can do after the system is installed is even more exciting!  Each system installed by Orchestra automatically uses rsyslog to push logs back to the Orchestra server.  To keep the logs of multiple clients in sync, NTP is installed and running on every Orchestra managed system.  The Orchestra Server also includes the Nagios web front end, and each installed client runs a Nagios client.  We're working on improving the out-of-the-box Nagios experience for 12.04, but the fundamentals are already there.  Orchestra clients are running PowerNap in power-save mode, by default, so that Orchestra installed servers operate as energy efficiently as possible.

Perhaps most importantly, Orchestra can actually serve as a machine provider to Juju, which can then offer complete Service Orchestration to your physical servers.  I'll explain in another post soon how to point Juju to your Orchestra infrastructure, and deploy services directly to your bare metal servers.

Questions?  Comments?

I won't be able to offer support in the comments below, but if you have questions or comments, drop by the friendly #ubuntu-server IRC channel, where we have at least a dozen Ubuntu Server developers with Orchestra expertise, hanging around and happy to help!


Read more