Canonical Voices

Posts tagged with 'juju'

Michael Hall

Sweet Chorus

Juju is revolutionizing the way web services are deployed in the cloud. It takes what was either a labor-intensive manual task or an even more labor-intensive re-invention of the wheel (deployment automation, in this case) and distills it into a collection of reusable components called "Charms" that let anybody deploy multiple inter-connected services in the cloud with ease.

There are currently 84 Juju charms, covering everything from game backends to WordPress sites, along with the databases and cache servers that work with them.  Charms are great when you can deploy the same service the same way, regardless of its intended use.  WordPress is a good use case, since the process of deploying WordPress is going to be the same from one blog to the next.

Django’s Blues

But when you go a little lower in the stack, to web frameworks, it's not quite so simple.  Take Django, for instance.  While much of the process of deploying a Django service will be the same, a lot is going to be specific to the project.  A Django site can have any number of dependencies: common additions like South and Celery, as well as many custom modules.  It might use MySQL, PostgreSQL, or Oracle (or even SQLite for development and testing).  Still more things will depend on the development process: while WordPress is available as a DEB package or a tarball from the upstream site, a Django project may live anywhere, most frequently in a source control branch specific to that project.  All of this makes writing a single Django charm nearly impossible.

There have been some attempts at making a generic, reusable Django charm.  Michael Nelson made one that uses Puppet and a custom config.yaml for each project.  While this works, it has two drawbacks: 1) it requires Puppet, which isn't natural for a Python project, and 2) it requires so many options in config.yaml that you still have to do a lot by hand to make it work.  The first of these was because ISD (where Michael was at the time) was using Puppet to deploy and configure their Django services, and could easily have been done another way.  The second, however, is the necessary consequence of trying to make a reusable Django charm.

Just for Fun

Given the problems detailed above, and not liking the idea of making config options for every possible variation of a Django project, I recently took a different approach.  Instead of making one Django Charm to rule them all, I wrote a small Django app that generates a customized charm for any given project.  My goal is to gather enough information from the project and its environment to produce a charm that is very nearly complete for that project.  I named this charming code "Naguine" after Django Reinhardt's second wife, Sophie "Naguine" Ziegler.  It seemed fitting, since this project would be charming Django webapps.

Naguine is very much a JFDI project, so it's not highly architected or even internally consistent at this point, but with a little bit of hacking I was able to get a significant return. For starters, using Naguine is about as simple as can be: you install it on your PYTHONPATH and run:

python charm --settings naguine

Passing --settings naguine injects the naguine Django app into your INSTALLED_APPS, which makes the charm command available.
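If you keep the Naguine branch outside your project, a minimal sketch of that setup might look like this (the checkout path is just an example; if you branch it into the project root, as in the steps below, no PYTHONPATH changes are needed):

# Assuming Naguine was branched to ~/src/naguine, put its parent
# directory on PYTHONPATH so Django can import the 'naguine' app:
export PYTHONPATH=$PYTHONPATH:$HOME/src
python charm --settings naguine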

This Kind of Friend

The charm command makes use of your Django settings to learn about your other INSTALLED_APPS, as well as your database settings.  It will also look for a requirements.txt and a, inspecting each to learn more about your project's dependencies.  From there it will try to locate system packages that provide those dependencies and add them to the install hook in the Juju charm.

The charm command also looks to see if your project is currently in a bzr branch; if it is, it will use the remote branch to pull down your project's code during the install.  In the future I hope to also support git and hg deployments.
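To give a feel for the output, here is a hypothetical sketch of the kind of install hook this produces; the package names and branch URL are placeholders, not Naguine's actual output:

#!/bin/sh -e
# Hypothetical generated install hook (illustrative only).
juju-log "Installing system packages for project dependencies"
apt-get update
apt-get install -y python-django python-django-south bzr

juju-log "Pulling the project code from its bzr branch"
bzr branch lp:~example/example-project/trunk /srv/example-project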

Finally, the command will write hooks for linking to a database instance on another server, including running syncdb to create the tables for your models, adding a superuser account with a randomly generated password, and, if you are using South, running any migration scripts as well. It also writes some metadata about your charm and a short README explaining how to use it.
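Again as a rough, hypothetical sketch (project name, paths, and settings layout invented for illustration), the generated database hook might look something like this:

#!/bin/sh -e
# Hypothetical generated db-relation-changed hook (illustrative only).
juju-log "Reading database settings from the relation"
DB_HOST=`relation-get host`
DB_NAME=`relation-get database`
DB_USER=`relation-get user`
DB_PASS=`relation-get password`

# Write a local settings override with the relation's credentials.
cat > /srv/example-project/ <<EOF
DATABASES = {'default': {
    'ENGINE': 'django.db.backends.mysql',
    'NAME': '$DB_NAME', 'HOST': '$DB_HOST',
    'USER': '$DB_USER', 'PASSWORD': '$DB_PASS',
}}
EOF

juju-log "Creating tables and running any South migrations"
cd /srv/example-project
python syncdb --noinput
python migrate  # only if South is installed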

All that is left for you to do is review the generated charm, manually add any dependencies Naguine couldn't find a matching package for, and manually add any install or database-initialization steps that are specific to your project.  The amount of custom work needed to get a charm working is extremely minor, even for moderately complex projects.

Are You in the Mood

To try Naguine with your Django project, use the following steps:

  1. cd to your django project root (where your is)
  2. bzr branch lp:naguine
  3. python charm --settings naguine

That's all you need.  If your django project lives in a bzr branch, and if it normally uses, you should have a directory under ./charms/precise/ that contains an almost-working Juju charm for your project.

I've only tested this on a few Django projects, all of which followed the same general conventions during development, so don't be surprised if you run into problems; this is still a very early-stage project, after all.  But you already have the code (if you followed step #2 above), so you can poke around and try to get it working, or working better, for your project.  Then submit your changes back to me on Launchpad and I'll merge them in.  You can also find me on IRC (mhall119 on freenode) if you get stuck, and I will help you get it working.

(For those who are interested, each of the headers in this post is the name of a Django Reinhardt song)

Read more

Check out Why I don’t host my own blog anymore.

I mentioned it to a friend and he immediately piped in “Oh that guy did it wrong, he shouldn’t care about KeepAlive, he needs FastCGI”.

Ok so the guy “messed up” and misconfigured his blog. Zigged instead of zagged. Bummer.

But it doesn't have to be this way. Right now we offer WordPress as a juju charm. This lets us deploy WordPress with MySQL in 4 commands.
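For reference, those four commands look roughly like this, assuming an environment bootstrapped from scratch (you may also want a fifth, juju expose wordpress, to open it to the world):

juju bootstrap
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql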

However, if you look at the db-relation-hook, we don't do anything special: we create an Apache vhost and set it up for you. While this is simple, there's no reason we can't make this charm a turbo-charged deployment of WordPress. Let's look at some of the recommendations we see on his blog and on HN:

  • A simple caching plugin would have quickly fixed this for you.
  • In my stacks I always use nginx in conjunction with Apache to handle as much of the static content load as is possible and that lifts a huge weight from Apache. Next up is to always use a bytecode cache like Xcache or APC, these help give a huge boost in performance.
  • But then you hit a wall, next up are limitations in WP SQL and MySQL, these can be helped by messing with the queries and using Memcached also helps to significantly boost the DB performance here.
  • I had similar nightmares to you for a long time with Apache/PHP/WP, then finally put Varnish cache in front of the whole thing.
  • And someone recommends just shoving the thing in Jekyll and serving that.

I'm sure everyone has an opinion on how to deploy WordPress. From an Ubuntu perspective, we ship the wordpress and mysql packages, but that only gets you so far. It's still up to you to configure it, and as this guy proves, you can mess something up. Wouldn't it be nice if we could collect all the experience from people who are WordPress deployment experts, put it in our charms, and just give people that out of the box?

We could use nginx in the WordPress charm, with FastCGI. We can certainly add relations so that varnish and memcached know what to do when they're related to wordpress. And/or just "juju add-relation jekyll wordpress" and have that Just Work.
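Hypothetically, once those relations existed in the charms, assembling the turbo-charged stack would look the same as any other deploy (the relations below are the proposal here, not something the current charm supports):

juju deploy memcached
juju add-relation wordpress memcached
juju deploy varnish
juju add-relation varnish wordpress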

These are the kinds of problems we're trying to tackle with juju. Will it be totally perfect for everyone's deployment? Of course not; that's impossible. But we can certainly make Patrick's experience less common. People will always argue about the nitnoid implementation details, but we can make those config options; the point is that we can share deployment and service maintenance as a whole, instead of hoping people put the lego blocks together in the right order.

Interested in turning a plain, boring charm into something sexy? I've filed the bug here; let us know if you're interested.

Read more

I can’t wait to see some people I haven’t seen in years at SCALE, and meet a bunch of new people!

Come find me and Clint; we'll be doing talks about juju and Ubuntu Cloud all weekend, as well as answering questions the entire time. I'm easy to find: look for a Red Wings hat and an Ubuntu shirt.

Here’s our post about our talks.

Read more

Calling all devops!

We’re holding a Charm School on IRC.

juju Charm School is a virtual event where a juju expert is available to answer questions about writing your own juju charms. The intended audience is people who deploy software and want to contribute charms to the wider devops community, making it easy to deploy in the public and private cloud.

Attendees are more than welcome to:

  • Ask questions about juju and charms
  • Ask for help modifying existing scripts and making charms out of them
  • Ask for peer review on existing charms you might be working on

Though not required, we recommend that you have juju installed and configured if you want to get deep into the event.
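If you want to come prepared, a minimal setup on Ubuntu of this era looks roughly like the following (package name and bootstrap workflow assumed from the 11.10/12.04 timeframe):

sudo apt-get install juju
juju bootstrap    # after filling in ~/.juju/environments.yaml
juju status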

Read more

After experimenting with juju and puppet the other week, I wanted to see if it was possible to create a generic juju charm for deploying any Django app using Apache+mod_wsgi, together with puppet manifests wherever possible. The resulting apache-django-wsgi charm is ready to demo (thanks to lots of support from the #juju team), but still needs a few more configuration options. The charm currently:

  1. Enables the user to specify a branch of a Python package containing the Django app/project to deploy. This Python package will be `python install`'d on the instance, but it also
  2. Enables you to configure extra Debian packages to be installed first, so that your requirements can be installed in a more reliable/trusted manner, along with the standard required packages (apache2, libapache2-mod-wsgi, etc.). Here's the example charm config used for the demo below (ubuntu-app-dir.yaml),
  3. Creates a django.wsgi and httpd.conf ready to serve your app, automatically collecting all the static content of your installed Django apps to be served separately from the same Apache virtual host,
  4. When it receives a database relation change, it creates some local settings overriding the database settings of your branch, syncs and migrates the database (a no-op if it's the second unit), and restarts apache (see the database_settings.pp manifest for more details).

Here’s a quick demo which puts up a postgresql unit and two app servers with these commands:

$ juju deploy --repository ~/charms local:postgresql
$ juju deploy --config ubuntu-app-dir.yaml --repository ~/apache-django-wsgi/ local:apache-django-wsgi
$ juju add-relation postgresql:db apache-django-wsgi
$ juju add-unit apache-django-wsgi

Things that I think need to be improved or I’m uncertain about:

  1. `gem install puppet-module` is included in the install hook (a third way of installing something on the system :/). I wanted to use the vcsrepo puppet module to define bzr resource types, and puppet-module-tool seems to be the way to install third-party puppet modules. Using this resource type enables a simple initial_state.pp manifest. Of course, it'd be great to have 'necessary' tools like that in the archive instead.
  2. The initial_state.pp manifest pulls the Django app package to /home/ubuntu/django-app-branch and then pip-installs it on the system. Requiring the app to be a valid Python package seemed sensible (in terms of ensuring it is correctly installed with its requirements satisfied), while still allowing the user to go one step further if they like and provide a Debian package instead of a Python package in a branch (which I assume we would do ultimately for production deploys?).
  3. Currently it's just a very simple apache setup. I think ideally the static file serving should be done by a separate unit in the charm (i.e. an instance running a stripped-down apache2 or lighttpd). Also, I would have liked to use an 'official' or 'blessed' puppet apache module to benefit from someone else's experience, but I couldn't see one that stood out as such.
  4. Currently the charm assumes that your project contains the configuration info (i.e. a, etc.), of which the database settings can simply be overridden for deploy. There should be an additional option to specify a configuration branch (and it shouldn't assume that you're using django-configglue), as well as other options like django_debug, static_url, etc.
  5. The charm should also export an interface (?) that can be used by a load balancer charm.

Filed under: django, juju

Read more

I’ve been playing with juju for a few months now in different contexts and I’ve really enjoyed the ease with which it allows me to think about services rather than resources.

More recently I’ve started thinking about best-practices for deploying services using juju, while still using puppet to setup individual units. As a simple experiment, I wrote a juju charm to deploy an irssi service [1] to dig around. Here’s what I’ve found so far [2]. The first is kind of obvious, but worth mentioning:

Install hooks can be trivial:

sudo apt-get -y install puppet

juju-log "Initialising machine state."
puppet apply $PWD/hooks/initial_state.pp

Normally the corresponding manifest (see initial_state.pp) would be a little more complicated, but in this example it’s hardly worth mentioning.

Juju config changes can utilise Puppet’s Facter infrastructure:

This enables juju config options to be passed through to puppet, so that config-changed hooks can be equally simple:

juju-log "Getting config options"
username=`config-get username`
public_key=`config-get public_key`

juju-log "Configuring irssi for user"
# We specify custom facts so that they're accessible in the manifest.
FACTER_username=$username FACTER_public_key=$public_key puppet apply $PWD/hooks/configured_state.pp

In this example, it is the configured state manifest that is more interesting (see configured_state.pp). It adds the user to the system, sets up byobu with an irssi window ready to go, and adds the given public ssh key enabling the user to login.

The same goes for other juju hooks (db-relation-changed, etc.), which is quite neat: you get the best of both worlds. The charm user can still think in terms of deploying services, while the charm author can use Puppet's declarative syntax to define the machine states.

Next up: I hope to experiment with an optional puppet master for a real project (something simple like the Ubuntu App directory), so that

  1. a project can be deployed without the (probably private) puppet-master to create a close-to-production environment, while
  2. configuring a puppet-master in the juju config would enable production deploys (or deploys of exact replicas of production to a separate environment for testing); a rough sketch of this follows below.
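A hypothetical sketch of those two modes, assuming the charm grew a puppet-master config option (this is the experiment being proposed, not an existing setting):

# Deploy standalone for a close-to-production environment:
juju deploy --repository ~/mycharms local:oneiric/my-django-app

# Or point the same service at a private puppet master for production:
juju set my-django-app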

If you're interested in seeing the simple irssi charm, the following commands from the 2-minute video demo show it in action:

# Deploy an irssi service
$ juju deploy --repository=/home/ubuntu/mycharms local:oneiric/irssi
# Configure it so a user can login
$ juju set irssi username=michael public_key=AAAA...
# Login to find irssi already up and running in a byobu window
$ ssh michael@new.ip.address

and the code is on Launchpad.

[1] Yes, irssi is not particularly useful as a juju service (as I don’t want multiple units, or relating it to other services etc.), but it suited my purposes for a simple experiment that also automates something I can use for working in the cloud.

[2] I’m not a puppet or juju expert, so if you’ve got any comments or improvements, don’t hesitate.

Filed under: juju, puppet, ubuntu

Read more
Dustin Kirkland

Servers in Concert!

Ubuntu Orchestra is one of the most exciting features of the Ubuntu 11.10 Server release, and we're already improving upon it for the big 12.04 LTS!

I've previously given an architectural introduction to the design of Orchestra.  Now, let's take a practical look at it in this how-to guide.


To follow this particular guide, you'll need at least two physical systems and administrative access rights on your local DHCP server (perhaps on your network's router).  With a little ingenuity, you can probably use two virtual machines and work around the router configuration.  I'll follow this guide with another one using entirely virtual machines.

To build this demonstration, I'm using two older ASUS (P1AH2) desktop systems.  Each has a dual-core 2.4GHz AMD processor and 2GB of RAM.  I'm also using a Linksys WRT310n router flashed with DD-WRT.  Most importantly, at least one of the systems must be able to boot over the network using PXE.

Orchestra Installation

You will need to manually install Ubuntu 11.10 Server on one of the systems, using an ISO or a USB flash disk.  I used the 64-bit Ubuntu 11.10 Server ISO, and my no-questions-asked uquick installation method.  This took me a little less than 10 minutes.

After this system reboots, update and upgrade all packages on the system, and then install the ubuntu-orchestra-server package.

sudo apt-get update
sudo apt-get dist-upgrade -y
sudo apt-get install -y ubuntu-orchestra-server

You'll be prompted to enter a couple of configuration parameters, such as setting the cobbler user's password.  It's important to read and understand each question.  The default values are probably acceptable, except for one, which you'll want to be very careful about: the question that asks about DHCP/DNS management.

In this post, I selected "No", as I want my DD-WRT router to continue handling DHCP/DNS.  However, in a production environment (and if you want to use Orchestra with Juju), you might need to select "Yes" here.

And about five minutes later, you should have an Ubuntu Orchestra Server up and running!

Target System Setup

Once your Orchestra Server is installed, you're ready to prepare your target system for installation.  You will need to enter your target system's BIOS settings, and ensure that the system is set to first boot from PXE (netboot), and then from local disk (hdd).  Orchestra uses Cobbler (a project maintained by our friends at Fedora) to prepare the network installation using PXE and TFTP, and thus your machine needs to boot from the network.  While you're in your BIOS configuration, you might also ensure that Wake on LAN (WoL) is enabled.

Next, you'll need to obtain the MAC address of the network card in your target system.  One of many ways to obtain this is by booting that Ubuntu ISO, pressing ctrl-alt-F2, and running ip addr show.

Now, you should add the system to Cobbler.  Ubuntu 11.10 ships a feature called cobbler-enlist that automates this; however, for this guide, we'll use the Cobbler web interface.  Give the system a hostname (e.g., asus1), select its profile (e.g., oneiric-x86_64), and enter its IP address and MAC address (e.g., 00:1a:92:88:b7:d9).  In the case of this system, I needed to tweak the Kernel Options, since this machine has more than one attached hard drive and I want to ensure that Ubuntu installs onto /dev/sdc, so I set the Kernel Options to partman-auto/disk=/dev/sdc.  You might have other tweaks to make here on a system-by-system basis (like IPMI configuration).
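If you prefer the command line to the web interface, the same record can be created with the cobbler CLI; here's a sketch using the values above (double-check the flags against cobbler --help on your version):

sudo cobbler system add --name=asus1 --profile=oneiric-x86_64 \
    --mac=00:1a:92:88:b7:d9 \
    --kopts="partman-auto/disk=/dev/sdc"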

Finally, I adjusted my DD-WRT router to add a static lease for my target system, and point dnsmasq to PXE boot against the Orchestra Server.  You'll need to do something similar-but-different here, depending on how your network handles DHCP.
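On DD-WRT this boils down to a couple of dnsmasq options; a sketch, with the lease IP and the Orchestra server's name and address assumed for illustration:

# Give the target a static lease (MAC from above, IP is an example):
dhcp-host=00:1a:92:88:b7:d9,
# PXE boot clients against the Orchestra server:
dhcp-boot=pxelinux.0,orchestra,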

NOTE: As of October 27, 2011, Bug #882726 must be manually worked around, though this should be fixed in oneiric-updates any day now.  To work around this bug, log in to the Orchestra Server and run:

RELEASES=$(distro-info --supported)
ARCHES="x86_64 i386"
for r in $RELEASES; do
    for a in $ARCHES; do
        sudo cobbler profile edit --name="$r-$a" \
            ...  # the rest of this command was truncated in the original post
    done
done

Target Installation

All set!  Now, let's trigger the installation.  In the web interface, enable the machine for netbooting.

If you have WoL working for this system, you can even use the web interface to power the system on.  If not, you'll need to press the power button yourself.

Now, we can watch the installation remotely, from an SSH session into our Orchestra Server!  For extra bling, install these two packages:

sudo apt-get install -y tmux ccze

Now launch byobu-tmux (which handles splits much better than byobu-screen).  In the current window, run:

tail -f /var/log/syslog | ccze

Now, split the screen vertically with ctrl-F2.  In the new split, run:

sudo tail -f /var/log/squid/access.log | ccze

Move back and forth between splits with shift-F3 and shift-F4.  The ccze command colorizes log files.

In the left split, you'll see the syslog progress of your installation scrolling by.  In the right split, you'll see your squid logs, as your Orchestra server caches the binary deb files it downloads.  On your first installation, you'll see a lot of TCP_MISS messages.  But if you try this installation a second time, subsequent installs will roll along much faster and you should see lots of TCP_HIT messages.

It takes me about 5 minutes to install these machines with a warm squid cache (and maybe 10 minutes to do that first installation, downloading all of those debs over the Internet).  More importantly, I have installed as many as 30 machines simultaneously in a little over 5 minutes with a warm cache!  I'd love to try more, but that's as much hardware as I've had concurrent access to at this point.

Post Installation

Most of what you've seen above is the provisioning aspect of Orchestra -- how to get the Ubuntu Server installed to bare metal, over the network, and at scale.  Cobbler does much of the hard work there, but remarkably, that's only the first pillar of Orchestra.

What you can do after the system is installed is even more exciting!  Each system installed by Orchestra automatically uses rsyslog to push logs back to the Orchestra server.  To keep the logs of multiple clients in sync, NTP is installed and running on every Orchestra managed system.  The Orchestra Server also includes the Nagios web front end, and each installed client runs a Nagios client.  We're working on improving the out-of-the-box Nagios experience for 12.04, but the fundamentals are already there.  Orchestra clients are running PowerNap in power-save mode, by default, so that Orchestra installed servers operate as energy efficiently as possible.

Perhaps most importantly, Orchestra can actually serve as a machine provider to Juju, which can then offer complete Service Orchestration to your physical servers.  I'll explain in another post soon how to point Juju to your Orchestra infrastructure, and deploy services directly to your bare metal servers.

Questions?  Comments?

I won't be able to offer support in the comments below, but if you have questions or comments, drop by the friendly #ubuntu-server IRC channel on, where we have at least a dozen Ubuntu Server developers with Orchestra expertise hanging around and happy to help!


Read more