Canonical Voices

Posts tagged with 'juju'

Check out Why I don’t host my own blog anymore.

I mentioned it to a friend and he immediately piped in “Oh that guy did it wrong, he shouldn’t care about KeepAlive, he needs FastCGI”.

Ok so the guy “messed up” and misconfigured his blog. Zigged instead of zagged. Bummer.

But it doesn’t have to be this way. Right now we offer Wordpress as a juju charm. This lets us deploy Wordpress with MySQL in four commands.
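
For the curious, that deployment looks roughly like this (a sketch, assuming an environment is already bootstrapped and using the charm names from the charm collection):

$ juju deploy wordpress
$ juju deploy mysql
$ juju add-relation wordpress mysql
$ juju expose wordpress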

However, if you look at the db-relation hook, we don’t do anything special: we create an Apache vhost and set it up for you. While this is simple, there’s no reason we can’t turn this charm into a turbocharged deployment of Wordpress. Let’s look at some of the recommendations we see on his blog and on HN:

  • A simple caching plugin would have quickly fixed this for you.
  • In my stacks I always use nginx in conjunction with Apache to handle as much of the static content load as is possible and that lifts a huge weight from Apache. Next up is to always use a bytecode cache like Xcache or APC, these help give a huge boost in performance.
  • But then you hit a wall, next up are limitations in WP SQL and MySQL, these can be helped by messing with the queries and using Memcached also helps to significantly boost the DB performance here.
  • I had similar nightmares to you for a long time with Apache/PHP/WP, then finally put Varnish cache in front of the whole thing.
  • And someone recommends just shoving the thing in Jekyll and serving that.

I’m sure everyone will have an opinion on how to deploy Wordpress. From an Ubuntu perspective, we ship the wordpress and mysql packages, but that only gets you so far. It’s still up to you to configure it, and as this guy proves, you can mess something up. Wouldn’t it be nice if we could collect all the experience from people who are Wordpress deployment experts, put that in our charms and just give people that out of the box?

We could use nginx with FastCGI in the Wordpress charm, and we can certainly add relations so that varnish and memcached know what to do when they’re related to wordpress. And/or just “juju add-relation jekyll wordpress” and have that Just Work.
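
If we grow those relations, wiring the pieces together could look something like this (hypothetical; it assumes varnish and memcached charms that know how to relate to wordpress):

$ juju deploy varnish
$ juju deploy memcached
$ juju add-relation varnish wordpress
$ juju add-relation memcached wordpress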

These are the kinds of problems we’re trying to tackle with juju. Will it be totally perfect for everyone’s deployment? Of course not, that’s impossible, but we can certainly make Patrick’s experience much less common. People will always argue about the nitnoid implementation details, but we can make those config options; the point is that we can share deployment and service maintenance as a whole instead of hoping people put the lego blocks together in the right order.

Interested in turning a plain boring charm into something sexy? I’ve filed the bug here, let us know if you’re interested.

Read more

I can’t wait to see some people I haven’t seen in years at SCALE, and meet a bunch of new people!

Come find me and Clint, we’ll be doing talks about juju and Ubuntu Cloud all weekend, as well as answering questions the entire time. I’m easy to find, look for a Red Wings hat and an Ubuntu shirt.

Here’s our post about our talks.

Read more

Calling all devops!

We’re holding a Charm School on IRC.

juju Charm School is a virtual event where a juju expert is available to answer questions about writing your own juju charms. The intended audience is people who deploy software and want to contribute charms to the wider devops community to make deploying in the public and private cloud easy.

Attendees are more than welcome to:

  • Ask questions about juju and charms
  • Ask for help modifying existing scripts and turning them into charms
  • Ask for peer review on existing charms you might be working on.

Though not required, we recommend that you have juju installed and configured if you want to get deep into the event.
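
If you’d like to be ready ahead of time, getting a working setup is roughly this (a sketch, assuming Ubuntu 11.10 and an environment already defined in ~/.juju/environments.yaml):

$ sudo apt-get install juju
$ juju bootstrap
$ juju status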

Read more
Michael

After experimenting with juju and puppet the other week, I wanted to see if it was possible to create a generic juju charm for deploying any Django app using Apache+mod_wsgi, together with puppet manifests wherever possible. The resulting apache-django-wsgi charm is ready to demo (thanks to lots of support from the #juju team), but still needs a few more configuration options. The charm currently:

  1. Enables the user to specify a branch of a Python package containing the Django app/project to deploy. This Python package will be `python setup.py install`’d on the instance.
  2. Enables you to configure extra Debian packages to be installed first so that your requirements can be installed in a more reliable/trusted manner, along with the standard required packages (apache2, libapache2-mod-wsgi etc.). Here’s the example charm config used for apps.ubuntu.com (a rough sketch of such a config follows this list),
  3. Creates a django.wsgi and httpd.conf ready to serve your app, automatically collecting all the static content of your installed Django apps to be served separately from the same Apache virtual host,
  4. When it receives a database relation change, it creates some local settings overriding the database settings of your branch, syncs and migrates the database (a no-op if it’s the second unit) and restarts Apache (see the database_settings.pp manifest for more details).
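
As a rough illustration of the deploy config mentioned in point 2, the ubuntu-app-dir.yaml passed to juju below would contain something along these lines (the option names here are hypothetical placeholders rather than the charm’s exact ones):

apache-django-wsgi:
  django_app_branch: lp:~your-team/your-project/trunk
  extra_packages: "python-django python-psycopg2"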

Here’s a quick demo which puts up a postgresql unit and two app servers with these commands:

$ juju deploy --repository ~/charms local:postgresql
$ juju deploy --config ubuntu-app-dir.yaml --repository ~/apache-django-wsgi/ local:apache-django-wsgi
$ juju add-relation postgresql:db apache-django-wsgi
$ juju add-unit apache-django-wsgi

Things that I think need to be improved or I’m uncertain about:

  1. `gem install puppet-module` is included in the install hook (a 3rd way of installing something on the system :/). I wanted to use the vcsrepo puppet module to define bzr resource types and puppet-module-tool seems to be the way to install 3rd-party puppet modules. Using this resource-type enables a simple initial_state.pp manifest. Of course, it’d be great to have ‘necessary’ tools like that in the archive instead.
  2. The initial_state.pp manifest pulls the django app package to /home/ubuntu/django-app-branch and then pip installs it on the system. Requiring the app to be a valid python package seemed sensible (in terms of ensuring it is correctly installed with its requirements satisfied) while still allowing the user to go one step further if they like and provide a debian package instead of a python package in a branch (which I assume we would do ultimately for production deploys?)
  3. Currently it’s just a very simple apache setup. I think ideally the static file serving should be done by a separate unit in the charm (i.e. an instance running a stripped-down apache2 or lighttpd). Also, I would have liked to use an ‘official’ or ‘blessed’ puppet apache module to benefit from someone else’s experience, but I couldn’t see one that stood out as such.
  4. Currently the charm assumes that your project contains the configuration info (i.e. a settings.py, urls.py etc.), whose database settings can simply be overridden for deployment. There should be an additional option to specify a configuration branch (and it shouldn’t assume that you’re using django-configglue), as well as other options like django_debug, static_url etc.
  5. The charm should also export an interface (?) that can be used by a load balancer charm.
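
For that last point, the usual route would be for the charm’s metadata.yaml to declare something it provides that a load balancer charm could require; a minimal sketch, assuming the conventional http interface:

provides:
  website:
    interface: http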

Filed under: django, juju

Read more
Michael

I’ve been playing with juju for a few months now in different contexts and I’ve really enjoyed the ease with which it allows me to think about services rather than resources.

More recently I’ve started thinking about best-practices for deploying services using juju, while still using puppet to setup individual units. As a simple experiment, I wrote a juju charm to deploy an irssi service [1] to dig around. Here’s what I’ve found so far [2]. The first is kind of obvious, but worth mentioning:

Install hooks can be trivial:

#!/bin/bash
sudo apt-get -y install puppet

juju-log "Initialising machine state."
puppet apply $PWD/hooks/initial_state.pp

Normally the corresponding manifest (see initial_state.pp) would be a little more complicated, but in this example it’s hardly worth mentioning.

Juju config changes can utilise Puppet’s Facter infrastructure:

This enables juju config options to be passed through to puppet, so that config-changed hooks can be equally simple:

#!/bin/bash
juju-log "Getting config options"
username=`config-get username`
public_key=`config-get public_key`

juju-log "Configuring irssi for user"
# We specify custom facts so that they're accessible in the manifest.
FACTER_username=$username FACTER_public_key=$public_key puppet apply $PWD/hooks/configured_state.pp

In this example, it is the configured state manifest that is more interesting (see configured_state.pp). It adds the user to the system, sets up byobu with an irssi window ready to go, and adds the given public ssh key enabling the user to login.

The same would go for other juju hooks (db-relation-changed etc.), which is quite neat – getting the best of both worlds: the charm user can still think in terms of deploying services, while the charm author can use puppet’s declarative syntax to define the machine states.

Next up: I hope to experiment with an optional puppet master for a real project (something simple like the Ubuntu App directory), so that

  1. a project can be deployed without the (probably private) puppet-master to create a close-to-production environment, while
  2. configuring a puppet-master in the juju config would enable production deploys (or deploys of exact replicas of production to a separate environment for testing).
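
In juju terms, that would presumably just be another service config option, something like this (the service and option names are hypothetical):

$ juju set my-django-app puppet_master=puppet.example.com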

If you’re interested in seeing the simple irssi charm, the following 2-minute video demos these commands:

# Deploy an irssi service
$ juju deploy --repository=/home/ubuntu/mycharms  local:oneiric/irssi
# Configure it so a user can login
$ juju set irssi username=michael public_key=AAAA...
# Login to find irssi already up and running in a byobu window
$ ssh michael@new.ip.address

and the code is on Launchpad.

[1] Yes, irssi is not particularly useful as a juju service (as I don’t want multiple units, or relating it to other services etc.), but it suited my purposes for a simple experiment that also automates something I can use for working in the cloud.

[2] I’m not a puppet or juju expert, so if you’ve got any comments or improvements, don’t hesitate to let me know.


Filed under: juju, puppet, ubuntu

Read more
Dustin Kirkland


Servers in Concert!

Ubuntu Orchestra is one of the most exciting features of the Ubuntu 11.10 Server release, and we're already improving upon it for the big 12.04 LTS!

I've previously given an architectural introduction to the design of Orchestra.  Now, let's take a practical look at it in this how-to guide.

Prerequisites

To follow this particular guide, you'll need at least two physical systems and administrative access rights on your local DHCP server (perhaps on your network's router).  With a little ingenuity, you can probably use two virtual machines and work around the router configuration.  I'll follow this guide with another one using entirely virtual machines.

To build this demonstration, I'm using two older ASUS (P1AH2) desktop systems.  They're both dual-core 2.4GHz AMD processors and 2GB of RAM each.  I'm also using a Linksys WRT310n router flashed with DD-WRT.  Most importantly, at least one of the systems must be able to boot over the network using PXE.

Orchestra Installation

You will need to manually install Ubuntu 11.10 Server on one of the systems, using an ISO or a USB flash disk.  I used the 64-bit Ubuntu 11.10 Server ISO, and my no-questions-asked uquick installation method.  This took me a little less than 10 minutes.

After this system reboots, update and upgrade all packages on the system, and then install the ubuntu-orchestra-server package.

sudo apt-get update
sudo apt-get dist-upgrade -y
sudo apt-get install -y ubuntu-orchestra-server

You'll be prompted to enter a couple of configuration parameters, such as setting the cobbler user's password.  It's important to read and understand each question.  The default values are probably acceptable, except for one, which you'll want to be very careful about...the one that asks about DHCP/DNS management.

In this post, I selected "No", as I want my DD-WRT router to continue handling DHCP/DNS.  However, in a production environment (and if you want to use Orchestra with Juju), you might need to select "Yes" here.


And about five minutes later, you should have an Ubuntu Orchestra Server up and running!

Target System Setup

Once your Orchestra Server is installed, you're ready to prepare your target system for installation.  You will need to enter your target system's BIOS settings, and ensure that the system is set to boot first from PXE (netboot), and then from local disk (hdd).  Orchestra uses Cobbler (a project maintained by our friends at Fedora) to prepare the network installation using PXE and TFTP, and thus your machine needs to boot from the network.  While you're in your BIOS configuration, you might also ensure that Wake on LAN (WoL) is enabled.

Next, you'll need to obtain the MAC address of the network card in your target system.  One of many ways to obtain this is by booting that Ubuntu ISO, pressing ctrl-alt-F2, and running ip addr show.
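
If the target machine is already up in a live session, something like this prints just the hardware address (the interface name may differ on your system):

ip link show eth0 | grep link/ether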

Now, you should add the system to Cobbler.  Ubuntu 11.10 ships a feature called cobbler-enlist that automates this; however, for this guide, we'll use the Cobbler web interface.  Give the system a hostname (e.g., asus1), select its profile (e.g., oneiric-x86_64), IP address (e.g., 192.168.1.70), and MAC address (e.g., 00:1a:92:88:b7:d9).  In the case of this system, I needed to tweak the Kernel Options, since this machine has more than one attached hard drive and I want to ensure that Ubuntu installs onto /dev/sdc, so I set the Kernel Options to partman-auto/disk=/dev/sdc.  You might have other tweaks on a system-by-system basis that you need or want to adjust here (like IPMI configuration).
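
If you prefer the command line, the cobbler CLI can do the same thing, roughly like this (exact flag names vary a bit between Cobbler versions):

sudo cobbler system add --name=asus1 --profile=oneiric-x86_64 \
    --mac=00:1a:92:88:b7:d9 --ip-address=192.168.1.70 \
    --kopts="partman-auto/disk=/dev/sdc"
sudo cobbler sync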


Finally, I adjusted my DD-WRT router to add a static lease for my target system, and point dnsmasq to PXE boot against the Orchestra Server.  You'll need to do something similar-but-different here, depending on how your network handles DHCP.
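
For the record, on DD-WRT (which uses dnsmasq) the key bit is a dhcp-boot line in the additional dnsmasq options, along these lines (the server name and IP here are placeholders for your own Orchestra Server):

dhcp-boot=pxelinux.0,orchestra,<orchestra-server-ip>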


NOTE: As of October 27, 2011, Bug #882726 must be manually worked around, though this should be fixed in oneiric-updates any day now.  To work around this bug, login to the Orchestra Server and run:

RELEASES=$(distro-info --supported)
ARCHES="x86_64 i386"
KSDIR="/var/lib/orchestra/kickstarts"
for r in $RELEASES; do
    for a in $ARCHES; do
        sudo cobbler profile edit --name="$r-$a" \
            --kickstart="$KSDIR/orchestra.preseed"
    done
done

Target Installation

All set!  Now, let's trigger the installation.  In the web interface, enable the machine for netbooting.


If you have WoL working for this system, you can even use the web interface to power the system on.  If not, you'll need to press the power button yourself.

Now, we can watch the installation remotely, from an SSH session into our Orchestra Server!  For extra bling, install these two packages:

sudo apt-get install -y tmux ccze

Now launch byobu-tmux (which handles splits much better than byobu-screen).  In the current window, run:

tail -f /var/log/syslog | ccze

Now, split the screen vertically with ctrl-F2.  In the new split, run:

sudo tail -f /var/log/squid/access.log | ccze

Move back and forth between splits with shift-F3 and shift-F4.  The ccze command colorizes log files.

In the left split, you'll see the syslog progress of your installation scrolling by.  In the right split, you'll see your squid logs, as your Orchestra server caches the binary deb files it downloads.  On your first installation, you'll see a lot of TCP_MISS messages.  But if you try this installation a second time, subsequent installs will roll along much faster and you should see lots of TCP_HIT messages.


It takes me about 5 minutes to install these machines with a warm squid cache (and maybe 10 minutes to do that first installation, downloading all of those debs over the Internet).  More importantly, I have installed as many as 30 machines simultaneously in a little over 5 minutes with a warm cache!  I'd love to try more, but that's as much hardware as I've had concurrent access to at this point.

Post Installation

Most of what you've seen above is the provisioning aspect of Orchestra -- how to get the Ubuntu Server installed to bare metal, over the network, and at scale.  Cobbler does much of the hard work there,  but remarkably, that's only the first pillar of Orchestra.

What you can do after the system is installed is even more exciting!  Each system installed by Orchestra automatically uses rsyslog to push logs back to the Orchestra server.  To keep the logs of multiple clients in sync, NTP is installed and running on every Orchestra managed system.  The Orchestra Server also includes the Nagios web front end, and each installed client runs a Nagios client.  We're working on improving the out-of-the-box Nagios experience for 12.04, but the fundamentals are already there.  Orchestra clients are running PowerNap in power-save mode, by default, so that Orchestra installed servers operate as energy efficiently as possible.
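
If you want to poke at an installed client and confirm those pieces are running, a quick sanity check might look like this (the daemon and command names here are assumptions, so adjust as needed):

pgrep -fl rsyslog      # rsyslog forwarding logs to the Orchestra server
ntpq -p                # NTP peers keeping the client clocks in sync
pgrep -fl powernapd    # PowerNap running in power-save mode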

Perhaps most importantly, Orchestra can actually serve as a machine provider to Juju, which can then offer complete Service Orchestration to your physical servers.  I'll explain in another post soon how to point Juju to your Orchestra infrastructure, and deploy services directly to your bare metal servers.

Questions?  Comments?

I won't be able to offer support in the comments below, but if you have questions or comments, drop by the friendly #ubuntu-server IRC channel on irc.freenode.net, where we have at least a dozen Ubuntu Server developers with Orchestra expertise, hanging around and happy to help!

Cheers,
:-Dustin

Read more