Canonical Voices

Posts tagged with 'juju'

Greg Lutostanski

We (the Canonical OIL dev team) are about to finish the production roll out of our OpenStack Interoperability Lab (OIL). It’s been an awesome time getting here so I thought I would take the opportunity to get everyone familiar, at a high level, with what OIL is and some of the cool technology behind it.

So what is OIL?

For starters, OIL is essentially continuous integration of the entire stack, from hardware preparation, to Operating System deployment, to orchestration of OpenStack and third party software, all while running specific tests at each point in the process. All test results and CI artifacts are centrally stored for analysis and monthly report generation.

Typically, setting up a cloud (particularly OpenStack) for the first time can be frustrating and time consuming. The potential combinations and permutations of hardware/software components and configurations can quickly become mind-numbing. To help ease the process and provide stability across options we sought to develop an interoperability test lab to vet as much of the ecosystem as possible.

To accomplish this we developed a CI process for building and tearing down entire OpenStack deployments in order to validate every step in the process and to make sure it is repeatable. The OIL lab is comprised of a pool of machines (including routers/switches, storage systems, and compute servers) from a large number of partners. We continually pull available nodes from the pool, set up the entire stack, go to town testing, and then tear it all back down again. We do this so many times that we are already deploying around 50 clouds a day and expect to scale this by a factor of 3-4 with our production roll-out. Generally, each cloud is composed of about 5-7 machines, but we have the ability to scale each test as well.

But that’s not all: in addition to testing, we also do bug triage and defect analysis, and work both internally and with our partners on fixing as many things as we can, all to ensure that deploying OpenStack on Ubuntu is as seamless a process as possible for both users and vendors alike.

Underlying Technology

We didn’t want to reinvent the wheel, so we are leveraging the latest Ubuntu technologies as well as some standard tools to do all of this. In fact, the majority of the OIL infrastructure is public code you can get and start playing with right away!

Here is a small list of what we are using for all this CI goodness:

  • MaaS — to do the base OS install
  • Juju — for all the complicated OpenStack setup steps — and linking them together
  • Tempest — the standard test suite that pokes and prods OpenStack to ensure everything is working
  • Machine selection & random config generation code — to make sure we get a good hardware/software cross section
  • Jenkins — gluing everything together

Using all of this we are able to manage our hardware effectively, and with a similar setup you can too. This is just a high-level overview, so we will have to leave the in-depth technological discussions for another time.
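
To make the flow a little more concrete, here is a greatly simplified, hypothetical sketch of what one CI iteration driven by Jenkins could look like (environment and charm names are illustrative, not OIL's actual pipeline):

# pull nodes from the MAAS pool and bootstrap a fresh Juju environment
juju bootstrap -e maas
# deploy and relate the OpenStack services under test
juju deploy keystone -e maas
juju deploy nova-cloud-controller -e maas
juju add-relation keystone nova-cloud-controller -e maas
# ...run Tempest against the resulting cloud and archive the results...
# tear it all down and return the nodes to the pool
juju destroy-environment -e maas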

More to come

We plan on having a few more blog posts covering some of the more interesting aspects (both results we are getting from OIL and some underlying technological discussions).

We are getting very close to OIL’s official debut and are excited to start publishing some really insightful data.

Read more
Maarten Ectors

A few months ago, Canonical started to work with a set of partners to address the challenges around single sign-on for new services within an organisation. We created a committee to develop a solution that would ensure service authentication could happen instantaneously, saving organisations often months in the roll out of new services.

Today, we’re announcing that two of our partners, Gluu and ForgeRock, will lead the Committee to develop the standards which will enable organisations to integrate any enterprise-grade security infrastructure in minutes with any compliant application. The Committee will define the relationships needed to enable orchestration between applications and common security components, like user provisioning systems, authentication services, and API access management. Where possible, we’ll use existing standards and best practices. For example, OpenID Connect could be adopted for authentication, the Simple Cloud Identity Management (SCIM) API for user provisioning, and the User Managed Access protocol (UMA) for API access management.

Juju is already saving enterprises time by enabling rapid deployment, integration and scaling of sophisticated applications across a number of different platforms. With the work of the Committee, Juju could have a significant impact on how organisations design and deploy a cloud infrastructure that scales to meet modern security requirements, making it easier for developers to move away from managing user accounts and for domains to offer stronger authentication and trust elevation.

“By providing a standard Juju framework for application security, we can reduce the ‘last mile’ cost that organisations face when securing an ever-expanding array of websites and mobile applications,” said Lasse Andresen, CTO at ForgeRock. “Driving down the deployment and operational costs is essential for improving security on the Internet.”

“The Juju labs project will enable businesses of all sizes to implement an enterprise-grade security infrastructure,” said Mike Schwartz, CEO at Gluu. “Our vendor agnostic and interoperable approach will support open source, SaaS and commercial applications. We want to give domains as much flexibility as possible to choose a security solution that makes sense for their requirements, and to integrate a wide array of applications quickly and easily. Canonical is a clear industry leader in orchestration, which is key to driving down the cost and complexity of domain security.”

More information

Gluu
Juju Labs

Read more
Sally Radwan

A few years ago, the cloud team at Canonical decided that the future of cloud computing lies not only in what clouds are built on, but in what runs on them, and how quickly, securely, and efficiently those services can be managed. This is when Juju was born: our service orchestration tool built for the cloud and inspired by the way IT architects visualise their infrastructure: boxes representing services, connected by lines representing interfaces or relationships. Juju’s GUI simplifies searching, dragging and dropping a ‘Charm’ onto a canvas to deploy services instantly.

Today, we are announcing two new features for DevOps seeking ever faster and easier ways of deploying scalable infrastructure. The first is Juju Charm bundles, which allow you to deploy an entire cloud environment with one click. The second is Quickstart, which spins up an entire Juju environment and deploys the services needed to run Juju, all with one command. Juju Bundles and Quickstart are powerful tools on their own, but the real value comes when they are used together: Quickstart can be combined with bundles to rapidly launch Juju, start up the environment, and deploy an entire application infrastructure, all in one action.
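
As a rough sketch of what that single action looks like (the bundle identifier here is purely illustrative), one command bootstraps the environment, installs the GUI and deploys the whole bundle:

juju-quickstart bundle:~myteam/wiki-stack/1/wiki-stack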

Already there are several bundles available that cover key technology areas: security, big data, SaaS, back office workloads, web servers, content management and the integration of legacy systems. New Charm bundles available today include:

Bundles for complex services:

  • Instant Hadoop: The Hadoop cluster bundle is a 7-node starter cluster designed to deploy Hadoop in a way that’s easily scalable. The deployment has been tested with up to 2,000 nodes on AWS.

  • Instant Mongo: A 13-node (across three shards) starter MongoDB cluster, with the capability to horizontally scale each of the three shards.

  • Instant Wiki: Two Mediawiki deployments; a simple example mediawiki deployment with just mediawiki and MySQL; and a load balanced deployment with HAProxy and memcached, designed to be horizontally scalable.

  •  A new bundle from import.io allows their SaaS platform to be instantly integrated inside Juju. Navigate to any website using the import.io browser, template the data and then test your crawl. Finally, use the import.io charm to crawl your data directly into ElasticSearch.
  • Instant Security: Syncope + PostgreSQL, developed by Tirasa, is a bundle providing Apache Syncope with its internal storage up and running on PostgreSQL. Apache Syncope is an open source system for managing digital identities in enterprise environments.

  • Instant Enterprise Solutions: Credativ, experts in Open Source consultancy, are showing with their OpenERP bundle how any enterprise can instantly deploy an enterprise resource planning solution.

  • Instant High Performance Computing: HPCC (High Performance Computing Cluster) is a massive parallel-processing computing platform that solves Big Data problems. The platform is Open Source and can now be instantly deployed via Juju.

Francesco Chicchiriccò, CEO of Tirasa and VP of Apache Syncope, comments: “The immediate availability of an Apache Syncope Juju bundle dramatically shortens the product evaluation process and encourages adoption. With this additional facility to get started with Open Source Identity Management, we hope to increase the deployments of Apache Syncope in any environment.”

 

Bundles for developers:

These bundles provide ‘hello world’ blank applications; they are designed as templates for application developers. Simply put, they provide templates with configuration options for an application:

  • Instant Django: A Django bundle with gunicorn and PostgreSQL modelled after the Django ‘Getting Started’ guide is provided for application developers.

  • Instant Rails: Two Rails bundles: one is a simple Rails/Postgres deployment; the ‘scalable’ bundle adds HAProxy, Memcached, Redis, Nagios (for monitoring), and Logstash/Kibana (for logging), providing an application developer with an entire scalable Rails stack.

  • Instant Wildfly (The Community JBoss): The new Wildfly bundle from Technology Blueprint provides an out-of-the-box Wildfly application server in standalone mode running on OpenJDK 7. MySQL is currently also supported as a datasource via a MySQL relation.

Technology Blueprint, creators of the Wildfly bundle, also uses Juju to build its own cloud environments. The company’s system administrator, Saurabh Jha, comments: “Juju bundles are really beneficial for programmers and system administrators. Juju saves time, effort and cost. We’ve used it to create our environment on the fly. All we need is a quick command and the whole setup gets ready automatically. No more waiting to install and start those heavy applications/servers manually; a bundle takes care of that for us. We can code, deploy and host our application and when we don’t need it, we can just destroy the environment. It’s that easy.”

You can browse and discover all new bundles on jujucharms.com.

Our entire ecosystem is hard at work too, charming up their applications and creating bundles around them. Upcoming bundles to look forward to include a GNU Cobol bundle, which will enable instant legacy integration, a telecom bundle to instantly deploy and integrate Project Clearwater (an open source IMS), and many others. No doubt you have ideas for a bundle that provides an instant solution to a common problem, and it has never been easier to see your ideas turn into reality.

==

If you would like to create your own charm or bundle, here is how to get started: http://developer.ubuntu.com/cloud/ or see a video about Charm Bundles:  https://www.youtube.com/watch?v=eYpnQI6GZTA.

And if you’ve never used Juju before, here is an excellent series of blog posts that will guide you through spinning up a simple environment on AWS: http://insights.ubuntu.com/resources/article/deploying-web-applications-using-juju-part-33/.

Need help or advice? The Juju community is here to assist https://juju.ubuntu.com/community.

Finally, for the more technically-minded, here is a slightly more geeky take on things by Canonical’s Rick Harding, including a video walkthrough of Quickstart.

Read more
mitechie

The Juju UI team has been hard at work making it even easier for you to get started with Juju. We’ve got a new tool for everyone that is appropriately named Juju Quickstart and when you combine it with the power of Juju bundles you’re in for something special.

Quickstart is a Juju plugin that aims to help you get up and running with Juju faster than any set of commands you can copy and paste. First, to use Quickstart you need to install it. If you’re on the upcoming Ubuntu Trusty release it’s already there for you. If you’re on an older version of Ubuntu you need to add the Juju stable PPA first:

sudo add-apt-repository ppa:juju/stable
sudo apt-get update

Installing Quickstart is then just:

sudo apt-get install juju-quickstart

Once you’ve got Quickstart installed you are ready to use it to deploy Juju environments. Just run it with `juju-quickstart`. Quickstart will then open a window to help walk you through setting up your first cloud environment using Juju.

Quickstart can help you configure and setup clouds using LXC (for local environments), OpenStack (which is used for HP Cloud), Windows Azure, and Amazon EC2. It knows what configuration data is required for each cloud provider and provides hints on where to find the information you’ll need.

Once you’ve configured your cloud provider, Quickstart will bootstrap a Juju environment on it for you. This takes a while on live clouds as it involves bringing up instances.

Quickstart does a couple of things to make the environment nicer than your typical bootstrap. First, it will automatically install the Juju GUI for you. It does this on the first machine brought up in the environment so that it’s co-located, which means it comes up much faster and does not incur the cost of a separate machine.  Once the GUI is up and running, Quickstart will automatically launch your browser and log you into the GUI. This saves you from having to copy and paste your admin secret to log in.

If you would like to set up additional environments you can re-launch Quickstart at any time. Use `juju-quickstart -i` to get back to the guided setup.

Once the environment is up, Quickstart still helps you out by providing a shortcut to get back to the running Juju GUI. It will auto-launch your browser, find the right IP address of the GUI, and automatically log you in. Come back the next day and Quickstart is still the fastest way to get back into your environment.

Finally, Quickstart works great with the new Juju charm bundles feature. A bundle is a set of services with a specific configuration and their corresponding relations that can be deployed together via a single step. Instead of deploying a single service, they can be used to deploy an entire workload, with working relations and configuration. The use of bundles allows for easy repeatability and for sharing of complex, multi-service deployments. Quickstart can accept a bundle and will deploy that bundle for you. If the environment is not bootstrapped it will bring up the environment, install the GUI, and then deploy the bundle.
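
As a rough illustration of what a bundle describes (this sketch uses the deployer-style YAML of the time; service names and charm revisions are hypothetical), a minimal two-service bundle might look like:

wiki-simple:
  services:
    mediawiki:
      charm: "cs:precise/mediawiki"
      num_units: 1
    mysql:
      charm: "cs:precise/mysql"
      num_units: 1
  relations:
    - - "mediawiki:db"
      - "mysql:db"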

For instance, here is the one command needed to deploy a bundle that we’ve created and shared:

juju-quickstart bundle:~jorge/mongodb-cluster/1/mongodb-cluster

If the environment is already bootstrapped and running then Quickstart will just deploy the bundle. The two features together work great for testing repeatable deployments. What’s great is that the power of Juju means you can test this deployment on multiple clouds effortlessly. For instance, you can design and configure your bundle locally via LXC and, when satisfied, deploy it to a real environment, simply by changing the environment command-line option when launching Quickstart.
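
A sketch of that workflow, assuming environments named ‘local’ and ‘amazon’ already exist in your environments.yaml (the bundle identifier is just the example above):

juju-quickstart -e local bundle:~jorge/mongodb-cluster/1/mongodb-cluster
juju-quickstart -e amazon bundle:~jorge/mongodb-cluster/1/mongodb-cluster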

Try out Quickstart and bundles and let us know what you think. Feel free to hop into our IRC channel #juju on Freenode if you’ve got any questions. We’re happy to help.

Make sure to check out Mat’s great YouTube video walk through as well over on the Juju GUI blog.


Read more
Michael

Just documenting for later (and for a friend and colleague who needs it now) – my notes for setting up openstack swift using juju. I need to go back and check whether keystone is required – I initially had an issue with the test auth so switched to keystone.

First, create the config file to use keystone, local block devices on the swift storage units (i.e. no need to mount storage), and openstack havana:

cat >swift.cfg <<END
swift-proxy:
    zone-assignment: auto
    replicas: 3
    auth-type: keystone
    openstack-origin: cloud:precise-havana/updates
swift-storage:
    zone: 1
    block-device: /etc/swift/storagedev1.img|2G
    openstack-origin: cloud:precise-havana/updates
keystone:
    admin-token: somebigtoken
    openstack-origin: cloud:precise-havana/updates
END

Deploy it (this could probably be replaced with a charm bundle?):

juju deploy --config=swift.cfg swift-proxy
juju deploy --config=swift.cfg --num-units 3 swift-storage
juju add-relation swift-proxy swift-storage
juju deploy --config=swift.cfg keystone
juju add-relation swift-proxy keystone

Once everything is up and running, create a tenant and user, with the user having admin rights for the tenant (using your keystone unit’s IP address for keystone-ip). Note: below I’m using the names of the tenant, user and role, which works with keystone 0.3.2, but apparently earlier versions require you to use the uuids instead (check with `keystone help user-role-add`).

$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken tenant-create --name mytenant
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken user-create --name myuser --tenant mytenant --pass userpassword
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken user-role-add --tenant mytenant --user myuser --role Admin

And finally, use our new admin user to create a container for use in our dev environment (specify auth version 2):

$ export OS_REGION_NAME=RegionOne
$ export OS_TENANT_NAME=mytenant
$ export OS_USERNAME=myuser
$ export OS_PASSWORD=userpassword
$ export OS_AUTH_URL=http://keystone-ip:5000/v2.0/
$ swift -V 2 post mycontainer

If you want the container to be readable without auth:

$ swift -V 2 post mycontainer -r '.r:*'

If you want another keystone user to have write access:

$ swift -V 2 post mycontainer -w mytenant:otheruser

Verify that the container is ready for use:

$ swift -V 2 stat mycontainer

Please let me know if you spot any issues (these notes are from a month or two ago, so I haven’t just tried this).


Filed under: Uncategorized

Read more
Michael

After working with InformatiQ to set up a new charm using the ansible support (and ironing out a few issues), it made sense to capture the process…

The README at charm-bootstrap-ansible has the details, but the branch will pull in the required charm-helpers library and run the tests, leaving you ready to deploy and explore.

Hopefully I can get this into the charm-create tool eventually.


Filed under: bzr, juju

Read more
Michael

I’ve been working on some more support for ansible in the juju charm-helpers package recently [1], which has effectively transformed my juju charm’s hooks.py to something like:

# Create the hooks helper, passing a list of hooks which will be
# handled by default by running all sections of the playbook
# tagged with the hook name.
hooks = charmhelpers.contrib.ansible.AnsibleHooks(
    playbook_path='playbooks/site.yaml',
    default_hooks=['start', 'stop', 'config-changed',
                   'solr-relation-changed'])

@hooks.hook()
def install():
    charmhelpers.contrib.ansible.install_ansible_support(from_ppa=True)

And that’s it.

If I need something done outside of ansible, like in the install hook above, I can write a simple hook with the non-ansible setup (in this case, installing ansible), but the decorator will still ensure all the sections of the playbook tagged by the hook-name (in this case, ‘install’) are applied once the custom hook function finishes. All the other hooks (start, stop, config-changed and solr-relation-changed) are registered so that ansible will run the tagged sections automatically on those hooks.

Why am I excited about this? Because it means that practically everything related to ensuring the state of the machine is now handled by ansible’s yaml declarations (and I trust those to do what I declare). Of course those playbooks could themselves get quite large and hard to maintain, but ansible has plenty of ways to break up declarations into includes and roles.
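
For illustration, a playbook section tagged with hook names might look roughly like this (package names and paths here are hypothetical, not from my actual charm):

# playbooks/site.yaml
- hosts: localhost
  tasks:
    - name: Install service dependencies
      apt: pkg=openjdk-7-jre-headless state=present
      tags:
        - install
    - name: Render the service configuration
      template: src=templates/service.conf.j2 dest=/etc/service.conf
      tags:
        - config-changed
        - solr-relation-changed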

It also means that I need to write and maintain fewer unit-tests – in the above example I need to ensure that when the install() hook is called that ansible is installed, but that’s about it. I no longer need to unit-test the code which creates directories and users, ensures permissions etc., or even calls out to relevant charm-helper functions, as it’s all instead declared as part of the machine state. That said, I’m still just as dependent on integration testing to ensure the started state of the machine is what I need.

I’m pretty sure that ansible + juju has even more possibilities for being able to create extensible charms with plugins (using roles), rather than forcing too much into the charms config.yaml, and other benefits… looking forward to trying it out!

[1] The merge proposal still needs to be reviewed, possibly updated and landed :)


Filed under: Uncategorized

Read more
Sidnei

Since early August I’ve been looking at improving the state of the art in container cloning with an ultimate goal of making Juju faster when used in a local provider context, which happens to be where I spend most of my days lately. For those who don’t know, the local provider in Juju uses LXC with the LXC-provided ‘ubuntu-cloud’ template in order to provide an environment that’s as similar as possible to provisioning a cloud instance elsewhere.

In looking at the various storage backends supported by LXC and experimenting with each of them, I’ve stumbled on various issues, from broken inotify support in overlayfs to random timing issues deleting btrfs snapshots. Eventually I’ve discovered the holy grail of LVM Thin Provisioning and started working on a patch to LXC which would allow it to create volumes in a thin provisioned pool. In the meantime, Docker announced their intent of adding support for LVM Thin Provisioning too. I’m happy to announce that the work I started in August is now merged into LXC as various separate pull requests (#67, #70, #72, #74) and is already available in the LXC Daily PPA. I’ve even created a Gist showing how to use it.

As pointed out by a colleague today, Thin Provisioning support should soon land in Docker itself. I applaud the initiative, and am really looking forward to seeing this feature land in Docker. I wish though there was a little more coordination going on to get that work upstreamed into LXC instead, to the benefit of everyone. Regardless, I’m committed to making sure that the differences between whatever ends up landing in Docker and what I’ve added to LXC eventually converge. One such difference is that I’ve simply shelled out to the ‘lvs’ command while Alexander is using libdevmapper directly, something I’ve looked at doing but felt was a little over my head. I also haven’t got around to making the filesystem default to ext4 with DISCARD support yet, but that’s at the top of my list.

So without much ado, let’s look at an example with a loopback-mounted device backed by a sparse-allocated file.

$ sudo fallocate -l 6G /tmp/lxc
$ sudo losetup -f /tmp/lxc
$ sudo pvcreate /dev/loop0
$ sudo vgcreate lxc /dev/loop0

Nothing special so far: simply a 6G sparse file, mounted loopback, and then a VG named ‘lxc’ created within it. Now the next step, creating the Thin Provisioning Pool. The size of the LV cannot be specified as 100%FREE because some space needs to be left for the LV metadata:

$ sudo lvcreate -l 95%FREE --type thin-pool --thinpool lxc lxc
$ sudo lvs
LV               VG  Attr      LSize Pool Origin Data% Move Log Copy% Convert
lxc              lxc twi-a-tz- 5.70g              0.00

Now if you have a recent enough LXC, such as the one from the PPA above, creating an LVM-backed LXC container should default to creating a Thin Provisioned volume on the thinpool named ‘lxc’, so the command is pretty simple:

$ sudo lxc-create -n precise-template -B lvm --fssize 8G -t ubuntu-cloud -- -r precise
$ sudo lvs
LV               VG  Attr      LSize Pool Origin Data% Move Log Copy% Convert
lxc              lxc twi-a-tz- 5.70g             17.38
precise-template lxc Vwi-a-tz- 7.81g lxc         12.67

If you wanted to use a custom-named thin pool, that’s also possible by specifying the ‘--thinpool’ command line argument.

Now, how fast is it to create a clone of this container? Let’s see.

$ time sudo lxc-clone -s precise-template precise-copy
real 0m0.276s
user 0m0.021s
sys 0m0.085s

Plenty fast I say. What does the newly created volume look like?

$ sudo lvs
LV               VG  Attr      LSize Pool Origin           Data% Move Log Copy% Convert
lxc              lxc twi-a-tz- 5.70g                       17.44
precise-copy     lxc Vwi-a-tz- 7.81g lxc  precise-template 12.71
precise-template lxc Vwi-a-tz- 7.81g lxc                   12.67

Not only cloning the container was super fast, but it also barely used any new data in the ‘lxc’ thin pool.

The result of this work should soon make its way into Juju and, I hope, Docker, either via Alexander’s work or some other way. If you’re into Docker or a more hardcore LXC user, I hope you enjoy this feature and make good use of it.

Read more
Robbie


So I’m partially kidding…the Ubuntu Edge is quickly becoming a crowdfunding phenomenon, and everyone should support it if they can. If we succeed, it will be a historic moment for Ubuntu, crowdfunding, and the global phone industry as well.

But I Don’t Wanna Talk About That Right Now

While I’m definitely a fan of the phone stuff, I’m a cloud and server guy at heart and what’s gotten me really excited this past month have been two significant (and freaking awesome) announcements.

#1 The Juju Charm Championship


First off, if you still don’t know about Juju, it’s essentially our attempt at making Cloud Computing for Human Beings. Juju allows you to deploy, connect, manage, and scale web services and applications quickly and easily…again…and again…AND AGAIN! These services are captured in what we call charms, which contain the knowledge of how to properly deploy, configure, connect, and scale the services and applications you will want to deploy in the cloud. We have hundreds of charms for every popular and well-known web service and application in use in the cloud today. They’ve been authored and maintained by the experts, so you don’t have to waste your time trying to become one. Just as Ubuntu depends on a community of packagers and developers, so does Juju. Juju goes only as far as our Charm Community will take us, and this is why the Charm Championship is so important to us.

So….what is this Charm Championship all about? We took notice of the fantastic response to the Cloud-Prize contest run by our good friends (and Ubuntu Server users) over at Netflix. So we thought we could do something similar to boost the number of full service solutions deployable by Juju, i.e. Charm Bundles. If charms are the APT packages of the cloud, bundles are effectively the package seeds, thus allowing you to deploy groups of services, configured and interconnected all at once. We’ve chosen this approach to increase our bundle count because we know from our experience with Ubuntu that the best approach for growth will be by harvesting and cultivating the expertise and experience of the experts regularly developing and deploying these solutions. For example, we at Canonical maintain and regularly deploy an OpenStack bundle to allow us to quickly get our clouds up for both internal use and for our Ubuntu Advantage customers. We have master level expertise in OpenStack cloud deployments, and thus have codified this into our charms so that others are able to benefit. The Charm Championship is our attempt to replicate this sharing of similar master level expertise across more service/application bundles…..BY OFFERING $30,000 USD IN PRIZE MONEY! Think of how many Ubuntu Edge phones that could buy you…well, unless you desperately need to have one of the first 50 :-).

#2 JujuCharms.com

From the very moment we began thinking about creating Juju years ago…we always envisioned eventually creating an interface that provides solution architects the ability to graphically create, deploy, and interact with services visually…replicating the whiteboard planning commonly employed in the planning phase of such solutions.

The new Juju GUI now integrated into JujuCharms.com is the realization of our vision, and I’m excited as hell at the possibilities opened and the technical roadblocks removed by the release of this tool. We’ve even charmed it, so you can ‘juju deploy juju-gui’ into any supported cloud, bare metal (MAAS), or local workstation (via LXC) environment. Below is a video of deploying OpenStack via our new GUI, and a perfect example of the possibilities that are opened up now that we’ve released this innovative and f*cking awesome tool:

The best part here, is that you can play with the new GUI RIGHT NOW by selecting the “Build” option on jujucharms.com….so go ahead and give it a try!

Join the Championship…Play with the GUI…then Buy the Phone

Cause I will definitely admit…it’s a damn sexy piece of hardware. ;)


Read more
ZhengPeng Hou

Last night I spent some time with the latest juju-core in saucy, and was interested in the local provider support which was added recently. A couple of things worth knowing:
1. juju-core now uses mongodb to replace zookeeper, so to play with the local provider you need to install lxc and mongodb. I have a wishlist bug against juju-core packaging for providing a meta package to install those dependencies.
2. After installing mongodb, do remember to stop the server manually, because juju bootstrap will create its own upstart scripts to handle starting/stopping the service.
3. The configuration of the local provider is quite simple now; you can copy and paste the one generated by running juju init, with no need to modify anything. I did comment out root-dir, which made me run into an issue.
4. Two commands need to be run with sudo: one is bootstrap, the other is destroy-environment.
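
Putting those notes together, a minimal local-provider session might look roughly like this (package names as of saucy; adjust to taste):

sudo apt-get install juju-core lxc mongodb-server
juju init                      # generates a sample ~/.juju/environments.yaml
sudo service mongodb stop      # juju bootstrap manages its own mongod via upstart
sudo juju bootstrap -e local
juju status -e local
sudo juju destroy-environment -e local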

Read more
Mark Baker

Juju, the leading tool for continuous deployment, continuous integration (CI/CD), and cloud-neutral orchestration, now has a refreshed GUI with smoother workflows for integration professionals spinning up many services across clouds like Amazon EC2 and a range of public OpenStack providers. The new GUI speeds up service design – conceptual modelling of service relationships – as well as actual deployment, providing a visual map of the relationships between services.

“The GUI is now a first-class part of the Juju experience,” said Gary Poster, whose team led the work, “with an emphasis on rapid access to the collection of service charms and better visualisation of the deployment in question.” In this milestone the Juju GUI can act as a whiteboard, so a user can mock up the service orchestration they intend to create using the same Juju GUI that they will use to manage their real, live deployments. Users can experience the new interface for themselves at jujucharms.com with no need to set up software in advance.

Juju is used by organisations that are constantly deploying and redeploying collections of services. Companies focused on media, professional services, and systems integration are the heaviest users, who benefit from having repeatable best-practice deployments across a range of cloud environments.

Juju uniquely enables the reuse of shared components called ‘charms’ for common parts of a complex service. A large portfolio of existing open source components is available from a public Charm collection, and browsing that collection is built into the new GUI. Charms are easy to find and review in the GUI, with full documentation instantly accessible. Featured, recommended and popular charms are highlighted for easy discovery. Each Charm now has more detailed information including test results from all supported providers, download count, related Charms, and a Charm code quality rating. The Charm collection includes both certified, supported Charms, and a wider range of ad-hoc Charms that are published by a large community of contributors.

The simple browser-based interface makes it easy to find reusable open source charms that define popular services like Hadoop, Storm, Ceph, OpenStack, MySQL, RabbitMQ, MongoDB, Cassandra, Mediawiki and WordPress. Information about each service, such as configuration options, is immediately available, and the charms can then be dragged and dropped directly on a canvas where they can be connected to other services, deployed and scaled. It’s also possible to export these service topologies into a human-readable and -editable format that can be shared within a team or published as a reference architecture for that deployment.

Recent additions to the public Charm collection include OpenVPN AS, Liferay, Storm and Varnish. For developers the new GUI and Charm Browser mean that their Charms are now much more discoverable. For those taking part in the Charm Championship, it’s easier to upload their Charms and use the GUI to connect them into a full solution for entry into the competition. Submit your best Charmed solution for the possibility of winning $10,000.

The management interface for Charm authors has also been enhanced and is available immediately at http://manage.jujucharms.com/.

See how you can use Juju to deploy OpenStack:

The current version of Juju supports Amazon EC2, HP Cloud and many other OpenStack clouds, as well as in-memory deployment for test and dev scenarios. Juju is on track for a 1.12 release in time for Ubuntu 13.10 that will enhance scalability for very large deployments, and a 2.0 release in time for Ubuntu 14.04 LTS.

See it demoed: We’ll be showing off the new Juju GUI and charm browser at OSCON on Tuesday 23rd at 9:00AM in the Service Orchestration In the Cloud with Juju workshop.

Read more
Mark Baker

Ubuntu developer contest offers $10,000 for the most innovative charms

Developers around the world are already saving time and money thanks to Juju, and now they have the opportunity to win money too. Today marks the opening of the Juju Charm Championship, in which developers can reap big rewards for getting creative with Juju charms.

If you haven’t met Juju yet, now’s the ideal time to dive in. Juju is a service orchestration tool, a simple way to build entire cloud environments and deploy, scale and manage complex workloads using only a few commands. It takes all the knowledge of an application and wraps it up into a re-usable Juju charm, ready to be quickly deployed anywhere. And you can modify and combine charms to create a custom deployment that meets your needs.
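
To make “a few commands” concrete, here is a rough sketch (service names chosen purely for illustration) of deploying, relating and scaling a two-service workload:

juju bootstrap
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql
juju expose wordpress
juju add-unit wordpress    # scale out the front end with an extra unit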

Juju is a powerful tool, and its flexibility means it’s capable of things we haven’t even imagined yet. So we’re kicking off the Charm Championship to discover what happens when the best developers bring Juju into their clouds — with big rewards on offer.

The prizes

As well as showing off the best achievements to the community, our panel of judges will award $10,000 cash prizes to the best charmed solutions in a range of categories.

That’s not all. Qualifying participants will be eligible for a joint marketing programme with Canonical, including featured application slots on ubuntu.com, joint webinars and more. Win the Charm Championship and your app will reach a whole new audience.

Get started today

If you’re a Juju wizard, we want to see what magic you’re already creating. If you’re not, now’s a great time to start — it only takes five minutes to get going with Juju.

The Charm Championship runs until 1 October 2013, and it’s open to individuals, teams, companies and organisations. For more details and full competition rules, visit the Charm Championship page.

Charm Championship page

Read more
Michael

Have you ever wished you could just declare the installed state of your juju charm like this?

deploy_user:
    group.present:
        - gid: 1800
    user.present:
        - uid: 1800
        - gid: 1800
        - createhome: False
        - require:
            - group: deploy_user

exampleapp:
    group.present:
        - gid: 1500
    user.present:
        - uid: 1500
        - gid: 1500
        - createhome: False
        - require:
            - group: exampleapp


/srv/{{ service_name }}:
    file.directory:
        - group: exampleapp
        - user: exampleapp
        - require:
            - user: exampleapp
        - recurse:
            - user
            - group


/srv/{{ service_name }}/{{ instance_type }}-logs:
    file.directory:
        - makedirs: True

While writing charms for Juju a long time ago, one of the things that I struggled with was testing the hook code – specifically the install hook code where the machine state is set up (i.e. packages installed, directories created with correct permissions, config files set up etc.) Often the test code would be fragile – at best you can patch some attributes of your module (like “code_location = ‘/srv/example.com/code’”) to a tmp dir and test the state correctly, but at worst you end up testing the behaviour of your code (i.e. os.mkdir was called with the correct user/group etc.). Either way, it wasn’t fun to write and iterate those tests.

But support has improved over the past year with the charmhelpers library. And recently I landed a branch adding support for declaring saltstack states in yaml, like the above example. That means that the install hook of your hooks.py can be reduced to something like:

import sys

import charmhelpers.core.hookenv
import charmhelpers.payload.execd
import charmhelpers.contrib.saltstack


hooks = charmhelpers.core.hookenv.Hooks()


@hooks.hook()
def install():
    """Setup the machine dependencies and installed state."""
    charmhelpers.contrib.saltstack.install_salt_support()
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/dependencies.yaml')
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/installed.yaml')


# Other hooks...

if __name__ == "__main__":
    hooks.execute(sys.argv)

…letting you focus on testing and writing the actual hook functionality (like relation-sets etc.). I’d like to add some test helpers that will automatically check the syntax of the state yaml files and template rendering output, but haven’t yet.

Hopefully we can add similar support for puppet and Ansible soon too, so that the charmer can choose the tools they want to use to declare the local machine state.

A few other things that I found valuable while writing my charm:

  • Use a branch for charmhelpers – this way you can make improvements to the library as you develop and not be dependent on your changes landing straight away to deploy (thanks Sidnei – I think I just copied that idea from one of his charms). The easiest way that I found for that was to install the branch into mycharm/lib so that it’s included in both dev and when you deploy (with a small snippet in your hooks.py).
  • Make it easy to deploy your local charm from the branch… the easiest way I found was a link-test-juju-repo make target – I’m not sure what other people do here?
  • In terms of writing actual hook functionality (like relation-set events etc), I found the easiest way to develop the charm was to iterate within a debug-hook session. Something like:
    1. write new test+code then juju upgrade-charm or add-relation
    2. run the hook and if it fails…
    3. fix and test right there within the debug-hook
    4. put the code back into my actual charm branch and update the test
    5. restore the system state in debug hook
    6. then juju upgrade-charm again to ensure it works, if it fails, iterate from 3.
  • Use the built-in support of template rendering from saltstack for rendering any config files that you need (see the sketch just below).
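
A minimal sketch of that last point, using salt’s jinja template rendering (the path, source location, ownership and mode here are hypothetical, not from my actual charm):

/srv/{{ service_name }}/app.conf:
    file.managed:
        - source: salt://templates/app.conf
        - template: jinja
        - user: exampleapp
        - group: exampleapp
        - mode: 644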

I don’t think I’d really appreciated the beauty of what juju is doing until, after testing my charm locally together with a gunicorn charm and a solr backend, I then set up a config using juju-deployer to create a full stack with an Apache front-end, a cache load balancer for multiple squid caches, as well as a load balancer in front of potentially multiple instances of my charm’s wsgi app, then a back-end load balancer in between my app and the (multiple) solr backends… and it just works.


Filed under: juju, python, testing

Read more
roaksoax

For a while, I have been wanting to write about MAAS and how it can easily deploy workloads (especially OpenStack) with Juju, and the time has finally come. This will be the first of a series of posts where I’ll provide an overview of how to quickly get started with MAAS and Juju.

What is MAAS?

I think that MAAS does not require an introduction, but if people really need to know, this awesome video will provide a far better explanation than the one I can give in this blog post.

http://youtu.be/J1XH0SQARgo

 

Components and Architecture

MAAS has been designed in such a way that it can be deployed in different architectures and network environments. MAAS can be deployed in either a single-node or multi-node architecture, which allows MAAS to be a scalable deployment system that meets your needs. It has two basic components: the MAAS Region Controller and the MAAS Cluster Controller.

MAAS Architectures

Region Controller

The MAAS Region Controller is the component users interface with, and it is the one that controls the Cluster Controllers. It is where the WebUI and API live. The Region Controller also hosts the MAAS meta-data server for cloud-init, as well as the DNS server. The Region Controller also configures an rsyslogd server to log the installation process, as well as a proxy (squid-deb-proxy) that is used to cache the Debian packages. The preseeds used for the different stages of the process are also stored here.

Cluster Controller

The MAAS Cluster Controller only interfaces with the Region Controller and is in charge of provisioning in general. The Cluster Controller is where the TFTP and DHCP server(s) are located. This is where both the PXE files and ephemeral images are stored. It is also the Cluster Controller’s job to power the managed nodes on/off (if configured).

The Architecture

As you can see in the image above, MAAS can be deployed in either a single-node or multi-node configuration. The way MAAS has been designed makes it highly scalable, allowing you to add more Cluster Controllers, each managing a different pool of machines. A single-node scenario can become a multi-node scenario by simply adding more Cluster Controllers. Each Cluster Controller has to register with the Region Controller, and each can be configured to manage a different network. The way this is intended to work is that each Cluster Controller will manage a different pool of machines in different networks (for provisioning), allowing MAAS to manage hundreds of machines. This is completely transparent to users because MAAS presents the machines to them as a single pool, which can all be used for deploying/orchestrating your services with juju.

How Does It Work?

MAAS has 3 basic stages. These are Enlistment, Commissioning and Deployment which are explained below:

MAAS Process

Enlistment

The enlistment process is the process by which a new machine is registered with MAAS. When a new machine is started, it will obtain an IP address and PXE boot from the MAAS Cluster Controller. The PXE boot process will instruct the machine to load an ephemeral image that will run and perform an initial discovery process (via a preseed fed to cloud-init). This discovery process will obtain basic information such as network interfaces, MAC addresses and the machine’s architecture. Once this information is gathered, a request to register the machine is made to the MAAS Region Controller. Once this happens, the machine will appear in MAAS in the Declared state.

Commissioning

The commissioning process is the process where MAAS collects hardware information, such as the number of CPU cores, RAM, disk size, etc., which can later be used as constraints. Once the machine has been enlisted (Declared state), it must be accepted into MAAS in order for the commissioning process to begin and for it to be made ready for deployment. For example, in the WebUI, an “Accept & Commission” button will be present. Once the machine gets accepted into MAAS, it will PXE boot from the MAAS Cluster Controller and will be instructed to run the same ephemeral image (again). This time, however, the commissioning process will be instructed to gather more information about the machine, which will be sent back to the MAAS Region Controller (via cloud-init from the MAAS meta-data server). Once this process has finished, the machine information will be updated and it will change to the Ready state. This status means that the machine is ready for deployment.

Deployment

Once the machines are in the Ready state, they can be used for deployment. Deployment can happen with either juju or the maas-cli (or even the WebUI). The maas-cli will only allow you to install Ubuntu on a machine, while juju will not only allow you to deploy Ubuntu on the machines, but will also allow you to orchestrate services. When a machine has been deployed, its state will change to Allocated to <user>. This state means that the machine is in use by the user who requested its deployment.
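
As a quick sketch of the juju path (assuming a Juju environment named “maas” pointing at your Region Controller; the charms are illustrative), deployment looks the same as on any other cloud, with MAAS supplying the machines:

juju bootstrap -e maas
juju deploy mediawiki -e maas
juju deploy mysql -e maas
juju add-relation mediawiki mysql -e maas
juju status -e maas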

Releasing Machines

Once a user doesn’t need the machine anymore, it can be released and its status will change from Allocated to <user> back to Ready. This means that the machine will be turned off and will be made available for later use.

But… How do Machines Turn On/Off?

Now, you might be wondering how the machines are being turned on/off, or who is in charge of that. MAAS can manage power devices, such as IPMI/iLO, Sentry Switch CDUs, or even virsh. By default, we expect that all the machines controlled by MAAS have IPMI/iLO cards. So if your machines do, MAAS will attempt to auto-detect and auto-configure your IPMI/iLO cards during the Enlistment and Commissioning processes. Once the machines are Accepted into MAAS (after enlistment) they will be turned on automatically and they will be Commissioned (that is, if IPMI was discovered and configured correctly). This also means that every time a machine is deployed, it will be turned on automatically.

Note that MAAS not only handles physical machines, it can also handle Virtual Machines, hence the virsh power management type. However, you will have to manually configure the details in order for MAAS to manage these virtual machines and turn them on/off automatically.

Read more
Darryl Weaver

Introduction

In this article I will show you how to set up a new WordPress blog on Amazon EC2 public cloud and then migrate it to HP Public Cloud using Juju Jitsu, from Canonical, the company behind Ubuntu.

Prerequisites

  • Amazon EC2 Account
  • HP Public Cloud Account
  • Ubuntu Desktop or Server 12.04 or above with root or sudo access

Juju Environment Setup

First of all we need to install Juju and Jitsu from the PPA archive to make them available for use, so add the PPA to the installation sources:

sudo apt-get -y install python-software-properties
sudo add-apt-repository ppa:juju/pkgs

Now update apt and install juju, charm-tools and juju-jitsu:

sudo apt-get update
sudo apt-get install juju charm-tools juju-jitsu

You will now need to set up your ~/.juju/environments.yaml file for Amazon EC2 (see here: https://juju.ubuntu.com/get-started/amazon/) and then for HP Cloud also (see here: https://juju.ubuntu.com/get-started/hp-cloud/).

So you should end up with an environments.yaml file that will look something like this:

default: amazon
environments:
  amazon:
    type: ec2
    control-bucket: juju-b1bb8e0313d14bf1accb8a198a389eed
    admin-secret: [any-unique-string-shared-among-admins-u-like]
    access-key: [PUT YOUR ACCESS KEY HERE]
    secret-key: [PUT YOUR SECRET KEY HERE]
    default-series: precise
    juju-origin: ppa
    ssl-hostname-verification: true
  hpcloud:
    juju-origin: ppa
    control-bucket: juju-hpc-az1-cb
    admin-secret: [any-unique-string-shared-among-admins-u-like]
    default-image-id: [8419]
    region: az-1.region-a.geo-1
    project-name: [your@hp-cloud.com-tenant-name]
    default-instance-type: standard.small
    auth-url: https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/
    auth-mode: keypair
    type: openstack
    default-series: precise
    access-key: [PUT YOUR ACCESS KEY HERE]
    secret-key: [PUT YOUR SECRET KEY HERE]

Deploying WordPress to Amazon EC2

Now we need to bootstrap the Amazon EC2 environment.

juju bootstrap -e amazon

Check it finishes bootstrapping correctly after a few minutes using:

juju status -e amazon

Which should output something like this:

machines:
  0:
    agent-state: running
    dns-name: ec2-50-17-169-153.compute-1.amazonaws.com
    instance-id: i-78d4781b
    instance-state: running
services: {}

To give a good view of what is going on and to also allow modification from a web control panel we can deploy juju-gui to the bootstrap node, using juju-jitsu:

jitsu deploy-to 0 juju-gui -e amazon

juju expose juju-gui -e amazon

This will take a few minutes to deploy.
Once complete, “juju status -e amazon” should output something like:

machines:
  0:
    agent-state: running
    dns-name: ec2-50-17-169-153.compute-1.amazonaws.com
    instance-id: i-78d4781b
    instance-state: running
services:
  juju-gui:
    charm: cs:precise/juju-gui-3
    exposed: true
    relations: {}
    units:
      juju-gui/0:
        agent-state: started
        machine: 0
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: ec2-50-17-169-153.compute-1.amazonaws.com

Then use the “public-address” entry in your web browser to connect to juju-gui and see what is going on visually.

Juju-gui currently works well on Google Chrome or Chromium. It uses a self-signed SSL certificate, so you will be presented with a security warning when connecting, which you can safely ignore and proceed.

Initially you should see the login page, with the username already filled in as “admin” and the password is the same as your password for the admin-secret in your ~/.juju/environments.yaml file.

Once logged in you should see a page that looks like this showing that only juju-gui is deployed to your environment, so far:

Juju-gui screenshot: first login

First we need to deploy a MySQL Database to store your blog’s data:

juju deploy mysql -e amazon

This will take a few minutes to deploy, so go ahead and also deploy a wordpress application server:

juju deploy wordpress -e amazon

While deployment continues you should see them appear in Juju-gui too:

Juju GUI showing MySQL and WordPress deployed

Once deployment is complete you can check the name of the new servers with:

juju status -e amazon

Which should output something like this:

machines:
  0:
    agent-state: running
    dns-name: ec2-50-17-169-153.compute-1.amazonaws.com
    instance-id: i-78d4781b
    instance-state: running
  1:
    agent-state: running
    dns-name: ec2-23-22-68-159.compute-1.amazonaws.com
    instance-id: i-3a9bd554
    instance-state: running
  2:
    agent-state: running
    dns-name: ec2-54-234-249-131.compute-1.amazonaws.com
    instance-id: i-f9e56696
    instance-state: running
services:
  juju-gui:
    charm: cs:precise/juju-gui-3
    exposed: true
    relations: {}
    units:
      juju-gui/0:
        agent-state: started
        machine: 0
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: ec2-50-17-169-153.compute-1.amazonaws.com
  mysql:
    charm: cs:precise/mysql-16
    relations: {}
    units:
      mysql/0:
        agent-state: started
        machine: 1
        public-address: ec2-23-22-68-159.compute-1.amazonaws.com
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: false
    relations:
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 2
        public-address: ec2-54-234-249-131.compute-1.amazonaws.com

Now we need to add a relationship between the wordpress application server and the MySQL database server. This will set up the SQL backend database for your blog and configure the usernames and passwords and database tables needed, all automatically.

juju add-relation wordpress mysql -e amazon

Finally, we need to expose the wordpress instance so you can connect to it using your web browser:

juju expose wordpress -e amazon

Now your Juju gui should look like this:
Juju Gui showing relations

Setting up WordPress and adding your first post

Then connect to the wordpress server using your web browser, by using the public-address from the status output above, i.e. http://ec2-54-234-249-131.compute-1.amazonaws.com/
This will then show you the initial set up page for your wordpress blog, like this:

You will need to enter some configuration details such as a site name and password:

After you have saved the new details you will get a confirmation page:

Confirmation Page

So, click on Login to log in to your new blog on Amazon EC2.

Now in order to make sure we are testing a live blog we need to enter some data. So, let’s post a blog entry.
First click on New Post on the top left menu:

Now, type in the details of your new blog post and click on Publish on the top right:

Now you have a new blog on Amazon EC2 with your first blog entry posted.

Migrating from Amazon EC2 to HP Cloud

So, now we have a live blog running on Amazon EC2 it is now time to migrate to HP Cloud.

We could just run the commands above with the “-e hpcloud” option to deploy the services to HP Cloud and then migrate the data.
But a more satisfying way is to use Juju-jitsu again to export the current layout from the Amazon EC2 environment and then replicate that on HP Cloud.

So, we can use:

jitsu export -e amazon > wordpress-deployment.json

This will save a file in JSON format detailing the deployed services and their relationships.

First we need to bootstrap our HP Cloud environment:

juju bootstrap -e hpcloud

This will take a few minutes to deploy a new instance and install the Juju bootstrap node.
Once the bootstrap is complete you should be able to check the status by using:

juju status -e hpcloud

The output should be something like this:

machines:
  0:
    agent-state: running
    dns-name: 15.185.102.93
    instance-id: 1064649
    instance-state: running
services: {}

So, let us now deploy the replica of the environment on Amazon to HP:

jitsu import -e hpcloud wordpress-deployment.json

This will then deploy the replicated environment from Amazon EC2. You can check progress with:

juju status -e hpcloud

When completed your output should be as follows:


So we now have a replica of the environment from Amazon EC2 on HP Cloud, but we have no data, yet.
We also need to copy the SQL data from the existing Amazon EC2 MySQL database to the HP Cloud MySQL database to get all your live blog data across to the new environment.
Let’s login to the MySQL DB node on Amazon EC2:

juju ssh mysql/0 -e amazon

Now we are logged in we can get the root password for the Database:

sudo cat /var/lib/juju/mysql.passwd

This will output the root password for the MySQL DB so you can take a copy of the data with:

sudo mysqldump -p wordpress > wordpress.sql

When prompted, copy and paste the password that you recovered from the previous step.

Now exit the login using:

exit

Now copy the SQL backup file from Amazon EC2 to your local machine:

juju scp mysql/0:wordpress.sql ./ -e amazon

This will download the wordpress.sql file.
You will now need to know your new wordpress server IP address for HP Cloud.
You can find this from juju status:

juju status wordpress -e hpcloud

The output should look like this:

machines:
  3:
    agent-state: running
    dns-name: 15.185.102.121
    instance-id: 1064677
    instance-state: running
services:
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: false
    relations:
      db:
      - mysql
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 3
        public-address: 15.185.102.121

In order to fix your WordPress server name you will have to replace your Amazon EC2 WordPress public-address with your HP Cloud WordPress server public-address.
So, you will need to do a find and replace in the wordpress.sql file as follows:

sed -e 's/ec2-54-234-249-131.compute-1.amazonaws.com/15.185.102.121/g' wordpress.sql > wordpress-hp.sql

Obviously, you will need to customise the command above to use your own server addresses from Amazon EC2 and HP Cloud.
NB: This step can be problematic. If you need more detailed information on changing the server name of a WordPress installation and moving servers, see the detailed instructions here:
http://codex.wordpress.org/Moving_WordPress
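
One reason this step can go wrong is that WordPress stores some settings (widgets and some plugin options) as serialized PHP, where each string is prefixed with its length; a plain sed that changes the length of the URL will corrupt those records. A rough way to check whether your dump contains any such entries is a grep for serialized strings holding the old address (the pattern below is only a heuristic):

grep -c 's:[0-9]*:"http://ec2-54-234-249-131' wordpress.sql

If that count is non-zero, one of the tools described on the codex page above is a safer bet than a plain sed.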

Now upload the database backup file, fixed with the new server public-address, to your new HP Cloud MySQL server:

juju scp wordpress-hp.sql mysql/0: -e hpcloud

Now let’s import the data into your wordpress database on HP Cloud.
First we need to log in to the database server, as before:

juju ssh mysql/0 -e hpcloud

Now let’s get the root password for the Database:

sudo cat /var/lib/juju/mysql.passwd

Now we can import the data using:

sudo mysql -p wordpress < wordpress-hp.sql

When you are prompted for the password, enter the one you retrieved in the previous step.
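
If you want to confirm the import and the address rewrite worked before moving on, you can query the standard WordPress options table while you are still logged in to the MySQL unit; the table and option names below assume a default WordPress install:

sudo mysql -p wordpress -e "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl', 'home');"

Both values should show your HP Cloud public-address. Then exit the login as before.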

Finally you will still need to expose the wordpress instance on HP Cloud to the outside world using:

juju expose wordpress -e hpcloud

Now connect to your new wordpress blog migrated to HP Cloud from Amazon by connecting to the public-address of the wordpress node.
You can find the address from the output of juju status as follows:

juju status wordpress -e hpcloud

The output should look like this:

machines:
  3:
    agent-state: running
    dns-name: 15.185.102.121
    instance-id: 1064677
    instance-state: running
services:
  wordpress:
    charm: cs:precise/wordpress-11
    exposed: true
    relations:
      db:
      - mysql
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        machine: 3
        open-ports:
        - 80/tcp
        public-address: 15.185.102.121

Now connect to http://15.185.102.121/ and your blog is now hosted on HP Cloud.

Read more
Mark Baker

As clouds for IT infrastructure become commonplace, admins and devops need quick, easy ways of deploying and orchestrating cloud services.  As we mentioned in October, Ubuntu now has a GUI for Juju, the service orchestration tool for server and cloud. In this post we wanted to expand a bit more on how Juju makes it even easier to visualise and keep track of complex cloud environments.

Juju provides the ability to rapidly deploy cloud services on OpenStack, HP Cloud, AWS and other platforms using a library of 100 ‘charms’ which cover applications from node.js to Hadoop. Juju GUI makes the Juju command line interface even easier, giving the ability to deploy, manage and track progress visually as your cloud grows (or shrinks).

Juju GUI is easy and totally intuitive.  To start, you simply search for the service you want on the Juju GUI charm search bar (top right on the screen).  In this case I want to deploy WordPress to host my blog site.  I have the chance to alter the WordPress settings, and with a few clicks the service is ready.  It's displayed as an icon on the GUI.

I then want a mysql service to go alongside.  Again I search for the charm, set the parameters (or accept the defaults) and away we go.

It's even easier to build the relations between these services by pointing and clicking. Juju knows that the relationship needs a suitable database link.

I can expose WordPress to users by setting the expose flag (at the bottom of the settings screen) to on. To scale up WordPress I can add more units, creating identical copies of the WordPress deployment, including any relationships.  I have selected ten in total, and this shows in the centre of the wordpress icon.

And that's it.

For a simple cloud, the Juju command line or other tools might be sufficient.  But as your cloud grows, the Juju GUI will be a wonderful way not only to provision and orchestrate services, but more importantly to validate and check that you have the correct links and relationships.  It's an ideal way to replicate and scale cloud services as you need.

For more details of Juju, go to juju.ubuntu.com.  To try Juju GUI for yourself, go to http://uistage.jujucharms.com:8080/

Read more
Matt Fischer

Getting Juju With It

At the UDS in Copenhagen I finally had time to attend a session on Juju Charms. I knew the theory of Juju, which is that it allows you to easily deploy and link services on public clouds, locally, or even on bare metal, but I never had time to try it out. The Charm School (registration required) session in Copenhagen really showed me the power of what Juju can give you. For example, when I first set up my blog, I had to find a webhost, get an ssh account, download WordPress, install it and its dependencies, set up mysql, configure WordPress, debug why they weren’t communicating, etc. It was super annoying and took way too long. Now, imagine you want to set up ten blogs, or ten instances of couchdb, or one hundred, or one thousand, and it quickly becomes untenable.  With juju, setting up a blog is as simple as:

  • juju deploy wordpress
  • juju deploy mysql
  • juju add-relation wordpress mysql
  • juju expose wordpress

A few minutes later, and I have a functioning WordPress install. For more complex setups and installs Juju helps to manage the relationships between charms and sends events that the charms react to. This makes it easy to add and remove services like haproxy and memcached to an existing webapp. This interaction between charms implies that the more charms that are available the more useful they all become; the network effect applies to charms!

So after I got home, Charm School had left me energized and ready to write a charm, but I didn’t have any great ideas, until I remembered an app that I’ve used before called Tracks. Tracks is a GTD app, in other words, a fancy todo list. I’d used it hosted before, but my free host went offline and I lost all my to do items. Hosting my own would be much safer. So I started working on a Tracks charm.

If you need an idea for a charm, think about what tools you use that you have to setup, what software have you installed and configured recently? If you need an idea and nothing stands out, you can check out the list of “Charm Needed” bugs. Actually you should check that list regardless to make sure nobody else is already writing the same one.

With an idea in hand, I sat down to write my charm. Step one is the documentation, most of which was contained on this page: “Writing a Charm”. I fully expected to spend three weeks learning a new programming language with arcane black magic commands, but I was pleasantly surprised to learn that you can write a charm in any language you want. Most charms seem to be shell scripts or Python and my charm was simple enough that I wrote it in bash.
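
To give a flavour of what that looks like, here is a minimal sketch of a bash charm (not the actual Tracks charm – the package names and relation settings are purely illustrative, and a real charm also needs a metadata.yaml and a README). The install hook runs once when a unit is first deployed:

#!/bin/bash
# hooks/install - runs once when the unit is first deployed
set -e
apt-get update
apt-get install -y apache2 php5 mysql-client   # illustrative dependencies only
# tell juju which port to open when the service is exposed
open-port 80/tcp

Relation hooks are how a charm reacts to the events mentioned earlier; for example, a db-relation-changed hook might read its database settings from the related mysql service (the exact setting names depend on the interface the related charm provides):

#!/bin/bash
# hooks/db-relation-changed - runs whenever the database relation changes
set -e
database=$(relation-get database)
user=$(relation-get user)
password=$(relation-get password)
host=$(relation-get private-address)
juju-log "Configuring the application to use database ${database} on ${host}"
# ...write the application's database configuration here...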

During the process of charm writing you may have some questions, and there’s plenty of help to be had. First, the examples that are contained in the juju trunk are OLD and I wouldn’t recommend you follow them. They are missing things like README files and don’t expose http interfaces, which was requested for my charm. Instead I’d recommend you pull the wordpress, mysql, and drupal charms from the charm store. If the examples aren’t enough, you can always ask in #juju on freenode or use askubuntu.com. Once your charm works, you can submit it for review. You’ll probably learn a lot during the review, every person I’ve talked to has.

Finally after a bit of work off and on, my charm was done! I submitted it for review, made a few fixes and it made it into the store.

I can now have a Tracks instance up and running in just a few minutes.

I’ve barely scratched the surface here with this post, but I hope someone will be energized to go investigate charms and write one. Charms do not use black magic and you don’t need to learn a new language to write one. Help is available if you need it and we’d love to have your contributions.
If you go write a charm please comment here and let me know!

Read more
Mark Baker

Hardened sysadmins and operators often spurn graphical user interfaces (GUIs) as being slow, cumbersome, unscriptable and inflexible. GUIs are for wimps, right?

Well, I’m not going to argue – and certainly, command line interfaces (CLIs) have their benefits, for those comfortable using them. But we are seeing a pronounced change in the industry, as developers start to take a much greater interest in the deployment and operation of flexible, elastic services in scale out or cloud environments. Whilst many of these new ‘devops’ are happy with a CLI, others want to be able to visualise their environment. In the same way that IDEs are popular, being able to see a representation of the services that are running and how they are related can prove extremely valuable. The same goes for launching new services or removing existing ones.

This is why, last week, as part of the new Ubuntu 12.10 release, we announced a GUI for Juju, the Ubuntu service orchestration tool for server and cloud.
The new Juju GUI does all these things and more. For those of you unfamiliar with it, Juju uses a service definition file known as a ‘charm’. Much of the magic in Juju comes from the collective expertise that has gone into developing the charm. It enables you to deploy complex services without intimate knowledge of the best practice associated with that service. Instead, all that deployment expertise is encapsulated in the charm.
Now, with the Juju GUI, it gets even easier. You can select services from a library of nearly 100 charms, covering applications from node.js to Hadoop. And you can deploy them live on any of the providers that Juju supports – OpenStack, HP Cloud, Amazon Web Services and Ubuntu’s Metal-as-a-Service. You can add relations between services while they are running, explore the load on them, upgrade them or destroy them. At the OpenStack Summit in San Diego this year, Mark Shuttleworth even used it to upgrade a running* OpenStack Cloud from Essex to Folsom.
Since the Juju GUI was first shown, the interest and feedback has been tremendous. It certainly seems to make the magic of Juju – and what it can do for people – easier to see. If you haven’t seen it already, check out the screen shots below or visit http://uistage.jujucharms.com:8080/

Because as we’ve always known, a picture really is worth a thousand words.

 

[Screenshot: The Juju GUI]

*Running on Ubuntu Server, obviously.

Read more
Robert Ayres

In my previous post, we added Memcached to our cluster.  In this post, I’ll write a bit more about the Tomcat configuration options that are available including JMX monitoring.  I’ll also show how easy it is to enable session clustering.

[Diagram: Java cluster with JMX and session clustering]

Configuration and Monitoring

All charms come with many options available for configuration.  Each is selected to allow the same tuning you would typically perform on a manually deployed machine.  Configuration options are shown per charm when browsing the Charm Store (jujucharms.com/charms/precise).  The Tomcat charm provides numerous options.  For example, to tweak the JVM options of a running service:

juju set tomcat "java_opts=-Xms768M -Xmx1024M"

This sets the Java heap to a minimum and maximum of 768MB and 1024MB respectively.  If you are debugging an application, you may also set:

juju set tomcat "java_opts=-Xms768M -Xmx1024M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=webapps"

This will create a ‘.hprof’ Java heap dump, which you can inspect with VisualVM or jhat, each time an OutOfMemoryError occurs.
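
If you want to pull such a dump back to your workstation for analysis, juju scp works here too; the exact file name and location will depend on Tomcat's working directory on the unit, so the path below is only illustrative:

juju scp tomcat/0:webapps/java_pid1234.hprof ./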

To open a remote debugger:

juju set tomcat debug_enabled=True

This will open a JDWP debugger on port 8000 that you can use to step through code from Eclipse, Netbeans etc.  (Note: The debugger is never exposed to the Internet, so you need to access it through an ssh tunnel – ‘ssh -L 8000:localhost:8000 ubuntu@xxx.compute.amazonaws.com’, then connect your IDE to localhost port 8000).

A useful part of the JVM is JMX monitoring.  To enable JMX:

juju set tomcat jmx_enabled=True
juju set tomcat "jmx_control_password=<password>"
juju set tomcat "jmx_monitor_password=<password>"

This will start a remote JMX listener on ports 10001 and 10002 and set passwords for the ‘monitorRole’ and ‘controlRole’ users (not setting a password disables that account).  You can now open VisualVM or JConsole to connect to the remote JMX instance (screenshot below).  (Note: JMX is never exposed to the Internet, so you need to access it through an ssh tunnel – ‘ssh -L 10001:localhost:10001 -L 10002:localhost:10002 ubuntu@xxx.compute.amazonaws.com’, then connect your JMX client to port 10001).  You can easily expose your own application-specific MBeans for monitoring by adding them to the platform MBeanServer.

[Screenshot: Juju JMX monitoring]

Options are applied to services and to all units under a service.  It isn’t possible to apply options to a specific unit.  So if you enable debugging, you enable it for all Tomcat units.  Same with Java options.

Options may also be applied at deployment time.  For example, to use Tomcat 6 (rather than the default Tomcat 7), create a ‘config.yaml’ file containing the following:

tomcat:
  tomcat_version: tomcat6

Then deploy:

juju deploy --config config.yaml cs:~robert-ayres/precise/tomcat

All units added via ‘add-unit’ will also be Tomcat 6.
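
You can check which configuration values a deployed service is actually using with juju get (this should work for any of the options mentioned above):

juju get tomcat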

Session Clustering

Previously, we set up a Juju cluster consisting of two Tomcat units behind HAProxy.  In this configuration, HTTP sessions exist only on individual Tomcat units.  For many production setups, using load balancer sticky sessions with non-replicated sessions is the most performant option where HTTP sessions are either not required or are expendable in the event of unit failure.  For setups concerned about availability of sessions, you can enable Tomcat session clustering on your Juju service, which will replicate session data between all units in the service.  Should a unit fail, any of the remaining units can pick up the subsequent requests with the previous session state.  To enable session clustering:

juju set tomcat cluster_enabled=True

We have two choices of how the cluster manages membership.  The preferred choice is using multicast traffic, but as EC2 doesn’t allow this, we must use static configuration.  This is the default, but you can switch between either method by changing the value of the ‘multicast’ option.  As with everything else deployed by Juju, any new units added or removed via ‘add-unit’ or ‘remove-unit’ are automatically included in or excluded from the cluster membership.  This also makes it easy to toggle clustering so that you can benchmark precisely what latency/throughput cost replicated sessions incur.
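
For example, on a network that does allow multicast you could switch membership discovery over with a single setting (assuming, as with the other boolean options above, that True enables multicast):

juju set tomcat multicast=True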

In summary, I’ve shown how you can tweak Tomcat configuration including enabling JMX monitoring.  We’ve also seen how to enable session clustering.  In my final post of the series, I shall show how you can add Solr indexing to your application.

Read more
Robert Ayres

In my previous post, I demonstrated deploying a Juju cluster with a sample Grails application.  Let’s expand our cluster by adding Memcached (see diagram below).

[Diagram: Java memcached cluster]

Deploy a Memcached service:

juju deploy memcached

Configure Tomcat to map Memcached location under a JNDI name:

juju set tomcat "jndi_memcached_config=param/Memcached:memcached"

This will map the ‘memcached’ service under the JNDI name ‘param/Memcached’.  Whilst Memcached is deploying, you can add the relation ahead of time:

juju add-relation tomcat memcached

We will use the excellent Java Memcached library Spy Memcached (code.google.com/p/spymemcached/) in our application.  Download the ‘spymemcached-x.x.x.jar’ and copy it to ‘juju-example/lib’.
Now edit ‘juju-example/grails-app/conf/spring/resources.groovy’ so it contains the following:

import net.spy.memcached.spring.MemcachedClientFactoryBean
import org.springframework.jndi.JndiObjectFactoryBean

beans = {

    memcachedClient(MemcachedClientFactoryBean) {
        servers = { JndiObjectFactoryBean jndi ->
            jndiName = 'param/Memcached'
            resourceRef = true
        }
    }
}

To make use of our Memcached client, let’s create a simple page counter:

(within 'juju-example' directory)
grails create-controller memcached-count

This will create ‘juju-example/grails-app/controllers/juju/example/MemcachedCountController.groovy’.  Edit it so it contains the following:

package juju.example

class MemcachedCountController {

    def memcachedClient

    def index() {
        def count = memcachedClient.incr('juju-example-count', 1, 1)
        render count
    }
}

When Memcached is deployed and associated with Tomcat, redeploy our application:

(within juju-example directory)
grails clean
grails war

(within parent directory)
cp juju-example/target/juju-example-0.1.war precise/j2ee-deployer/deploy
juju upgrade-charm --repository . juju-example

Once redeployed, you should be able to open http://xxx.compute.amazonaws.com/juju/memcachedCount and refresh the page to see an incrementing counter, stored in Memcached.
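
You can also exercise the counter from the command line and watch it increment; substitute your own public address for the placeholder:

for i in 1 2 3; do curl -s http://xxx.compute.amazonaws.com/juju/memcachedCount; echo; done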

As with our datasource connection, we utilise a JNDI lookup to instantiate our Memcached client using runtime configuration provided by Juju (a space-separated list of Memcached units, provided as a JNDI environment parameter).  With this structure, the developer has total control over integrating external services into their application.  If they want to use a different Memcached library, they can use the Juju configuration to instantiate a different class.

If we want to increase our cache capacity, we can add more units:

juju add-unit -n 2 memcached

This will deploy another 2 Memcached units.  Our Tomcats will update to reflect the new units and restart.
(Note: As you add Memcached units, our example counter may appear to reset as its Memcached key is hashed to another server).

We’ve added Memcached to our Juju cluster and seen how you can integrate external services within your application using JNDI values.
In my next post, I’ll write about how we can enable features of our existing cluster like JMX and utilise Tomcat session clustering.

Read more