Canonical Voices

Michael Hall

It was less than a month ago that we announced crossing the 10,000-user milestone for Ubuntu phones and tablets, and we’ve already reached another: 100,000 app downloads!

Downloads

The new Ubuntu store used by phones, tablets, and soon the desktop as well, provides app developers with some useful statistics: how many times their app was downloaded, which version was downloaded, and which country each download originated from. This is very useful as it lets the developer gauge how many users their app currently has, and how quickly they are updating to new versions. One side-effect of these statistics is that we can see how many total downloads there have been across all of the apps in the store, and this week we reached (and quickly passed) the 100,000th download.

Users

We’re getting close to having Ubuntu phones go on sale from our partners at Bq and Meizu, but there are still no devices on the market that shipped with Ubuntu.  This means that we’ve reached this milestone solely through developers and enthusiasts who have installed Ubuntu on one of their own devices (probably a Nexus device) or on the device emulator.

The continued growth in the download number validates the earlier milestone of 10,000 users: a large number of them are clearly still using Ubuntu on their device (or emulator) and keeping their apps up to date (the number counts both new app installs and updates). This means that not only are people trying Ubuntu already, many of them are sticking with it too.  Yet another data point in support of this is the 600 new unique users who have used the store since the last milestone announcement.

Pioneers

To supply all of these users with the apps they want, we’re continuing to build our community of app developers around Ubuntu. The first of these developers have already received their limited-edition t-shirts, and are listed on the Ubuntu Pioneers page of the developer portal.

There is still time to get your app published and claim your place on that page (and your t-shirt), but the spots are filling up fast, so don’t delay. Go to our Developer Portal and get started today; you could be only a few hours away from publishing your first app in the store!

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140701 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

We have rebased our Utopic kernel “unstable” branch to v3.16-rc3 and
uploaded to our ckt ppa (ppa:canonical-kernel-team/ppa). Please test.
I don’t anticipate an official v3.16 based upload to the archive until
further testing and baking has taken place.
—–
Important upcoming dates:
Thurs Jul 24 – 14.04.1 (~3 weeks away)
Thurs Jul 31 – 14.10 Alpha 2 (~4 weeks away)
Thurs Aug 07 – 12.04.5 (~5 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 1):

  • Lucid – Kernel Prep
  • Precise – Kernel Prep
  • Saucy – Kernel Prep
  • Trusty – Kernel Prep

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
pitti

We currently use completely different methods and tools for building test beds and running tests for Debian vs. Click packages, for normal uploads vs. CI airline landings vs. upstream project merge-proposal testing, and we keep lots of knowledge about Click package test metadata external and not easily accessible/discoverable.

Today I released autopkgtest 3.0 (and 3.0.1 with a few minor updates) which is a major milestone in unifying how we run package tests both locally and in production CI. The goals of this are:

  • Keep all test metadata, such as test dependencies, commands to run the test, etc., in the project/package source itself instead of keeping it external. We have had that for a long time for Debian packages with DEP-8 and debian/tests/control, but not yet for Ubuntu’s Click packages.
  • Use the same tools for Debian and Click packages to simplify what developers have to know about and to reduce the amount of test infrastructure code to maintain.
  • Use the exact same testbeds and test runners in production CI as developers use locally, so that you can reproduce and investigate failures.
  • Re-use the existing autopkgtest capabilities for using various kinds of testbeds, and conversely, making all new testbed types immediately available to all package formats.
  • Stop putting tests into the Ubuntu archive as packages (such as mediaplayer-app-autopilot). This just adds packaging and archive-space overhead, and also makes updating tests harder and slower than it should be.

So, let’s dive into the new features!

New runner: adt-virt-ssh

We want to run tests on real hardware such as a laptop of a particular brand with a particular graphics card, or an Ubuntu phone. We also want to restructure our current CI machinery to run tests on a real OpenStack cloud and gradually get rid of our hand-maintained QA lab with its test machines. While these use cases seem rather different, they both have in common that there is an already existing machine which is pretty much only accessible with ssh. Once you have an ssh connection, they look pretty much the same; you just need different initial setup (like fiddling with adb, calling nova boot, etc.) to prepare them.

So the new adt-virt-ssh runner factorizes all the common bits such as communicating with adt-run, auto-detecting sudo availability, doing SSH connection sharing etc., and delegates the target-specific bits to a “setup script”. E. g. we could specify --setup-script ssh-setup-nova or --setup-script ssh-setup-adb, which would then get called with “open” at the appropriate time by adt-run; the script calls the nova commands to create a VM, or runs a few adb commands to install/start ssh and install the public key. Then autopkgtest does its thing, and eventually calls the script with “cleanup” again. The actual protocol is a bit more involved (see the manpage), but that’s the general idea.
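The call convention above can be sketched as follows. This is a hedged illustration, not adt-run’s real implementation (the real protocol passes more information than a single action argument; see the adt-virt-ssh manpage), and the stand-in script below merely plays the role that ssh-setup-nova or ssh-setup-adb would fill:

```python
import os
import stat
import subprocess
import tempfile

# Hedged sketch of the protocol: the runner invokes the setup script with
# an action such as "open" to provision the target and "cleanup" to tear
# it down, reading whatever the script reports on stdout.
def run_setup_script(script_path, action):
    result = subprocess.run([script_path, action],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# A stand-in setup script; a real one would boot a nova VM or talk to adb.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write('#!/bin/sh\necho "action: $1"\n')
    script = f.name
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)

print(run_setup_script(script, "open"))     # action: open
print(run_setup_script(script, "cleanup"))  # action: cleanup
```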

autopkgtest now ships ready-made scripts for these two use cases. So you could e. g. run the libpng tests in a temporary cloud VM:

# if you don't have one, create it with "nova keypair-create"
$ nova keypair-list
[...]
| pitti | 9f:31:cf:78:50:4f:42:04:7a:87:d7:2a:75:5e:46:56 |

# find a suitable image
$ nova image-list 
[...]
| ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 | Ubuntu Server 14.04 LTS (amd64 20140607.1) - Partner Image                         | ACTIVE |  

$ nova flavor-list 
[...]
| 100 | standard.xsmall  | 1024      | 10   | 10        |      | 1     | 1.0         | N/A       |

# now run the tests: please be patient, this takes a few mins!
$ adt-run libpng --setup-commands="apt-get update" --- ssh -s /usr/share/autopkgtest/ssh-setup/nova -- \
   -f standard.xsmall -i ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 -k pitti
[...]
adt-run [16:23:16]: test build:  - - - - - - - - - - results - - - - - - - - - -
build                PASS
adt-run: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ tests done.

Please see man adt-virt-ssh for details on how to use it and how to write setup scripts. There is also a commented /usr/share/autopkgtest/ssh-setup/SKELETON template for writing your own for your use cases. You can also skip the setup script entirely and just specify user and host name as options, but please remember that the ssh runner cannot clean up after itself, so never use this on important machines which you can’t reset/reinstall!

Test dependency installation without apt/root

Ubuntu phones with system images have a read-only file system where you can’t install test dependencies with apt. A similar case is using the “null” runner without root. When apt-get install is not available, autopkgtest now has a reduced fallback mode: it downloads the required test dependencies, unpacks them into a temporary directory, and runs the tests with $PATH, $PYTHONPATH, $GI_TYPELIB_PATH, etc. pointing to the unpacked temp dir. Of course this only works for packages which are relocatable in that way, i. e. libraries, Python modules, or command line tools; it will totally fail for things which look for config files, plugins etc. in hardcoded directory paths. But it’s good enough for the purposes of Click package testing such as installing autopilot, libautopilot-qt etc.
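The fallback can be pictured as building a modified environment for the test. The sketch below is illustrative only (it is not autopkgtest’s actual code, and the exact subdirectories under the unpack dir are assumptions): the search-path variables are prefixed with locations inside the temporary unpack directory, so relocatable dependencies are found there first:

```python
import os

# Illustrative sketch of the apt-less fallback: prepend unpack-dir paths
# to the search-path variables. Directory layout below is an assumption.
def fallback_env(unpack_dir, base_env):
    env = dict(base_env)
    prefixes = {
        "PATH": os.path.join(unpack_dir, "usr/bin"),
        "PYTHONPATH": os.path.join(unpack_dir, "usr/lib/python3/dist-packages"),
        "GI_TYPELIB_PATH": os.path.join(unpack_dir, "usr/lib/girepository-1.0"),
    }
    for var, prefix in prefixes.items():
        old = env.get(var)
        # Keep any pre-existing value, but search the unpacked tree first.
        env[var] = prefix + (os.pathsep + old if old else "")
    return env

env = fallback_env("/tmp/adt-deps", {"PATH": "/usr/bin:/bin"})
```

This is also why the fallback only helps relocatable packages: anything that hardcodes /usr paths never consults these variables.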

Click package support

autopkgtest now recognizes click source directories and *.click package arguments, and introduces a new test metadata specification syntax in the click package manifest. This is similar in spirit and capabilities to DEP-8 debian/tests/control, except that it uses JSON:

    "x-test": {
        "unit": "tests/unittests",
        "smoke": {
            "path": "tests/smoketest",
            "depends": ["shunit2", "moreutils"],
            "restrictions": ["allow-stderr"]
        },
        "another": {
            "command": "echo hello > /tmp/world.txt"
        }
    }
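Reading such a manifest is straightforward. The helper below is hypothetical (it is not part of autopkgtest’s API) and only demonstrates the shape of the data: string values act as shorthand for a test whose "path" is that string, while dict values are full specifications:

```python
import json

# Hypothetical helper, not autopkgtest API: enumerate the tests declared
# in a click manifest's "x-test" map, normalising string shorthand
# ("name": "path") into the full dictionary form.
def list_click_tests(manifest_json):
    tests = {}
    for name, spec in json.loads(manifest_json).get("x-test", {}).items():
        if isinstance(spec, str):
            spec = {"path": spec}  # shorthand: value is just a path
        tests[name] = spec
    return tests

manifest = """{
    "name": "com.example.app",
    "x-test": {
        "unit": "tests/unittests",
        "smoke": {
            "path": "tests/smoketest",
            "depends": ["shunit2", "moreutils"],
            "restrictions": ["allow-stderr"]
        }
    }
}"""
tests = list_click_tests(manifest)
```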

For convenience, there is also some magic to make running autopilot tests particularly simple. E. g. our existing click packages usually specify something like

    "x-test": {
        "autopilot": "ubuntu_calculator_app"
    }

which is enough to “do what I mean”, i. e. implicitly add the autopilot test depends and run autopilot with the specified test module name. You can of course also specify your own dependencies, commands, restrictions, etc.
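Conceptually, that shorthand expands to a full test spec. The sketch below is a hedged illustration only; the real expansion lives inside autopkgtest, and the dependency package name used here is an assumption, not something the post confirms:

```python
# Hedged sketch of the "do what I mean" expansion: a bare
# "autopilot": "<module>" entry behaves roughly like a full spec with an
# autopilot command plus the autopilot test depends.
def expand_autopilot_shorthand(module):
    return {
        "command": "autopilot run " + module,
        "depends": ["autopilot-touch"],  # assumed package name
    }

spec = expand_autopilot_shorthand("ubuntu_calculator_app")
```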

So with this, and the previous support for non-apt test dependencies and the ssh runner, we can put all this together to run the tests for e. g. the Ubuntu calculator app on the phone:

$ bzr branch lp:ubuntu-calculator-app
# built straight from that branch; TODO: where is the official download URL?
$ wget http://people.canonical.com/~pitti/tmp/com.ubuntu.calculator_1.3.283_all.click
$ adt-run ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- \
      ssh -s /usr/share/autopkgtest/ssh-setup/adb
[..]
Traceback (most recent call last):
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 93, in test_divide_with_infinity_length_result_number
    self._assert_result("0.33333333")
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 63, in _assert_result
    self.main_view.get_result, Eventually(Equals(expected_result)))
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: After 10.0 seconds test failed: '0.33333333' != '0.3'

Ran 33 tests in 295.586s
FAILED (failures=1)

Note that the current adb ssh setup script deals with some things like applying the autopilot click AppArmor hooks and disabling screen dimming, but it does not do the first-time setup (connecting to network, doing the gesture intro) and unlocking the screen. These are still on the TODO list, but I need to find out how to do these properly. Help appreciated!

Click app tests in schroot/containers

But, that’s not the only thing you can do! autopkgtest has all these other runners, so why not try and run them in a schroot or container? To emulate the environment of an Ubuntu Touch session I wrote a --setup-commands script:

adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
    ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- schroot utopic

This will actually work in the sense of running (and passing) the autopilot tests, but the run will still fail due to a lot of libust[11345/11358]: Error: Error opening shm /lttng-ust-wait... warnings on stderr. I don’t know what these mean; I just know that I also see them occasionally on the phone itself.

I also wrote another setup-commands script which emulates “read-only apt”, so that you can test the “unpack only” fallback. So you could prepare a container with click and the App framework preinstalled (so that it doesn’t always take ages to install them), starting from a standard adt-build-lxc container:

$ sudo lxc-clone -o adt-utopic -n click
$ sudo lxc-start -n click
  # run "sudo apt-get install click ubuntu-sdk-libs ubuntu-app-launch-tools" there
  # then "sudo powerdown"

# current apparmor profile doesn't allow remounting something read-only
$ echo "lxc.aa_profile = unconfined" | sudo tee -a /var/lib/lxc/click/config

Now that container has enough stuff preinstalled to be reasonably fast to set up, and the remaining test dependencies (mostly autopilot) work fine with the unpack/$*_PATH fallback:

$ adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
          --setup-commands /usr/share/autopkgtest/setup-commands/ro-apt \
          ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click \
          --- lxc -es click

This will successfully run all the tests, and provided you have apt-cacher-ng installed, it only takes a few seconds to set up. This might be a nice thing to do on merge proposals, if you don’t have an actual phone at hand, or don’t want to clutter it up.

autopkgtest 3.0.1 will be available in Utopic tomorrow (through autosyncs). If you can’t wait to try it out, download it from my people.c.c page.

Feedback appreciated!

Read more
Alan Pope

As previously blogged we’re inviting the community to hack on Core Apps and Community Apps this week.

All the details are in the post above, but here’s the executive summary:-

  • Hack days run from 30th June till 4th July
  • We’re hacking on the Core Apps: Music, Calendar, Clock, Weather & Calculator
  • In addition we’re also hacking on community apps including Beru Ebook Reader, OSM Touch mapping software, and Trojita email client
  • Join us in the #ubuntu-app-devel IRC channel on freenode, and on the ubuntu-phone mailing list to get started
  • Get all the details from the hack days wiki page

As always we welcome new contributions during the Hack Days, but also beyond that.

Read more
Colin Ian King

Linaro's idlestat is another useful tool in the arsenal of CPU monitoring utilities.  Idlestat monitors and captures CPU C-state and P-state transitions using the kernel Ftrace tracer, and outputs statistics based on entering/exiting each state for each CPU.  Idlestat also captures IRQ activity, showing which IRQs caused a CPU to exit an idle state - knowing why a processor came out of a deep C state is always a very useful way to help diagnose power consumption issues.

Using idlestat is easy. To capture 20 seconds of activity into a log file called example.log, run:

 sudo idlestat --trace -f example.log -t 20
...and this will display the per-CPU C-state, P-state and IRQ statistics for that run.

One can also take the saved log file and parse it later to recalculate the statistics, using:
 idlestat --import -f example.log

One can get the source from here and I've packaged version 0.3 (plus a bunch of minor fixes that will land in 0.4) for Ubuntu 14.10 Utopic Unicorn.

Read more
Daniel Holbach

The idea

At the last Ubuntu Online Summit, we had a session in which we discussed (among other things) the idea to make it easier for LoCo teams to share news, stories, pictures, events and everything else. A lot of great work is happening around the world already, but a lot of us don’t get to hear about it.

I took an action item to schedule a meeting to discuss the idea some more. The rough plan being that we add functionality to the LoCo Team Portal which allows teams to share their stories, which then end up on Planet Ubuntu.

We held the meeting last week, find the IRC logs here.

The mini-spec

Find the spec here and please comment either on the loco-contacts mailing list or below in the comments. If you are a designer, a web developer, know Django, or just generally want to get involved, please let us know! :)

We will discuss the spec some more, turn it into bug reports and schedule a series of hack days to work on this together.

Read more
Prakash Advani

The top Indian cities with tech skills are Bangalore, Pune, Hyderabad and Chennai.

These four cities rank ahead of the San Francisco Bay Area.

Read the complete report: http://blog.linkedin.com/2014/06/24/indias-got-tech-talent-cities-in-india-top-list-of-cities-attracting-technology-talent/

Read more
facundo

Argentine satellites


The third Argentine nanosatellite, "Tita" (named in honor of Tita Merello), was successfully launched a few days ago.

They are called "nanosatellites" because they are, precisely, much smaller (and cheaper) than "traditional" satellites. In particular, Tita weighs about 25 kilos, is equipped with three antennas, and carries a camera for taking high-definition photos and videos.

The Tita satellite, being installed in the launcher

It was developed by the Argentine company Satellogic, but we did not launch it into space ourselves (we don't have that capability yet); it was launched from the Russian city of Yasny. Its mission is to take images for three years, in collaboration with other nanosatellites: the already-launched Capitán Beto (named, obviously, after the Spinetta song) and Manolito (after the Mafalda character), plus another 15 satellites that Satellogic plans to launch over the next year.

But Tita is different from the two earlier ones, which weighed around two kilos. It is also a prototype, and it uses the same design and manufacturing strategies based on commercial off-the-shelf components (hardware-store springs, electronics from cell phones and personal computers), but this one can take images and videos at two-meter resolution. Essentially, the Satellogic people are doing the same thing a conventional satellite does, but at a price between one hundred and one thousand times lower.

In this rather interesting video we can see Lino Barañao (Minister of Science and Technology) and Emiliano Kargieman (CEO of Satellogic) telling us a bit about all this (and along the way you can see stages of the construction, and the offices, where quite a few PyAr people can be seen working!).



As a final note, here is an audio clip of Adrián Paenza talking about the satellites (in general) on the program La Mañana de Victor Hugo Morales.

Read more
Alan Pope

Ready for RTM*: Ubuntu Touch Core App Hack Days!

* Release to Manufacturing


We’re running another set of Core Apps Hack Days next week. Starting Monday 30th June through to Friday 4th July we’ll be hacking on Core Apps, getting them polished for our upcoming RTM (Release To Manufacture) images. The goal of our hack days is as always to implement missing features, fix bugs, get new developers involved in coding on Ubuntu using the SDK and to have some fun hacking on Free Software.

For those who’ve not seen the hack days before, it’s really simple. We get together from 09:00 UTC till 21:00 UTC in #ubuntu-app-devel on freenode IRC and hack on the Core Apps. We will be testing the apps to destruction, filing and triaging bugs, creating patches, discussing and testing proposals, and generally doing whatever we can to get these apps ready for RTM. It’s good fun, relaxed, and a great way to get started in Ubuntu app development with the SDK.

We’ll have developers hanging around to answer questions, and can call on platform and SDK experts for assistance when required. We focus on specific apps each day, but as always we welcome contributions to all the core apps both during the designated days, and beyond.

Not just Core Apps

This time around we’re also doing things a little differently. Typically we only focus attention on the main community maintained Core Apps we ship on our device images. For this set of Hack Days we’d like to invite 3rd party community app developers to bring their apps along as well and hack with us. We’re looking for developers who have already developed their Ubuntu app using the SDK but maybe need help with the “last mile”. Perhaps you have design questions, bugs or feature enhancements which you’d like to get people involved in.


We won’t be writing your code for you, but we can certainly help to find experienced people to answer your questions and advise of platform and SDK details. We’d expect you to make your code available somewhere, to allow contributions and perhaps enable some kind of bug tracker or task manager. It’s up to you to manage your own community app, we’re here to help though!

Get involved

If you’re interested in bringing your app to hack days, then get in touch with popey (Alan Pope) on IRC or via email [popey@ubuntu.com] and we’ll schedule it in for next week and get the word out.

You can find out more about the Core Apps Hack Days on the wiki, and can discuss this with us on IRC in #ubuntu-app-devel.

Read more
Michael

After recently writing the re-usable wsgi-app role for juju charms, I wanted to see how easy it would be to apply this to an existing service. I chose the existing ubuntu-reviews service because it’s open-source and something that we’ll probably need to deploy with juju soon (albeit only the functionality for reviewing newer click applications).

I’ve tried to include the workflow below that I used to create the charm, including the detailed steps and some mistakes, in case it’s useful for others. If you’re after more of an overview of using reusable Ansible roles in your juju charms, check out these slides.

1. Create the new charm from the bootstrap-wsgi charm

First grab the charm-bootstrap-wsgi code and rename it to ubuntu-reviews:

$ mkdir -p ~/charms/rnr/precise && cd ~/charms/rnr/precise
$ git clone https://github.com/absoludity/charm-bootstrap-wsgi.git
$ mv charm-bootstrap-wsgi ubuntu-reviews && cd ubuntu-reviews/

Then update the charm metadata to reflect the ubuntu-reviews service:

--- a/metadata.yaml
+++ b/metadata.yaml
@@ -1,6 +1,6 @@
-name: charm-bootstrap-wsgi
-summary: Bootstrap your wsgi service charm.
-maintainer: Developer Account <Developer.Account@localhost>
+name: ubuntu-reviews
+summary: A review API service for Ubuntu click packages.
+maintainer: Michael Nelson <michael.nelson@canonical.com>
 description: |
   <Multi-line description here>
 categories:

I then updated the playbook to expect a tarball named rnr-server.tgz, and gave it a more relevant app_label (which controls the directories under which the service is setup):

--- a/playbook.yml
+++ b/playbook.yml
@@ -2,13 +2,13 @@
 - hosts: localhost
 
   vars:
-    - app_label: wsgi-app.example.com
+    - app_label: click-reviews.ubuntu.com
 
   roles:
     - role: wsgi-app
       listen_port: 8080
-      wsgi_application: example_wsgi:application
-      code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
+      wsgi_application: wsgi:app
+      code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''

Although when deploying this service we’d be deploying a built asset published somewhere else, for development it’s easier to work with a local file in the charm – and the reusable wsgi-app role allows you to switch between those two options. So the last step here is to add some Makefile targets to enable pulling in the application code and creating the tarball (you can see the details on the git commit for this step).

With that done, I can deploy the charm with `make deploy` – expecting it to fail because the wsgi object doesn’t yet exist.

Aside: if you’re deploying locally, it’s sometimes useful to watch the deployment progress with:

$ tail -F ~/.juju/local/log/unit-ubuntu-reviews-0.log

At this point, juju status shows no issues, but curling the service does (as expected):

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080"
curl: (56) Recv failure: Connection reset by peer

Checking the logs (which, oh joy, are already setup and configured with log rotation etc. by the wsgi-app role) shows the expected error:

$ juju ssh ubuntu-reviews/0 "tail -F /srv/reviews.click.ubuntu.com/logs/reviews.click.ubuntu.com-error.log"
...
ImportError: No module named wsgi

2. Adding the wsgi, settings and urls

The current rnr-server code does have a django project setup, but it’s specific to the current setup (requiring a 3rd party library for settings, and serves the non-click reviews service too). I’d like to simplify that here, so I’m bundling a project configuration in the charm. First the standard (django-generated) manage.py, wsgi.py and near-default settings.py (which you can see in the actual commit).

An extra task is added which copies the project configuration into place during install/upgrade, and both the wsgi application location and the required PYTHONPATH are specified:

--- a/playbook.yml
+++ b/playbook.yml
@@ -7,7 +7,8 @@
   roles:
     - role: wsgi-app
       listen_port: 8080
-      wsgi_application: wsgi:app
+      python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
+      wsgi_application: clickreviewsproject.wsgi:application
       code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''
 
@@ -18,20 +19,18 @@
 
   tasks:
 
-    # Pretend there are some package dependencies for the example wsgi app.
     - name: Install any required packages for your app.
-      apt: pkg={{ item }} state=latest update_cache=yes
+      apt: pkg={{ item }}
       with_items:
-        - python-django
-        - python-django-celery
+        - python-django=1.5.4-1ubuntu1~ctools0
       tags:
         - install
         - upgrade-charm
 
     - name: Write any custom configuration files
-      debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
+      copy: src=django-project dest={{ application_dir }} owner={{ wsgi_user }} group={{ wsgi_group }}
       tags:
-        - config-changed
-        # Also any backend relation-changed events, such as databases.
+        - install
+        - upgrade-charm
       notify:
         - Restart wsgi

I can now run juju upgrade-charm and see that the application now responds, but with a 500 error highlighting the first (of a few) missing dependencies…

3. Adding missing dependencies

At this point, it’s easiest in my opinion to run debug-hooks:

$ juju debug-hooks ubuntu-reviews/0

and directly test and install missing dependencies, adding them to the playbook as you go:

ubuntu@michael-local-machine-1:~$ curl http://localhost:8080 | less
ubuntu@michael-local-machine-1:~$ sudo apt-get install python-missing-library -y (and add it to the playbook in your charm editor).
ubuntu@michael-local-machine-1:~$ sudo service gunicorn restart

Rinse and repeat. You may want to occasionally run upgrade-charm in a separate terminal:

$ juju upgrade-charm --repository=../.. ubuntu-reviews

to verify that you’ve not broken things, running the hooks as they are triggered in your debug-hooks window.

Other times you might want to destroy the environment to redeploy from scratch.

Sometimes you’ll have dependencies which are not available in the distro. For our deployments, we always ensure we have these included in our tarball (in some form). In the case of the reviews server, there was an existing script which pulls in a bunch of extra branches, so I’ve updated the Makefile target that builds the tarball to use it. But there was another dependency, south, which wasn’t included by that script, so for simplicity here I’m going to install it via pip (which you don’t want to do in reality – you’d update your build process instead).

You can see the extra dependencies in the commit.

4. Customise settings

At this point, the service deploys but errors due to a missing setting, which makes complete sense as I’ve not added any app-specific settings yet.

So, remove the vanilla settings.py and add a custom settings.py.j2 template, and add an extra django_secret_key option to the charm config.yaml:

--- a/config.yaml
+++ b/config.yaml
@@ -10,8 +10,13 @@ options:
     current_symlink:
         default: "latest"
         type: string
-        description: |
+        description: >
             The symlink of the code to run. The default of 'latest' will use
             the most recently added build on the instance.  Specifying a
             differnt label (eg. "r235") will symlink to that directory assuming
             it has been previously added to the instance.
+    django_secret_key:
+        default: "you-really-should-set-this"
+        type: string
+        description: >
+            The secret key that django should use for each instance.

--- /dev/null
+++ b/templates/settings.py.j2
@@ -0,0 +1,41 @@
+DEBUG = True
+TEMPLATE_DEBUG = DEBUG
+SECRET_KEY = '{{ django_secret_key }}'
+TIME_ZONE = 'UTC'
+ROOT_URLCONF = 'clickreviewsproject.urls'
+WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
+INSTALLED_APPS = (
+    'django.contrib.auth',
+    'django.contrib.contenttypes',
+    'django.contrib.sessions',
+    'django.contrib.sites',
+    'django_openid_auth',
+    'django.contrib.messages',
+    'django.contrib.staticfiles',
+    'clickreviews',
+    'south',
+)
...

Again, full diff in the commit.

With the extra settings now defined and one more dependency added, I’m now able to ping the service successfully:

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}

But that only works because the input validation is fired before anything tries to use the (non-existent) database…

5. Adding the database relation

I had expected this step to be simple – add the database relation, call syncdb/migrate, but there were three factors conspiring to complicate things:

  1. The reviews application uses a custom postgres “debversion” column type
  2. A migration for the older non-click reviews app service requires postgres superuser creds (specifically, reviews/0013_add_debversion_field.py)
  3. The clickreviews app, which I wanted to deploy on its own, is currently dependent on the older non-click reviews app, so it’s necessary to have both enabled and therefore the tables from both sync’d

So to work around the first point, I split the ‘deploy’ task into two tasks so I can install and setup the custom debversion field on the postgres units, before running syncdb:

--- a/Makefile
+++ b/Makefile
@@ -37,8 +37,17 @@ deploy: create-tarball
        @juju set ubuntu-reviews build_label=r$(REVIEWS_REVNO)
        @juju deploy gunicorn
        @juju deploy nrpe-external-master
+       @juju deploy postgresql
        @juju add-relation ubuntu-reviews gunicorn
        @juju add-relation ubuntu-reviews nrpe-external-master
+       @juju add-relation ubuntu-reviews postgresql:db
+       @juju set postgresql extra-packages=postgresql-9.1-debversion
+       @echo "Once the services are related, run 'make setup-db'"
+
+setup-db:
+       @echo "Creating custom debversion postgres type and syncing the ubuntu-reviews database."
+       @juju run --service postgresql 'psql -U postgres ubuntu-reviews -c "CREATE EXTENSION debversion;"'
+       @juju run --unit=ubuntu-reviews/0 "hooks/syncdb"
        @echo See the README for explorations after deploying.

I’ve separated syncdb out into a manual step (rather than running it in a hook like config-changed) so that I can run it after the postgresql service has been updated with the custom field, and also because juju doesn’t currently have a concept of a leader (yes, syncdb could be run on every unit, and might be safe, but I don’t like it conceptually).

--- a/playbook.yml
+++ b/playbook.yml
@@ -3,13 +3,13 @@
 
   vars:
     - app_label: click-reviews.ubuntu.com
+    - code_archive: "{{ build_label }}/rnr-server.tgz"
 
   roles:
     - role: wsgi-app
       listen_port: 8080
       python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
       wsgi_application: clickreviewsproject.wsgi:application
-      code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''
 
     - role: nrpe-external-master
@@ -25,6 +25,7 @@
         - python-django=1.5.4-1ubuntu1~ctools0
         - python-tz
         - python-pip
+        - python-psycopg2
       tags:
         - install
         - upgrade-charm
@@ -46,13 +47,24 @@
       tags:
         - install
         - upgrade-charm
+        - db-relation-changed
       notify:
         - Restart wsgi
+
     # XXX Normally our deployment build would ensure these are all available in the tarball.
     - name: Install any non-distro dependencies (Don't depend on pip for a real deployment!)
       pip: name={{ item.key }} version={{ item.value }}
       with_dict:
         south: 0.7.6
+        django-openid-auth: 0.2
       tags:
         - install
         - upgrade-charm
+
+    - name: sync the database
+      django_manage: >
+        command="syncdb --noinput"
+        app_path="{{ application_dir }}/django-project"
+        pythonpath="{{ code_dir }}/current/src"
+      tags:
+        - syncdb

To work around the second and third points above, I removed the ‘south’ django application from INSTALLED_APPS so that syncdb sets up the current tables without using migrations (while leaving south on the pythonpath, as some code still imports it), and added the old non-click reviews app to satisfy some of the imported dependencies.

By far the most complicated part of this change though, was the database settings:

--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -1,6 +1,31 @@
 DEBUG = True
 TEMPLATE_DEBUG = DEBUG
 SECRET_KEY = '{{ django_secret_key }}'
+
+{% if 'db' in relations %}
+DATABASES = {
+  {% for dbrelation in relations['db'] %}
+  {% if loop.first %}
+    {% set services=relations['db'][dbrelation] %}
+    {% for service in services %}
+      {% if service.startswith('postgres') %}
+      {% set dbdetails = services[service] %}
+        {% if 'database' in dbdetails %}
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql_psycopg2',
+        'NAME': '{{ dbdetails["database"] }}',
+        'HOST': '{{ dbdetails["host"] }}',
+        'USER': '{{ dbdetails["user"] }}',
+        'PASSWORD': '{{ dbdetails["password"] }}',
+    }
+        {% endif %}
+      {% endif %}
+    {% endfor %}
+  {% endif %}
+  {% endfor %}
+}
+{% endif %}
+
 TIME_ZONE = 'UTC'
 ROOT_URLCONF = 'clickreviewsproject.urls'
 WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
@@ -12,8 +37,8 @@ INSTALLED_APPS = (
     'django_openid_auth',
     'django.contrib.messages',
     'django.contrib.staticfiles',
+    'reviewsapp',
     'clickreviews',
-    'south',
 )
 AUTHENTICATION_BACKENDS = (
     'django_openid_auth.auth.OpenIDBackend',

Why are the database settings so complicated? I’m currently rendering all the settings on any config change as well as on any database relation change, which means I need to use the global juju state for the relation data, and as far as juju knows, there could be multiple database relations for this stack. The current charm helpers give you this state as a dict (‘relations’ above). The ‘db’ key of that dict contains all the database relations, so I’m grabbing the first database relation and taking the details from the postgres side of it (each relation has two keys: one for postgres, the other for the connecting service). Yuk (see below for a better solution).
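Since the nested template loops are hard to read, here’s a rough Python equivalent of the same traversal. The nested dict mirrors the shape the charm helpers expose as ‘relations’; the relation id, unit names, and values are made-up example data, not output from a real deployment:

```python
# Illustrative shape of the global juju relation state (example data only).
relations = {
    "db": {
        "db:1": {
            # The postgres side of the relation carries the credentials...
            "postgresql/0": {
                "database": "ubuntu-reviews",
                "host": "10.0.3.107",
                "user": "reviews",
                "password": "secret",
            },
            # ...while the connecting service's side does not.
            "ubuntu-reviews/0": {},
        }
    }
}


def first_postgres_details(relations):
    """Take the first 'db' relation and return the postgres side's details."""
    for dbrelation in relations.get("db", {}).values():
        for service, dbdetails in dbrelation.items():
            if service.startswith("postgres") and "database" in dbdetails:
                return dbdetails
        break  # mimic the template's loop.first: only look at the first relation
    return None


details = first_postgres_details(relations)
```

The template does exactly this walk, just inline in Jinja, which is why it ends up so deeply nested.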

Full details are in the commit. With those changes I can now use the service:

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]

A. Cleanup: Simplify database settings

In retrospect, the complicated database settings are only needed because the settings are rendered on any config change, not just when the database relation changes. The next step cleans this up by moving the database settings into a separate file that is only written by the db-relation-changed hook, where we can use the more convenient current_relation dict (given that I know this service has only one database relation).

This simplifies things quite a bit:

--- a/playbook.yml
+++ b/playbook.yml
@@ -45,11 +45,21 @@
         owner: "{{ wsgi_user }}"
         group: "{{ wsgi_group }}"
       tags:
-        - install
-        - upgrade-charm
+        - config-changed
+      notify:
+        - Restart wsgi
+
+    - name: Write the database settings
+      template:
+        src: "templates/database_settings.py.j2"
+        dest: "{{ application_dir }}/django-project/clickreviewsproject/database_settings.py"
+        owner: "{{ wsgi_user }}"
+        group: "{{ wsgi_group }}"
+      tags:
         - db-relation-changed
       notify:
         - Restart wsgi
+      when: "'database' in current_relation"
 
--- /dev/null
+++ b/templates/database_settings.py.j2
@@ -0,0 +1,9 @@
+DATABASES = {
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql_psycopg2',
+        'NAME': '{{ current_relation["database"] }}',
+        'HOST': '{{ current_relation["host"] }}',
+        'USER': '{{ current_relation["user"] }}',
+        'PASSWORD': '{{ current_relation["password"] }}',
+    }
+}

--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -2,29 +2,7 @@ DEBUG = True
 TEMPLATE_DEBUG = DEBUG
 SECRET_KEY = '{{ django_secret_key }}'
 
-{% if 'db' in relations %}
-DATABASES = {
-  {% for dbrelation in relations['db'] %}
-  {% if loop.first %}
-    {% set services=relations['db'][dbrelation] %}
-    {% for service in services %}
-      {% if service.startswith('postgres') %}
-      {% set dbdetails = services[service] %}
-        {% if 'database' in dbdetails %}
-    'default': {
-        'ENGINE': 'django.db.backends.postgresql_psycopg2',
-        'NAME': '{{ dbdetails["database"] }}',
-        'HOST': '{{ dbdetails["host"] }}',
-        'USER': '{{ dbdetails["user"] }}',
-        'PASSWORD': '{{ dbdetails["password"] }}',
-    }
-        {% endif %}
-      {% endif %}
-    {% endfor %}
-  {% endif %}
-  {% endfor %}
-}
-{% endif %}
+from database_settings import DATABASES
 
 TIME_ZONE = 'UTC'

With those changes, the deploy still works as expected:
$ make deploy
$ make setup-db
Creating custom debversion postgres type and syncing the ubuntu-reviews database.
...
$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]

Summary

Reusing the wsgi-app ansible role allowed me to focus just on the things that make this service different from other wsgi-app services:

  1. Creating the tarball for deployment (not normally part of the charm, but useful for developing the charm)
  2. Handling the application settings
  3. Handling the database relation and custom setup

The wsgi-app role already provides the functionality to deploy and upgrade code tarballs via a config setting (i.e. `juju set ubuntu-reviews build_label=r156`) from a configured URL. It also provides the complete directory layout, user and group setup, log rotation, etc. All of that comes for free.

Things I’d do if I was doing this charm for a real deployment now:

  1. Use the postgres db-admin relation for syncdb/migrations (thanks Simon) and ensure that the normal app uses the non-admin settings (and can’t change the schema, etc.). This functionality could be factored out into another reusable role.
  2. Remove the dependency on the older non-click reviews app so that the clickreviews service can be deployed in isolation.
  3. Ensure all dependencies are either available in the distro or included in the tarball.

Filed under: cloud automation, juju

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140624 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

We have rebased our Utopic kernel “unstable” branch to v3.16-rc2. We
are preparing initial packages and performing some test builds and DKMS
package testing. I do not anticipate a v3.16 based upload until it has
undergone some additional widespread baking and testing.
—–
Important upcoming dates:
Thurs Jun 26 – Alpha 1 (2 days away)
Fri Jun 27 – Kernel Freeze for 12.04.5 and 14.04.1 (3 days away)
Thurs Jul 31 – Alpha 2 (~5 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (May. 6):

  • Lucid – Testing
  • Precise – Testing
  • Saucy – Testing
  • Trusty – Testing

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    cycle: 08-Jun through 28-Jun
    ====================================================================
    06-Jun Last day for kernel commits for this cycle
    08-Jun – 14-Jun Kernel prep week.
    15-Jun – 21-Jun Bug verification & Regression testing.
    22-Jun – 28-Jun Regression testing & Release to -updates.

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Read more
Dustin Kirkland

It's that simple.
It was about 4pm on Friday afternoon, when I had just about wrapped up everything I absolutely needed to do for the day, and I decided to kick back and have a little fun with the remainder of my work day.

 It's now 4:37pm on Friday, and I'm now done.

Done with what?  The Yo charm, of course!

The Internet has been abuzz this week about how the Yo app received a whopping $1 million in venture funding.  (Forbes notes that this is a pretty surefire indication that there's another internet bubble about to burst...)

It's little more than the first program any kid writes -- hello world!

Subsequently I realized that we don't really have a "hello world" charm.  And so here it is, yo.

$ juju deploy yo

Deploying a webpage that says "Yo" is hardly the point, of course.  Rather, this is a fantastic way to see the absolute simplest form of a Juju charm.  Grab the source, and go explore it yo-self!

$ charm-get yo
$ tree yo
├── config.yaml
├── copyright
├── hooks
│   ├── config-changed
│   ├── install
│   ├── start
│   ├── stop
│   ├── upgrade-charm
│   └── website-relation-joined
├── icon.svg
├── metadata.yaml
└── README.md

1 directory, 11 files



  • The config.yaml lets you set and dynamically change the service itself (the color and size of the font that renders "Yo").
  • The copyright is simply boilerplate GPLv3
  • The icon.svg is just a vector graphics "Yo."
  • The metadata.yaml explains what this charm is, how it can relate to other charms
  • The README.md is a simple getting-started document
  • And the hooks...
    • config-changed is the script that runs when you change the configuration -- basically, it uses sed to inline edit the index.html Yo webpage
    • install simply installs apache2 and overwrites /var/www/index.html
    • start and stop simply start and stop the apache2 service
    • upgrade-charm is currently a no-op
    • website-relation-joined sets and exports the hostname and port of this system
The website relation is very important here...  Declaring and defining this relation instantly lets me relate this charm with dozens of other services.  As you can see in the screenshot at the top of this post, I was able to easily relate the varnish website accelerator in front of the Yo charm.
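For a sense of how small these hooks really are, here’s a rough Python equivalent of the inline edit that config-changed performs with sed (the page markup and CSS values here are hypothetical; the real hook edits /var/www/index.html in place):

```python
import re


def apply_config(html, color, size):
    """Rewrite the inline style on the Yo page -- roughly what the
    config-changed hook does to index.html with sed."""
    html = re.sub(r"color:\s*[^;]+;", "color: %s;" % color, html)
    html = re.sub(r"font-size:\s*[^;]+;", "font-size: %s;" % size, html)
    return html


# Hypothetical page content, standing in for /var/www/index.html.
page = '<p style="color: black; font-size: 12px;">Yo</p>'
updated = apply_config(page, "purple", "48px")
```

The hook then notifies apache (or in the charm's case, simply relies on the page being re-read per request), which is why a config change takes effect immediately.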

Hopefully this simple little example might help you examine the anatomy of a charm for the first time, and perhaps write your own first charm!

Cheers,

Dustin

Read more
Colin Ian King

stress-ng: an updated system stress test tool

Recently added to Ubuntu 14.10 is stress-ng, a simple tool designed to stress various components of a Linux system.  stress-ng is a re-implementation of the original stress tool written by Amos Waterland, and adds various new ways to exercise a computer as well as a very simple "bogo-operation" set of metrics for each stress method.

stress-ng currently contains the following methods to exercise the machine:

  • CPU compute - just lots of sqrt() operations on pseudo-random values. One can also specify the % loading of the CPUs
  • Cache thrashing, a naive cache read/write exerciser
  • Drive stress by writing and removing many temporary files
  • Process creation and termination, just lots of fork() + exit() calls
  • I/O syncs, just forcing lots of sync() calls
  • VM stress via mmap(), memory write and munmap()
  • Pipe I/O, large pipe writes and reads that exercise pipe, copying and context switching
  • Socket stressing, much like the pipe I/O test but using sockets
  • Context switching between a pair of producer and consumer processes
Many of the above stress methods have additional configuration options.  Each stress method can be run by one or more child processes.

The --metrics option dumps the number of operations performed by each stress method, aka "bogo ops" -- bogo because they are a rough and unscientific metric.  One can specify how long to run a test either by test duration in seconds or by bogo ops.
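For example, a run combining several stressors with metrics might look like the following (option names are from the stress-ng manual page; the worker counts are arbitrary and should be tuned to your machine):

```shell
# Spin up 4 CPU workers at roughly 80% load plus 2 VM workers, stop after
# 10 seconds, and print the bogo ops metrics at the end. Guarded so the
# snippet is harmless on machines without stress-ng installed.
if command -v stress-ng >/dev/null 2>&1; then
    stress-ng --cpu 4 --cpu-load 80 --vm 2 --timeout 10s --metrics
else
    echo "stress-ng not installed; skipping"
fi
```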

I've tried to make stress-ng compatible with the older stress tool, but note that it is not guaranteed to produce identical results as the common test methods between the two tools have been implemented differently.

stress-ng has been useful for helping me measure different power-consuming loads.  It is also useful for exercising various thermald optimisation tweaks on one of my older machines.

For more information, consult the stress-ng manual page.  Be warned, this tool can make your system get seriously busy and warm!

Read more
Inayaili de León Persson

Latest from the web team — June 2014

We’re now almost half way through the year and only a few days until summer officially starts here in the UK!

In the last few weeks we’ve worked on:

  • Responsive ubuntu.com: we’ve finished publishing the series on making ubuntu.com responsive on the design blog
  • Ubuntu.com: we’ve released a hub for our legal documents and information, and we’ve created homepage takeovers for Mobile Asia Expo
  • Juju GUI: we’ve planned work for the next cycle, sketched scenarios based on the new personas, and launched the new inspector on the left
  • Fenchurch: we’ve finished version 1 of our new asset server, and we’ve started work on the new Ubuntu partners site
  • Ubuntu Insights: we’ve published the latest iteration of Ubuntu Insights, now with a dedicated press area
  • Chinese website: we’ve released the Chinese version of ubuntu.com

And we’re currently working on:

  • Responsive Day Out: I’m speaking at the Responsive Day Out conference in Brighton on the 27th on how we made ubuntu.com responsive
  • Responsive ubuntu.com: we’re working on the final tweaks and improvements to our code and documentation so that we can release to the public in the next few weeks
  • Juju GUI: we’re now starting to design based on the scenarios we’ve created
  • Fenchurch: we’re now working on Juju charms for the Chinese site asset server and Ubuntu partners website
  • Partners: we’re finishing the build of the new Ubuntu partners site
  • Chinese website: we’ll be adding cloud and server sections to the site
  • Cloud Installer: we’re working on the content for the upcoming Cloud Installer beta pages

If you’d like to join the web team, we are currently looking for a web designer and a front end developer!

Working on Juju personas and scenarios.

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.

Read more
Inayaili de León Persson

Making ubuntu.com responsive: final thoughts

This post is part of the series ‘Making ubuntu.com responsive‘.

There are several resources out there on how to create responsive websites, but they tend to go through the process in an ideal scenario, where the project starts from a blank slate.

That’s why we thought it would be nice to share the steps we took in converting our main website and framework, ubuntu.com, into a fully responsive site, with all the constraints that come from working on legacy code, with very little time available, while making sure that the site was kept up-to-date and responding to the needs of the business.

Before we started this process, the idea of converting ubuntu.com seemed like a herculean task. It was only possible because we divided the work into several stages and smaller projects, tightened the scope, and kept things simple.

We learned a lot throughout this past year or so, and there is a lot more we want to do. We’d love to hear about your experience of similar projects, suggestions on how we can improve, tools we should look into, books we should read, articles we should bookmark, and things we should try, so please do leave us your thoughts in the comments section.

Here is the list of all the posts in the series:

  1. Intro
  2. Setting the rules
  3. Making the rules a reality
  4. Pilot projects
  5. Lessons learned
  6. Scoping the work
  7. Approach to content
  8. Making our grid responsive
  9. Adapting our navigation to small screens
  10. Dealing with responsive images
  11. Updating font sizes and increasing readability
  12. Our Sass architecture
  13. Ensuring performance
  14. JavaScript considerations
  15. Testing on multiple devices

Note: I will be speaking about making ubuntu.com responsive at the Responsive Day Out conference in Brighton, on the 27th June. See you there!

Read more
mandel

dbus-cpp is the library that we are currently using to perform D-Bus calls from C++ within Ubuntu Touch. Those who are used to dbus-cpp know that the D-Bus interface can be defined via structures that are later checked at compile time. The following is an example of those definitions:

struct Geoclue
{
    struct Master
    {
        struct Create
        {
            inline static std::string name()
            {
                return "Create";
            } typedef Master Interface;
            inline static const std::chrono::milliseconds default_timeout() { return std::chrono::seconds{1}; }
        };
    };
 
    struct MasterClient
    {
        struct SetRequirements
        {
            inline static std::string name()
            {
                return "SetRequirements";
            } typedef MasterClient Interface;
            inline static const std::chrono::milliseconds default_timeout() { return std::chrono::seconds{1}; }
        };
        struct GetAddressProvider
        {
            inline static std::string name()
            {
                return "GetAddressProvider";
            } typedef MasterClient Interface;
            inline static const std::chrono::milliseconds default_timeout() { return std::chrono::seconds{1}; }
        };
        struct GetPositionProvider
        {
            inline static std::string name()
            {
                return "GetPositionProvider";
            } typedef MasterClient Interface;
            inline static const std::chrono::milliseconds default_timeout() { return std::chrono::seconds{1}; }
        };
    };
 
    struct Address
    {
        struct GetAddress
        {
            inline static std::string name()
            {
                return "GetAddress";
            } typedef Address Interface;
            inline static const std::chrono::milliseconds default_timeout() { return std::chrono::seconds{1}; }
        };
    };
 
    struct Position
    {
        struct GetPosition
        {
            inline static std::string name()
            {
                return "GetPosition";
            } typedef Position Interface;
            inline static const std::chrono::milliseconds default_timeout() { return std::chrono::seconds{1}; }
        };
        struct Signals
        {
            struct PositionChanged
            {
                inline static std::string name()
                {
                    return "PositionChanged";
                };
                typedef Position Interface;
                typedef std::tuple<int32_t, int32_t, double, double, double, dbus::types::Struct<std::tuple<int32_t, double, double>>> ArgumentType;
            };
        };
    };
};

This is quite a lot of work, and quite repetitive. We noticed that, and in some of our projects we used a set of macros to reduce the typing. We will now provide the following macros as part of dbus-cpp for other developers to use:

/*
 * Copyright © 2014 Canonical Ltd.
 *
 * This program is free software: you can redistribute it and/or modify it
 * under the terms of the GNU Lesser General Public License version 3,
 * as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 *
 * Authored by: Thomas Voß <thomas.voss@canonical.com>
 */
 
#ifndef CORE_DBUS_MACROS_H_
#define CORE_DBUS_MACROS_H_
 
#include <core/dbus/types/object_path.h>
 
#include <chrono>
#include <string>
 
#define DBUS_CPP_METHOD_WITH_TIMEOUT_DEF(Name, Itf, Timeout) \
 struct Name \
 { \
 typedef Itf Interface; \
 inline static const std::string& name() \
 { \
 static const std::string s{#Name}; \
 return s; \
 } \
 inline static const std::chrono::milliseconds default_timeout() { return std::chrono::milliseconds{Timeout}; } \
 };\

#define DBUS_CPP_METHOD_DEF(Name, Itf) DBUS_CPP_METHOD_WITH_TIMEOUT_DEF(Name, Itf, 2000)
 
#define DBUS_CPP_SIGNAL_DEF(Name, Itf, ArgType) \
 struct Name \
 { \
 inline static std::string name() \
 { \
 return #Name; \
 }; \
 typedef Itf Interface; \
 typedef ArgType ArgumentType; \
 };\

#define DBUS_CPP_READABLE_PROPERTY_DEF(Name, Itf, Type) \
 struct Name \
 { \
 inline static std::string name() \
 { \
 return #Name; \
 }; \
 typedef Itf Interface; \
 typedef Type ValueType; \
 static const bool readable = true; \
 static const bool writable = false; \
 }; \

#define DBUS_CPP_WRITABLE_PROPERTY_DEF(Name, Itf, Type) \
 struct Name \
 { \
 inline static std::string name() \
 { \
 return #Name; \
 }; \
 typedef Itf Interface; \
 typedef Type ValueType; \
 static const bool readable = true; \
 static const bool writable = true; \
 }; \

#endif // CORE_DBUS_MACROS_H_

Using those macros, the above example becomes:

struct Geoclue
{
    struct Master
    {
        DBUS_CPP_METHOD_DEF(Create, Master);
    };
 
    struct MasterClient
    {
        DBUS_CPP_METHOD_DEF(SetRequirements, MasterClient);
        DBUS_CPP_METHOD_DEF(GetAddressProvider, MasterClient);
        DBUS_CPP_METHOD_DEF(GetPositionProvider, MasterClient);
    };
 
    struct Address
    {
        DBUS_CPP_METHOD_DEF(GetAddress, Address);
    };
 
    struct Position
    {
        DBUS_CPP_METHOD_DEF(GetPosition, Position);
        struct Signals
        {
            // typedef'd so the comma-laden tuple type can be passed to the
            // macro as a single argument (DBUS_CPP_SIGNAL_DEF needs ArgType)
            typedef std::tuple<int32_t, int32_t, double, double, double, dbus::types::Struct<std::tuple<int32_t, double, double>>> PositionChangedArgs;
            DBUS_CPP_SIGNAL_DEF(PositionChanged, Position, PositionChangedArgs);
        };
    };
};

As you can see, the amount of code that needs to be written is a lot smaller, and a new developer can easily understand what is going on. Happy coding ;)

Read more
Michael Hall

Two years ago, my wife and I made the decision to home-school our two children.  It was the best decision we could have made: our kids are getting a better education, and with me working from home since joining Canonical I’ve been able to spend more time with them than ever before. We also get to try and do some really fun things, which is what sets the stage for this story.

Both my kids love science, absolutely love it, and it’s one of our favorite subjects to teach.  A couple of weeks ago my wife found an inexpensive USB microscope, which lets you plug it into a computer and take pictures using desktop software.  It’s not a scientific microscope, nor is it particularly powerful or clear, but for the price it was just right to add a new aspect to our elementary science lessons. All we had to do was plug it in and start exploring.

My wife has a relatively new (less than a year old) laptop running Windows 8.  It’s not high-end, but it’s all new hardware, new software, etc.  So when we plugged in our simple USB microscope… it failed.  As in, didn’t do anything.  Windows seemed to be trying to figure out what to do with it, over and over and over again, but to no avail.

My laptop, however, is running Ubuntu 14.04, the latest stable and LTS release.  My laptop is a couple of years old, but a classic: a Lenovo X220. It’s great hardware to go with Ubuntu and I’ve had nothing but good experiences with it.  So of course, when I decided to give our new USB microscope a try… it failed.  The connection was fine, and the log files clearly showed that it was being identified, but nothing was able to see it as a video input device or make use of it.

Now, if that’s where our story ended, it would fall right in line with a Shakespearean tragedy. But while both Windows and Ubuntu failed to “just work” with this microscope, both failures were not equal. Because the Windows drivers were all closed source, my options ended with that failure.

But on Ubuntu, the drivers were open; all I needed to do was find a fix. It took a while, but I eventually found a 2.5 year old bug report for an identical chipset to my microscope's, and somebody had proposed a code fix in the comments.  Now, the original reporter never responded to say whether or not the fix worked, and it was clearly never included in the driver code, but it was an opportunity.  I’m no kernel hacker, nor driver developer; in fact I probably shouldn’t be trusted to write any amount of C code at all.  But because I had Ubuntu, getting the source code of my current driver, as well as all the tools and dependencies needed to build it, took only a couple of terminal commands.  The patch was too old to apply cleanly to the current code, but it was easy enough to figure out where the changes should go, and after a couple of tries to properly build just the driver (and not the full kernel or every driver in it), I had a new binary kernel module that would load without error.  Then, when I plugged my USB microscope in again, it worked!
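Those "couple of terminal commands" vary by driver, but the general Ubuntu recipe looks roughly like this sketch. The gspca directory and module name are assumptions for illustration only (gspca is a common chipset family for cheap USB cameras); substitute whatever lsusb and dmesg identify for your device:

```shell
# Sketch of rebuilding a single in-tree driver on Ubuntu. Wrapped in a
# function so nothing runs until you invoke it by hand. The gspca paths
# and module name are illustrative assumptions, not this microscope's
# confirmed driver.
rebuild_usb_camera_driver() {
    # toolchain and headers matching the running kernel
    sudo apt-get install -y build-essential "linux-headers-$(uname -r)"
    # kernel source package (needs deb-src entries in sources.list)
    apt-get source "linux-image-$(uname -r)"

    # build only the one driver directory, not the whole kernel
    cd linux-*/drivers/media/usb/gspca
    make -C "/lib/modules/$(uname -r)/build" M="$PWD" modules

    # swap the freshly built module in for the stock one
    sudo rmmod gspca_main 2>/dev/null || true
    sudo insmod ./gspca_main.ko
}
# run interactively: rebuild_usb_camera_driver
```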

Red onion skin at 120x magnification.

People use open source for many reasons.  Some people use it because it’s free as in beer; for them it’s on the same level as freeware or shareware, and only the cost matters. For others it’s about ethics: they would choose open source even if it cost them money or didn’t work as well, because they feel it’s morally right, and that proprietary software is morally wrong. I use open source because of USB microscopes. Because when they don’t work, open source gives me a chance to change that.

Read more
Inayaili de León Persson

This post is part of the series ‘Making ubuntu.com responsive‘.

When working on a responsive project you’ll have to test on multiple operating systems, browsers and devices, whether you’re using emulators or the real deal.

Testing on the actual devices is preferable — it’s hard to emulate the feel of a device in your hand and the interactions and gestures you can do with it — and more enjoyable, but budget and practicability will never allow you to get a hold of and test on all the devices people might use to access your site.

We followed very simple steps that anyone can emulate to decide which devices we tested ubuntu.com on.

Numbers

You can quickly get a grasp of which operating systems, browsers and devices your visitors are using to get to your site just by looking at your analytics.

By doing this you can establish whether some of the more troublesome ones are worth investing time in. For example, if only 10 people accessed your site through Internet Explorer 6, perhaps you don’t need to provide a PNG fallback solution. But you might also get a few less pleasant surprises and find that a hard-to-cater-for browser is one of your customers’ preferred ones.

When we did our initial analysis we didn’t find any real surprises, however, due to the high volume of traffic that ubuntu.com sees every month even a very small percentage represented a large number of people that we just couldn’t ignore. It was important to keep this in mind as we defined which browsers, operating systems and devices to test on, and what issues we’d fix where.

Browsers (between 11 February and 13 March 2014)
Browser Percentage usage
Chrome 46.88%
Firefox 36.96%
Internet Explorer Total 7.54%
    11 41.15%
    8 22.96%
    10 17.05%
    9 14.24%
    7 2.96%
    6 1.56%
Safari 4.30%
Opera 1.68%
Android Browser 1.04%
Opera Mini 0.45%

Operating systems (between 11 February and 13 March 2014)
Operating system Percentage usage
Windows Total 52.45%
    7 60.81%
    8.1 14.31%
    XP 13.3%
    8 8.84%
    Vista 2.38%
Linux 35.4%
Macintosh 6.14%
Android Total 3.32%
    4.4.2 19.62%
    4.3 15.51%
    4.1.2 15.39%
iOS 1.76%

Mobile devices (between 12 May and 11 June 2014)
5.41% of total visits
Device Percentage usage (of 5.41%)
Apple iPad 17.33%
Apple iPhone 12.82%
Google Nexus 7 3.12%
LG Nexus 5 2.97%
Samsung Galaxy S III 2.01%
Google Nexus 4 2.01%
Samsung Galaxy S IV 1.17%
HTC M7 One 0.92%
Samsung Galaxy Note 2 0.88%
Not set 16.66%

After analysing your numbers, you can also define which combinations to test in (operating system and browser).

Go shopping

Based on the most popular devices people were using to access our site, we made a short list of the ones we wanted to buy first. We weren’t married to the analytics numbers, though: the idea was to cover a range of screen sizes and operating systems and expand as we went along.

  • Nexus 7
  • Samsung Galaxy S III
  • Samsung Galaxy Note II

We opted for not splashing on an iPad or iPhone, as there are quite a few around the office (retina and non-retina) and the money we saved meant we could buy other less common devices.

Part of our current device suite.

When we started to get a few bug reports from Android Gingerbread and Windows phone users, we decided we needed phones with those operating systems installed. This was our second batch of purchases:

  • Samsung Galaxy Y
  • Kindle Fire HD (Amazon was having a sale at the time we made the list!)
  • Nokia Lumia 520
  • LG G2

And, last but not least, we use a Nexus 4 to test Ubuntu on phones.

We didn’t spend much in any of our shopping sprees, but have managed to slowly build an ever-growing device suite that we can test our sites on, which is invaluable when working on responsive projects.

Alternatives

Some operating systems and browsers are trickier to test on in native devices. We have a BrowserStack account that we tend to use mostly to test on older Windows and Internet Explorer versions, although we also test Windows on virtual machines.

BrowserStack website.

Tools

We have to confess we’re not using any special software that synchronises testing and interactions across devices. We haven’t really felt the need for that yet, but at some point we should experiment with a few tools, so we’d like to hear suggestions!

Browser support

We prefer to think of different levels (or ways) of access to the content rather than browser support. The overall rule is that everyone should be able to get to the content, and bugs that obstruct access are prioritised and fixed.

As much as possible, and depending on resources available at the time, we try to fix rendering issues in browsers and devices used by a higher percentage of visitors: degree of usage defines the degree of effort fixing rendering bugs.

And, obviously: dowebsitesneedtolookexactlythesameineverybrowser.com.

Read the final post in this series: “Making ubuntu.com responsive: final thoughts”


Read more
David Murphy (schwuk)

Ars Technica has a great write up by Lee Hutchinson on our Orange Box demo and training unit.

You can’t help but have your attention grabbed by it!

As the comments are quick to point out – at the expense of the rest of the piece – the hardware isn’t the compelling story here. While you can buy your own, you can almost certainly hand-build an equivalent-or-better setup for less money [1], but Ars recognises this:

Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it’s making them to use as revenue driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud.

The Orange Box is about showing off MAAS & Juju, and – usually – OpenStack.

To see what Ars think of those, you should read the article.

I definitely echo Lee’s closing statement:

I wish my closet had an Orange Box in it. That thing is hella cool.


  1. Or make one out of wood like my colleague Gavin did! 

Read more