Canonical Voices

Michael

After recently writing the re-usable wsgi-app role for juju charms, I wanted to see how easy it would be to apply this to an existing service. I chose the existing ubuntu-reviews service because it’s open-source and something that we’ll probably need to deploy with juju soon (albeit only the functionality for reviewing newer click applications).

Below is the workflow I used to create the charm, including the detailed steps and some mistakes, in case it’s useful for others. If you’re after more of an overview of using reusable ansible roles in your juju charms, check out these slides.

1. Create the new charm from the bootstrap-wsgi charm

First grab the charm-bootstrap-wsgi code and rename it to ubuntu-reviews:

$ mkdir -p ~/charms/rnr/precise && cd ~/charms/rnr/precise
$ git clone https://github.com/absoludity/charm-bootstrap-wsgi.git
$ mv charm-bootstrap-wsgi ubuntu-reviews && cd ubuntu-reviews/

Then update the charm metadata to reflect the ubuntu-reviews service:

--- a/metadata.yaml
+++ b/metadata.yaml
@@ -1,6 +1,6 @@
-name: charm-bootstrap-wsgi
-summary: Bootstrap your wsgi service charm.
-maintainer: Developer Account <Developer.Account@localhost>
+name: ubuntu-reviews
+summary: A review API service for Ubuntu click packages.
+maintainer: Michael Nelson <michael.nelson@canonical.com>
 description: |
   <Multi-line description here>
 categories:

I then updated the playbook to expect a tarball named rnr-server.tgz, and gave it a more relevant app_label (which controls the directories under which the service is set up):

--- a/playbook.yml
+++ b/playbook.yml
@@ -2,13 +2,13 @@
 - hosts: localhost
 
   vars:
-    - app_label: wsgi-app.example.com
+    - app_label: click-reviews.ubuntu.com
 
   roles:
     - role: wsgi-app
       listen_port: 8080
-      wsgi_application: example_wsgi:application
-      code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
+      wsgi_application: wsgi:app
+      code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''

Although a real deployment of this service would use a built asset published somewhere else, for development it’s easier to work with a local file in the charm – and the reusable wsgi-app role allows you to switch between those two options. So the last step here is to add some Makefile targets for pulling in the application code and creating the tarball (you can see the details in the git commit for this step).
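
The real targets are in that commit; roughly, they boil down to commands like the following (the branch URL, the revno and the files/ location are illustrative assumptions here, not the actual details from the commit):

$ bzr branch lp:rnr-server rnr-server
$ mkdir -p files/r1234
$ tar czf files/r1234/rnr-server.tgz rnr-server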

With that done, I can deploy the charm with `make deploy` – expecting it to fail because the wsgi object doesn’t yet exist.

Aside: if you’re deploying locally, it’s sometimes useful to watch the deployment progress with:

$ tail -F ~/.juju/local/log/unit-ubuntu-reviews-0.log

At this point, juju status shows no issues, but curling the service does (as expected):

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080"
curl: (56) Recv failure: Connection reset by peer

Checking the logs (which, oh joy, are already set up with log rotation etc. by the wsgi-app role) shows the expected error:

$ juju ssh ubuntu-reviews/0 "tail -F /srv/reviews.click.ubuntu.com/logs/reviews.click.ubuntu.com-error.log"
...
ImportError: No module named wsgi

2. Adding the wsgi, settings and urls

The current rnr-server code does include a django project, but it’s specific to the current production setup (it requires a third-party library for settings, and serves the non-click reviews service too). I’d like to simplify that here, so I’m bundling a project configuration in the charm: the standard (django-generated) manage.py and wsgi.py, plus a near-default settings.py (which you can see in the actual commit).

An extra task is added which copies the project configuration into place during install/upgrade, and both the wsgi application location and the required PYTHONPATH are specified:

--- a/playbook.yml
+++ b/playbook.yml
@@ -7,7 +7,8 @@
   roles:
     - role: wsgi-app
       listen_port: 8080
-      wsgi_application: wsgi:app
+      python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
+      wsgi_application: clickreviewsproject.wsgi:application
       code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''
 
@@ -18,20 +19,18 @@
 
   tasks:
 
-    # Pretend there are some package dependencies for the example wsgi app.
     - name: Install any required packages for your app.
-      apt: pkg={{ item }} state=latest update_cache=yes
+      apt: pkg={{ item }}
       with_items:
-        - python-django
-        - python-django-celery
+        - python-django=1.5.4-1ubuntu1~ctools0
       tags:
         - install
         - upgrade-charm
 
     - name: Write any custom configuration files
-      debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
+      copy: src=django-project dest={{ application_dir }} owner={{ wsgi_user }} group={{ wsgi_group }}
       tags:
-        - config-changed
-        # Also any backend relation-changed events, such as databases.
+        - install
+        - upgrade-charm
       notify:
         - Restart wsgi

I can now run juju upgrade-charm and see that the application responds – but with a 500 error highlighting the first (of a few) missing dependencies…

3. Adding missing dependencies

At this point, it’s easiest in my opinion to run debug-hooks:

$ juju debug-hooks ubuntu-reviews/0

and directly test and install missing dependencies, adding them to the playbook as you go:

ubuntu@michael-local-machine-1:~$ curl http://localhost:8080 | less
ubuntu@michael-local-machine-1:~$ sudo apt-get install python-missing-library -y  # and add it to the playbook in your charm editor
ubuntu@michael-local-machine-1:~$ sudo service gunicorn restart

Rinse and repeat. You may want to occasionally run upgrade-charm in a separate terminal:

$ juju upgrade-charm --repository=../.. ubuntu-reviews

to verify that you’ve not broken things, running the hooks as they are triggered in your debug-hooks window.

Other times you might want to destroy the environment to redeploy from scratch.
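
With the local provider that’s a reasonably quick cycle, for example (assuming your local environment is named “local”):

$ juju destroy-environment local && juju bootstrap && make deploy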

Sometimes you’ll have dependencies which are not available in the distro. For our deployments, we always ensure these are included in our tarball (in some form). In the case of the reviews server, there was an existing script which pulls in a bunch of extra branches, so I’ve updated the Makefile target that builds the tarball to use it. But there was another dependency, south, which wasn’t included by that script, so for simplicity here I’m going to install it via pip (which you don’t want to do in reality – you’d update your build process instead).

You can see the extra dependencies in the commit.

4. Customise settings

At this point, the service deploys but errors due to a missing setting, which makes complete sense as I’ve not added any app-specific settings yet.

So I removed the vanilla settings.py, added a custom settings.py.j2 template, and added an extra django_secret_key option to the charm’s config.yaml:

--- a/config.yaml
+++ b/config.yaml
@@ -10,8 +10,13 @@ options:
     current_symlink:
         default: "latest"
         type: string
-        description: |
+        description: >
             The symlink of the code to run. The default of 'latest' will use
             the most recently added build on the instance.  Specifying a
             differnt label (eg. "r235") will symlink to that directory assuming
             it has been previously added to the instance.
+    django_secret_key:
+        default: "you-really-should-set-this"
+        type: string
+        description: >
+            The secret key that django should use for each instance.

--- /dev/null
+++ b/templates/settings.py.j2
@@ -0,0 +1,41 @@
+DEBUG = True
+TEMPLATE_DEBUG = DEBUG
+SECRET_KEY = '{{ django_secret_key }}'
+TIME_ZONE = 'UTC'
+ROOT_URLCONF = 'clickreviewsproject.urls'
+WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
+INSTALLED_APPS = (
+    'django.contrib.auth',
+    'django.contrib.contenttypes',
+    'django.contrib.sessions',
+    'django.contrib.sites',
+    'django_openid_auth',
+    'django.contrib.messages',
+    'django.contrib.staticfiles',
+    'clickreviews',
+    'south',
+)
...

Again, full diff in the commit.

With the extra settings defined and one more dependency added, I’m now able to ping the service successfully:

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}

But that only works because the input validation is fired before anything tries to use the (non-existent) database…

5. Adding the database relation

I had expected this step to be simple – add the database relation, call syncdb/migrate – but three factors conspired to complicate things:

  1. The reviews application uses a custom postgres “debversion” column type
  2. A migration for the older non-click reviewsapp service requires postgres superuser credentials (specifically, reviews/0013_add_debversion_field.py)
  3. The clickreviews app, which I wanted to deploy on its own, currently depends on the older non-click reviews app, so it’s necessary to have both enabled and therefore the tables from both synced

So to work around the first point, I split the ‘deploy’ target into two so that I can install and set up the custom debversion field on the postgres units before running syncdb:

--- a/Makefile
+++ b/Makefile
@@ -37,8 +37,17 @@ deploy: create-tarball
        @juju set ubuntu-reviews build_label=r$(REVIEWS_REVNO)
        @juju deploy gunicorn
        @juju deploy nrpe-external-master
+       @juju deploy postgresql
        @juju add-relation ubuntu-reviews gunicorn
        @juju add-relation ubuntu-reviews nrpe-external-master
+       @juju add-relation ubuntu-reviews postgresql:db
+       @juju set postgresql extra-packages=postgresql-9.1-debversion
+       @echo "Once the services are related, run 'make setup-db'"
+
+setup-db:
+       @echo "Creating custom debversion postgres type and syncing the ubuntu-reviews database."
+       @juju run --service postgresql 'psql -U postgres ubuntu-reviews -c "CREATE EXTENSION debversion;"'
+       @juju run --unit=ubuntu-reviews/0 "hooks/syncdb"
        @echo See the README for explorations after deploying.

I’ve separated syncdb out into a manual step (rather than running it on a hook like config-changed) so that I can run it after the postgresql service has been updated with the custom field, and also because juju doesn’t currently have a concept of a leader (yes, syncdb could be run on every unit, and might be safe, but I don’t like it conceptually).
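
The hooks/syncdb hook itself isn’t shown here; in this charm, where each hook just runs the matching tagged sections of the playbook, it would presumably be the same thin wrapper as the other hooks – a sketch, assuming the playbook sits at the charm root and is run against the local machine:

#!/bin/sh
# Sketch of a hooks/syncdb shim (the real hook may differ): run only
# the playbook tasks tagged 'syncdb' against the local machine.
ansible-playbook -c local -i "localhost," playbook.yml --tags syncdb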

--- a/playbook.yml
+++ b/playbook.yml
@@ -3,13 +3,13 @@
 
   vars:
     - app_label: click-reviews.ubuntu.com
+    - code_archive: "{{ build_label }}/rnr-server.tgz"
 
   roles:
     - role: wsgi-app
       listen_port: 8080
       python_path: "{{ application_dir }}/django-project:{{ current_code_dir }}/src"
       wsgi_application: clickreviewsproject.wsgi:application
-      code_archive: "{{ build_label }}/rnr-server.tgz"
       when: build_label != ''
 
     - role: nrpe-external-master
@@ -25,6 +25,7 @@
         - python-django=1.5.4-1ubuntu1~ctools0
         - python-tz
         - python-pip
+        - python-psycopg2
       tags:
         - install
         - upgrade-charm
@@ -46,13 +47,24 @@
       tags:
         - install
         - upgrade-charm
+        - db-relation-changed
       notify:
         - Restart wsgi
+
     # XXX Normally our deployment build would ensure these are all available in the tarball.
     - name: Install any non-distro dependencies (Don't depend on pip for a real deployment!)
       pip: name={{ item.key }} version={{ item.value }}
       with_dict:
         south: 0.7.6
+        django-openid-auth: 0.2
       tags:
         - install
         - upgrade-charm
+
+    - name: sync the database
+      django_manage: >
+        command="syncdb --noinput"
+        app_path="{{ application_dir }}/django-project"
+        pythonpath="{{ code_dir }}/current/src"
+      tags:
+        - syncdb

To work around the second and third points above, I removed the ‘south’ django application from the settings so that syncdb will set up the correct current tables without using migrations (while leaving south on the pythonpath, as some code currently imports it), and added the old non-click reviews app to satisfy some of the imported dependencies.

By far the most complicated part of this change, though, was the database settings:

--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -1,6 +1,31 @@
 DEBUG = True
 TEMPLATE_DEBUG = DEBUG
 SECRET_KEY = '{{ django_secret_key }}'
+
+{% if 'db' in relations %}
+DATABASES = {
+  {% for dbrelation in relations['db'] %}
+  {% if loop.first %}
+    {% set services=relations['db'][dbrelation] %}
+    {% for service in services %}
+      {% if service.startswith('postgres') %}
+      {% set dbdetails = services[service] %}
+        {% if 'database' in dbdetails %}
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql_psycopg2',
+        'NAME': '{{ dbdetails["database"] }}',
+        'HOST': '{{ dbdetails["host"] }}',
+        'USER': '{{ dbdetails["user"] }}',
+        'PASSWORD': '{{ dbdetails["password"] }}',
+    }
+        {% endif %}
+      {% endif %}
+    {% endfor %}
+  {% endif %}
+  {% endfor %}
+}
+{% endif %}
+
 TIME_ZONE = 'UTC'
 ROOT_URLCONF = 'clickreviewsproject.urls'
 WSGI_APPLICATION = 'clickreviewsproject.wsgi.application'
@@ -12,8 +37,8 @@ INSTALLED_APPS = (
     'django_openid_auth',
     'django.contrib.messages',
     'django.contrib.staticfiles',
+    'reviewsapp',
     'clickreviews',
-    'south',
 )
 AUTHENTICATION_BACKENDS = (
     'django_openid_auth.auth.OpenIDBackend',

Why are the database settings so complicated? I’m currently rendering all the settings on any config change, as well as on database relation changes, which means I need to use the global juju state for the relation data – and as far as juju knows, there could be multiple database relations for this stack. The current charm-helpers give you this state as a dict (above it’s ‘relations’). The ‘db’ key of that dict contains all the database relations, so I’m grabbing the first database relation and taking the details from the postgres side of it (the relation has two keys: one for postgres, the other for the connecting service). Yuk (see below for a better solution).
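
To make that concrete, the ‘relations’ dict that the template iterates over has roughly this shape (names and values here are illustrative only):

relations:
  db:
    "db:0":                    # one entry per database relation id
      postgresql/0:            # the postgres side - the details we want
        database: ubuntu-reviews
        host: 10.0.3.4
        user: someuser
        password: somepassword
      ubuntu-reviews/0:        # our own side of the relation
        private-address: 10.0.3.5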

Full details are in the commit. With those changes I can now use the service:

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/"
{"errors": {"package_name": ["This field is required."]}}

$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]

A. Cleanup: Simplify database settings

In retrospect, the complicated database settings are only needed because the settings are rendered on any config change, not just when the database relation changes. So in the next step I’ll clean this up by moving the database settings to a separate file which is written only by the db-relation-changed hook, where we can use the more convenient current_relation dict (given that I know this service only has one database relation).

This simplifies things quite a bit:

--- a/playbook.yml
+++ b/playbook.yml
@@ -45,11 +45,21 @@
         owner: "{{ wsgi_user }}"
         group: "{{ wsgi_group }}"
       tags:
-        - install
-        - upgrade-charm
+        - config-changed
+      notify:
+        - Restart wsgi
+
+    - name: Write the database settings
+      template:
+        src: "templates/database_settings.py.j2"
+        dest: "{{ application_dir }}/django-project/clickreviewsproject/database_settings.py"
+        owner: "{{ wsgi_user }}"
+        group: "{{ wsgi_group }}"
+      tags:
         - db-relation-changed
       notify:
         - Restart wsgi
+      when: "'database' in current_relation"
 
--- /dev/null
+++ b/templates/database_settings.py.j2
@@ -0,0 +1,9 @@
+DATABASES = {
+    'default': {
+        'ENGINE': 'django.db.backends.postgresql_psycopg2',
+        'NAME': '{{ current_relation["database"] }}',
+        'HOST': '{{ current_relation["host"] }}',
+        'USER': '{{ current_relation["user"] }}',
+        'PASSWORD': '{{ current_relation["password"] }}',
+    }
+}

--- a/templates/settings.py.j2
+++ b/templates/settings.py.j2
@@ -2,29 +2,7 @@ DEBUG = True
 TEMPLATE_DEBUG = DEBUG
 SECRET_KEY = '{{ django_secret_key }}'
 
-{% if 'db' in relations %}
-DATABASES = {
-  {% for dbrelation in relations['db'] %}
-  {% if loop.first %}
-    {% set services=relations['db'][dbrelation] %}
-    {% for service in services %}
-      {% if service.startswith('postgres') %}
-      {% set dbdetails = services[service] %}
-        {% if 'database' in dbdetails %}
-    'default': {
-        'ENGINE': 'django.db.backends.postgresql_psycopg2',
-        'NAME': '{{ dbdetails["database"] }}',
-        'HOST': '{{ dbdetails["host"] }}',
-        'USER': '{{ dbdetails["user"] }}',
-        'PASSWORD': '{{ dbdetails["password"] }}',
-    }
-        {% endif %}
-      {% endif %}
-    {% endfor %}
-  {% endif %}
-  {% endfor %}
-}
-{% endif %}
+from database_settings import DATABASES
 
 TIME_ZONE = 'UTC'

With those changes, the deploy still works as expected:

$ make deploy
$ make setup-db
Creating custom debversion postgres type and syncing the ubuntu-reviews database.
...
$ juju ssh ubuntu-reviews/0 "curl http://localhost:8080/api/1.0/reviews/?package_name=foo"
[]

Summary

Reusing the wsgi-app ansible role allowed me to focus just on the things that make this service different from other wsgi-app services:

  1. Creating the tarball for deployment (not normally part of the charm, but useful for developing the charm)
  2. Handling the application settings
  3. Handling the database relation and custom setup

The wsgi-app role already provides the functionality to deploy and upgrade code tarballs via a config setting (i.e. `juju set ubuntu-reviews build_label=r156`), pulled from a configured URL. It also provides the complete directory layout, user and group setup, log rotation etc. All of that comes for free.

Things I’d do if I were doing this charm for a real deployment now:

  1. Use the postgres db-admin relation for syncdb/migrations (thanks Simon) and ensure that the normal app uses the non-admin settings (and so can’t change the schema etc.). This functionality could be factored out into another reusable role.
  2. Remove the dependency on the older non-click reviews app so that the clickreviews service can be deployed in isolation.
  3. Ensure all dependencies are either available in the distro or included in the tarball.

Filed under: cloud automation, juju

Michael

I’ve been writing Juju charms to automate the deployment of a few different services at work, all of which happen to be wsgi applications… some Django apps, others using other frameworks. I’ve been using the ansible support for writing charms, which makes charm authoring simpler, but even then each wsgi service charm essentially needs to do the same things:

  1. Setup specific users
  2. Install the built code into a specific location
  3. Install any package dependencies
  4. Relate to a backend (could be postgresql, could be elasticsearch)
  5. Render the settings
  6. Setup a wsgi (gunicorn) service (via a subordinate charm)
  7. Setup log rotation
  8. Support updating to a new codebase without upgrading the charm
  9. Support rolling updates to a new codebase

Only three of the above really change from service to service: which package dependencies are required, the rendering of the settings, and the backend relation (which usually just causes the settings to be re-rendered anyway).

After trying (and failing) to create a nice reusable wsgi-app charm, I’ve switched to utilising ansible’s built-in support for reusable roles and created a charm-bootstrap-wsgi repo on github, which demonstrates all of the above out of the box (the README has an example rolling upgrade). The charm’s playbook is very simple, just reusing the wsgi-app role:

roles:
    - role: wsgi-app
      listen_port: 8080
      wsgi_application: example_wsgi:application
      code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
      when: build_label != ''

and only needs to do two things itself:

tasks:
    - name: Install any required packages for your app.
      apt: pkg={{ item }} state=latest update_cache=yes
      with_items:
        - python-django
        - python-django-celery
      tags:
        - install
        - upgrade-charm

    - name: Write any custom configuration files
      debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
      tags:
        - config-changed
        # Also any backend relation-changed hooks for databases etc.
      notify:
        - Restart wsgi

Everything else is provided by the reusable wsgi-app role. For the moment I’ve got the source of the reusable roles in a separate github repo, but I’d like to get these into the charm-helpers project itself eventually. Of course there will be cases where the service charm needs quite a bit more custom functionality, but if we can encapsulate and reuse as much as possible, it’s a win for all of us.

If you’re interested in chatting about easier charms with ansible (or any issues you can see), we’ll be getting together for a hangout tomorrow (Jun 11, 2014 at 1400 UTC).


Filed under: ansible, juju, python

Michael

Just documenting for later (and for a friend and colleague who needs it now) – my notes for setting up openstack swift using juju. I need to go back and check whether keystone is required – I initially had issues with the test auth, so I switched to keystone.

First, create the config file to use keystone, with local block devices on the swift storage units (i.e. no need to mount storage), using openstack havana:

cat >swift.cfg <<END
swift-proxy:
    zone-assignment: auto
    replicas: 3
    auth-type: keystone
    openstack-origin: cloud:precise-havana/updates
swift-storage:
    zone: 1
    block-device: /etc/swift/storagedev1.img|2G
    openstack-origin: cloud:precise-havana/updates
keystone:
    admin-token: somebigtoken
    openstack-origin: cloud:precise-havana/updates
END

Deploy it (this could probably be replaced with a charm bundle?):

juju deploy --config=swift.cfg swift-proxy
juju deploy --config=swift.cfg --num-units 3 swift-storage
juju add-relation swift-proxy swift-storage
juju deploy --config=swift.cfg keystone
juju add-relation swift-proxy keystone
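
(If you did express this as a bundle, a juju-deployer config might look roughly like the following – a sketch only; the charm store paths and format details depend on your deployer version:)

swift:
  series: precise
  services:
    keystone:
      charm: cs:precise/keystone
      options:
        admin-token: somebigtoken
        openstack-origin: cloud:precise-havana/updates
    swift-proxy:
      charm: cs:precise/swift-proxy
      options:
        zone-assignment: auto
        replicas: 3
        auth-type: keystone
        openstack-origin: cloud:precise-havana/updates
    swift-storage:
      charm: cs:precise/swift-storage
      num_units: 3
      options:
        zone: 1
        block-device: /etc/swift/storagedev1.img|2G
        openstack-origin: cloud:precise-havana/updates
  relations:
    - [swift-proxy, swift-storage]
    - [swift-proxy, keystone]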

Once everything is up and running, create a tenant and user, with the user having admin rights for the tenant (using your keystone unit’s IP address for keystone-ip). Note: below I’m using the names of the tenant, user and role, which works with keystone 0.3.2, but apparently earlier versions require you to use the uuids instead (check with `keystone help user-role-add`).

$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken tenant-create --name mytenant
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken user-create --name myuser --tenant mytenant --pass userpassword
$ keystone --endpoint http://keystone-ip:35357/v2.0/ --token somebigtoken user-role-add --tenant mytenant --user myuser --role Admin

And finally, use our new admin user to create a container for use in our dev environment (specify auth version 2):

$ export OS_REGION_NAME=RegionOne
$ export OS_TENANT_NAME=mytenant
$ export OS_USERNAME=myuser
$ export OS_PASSWORD=userpassword
$ export OS_AUTH_URL=http://keystone-ip:5000/v2.0/
$ swift -V 2 post mycontainer

If you want the container to be readable without auth:

$ swift -V 2 post mycontainer -r '.r:*'

If you want another keystone user to have write access:

$ swift -V 2 post mycontainer -w mytenant:otheruser

Verify that the container is ready for use:

$ swift -V 2 stat mycontainer

Please let me know if you spot any issues (these notes are from a month or two ago, and I haven’t just re-tried them).


Filed under: Uncategorized

Michael

After working with InformatiQ to set up a new charm using the ansible support (and ironing out a few issues), it made sense to capture the process…

The README at charm-bootstrap-ansible has the details, but the branch will pull in the required charm-helpers library and run the tests, leaving you ready to deploy and explore.

Hopefully I can get this into the charm-create tool eventually.


Filed under: bzr, juju

Michael

I’ve been working on some more support for ansible in the juju charm-helpers package recently [1], which has effectively transformed my juju charm’s hooks.py to something like:

import charmhelpers.contrib.ansible

# Create the hooks helper, passing a list of hooks which will be
# handled by default by running all sections of the playbook
# tagged with the hook name.
hooks = charmhelpers.contrib.ansible.AnsibleHooks(
    playbook_path='playbooks/site.yaml',
    default_hooks=['start', 'stop', 'config-changed',
                   'solr-relation-changed'])

@hooks.hook()
def install():
    charmhelpers.contrib.ansible.install_ansible_support(from_ppa=True)

And that’s it.

If I need something done outside of ansible, like in the install hook above, I can write a simple hook with the non-ansible setup (in this case, installing ansible), but the decorator will still ensure all the sections of the playbook tagged by the hook-name (in this case, ‘install’) are applied once the custom hook function finishes. All the other hooks (start, stop, config-changed and solr-relation-changed) are registered so that ansible will run the tagged sections automatically on those hooks.
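
The playbook side of this is plain ansible – sections are simply tagged with the names of the hooks that should trigger them. A minimal sketch (the package and template paths here are placeholders, not from a real charm):

- hosts: localhost

  tasks:
    # Runs for the install and upgrade-charm hooks.
    - name: Install any required packages.
      apt: pkg=python-django
      tags:
        - install
        - upgrade-charm

    # Runs for the config-changed and solr-relation-changed hooks.
    - name: Write the service configuration.
      template: src=templates/settings.py.j2 dest=/srv/exampleapp/settings.py
      tags:
        - config-changed
        - solr-relation-changed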

Why am I excited about this? Because it means that practically everything related to ensuring the state of the machine is now handled by ansible’s yaml declarations (and I trust those to do what I declare). Of course those playbooks could themselves get quite large and hard to maintain, but ansible has plenty of ways to break declarations up into includes and roles.

It also means that I need to write and maintain fewer unit tests – in the above example I need to ensure that when the install() hook is called, ansible is installed, but that’s about it. I no longer need to unit-test the code which creates directories and users, ensures permissions etc., or even calls out to the relevant charm-helper functions, as it’s all instead declared as part of the machine state. That said, I’m still just as dependent on integration testing to ensure the started state of the machine is what I need.

I’m pretty sure that ansible + juju has even more possibilities – for creating extensible charms with plugins (using roles) rather than forcing too much into the charm’s config.yaml, among other benefits… looking forward to trying it out!

[1] The merge proposal still needs to be reviewed, possibly updated and landed :)


Filed under: Uncategorized

Michael

Have you ever wished you could just declare the installed state of your juju charm like this?

deploy_user:
    group.present:
        - gid: 1800
    user.present:
        - uid: 1800
        - gid: 1800
        - createhome: False
        - require:
            - group: deploy_user

exampleapp:
    group.present:
        - gid: 1500
    user.present:
        - uid: 1500
        - gid: 1500
        - createhome: False
        - require:
            - group: exampleapp


/srv/{{ service_name }}:
    file.directory:
        - group: exampleapp
        - user: exampleapp
        - require:
            - user: exampleapp
        - recurse:
            - user
            - group


/srv/{{ service_name }}/{{ instance_type }}-logs:
    file.directory:
        - makedirs: True

While writing charms for Juju a long time ago, one of the things I struggled with was testing the hook code – specifically the install hook, where the machine state is set up (i.e. packages installed, directories created with the correct permissions, config files written etc.). Often the test code would be fragile – at best you can patch some attributes of your module (like “code_location = ‘/srv/example.com/code’”) to a tmp dir and test the state correctly, but at worst you end up testing the behaviour of your code (i.e. that os.mkdir was called with the correct user/group etc.). Either way, it wasn’t fun to write and iterate those tests.

But support has improved over the past year with the charmhelpers library. And recently I landed a branch adding support for declaring saltstack states in yaml, like the above example. That means that the install hook of your hooks.py can be reduced to something like:

import sys

import charmhelpers.core.hookenv
import charmhelpers.payload.execd
import charmhelpers.contrib.saltstack


hooks = charmhelpers.core.hookenv.Hooks()


@hooks.hook()
def install():
    """Setup the machine dependencies and installed state."""
    charmhelpers.contrib.saltstack.install_salt_support()
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/dependencies.yaml')
    charmhelpers.contrib.saltstack.update_machine_state(
        'machine_states/installed.yaml')


# Other hooks...

if __name__ == "__main__":
    hooks.execute(sys.argv)

…letting you focus on testing and writing the actual hook functionality (like relation-sets etc.). I’d like to add some test helpers that will automatically check the syntax of the state yaml files and the template rendering output, but haven’t yet.

Hopefully we can add similar support for puppet and Ansible soon too, so that the charmer can choose the tools they want to use to declare the local machine state.

A few other things that I found valuable while writing my charm:

  • Use a branch for charmhelpers – this way you can make improvements to the library as you develop and not be dependent on your changes landing straight away to deploy (thanks Sidnei – I think I just copied that idea from one of his charms). The easiest way I found was to install the branch into mycharm/lib so that it’s included both in dev and when you deploy (with a small snippet in your hooks.py – see the sketch after this list).
  • Make it easy to deploy your local charm from the branch… the easiest way I found was a link-test-juju-repo make target – I’m not sure what other people do here?
  • In terms of writing actual hook functionality (like relation-set events etc), I found the easiest way to develop the charm was to iterate within a debug-hook session. Something like:
    1. write new test+code then juju upgrade-charm or add-relation
    2. run the hook and if it fails…
    3. fix and test right there within the debug-hook
    4. put the code back into my actual charm branch and update the test
    5. restore the system state in debug hook
    6. then juju upgrade-charm again to ensure it works, if it fails, iterate from 3.
  • Use the built-in support of template rendering from saltstack for rendering any config files that you need.

I don’t think I’d really appreciated the beauty of what juju is doing until, after testing my charm locally together with a gunicorn charm and a solr backend, I set up a config using juju-deployer to create a full stack: an Apache front-end, a cache load balancer for multiple squid caches, a load balancer in front of potentially multiple instances of my charm’s wsgi app, and a back-end load balancer between my app and the (multiple) solr backends… and it just works.


Filed under: juju, python, testing

Read more
Michael

A number of times over the past few years I’ve needed to create some quite complex migrations (both schema and data) in a few of the Django apps that I help out with at Canonical. And like any TDD fanboy, I cry at the thought of deploying code that I’ve only tested by running it a few times with my own sample data (or of writing code without first writing failing tests that demonstrate the expected outcome).

This migration test case helper has enabled me to develop migrations test first:

from django.core.management import call_command
from django.test import TransactionTestCase
from south.migration import Migrations


class MigrationTestCase(TransactionTestCase):
    """A test case for testing South migrations."""

    # These must be defined by subclasses.
    start_migration = None
    dest_migration = None
    django_application = None

    def setUp(self):
        super(MigrationTestCase, self).setUp()
        migrations = Migrations(self.django_application)
        self.start_orm = migrations[self.start_migration].orm()
        self.dest_orm = migrations[self.dest_migration].orm()

        # Ensure the migration history is up-to-date with a fake migration.
        # The other option would be to use the south setting for these tests
        # so that the migrations are used to setup the test db.
        call_command('migrate', self.django_application, fake=True,
                     verbosity=0)
        # Then migrate back to the start migration.
        call_command('migrate', self.django_application, self.start_migration,
                     verbosity=0)

    def tearDown(self):
        # Leave the db in the final state so that the test runner doesn't
        # error when truncating the database.
        call_command('migrate', self.django_application, verbosity=0)

    def migrate_to_dest(self):
        call_command('migrate', self.django_application, self.dest_migration,
                     verbosity=0)
 

It’s not perfect – schema tests in particular end up being quite complicated, as you need to ensure you’re working with the correct orm model when creating your test data – and you can’t use the normal factories for that. But it does enable you to write migration tests like:

class MyMigrationTestCase(MigrationTestCase):

    start_migration = '0022_previous_migration'
    dest_migration = '0024_data_migration_after_0023_which_would_be_schema_changes'
    django_application = 'myapp'

    def test_schema_and_data_updated(self):
        # Test setup code

        self.migrate_to_dest()

        # Assertions

which keeps me happy. When I wrote that I couldn’t find any other suggestions out there for testing migrations. A quick search now turns up one idea from André (data-migrations only), but nothing else substantial. Let me know if you’ve seen something similar, or a way to improve testing of migrations.
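
For a concrete (and entirely hypothetical – the model and field names are invented) illustration of the setup and assertion halves, using the frozen ORMs that the helper above sets up:

class BackfillScoreTestCase(MigrationTestCase):
    """Hypothetical data migration: backfill a default score on reviews."""

    start_migration = '0022_previous_migration'
    dest_migration = '0024_data_migration_after_0023_which_would_be_schema_changes'
    django_application = 'myapp'

    def test_score_backfilled(self):
        # Create the fixture via the *frozen* start ORM - the current
        # models (and so the normal factories) may not match the old schema.
        review = self.start_orm['myapp.Review'].objects.create(
            summary='Works great')

        self.migrate_to_dest()

        # Read the result back through the destination ORM.
        migrated = self.dest_orm['myapp.Review'].objects.get(pk=review.pk)
        self.assertEqual(0, migrated.score)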


Filed under: django, python, testing

Read more
Michael

Android UI Fragments look like a great way to build re-usable elements within an app, but they don’t work exactly as expected out of the box (well, exactly as I expected – but that could be a lack of experience with android):

After defining an onClick event on a button within my fragment:

  <Button android:id="@+id/add_goal_button"
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:text="@string/button_create"
      android:onClick="addGoal" />

I’d expected this event to be routed directly to my fragment class without the containing Activity class needing to know about it – but the addGoal() method is expected on the containing Activity instead.

To connect the fragment event directly to a click handler on the fragment class (so the containing Activity doesn’t need to handle the fragment’s events), you can do the following instead (thanks Brill Pappin):

public class NewGoalFragment extends Fragment {

    ...

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        final View fragmentView = inflater.inflate(R.layout.fragment_new_goal,
                                                   container, false);
        ...
        Button addGoalButton = (Button) fragmentView.findViewById(R.id.add_goal_button);
        addGoalButton.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(final View v) {
                // Pass the fragmentView through to the handler
                // so that findViewById can be used to get a handle on
                // the fragment's own views.
                addGoal(fragmentView);
            }
        });
        return fragmentView;
    }

    public void addGoal(View view) {
        EditText newGoal = (EditText) view.findViewById(R.id.new_goal);
        ...
    }
}

Filed under: android

Read more
Michael

The last exercise in the Go Tour – parallelizing a web crawler – turned out to be quite a bit more interesting than I’d expected. If anyone has suggested improvements from which I can learn a bit more, or their own solutions posted, let me know – my exercise solution is on github. I’ve tried to stick to the tour content (ie. only using channels, rather than the sync package, for accessing shared data).

Spoiler Alert: If you are learning Golang and haven’t yet worked through the Go Tour, go and do so now. If you get stuck, keep struggling, take a break, try again in a few days etc., before looking at other people’s solutions.

The solution I ended up with has a Crawl() function very similar to the original, just with two extra function parameters:

func Crawl(url string, depth int, fetcher Fetcher,
	startCrawl func(string) bool, crawlComplete chan string) {

	if depth <= 0 {
		crawlComplete <- url
		return
	}

	body, urls, err := fetcher.Fetch(url)
	if err != nil {
		fmt.Println(err)
		crawlComplete <- url
		return
	}

	fmt.Printf("found: %s %q\n", url, body)
	for _, u := range urls {
		if startCrawl(u) {
			go Crawl(u, depth-1, fetcher, startCrawl, crawlComplete)
		}
	}
	crawlComplete <- url
}

The two parameters are:

  • startCrawl func(url string) bool – used as a check before spawning a new ‘go Crawl(url)’ to ensure that we don’t crawl the same url twice.
  • crawlComplete chan string – used to signal that the Crawl function has fetched the page and finished spawning any child go-routines.

These two resources are created and passed in to the initial Crawl() call in the main() function:

func main() {
	startCrawl := make(chan StartCrawlData)
	crawlComplete := make(chan string)
	quitNow := make(chan bool)
	go processCrawls(startCrawl, crawlComplete, quitNow)

	// Returns whether a crawl should be started for a given
	// URL.
	startCrawlFn := func(url string) bool {
		resultChan := make(chan bool)
		startCrawl <- StartCrawlData{url, resultChan}
		return <-resultChan
	}

	Crawl("http://golang.org/", 4, fetcher, startCrawlFn,
		crawlComplete)

	<-quitNow
}

Access to the shared state – which URLs have been crawled, whether all the Crawls() have finished, etc. – is managed via those channels in the processCrawls() go-routine, so that main() can simply call the first Crawl() and then wait to quit. I want to check how cheap the temporary creation of a channel is (for the return value of startCrawlFn above) – I think I saw this method in an earlier Go tutorial example – but otherwise I’m happy with the solution :-).


Filed under: golang

Read more
Michael

Kanban for kids

Facing the arrival of another child, I set out a few months ago to experiment with ways to help each other get things done together as a family without becoming a task-master of a father. A very simplified Kanban board seemed like a good fit for our kitchen wall, as it visualises what needs doing or other fun things that are planned, helps us to help each other when needed, and enables the kids to take part in the planning and organising of the day.

The photo of our board above is what we’ve ended up with after a few months. It is really a habit tracker – with most of the items on the board being things that we do daily, every few days, weekly or fortnightly. Some points that we’ve found helpful:

  • Items specific to the kids are colour coded (e.g. we have a “Put clothes away” item for each child in their colour).
  • We have some incentives – each item has a number of associated “service points” [1], e.g. 5 for putting away clothes, 15 for taking the dog out for a walk, etc. (agreed with the kids and adjusted based on feedback).
  • We have incentives to do items together – if you complete an item with someone else, you get 50% extra service points, simply to encourage having fun doing things together (we put a magnet on the item to signify this).
  • We very rarely tell the kids to do something – in extreme circumstances I’ve told my daughter I need her to take the dog out now as I can’t do it myself, but it’s something we generally avoid, in fact…
  • The kids generally choose what item they want to do next and when they want to do it (e.g. “You’d like to help daddy set the table for lunch? Shall we do it now or after playing Lego for a while?”).
  • Each item has a drawing and is laminated with a small magnet, so that we can stack certain items together (e.g. we have a stack of cards related to the dog which are done in order each day – dog breakfast, walk 7am-ish, walk 11am-ish, etc.).
  • We start each day by pulling things that we want to do into the backlog (in addition to the daily stuff). Some items are weekly, which we store in a stack for each day on the top-left. Other items are ad-hoc, which we pull in from the mess on the bottom left.
  • We limit the number of cards in the TODO and Doing lanes to 4, to help keep things flowing (or to review whether we really want to do something).
  • We don’t put normal stuff like playing lego or dressups there – the kids can play whenever they want without feeling they need to have an item on the board :P

So far it’s been a great help, but as always, a continual learning process. Each child is motivated by different things, and yet the end goal is that we’d all enjoy helping each other without extrinsic motivation. Interestingly, one of the most freeing things has been for the kids (and my wife) to get used to the fact that it’s OK to say, no, I don’t feel like doing that today, I’m just going to move the card over to Thursday for the moment. That is, the board is something we control – a helpful way to organise and communicate – it does not tell us what we have to do.

[1] Service points – The kids can use the service points at the end of the day in ways such as staying up a bit later (1 minute per point, within reason), or going out on their own with mum or dad for an ice-cream dessert (50 points). We haven’t yet found a good solution for leftover points, and are currently converting them to Euro-cents as pocket money at the end of the week, which works well in terms of value, but I don’t like the direct connection of being paid money to do stuff around the house. Another option I’m trying at the moment is using saved service points as bargaining power for larger items that the kids need (like a new school bag). I’d be glad to hear of other ideas.


Filed under: Uncategorized

Read more