Canonical Voices

Gavin Panella

South, South 2, and Django Migrations


A couple of months ago we on the MAAS team found ourselves in a bit of a pickle: we needed to be able to support a product targeted at both Django <1.7 and Django ≥1.7 with database migrations. This is a problem because South is replaced by Django's own migration support in 1.7, and there are differences.

I emailed Andrew Godwin to ask his advice. He's the author of South 2 and so apparently knows his stuff, but we also wondered if South 2 might be a way out of our mess. His reply confirmed him as knowledgeable, kind, and helpful. Although he did not bless South 2 as our silver bullet, he did have some other useful advice instead.

I promised I would document our correspondence somewhere others might learn from it, and this is it, somewhat overdue. I've edited it slightly for clarity.

Thanks Andrew!


Hi Andrew,

I found your south2 repository on GitHub today. It looks like you've not touched it in a while, but I wondered if I could ask you a few questions about it anyway? There's a lot of context but it boils down to two-ish questions:

  1. What would you recommend for transitioning a packaged product (i.e. one which we don't provide as a service) from South-based migrations to Django ≥1.7 migrations?

    As a general answer, I suggest the method described in the Django docs, which is to move the South migrations to a south_migrations directory and generate new initial Django ones. As long as your users have South 1.0 or higher, that'll keep both versions running during a transition, and Django's automatic application of initial migrations makes things a lot easier. I don't recommend that you try and support both migration sets at the same time; make 1.7 or higher a hard dependency for a release. This obviously is a bit different for the case below, which I answered down there.

  2. How much work would be required to get south2 working?

    It was abandoned with good reason - it's around another two months of work to get it working remotely reliably, and I'm not sure it could be done at all without much more of a rewrite rather than the current source translation approach. I didn't abandon the idea lightly, but alas it just wasn't proving very stable.

We're in a tricky situation:

  • We have an application, MAAS, that we ship as a package in Ubuntu, i.e. end-users install it. It uses PostgreSQL.
  • It's supported in Ubuntu 14.04 (Trusty) and will be supported until April 2019. Trusty ships with Django 1.6, and this won't change (only security fixes and fixes for very serious bugs are back-ported).
  • Django 1.7 is now available in the development version of Ubuntu (Vivid).
  • Django 1.7 or later will be in the next LTS (Long Term Support) version of Ubuntu, out next year. (Trusty is the most recent LTS release.)
  • We have been using South for several years.
  • To support MAAS in Trusty we may need to back-port migrations from trunk. Once we base trunk on Django ≥1.7 we can't back-port directly; we'd need to recreate any migrations with South.
  • However, we also need a seamless upgrade path for users on Trusty when they upgrade to the next LTS release, where they can skip right over three intermediate releases of Ubuntu.
  • Between Trusty and the next LTS (hereafter just "Next"), the upgrade path might look like (where mXXX = "migration XXX"):

    Trusty -- m134 -- m135 -- m136 -- m137 (then EOL)
                  \       \       \       \
           Next -- m0 ---- m1 ---- m2 ---- m3 ---- m4 ---- ...

    In other words, Trusty and the next LTS share a common ancestor in South migration 134; the Django ≥1.7 migration baseline is derived at that point.

    At any point after that a user could choose to upgrade to the next LTS. If they upgrade from an installation that's got m136, we could map that over to m2 in the new migrations model, tell Django to fake-apply m0, m1, and m2, then proceed from there.

  • In truth, a user could choose to upgrade from Trusty to Next before having applied m134 because users can choose to follow only security fixes, and not updates. (They can choose to follow nothing at all, but that's getting into a very grey area w.r.t. support.)

    In this situation we'd want to apply all remaining South migrations up to at least m134 before switching over to the new Django migrations model.

    On the other hand, there may be a way to prevent a Trusty → Next upgrade based on a precondition, e.g. "m134 or greater is needed", but I don't currently know how that would be implemented.

  • There's a risk of South migrations not matching up to Django ≥1.7 migrations. That would most likely be an issue with our process, but it could be a software issue too.
  • With a variety of automated testing we can mitigate a lot of the process risk, and catch software issues early.
  • However, that all adds up to quite a lot of work.
  • Another option entirely would be for us to invest time into south2 and switch everything over to Django ≥1.7 migrations. That sounds like it would be a lot simpler, and thus carry a lot less risk.
  • The thing I don't know, which I hope you can answer, is how much work might it be to get south2 to a point where this would be possible? What would the ongoing maintenance look like?
  • What would you recommend?

    There's no clean solution, sorry. I'd document having to apply the most recent migrations before switching (and perhaps have a code entry on startup in the 1.7 dependent version that checks the south_migrations table directly and hard fails if you didn't), then have people clean switch over to the latest release.

    Can I ask why you won't just ship a newer version of Django with the newer releases of MAAS, even on Trusty? I know OS packaging is a tough thing to get around, but trying to backport migrations to work on South and older releases is only going to bring you pain (South is much more limited than Django migrations, and you might have to do a lot of manual workarounds).

    South2 isn't going to work - don't go down that path, I abandoned it for good reason, I'm not even sure the automated source translation approach is possible and a rewrite would take months. You're better off somehow shipping 1.7 bundled or as some kind of special dependency.
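
For anyone in a similar position, here is one way the two migration sets could sit side by side during the transition window Andrew describes. It is a sketch only, not MAAS's actual configuration: the app label maasserver is assumed for illustration, and South 1.0 would in fact find a south_migrations package even without the explicit setting.

# settings.py (sketch): version-gate South so that each Django release picks
# up its own migration series. Illustrative only.
import django

INSTALLED_APPS = (
    "django.contrib.contenttypes",
    "maasserver",  # assumed app label
)

if django.VERSION < (1, 7):
    # Django < 1.7: South drives migrations. South >= 1.0 already prefers an
    # app's "south_migrations" package over "migrations", so the renamed
    # history keeps working; the setting below only makes that explicit.
    INSTALLED_APPS += ("south",)
    SOUTH_MIGRATION_MODULES = {
        "maasserver": "maasserver.south_migrations",
    }
# Django >= 1.7 needs nothing extra: the freshly generated initial migrations
# live in the default "migrations" package and are applied automatically.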
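
The cut-over described above, mapping the last applied South migration to its Django ≥1.7 counterpart and fake-applying up to that point, could be sketched roughly like this. The mapping, migration names, and app label are invented for the example; only the use of migrate --fake is the real mechanism.

# Sketch of the fake-apply step from the upgrade-path discussion above.
from django.core.management import call_command

SOUTH_TO_DJANGO = {
    "0134": "0001_initial",  # the common ancestor: m134 -> m0
    "0135": "0002",
    "0136": "0003",
    "0137": "0004",
}

def fake_forward(last_south_migration):
    """Record the equivalent Django migrations as applied without running SQL."""
    target = SOUTH_TO_DJANGO[last_south_migration]
    # The schema is already in this state thanks to South, so only the
    # migration bookkeeping needs to catch up.
    call_command("migrate", "maasserver", target, fake=True)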
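
Andrew's suggested guard, checking South's history table directly on start-up and refusing to run if the database never reached the cut-over point, might look like the following. The table name is South's default; the app label and cut-over number are again illustrative.

# Start-up guard (sketch): hard-fail on Django >= 1.7 if the old South series
# never reached the common ancestor (m134 in the example above).
from django.db import DatabaseError, connection

CUTOVER = "0134"

def check_south_history():
    cursor = connection.cursor()
    try:
        cursor.execute(
            "SELECT migration FROM south_migrationhistory WHERE app_name = %s",
            ["maasserver"])
        rows = cursor.fetchall()
    except DatabaseError:
        # No South history table at all: a fresh install, nothing to check.
        return
    finally:
        cursor.close()
    applied = {row[0].split("_", 1)[0] for row in rows}
    if CUTOVER not in applied:
        raise SystemExit(
            "Upgrade refused: apply South migration %s or later on the old "
            "release before upgrading to this one." % CUTOVER)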

Gavin Panella

Preparing for Python 3 in MAAS

Something we've done in MAAS — which is Python 2 only so far — is to put:

from __future__ import (
    absolute_import,
    print_function,
    unicode_literals,
)

__metaclass__ = type

str = None

at the top of every source file. We knew that we would port MAAS to Python 3 at some point, and we hoped that doing this would help that effort. We'll find out if that's the case soon enough: one of our goals for this cycle is to port MAAS to Python 3.

The str = None line forces us to use the bytes synonym, and thus think more about what we're actually doing, but it doesn't save us from implicit conversions.
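
A tiny illustration, not taken from MAAS, of what that buys and what it does not:

# With unicode_literals in force and the name str shadowed, textual literals
# are unicode, and byte strings have to be spelled deliberately.
from __future__ import unicode_literals

str = None

greeting = "hello"         # unicode, courtesy of unicode_literals
wire_data = b"hello"       # bytes must now be explicit
# label = str(42)          # would fail: NoneType is not callable
label = "%d" % 42          # so a unicode-producing spelling is forced

# What it does not prevent: Python 2 still converts implicitly, for example
# "hello" == b"hello" is True via an implicit ASCII decode.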

In places like data ingress and egress points we also assert unicode or byte strings, as appropriate, to try and catch ourselves slipping up. We also encode and decode at these points, with the goal of using only unicode strings internally for textual data.

We never coerce between unicode and byte strings either, because that involves implicit recoding; we always encode or decode explicitly.
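
A minimal sketch (again not lifted from MAAS) of that kind of boundary discipline, using the Python 2 spellings (the unicode builtin rather than six.text_type):

def receive_text(data, encoding="utf-8"):
    """Ingress: insist on bytes arriving, and decode exactly once."""
    assert isinstance(data, bytes), (
        "expected bytes at ingress, got %r" % (type(data),))
    return data.decode(encoding)

def send_text(text, encoding="utf-8"):
    """Egress: insist on unicode leaving, and encode exactly once."""
    assert isinstance(text, unicode), (
        "expected unicode at egress, got %r" % (type(text),))
    return text.encode(encoding)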

In maas-test — which is a newer and smaller codebase than MAAS — we drop the str = None line, but use tox to test on both Python 2.7 and 3.3. Unfortunately we've recently had to use a Python-2-only dependency and have had to disable 3.3 tests until it's ported.

maas-test started the same as maas: a Python 2 codebase with the same prologue in every source file. It's anecdotal, but it turned out that few changes were needed to get it working with Python 3.3, and I think that's in part because of the hoops we forced ourselves to jump through. We used six to bridge some gaps (text_type in a few places, PY3 in setup.py too), but nothing invasive. My fingers are crossed that this experience is repeated with MAAS.
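
Roughly the kind of thing those six bridges look like; the helper and the dependency name here are invented for illustration:

from six import PY3, text_type

def to_text(value, encoding="utf-8"):
    # Return value as text on both Python 2 and 3.
    if isinstance(value, bytes):
        return value.decode(encoding)
    return text_type(value)

# ...and in setup.py, gating a dependency on the major version:
install_requires = []
if not PY3:
    install_requires.append("some-py2-only-backport")  # hypothetical name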

Gavin Panella

Protocol buffers for logging?

I'd always thought of Protocol Buffers as an on-the-wire structured message format only, but using them for logs seems head-smackingly useful. I'm a little ashamed I didn't think of it before, especially because their description clearly states:

Google uses Protocol Buffers for almost all of its internal RPC protocols and file formats.

Adam D'Angelo's answer on Quora was what finally made me twig.

There are probably limitations, many of them surely similar to the criticisms levelled at systemd's journal, but for logs meant to be re-consumed by machines — before presentation to a human perhaps — I think they're interesting to consider. For example, MAAS's logging is due some love in the coming months as part of our efforts to make it much easier to debug.
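
A sketch of the idea, with an entirely hypothetical LogRecord message standing in for whatever schema such logs would actually use; protoc would generate the assumed logrecord_pb2 module from a logrecord.proto defining timestamp, level, and message fields. Protobuf messages are not self-delimiting, so each record is framed with a length prefix.

import struct

from logrecord_pb2 import LogRecord  # hypothetical generated module

def append_record(stream, timestamp, level, message):
    # Serialize one structured record and frame it with a 4-byte length.
    payload = LogRecord(
        timestamp=timestamp, level=level, message=message).SerializeToString()
    stream.write(struct.pack("<I", len(payload)))
    stream.write(payload)

def read_records(stream):
    # Re-consume the log by machine: read frames until the file is exhausted.
    while True:
        header = stream.read(4)
        if len(header) < 4:
            return
        (length,) = struct.unpack("<I", header)
        record = LogRecord()
        record.ParseFromString(stream.read(length))
        yield record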

Gavin Panella

Workaround for uploading files to MAAS

It turns out that it's not possible to use maas-cli to upload files to MAAS. It's not something that most people need to do, because tools like Juju use MAAS's API directly to upload files. However, a small script that calls the API itself works around the limitation, invoked like so:

upload.py http://example.com/MAAS/api/1.0/ my:api:key my_filename < my_file
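
The script itself is not reproduced here, but a rough sketch of how such a tool might drive the 1.0 API follows. Treat everything in it as an assumption rather than a description of the real upload.py: it guesses that the files endpoint accepts an "add" operation, relies on the MAAS API key being the colon-separated OAuth triple visible in the command line above, and uses requests plus requests-oauthlib for the PLAINTEXT-signed request.

import sys

import requests
from requests_oauthlib import OAuth1

def main(api_url, api_key, filename):
    # MAAS API keys look like "consumer:token:secret" (as in the usage above).
    consumer_key, token_key, token_secret = api_key.split(":")
    auth = OAuth1(
        consumer_key, client_secret="",
        resource_owner_key=token_key,
        resource_owner_secret=token_secret,
        signature_method="PLAINTEXT")
    response = requests.post(
        api_url.rstrip("/") + "/files/",            # assumed endpoint
        auth=auth,
        data={"op": "add", "filename": filename},   # assumed operation name
        files={"file": (filename, sys.stdin.read())})
    response.raise_for_status()

if __name__ == "__main__":
    main(*sys.argv[1:4])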
