Canonical Voices

Alan Griffiths

Mir release 0.27

Mir release 0.27/MirAL release 1.4

This is an interim development release of Mir and MirAL to Ubuntu 17.10 (Artful) that delivers many of the features that were work-in-progress when we needed to restructure the project. The Mir release notes are here:

The MirAL 1.4 release exposes a few new features and drops support for Mir versions that are no longer maintained:

  • Support for passing messages to enable Drag & Drop
  • Support for client requested move
  • Port to the undeprecated Mir APIs
  • Added a “--cursor-theme” option for configuring the cursor theme
  • Drop support for Mir versions before 0.26

There will be further Mir releases culminating in a Mir 1.0 release before the Ubuntu 17.10 (Artful) feature freeze in August.

Read more
Christian Brauner

Storage Tools

Having implemented, or at least rewritten, most storage backends in LXC as well as LXD has left me with the impression that most storage tools suck. Most advanced storage drivers provide a set of tools that allow userspace to administer storage without having to link against an external library. This is a huge advantage if one wants to keep the number of external dependencies to a minimum, a policy to which LXC and LXD always try to adhere. One of the most crucial features such tools should provide is the ability to retrieve each property of each storage entity they administer in a predictable and machine-readable way. As far as I can tell, only the ZFS and LVM tools allow one to do this. For example

zfs get -H -p -o "value" <key> <storage-entity>

will let you retrieve (nearly) all properties. The RBD and BTRFS tools lack this ability which makes them inconvenient to use at times.
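As a sketch, the same machine-readable pattern can be wrapped in small helper functions. The pool and volume names in the usage comments are hypothetical, and the `lvs` flags assume a reasonably recent lvm2:

```shell
# Sketch: helpers that fetch one property in a machine-readable way.
# The dataset/volume names in the usage comments are hypothetical; these
# helpers assume the zfs and lvm2 command-line tools are installed.

zfs_prop() {
    # -H: no header, -p: parseable (exact) numbers, -o value: value column only
    zfs get -H -p -o value "$1" "$2"
}

lvm_prop() {
    # --noheadings/--nosuffix give similarly script-friendly output from LVM
    lvs --noheadings --nosuffix --units b -o "$1" "$2" | tr -d ' '
}

# Usage (hypothetical names):
#   zfs_prop used mypool/mydataset
#   lvm_prop lv_size myvg/mylv
```

Because the output is a single bare value, callers can consume it directly without any ad-hoc parsing, which is exactly what the RBD and BTRFS tools make difficult.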

Read more
Robin Winslow

Canonical’s webteam manage over 18 websites as well as many supporting projects and frameworks. These projects are built with any combination of Python, Ruby, NodeJS, Go, PostgreSQL, MongoDB or OpenStack Swift.

We have 9 full-time developers – half the number of websites we have. Naturally, some of our projects get a lot of attention while others only get worked on once every few months. Most devs will touch most projects at some point, and some may work on a few of them on any given day.

Before any developer can start a new piece of work, they need to get the project running on their computer. These computers may be running any flavour of Linux or macOS (thankfully we don’t yet need to support Windows).

A focus on tooling

If you’ve ever tried to get up and running on a new software project, you’ll certainly appreciate how difficult that can be. Sometimes developers can spend days simply working out how to install a dependency.

XKCD 1742: Will it work?

Given the number and diversity of our projects, and how often we switch between them, this is a delay we simply cannot afford.

This is why we’ve invested a lot of time into refining and standardising our local development tooling, making it as easy as possible for any of our devs, or any outside contributors, to get up and running.

The standard interface

We needed a simple, standardised set of commands that could be run across all projects, to achieve predictable results. We didn’t want our developers to have to dig into the README or other documentation every time they wanted to get a new project running.

This is the standard interface we chose to implement in all our projects, to cover the basic functions common to almost all our projects:

./run        # An alias for "./run serve"
./run serve  # Prepare the project and run the local server
./run build  # Build the project, ready for distribution or release
./run watch  # Watch local files for changes and rebuild as necessary
./run test   # Check code syntax and run unit tests
./run clean  # Remove any temporary or built files or local databases
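A minimal sketch of what such an entry-point might look like in Bash follows. The function bodies here are placeholders that just print what they would do; the real scripts perform project-specific work:

```shell
#!/usr/bin/env bash
# Minimal sketch of a ./run entry-point script.
# Real projects differ; the function bodies below are placeholders.
set -euo pipefail

serve()        { echo "starting local server..."; }
build()        { echo "building project..."; }
watch()        { echo "watching for changes..."; }
test_project() { echo "linting and running unit tests..."; }
clean()        { echo "removing temporary files..."; }

case "${1:-serve}" in    # a bare "./run" defaults to "serve"
  serve) serve ;;
  build) build ;;
  watch) watch ;;
  test)  test_project ;;
  clean) clean ;;
  *) echo "Usage: ./run [serve|build|watch|test|clean]" >&2; exit 1 ;;
esac
```

A dispatcher like this keeps the interface identical across projects while leaving each project free to implement the subcommands however it needs.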

We decided on a single run executable as the entry-point into all our projects only after trying and eventually rejecting a number of alternatives:

  • A Makefile: The syntax can be confusing. Makefiles are really made for compiling system binaries, which doesn’t usually apply to our projects
  • gulp, or NPM scripts: Not all our projects need NodeJS, and NodeJS isn’t always available on a developer’s system
  • docker-compose: Although we do ultimately run everything through Docker (see below), the docker-compose entrypoint alone wasn’t powerful enough to achieve everything we needed

In contrast to all these options, the run script allows us to perform whatever actions we choose, using any interpreter that’s available on the local system. The script is currently written in Bash because it’s available on all Linux and macOS systems. As an additional bonus, ./run is quicker to type than the other options, saving our devs crucial nanoseconds.

The single dependency that developers need to install to use the script is Docker, for reasons outlined below.

Knowing we can run or build our projects through this standard interface is not only useful for humans, but also for supporting services – like our build jobs and automated tests. We can write general solutions, and know they’ll be able to work with any of our projects.

Using ./run is optional

All our website projects are openly available on GitHub. While we believe the ./run script offers a nice easy way of running our projects, we are mindful that people from outside our team may want to run the project without installing Docker, want to have more fine-grained control over how the project is run, or just not trust our script.

For this reason, we have tried to keep the addition of the ./run script from affecting the wider shape of our projects. It remains possible to run each of our projects using standard methods, without ever knowing or caring about the ./run script or Docker.

  • Django projects can still be run with pip install -r requirements.txt; ./manage.py runserver
  • Jekyll projects can still be run with bundle install; bundle exec jekyll serve
  • NodeJS projects can still be run with npm install; npm run serve

While the documentation in our READMEs recommends the ./run script, we also try to mention the alternatives.

Using Docker for encapsulation

Although we strive to keep our projects as simple as possible, every software project relies on dependent libraries and programs. These dependencies pose 2 problems for us:

  • We need to install and run these dependencies in a predictable way – which may be difficult in some operating systems
  • We must keep these dependencies from affecting the developer’s wider system – there’s nothing worse than having a project break your computer

For a while now, developers have been solving this problem by running applications within virtual machines running Linux (e.g. with VirtualBox and Vagrant), which is a great way of encapsulating software within a predictable environment.

Linux containers offer light-weight encapsulation

More recently, containers have entered the scene.


A container is a part of the existing system with carefully controlled permissions and an encapsulated filesystem, to make it appear and behave like a separate operating system. Containers are much lighter and quicker to run than a full virtual machine, and yet provide similar benefits.

The easiest and most direct way to run containers is probably LXD, but unfortunately there’s no easy way to run LXD on macOS. By contrast, Docker CE is trivial to install and use on macOS, and so this became our container manager of choice. When it becomes easier to run LXD on macOS, we’ll revisit this decision.

Each project uses a number of Docker images


Running containers through Docker helps us to carefully manage our projects’ dependencies, by:

  • Keeping all our software, from Python modules to databases, from affecting the wider system
  • Logically grouping our dependencies into separate light-weight containers: one for the database, and a separate one for each technology stack (Python, Ruby, Node etc.)
  • Easily cleaning up a project by simply deleting its associated containers

So the ./run script in each project starts the project by running the relevant commands inside the relevant Docker images.
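As an illustration only, a serve subcommand might compose a docker run invocation along these lines. The image name and port are hypothetical, and this sketch just prints the command it would run rather than executing it:

```shell
# Hypothetical image name; real projects pin their own images.
RUN_IMAGE="canonicalwebteam/example-python"

docker_run() {
    # Print (rather than execute) the docker command this sketch would run:
    # mount the project into the container, forward a port, run the command.
    echo docker run --rm \
        --volume "$PWD":/app --workdir /app \
        --publish 8001:8001 \
        "$RUN_IMAGE" "$@"
}

# "./run serve" might then call something like:
docker_run ./manage.py runserver 0.0.0.0:8001
```

Because the project directory is bind-mounted into the container, edits on the host are visible immediately, while the interpreter and libraries stay inside the image.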

Docker is the only dependency

By using Docker images in this way, the developer doesn’t need to install any of the project dependencies on their local system (NodeJS, Python, PostgreSQL etc.). Docker – which should be trivial to install on both Linux and macOS – is the single dependency they need to run any of our projects.

Keeping the ./run script up-to-date across projects

A key feature of our solution is that it provides a consistent interface across all of our projects. However, the script itself varies between projects, as different projects have different requirements. So we needed a way of sharing the relevant parts of the script while keeping the ability to customise it locally.

It is also important that we don’t add significant bloat to the project’s dependencies. This script is just meant to be a useful shorthand way of running the project, but we don’t want it to affect the shape of the project at large, or add too much extra complexity.

However, we still need a way of making improvements to the script in a centralised way and easily updating the script in existing projects.

A yeoman generator

To achieve these goals, we maintain a yeoman generator called canonical-webteam. This generator contains a few ways of adding the ./run architecture, for some common types of projects we use:

$ yo canonical-webteam:run            # Add ./run for a basic node-only project
$ yo canonical-webteam:run-django     # Add ./run for a databaseless Django project
$ yo canonical-webteam:run-django-db  # Add ./run for a Django project with a database
$ yo canonical-webteam:run-jekyll     # Add ./run for a Jekyll project

These generator scripts can be used either to add the ./run script to a project that doesn’t have it, or to replace an existing ./run script with the latest version. They will also optionally update .gitignore and package.json with some of our standard settings.

Try it out!

To see this ./run tooling in action, first install Docker by following the official instructions.

Run the website

You should now be able to run a version of the website on your computer:

  • Download the codebase, e.g.:

    curl -L >
  • Run the site!

    $ ./run
    # Wait a while (the first time) for it to download and install dependencies. Until:
    Starting development server at
    Quit the server with CONTROL-C.
  • Visit in your browser, and you should see the latest version of the website.

Forking or improving our work

We have documented this standard interface in our team practices repository, and we keep the central code in our canonical-webteam Yeoman generator.

Feel free to fork our code, or if you’d like to suggest improvements please submit an issue or pull-request against either repository.

Also published on Medium.

Read more
Matthew Paul Thomas


In January, I was presented with a design challenge. Many open-source software developers use GitHub. Let’s make it as easy as possible for them to build and release their code automatically, as a snap software package for Ubuntu and other Linux systems. The result is now available to the world:

My first task was to interview project stakeholders, getting an understanding of the data and technical constraints involved. That gave me enough information to draw a basic user flow diagram for the app.

This included a front page, a “Dashboard”, a settings page, repo and build pages, and steps for adding repos, adding YAML, and registering a name, which would require Ubuntu One sign-in.

Next, I worked with visual designer Jamie Young to produce a “competitor analysis” of software CI (continuous integration) services, such as Travis, AppVeyor, and CircleCI. These are not actually competitors — our app works alongside whichever CI service you use, and we use Travis ourselves. But we weren’t aware of any existing service doing auto-building and distribution together. And CI services were useful comparisons because they have many of the same user flows.

Our summary of good and not-so-good details in those services became the basis for a design workshop in February, where designers, engineers, and managers worked together on sketching the pages we’d need.

My design colleague Paty Davila distilled these sketches into initial wireframes. I then drew detailed wireframes that included marketing and instructional text. Whether wireframing and copywriting are done by the same person or different people, doing them in tandem can reveal ways to shorten or eliminate text by improving layout or visual elements. I also wrote a functional specification describing the presence, contents, and behavior of each element in the site.

A sketch of the front page, one of several produced during the workshop. My minimal wireframe, including text. Several iterations later, a mockup from Jamie Young. The front page as it is today.

The design patterns in Canonical’s Vanilla CSS framework, for basic components like headings and buttons, made it possible for engineers to lay out pages based directly on the wireframes and functional spec with little need for visual design. But in a few cases, visual designers produced mockups where we had good reason to diverge from existing patterns. And the front page in particular benefited from illustrations by graphics whiz Matthieu James.

The most challenging part of designing this service has been that it communicates with four external systems: not just GitHub, but also the Launchpad build service, the snap store, and the Ubuntu One sign-on service. This has required special emphasis on handling error cases — where any of the external sites behave unexpectedly or provide incomplete information — and communicating progress through the overall flow.

Since launching the site, we’ve added the ability to build organization repos, making the service more useful for teams of developers. Once a repo admin adds a repo, it will show up automatically for their colleagues as well.

I continue maintaining the specification, designing planned features and addressing new constraints. As well as improving design consistency, the spec helps smooth the on-ramp for new developers joining the project. Engineers are working on integrating the site better with the Vanilla framework. And at the same time, we’re planning a project that will use the site as the foundation of something much bigger. Good design never sleeps.

Meanwhile, if you have code on GitHub yourself, and this snap thing sounds intriguing, try it out.

Read more

Tons of movies (using a unit of weight rather than a count is for effect (?)) since last time, but I think only because so much time went by, since there are also quintillions of newly noted ones (using an actual number for the count this time is just to be internally inconsistent).

  • A Perfect Day: +1. Great story (great actors), and I liked how it shows a universe I don't know and that is far from my own life.
  • Air: -0. Some interesting details, but nothing new.
  • Amy: -1. I didn't like at all how the documentary was "put together"; it was like the video version of a tabloid. I couldn't finish it, even though some parts were interesting.
  • Anesthesia: -0. It has some VERY interesting parts, but it's generally boring well past the halfway point, never quite engaging with the situations, and then it ends... abruptly, leaving most things unresolved.
  • Black Mass: -0. The story has its interesting details, but the same thing happened to me as with other movies based on a true story: it lacks "substance", it's missing a movie's dynamics, I don't know... it neither starts nor ends; it's weak in that sense.
  • Chloe & Theo: +1. Beautiful movie, pure life lessons.
  • Crimson Peak: +0. A ghost movie, but well made... which is why I'd rather call it a movie WITH ghosts, not ABOUT them.
  • Deadpool: +0. Superhero satire; I had a lot of fun.
  • Doctor Strange: +1. Fun, interesting, good performances and effects. I liked the character himself too (I didn't know him before).
  • Experimenter: -0. The background information is good, but I didn't like the movie itself at all; I think I'd rather watch a documentary about that person and his work than something this boring.
  • Ghost in the Shell: -0. Not worth it. If you want to see the story done properly, watch Kôkaku Kidôtai (the original manga), and if you want to see Scarlett Johansson acting, watch Lost in Translation (and if you want to see her naked, watch Under the Skin).
  • Hotel Transylvania 2: +0. Everything you'd expect from a kids' movie.
  • Jane Got a Gun: -0. The movie isn't bad, but in the end it leaves you with nothing.
  • Jason Bourne: -0. It's fast-paced and gripping, but there's nothing new in the story. "More of the same" at its finest. Never again.
  • Momentum: -1. Boring in lots of parts (which, for an action movie, is a lot), but the last straw was that crude way of leaving the story "pending" for a next movie.
  • Now You See Me 2: +1. Very fun, although it lacks a bit of substance, like the original.
  • Point Break: -0. It has some nice lessons, mountains, and beautiful landscapes... but the rest is more motocross scenes than script :/
  • Regression: +0. Interesting for its subject matter and how it kept leading you along without quite understanding what was going on; the ending is a bit weak, but it gets away with it.
  • Rogue One: A Star Wars Story: +0. It's good, mostly for what it tells and the universe it's embedded in; beyond that, the movie has a lot of flaws. I suspect (fear?) that a thousand satellite movies to the main story will start coming out, with ever lower quality (as they're doing with the superhero ones)...
  • Space Station 76: -0. A mediocre movie set in a (very interesting and fun, yes) "70s sci-fi" aesthetic.
  • Spectre: +0. Fast-paced, relentless, and with good photography, but it doesn't escape being "just another James Bond movie".
  • Star Trek Beyond: +1. It still works, carrying on that spirit of the original series which, to my mind and taste, is what makes them worthwhile.
  • The Gunman: +0. The typical story of the guy who was bad, then good, then kills all the bad guys. But it's well executed, shows a side of multinationals in third-world countries, and is easy to watch.
  • The Man from U.N.C.L.E.: +0. A 60s/70s spy movie; the CIPOL agents, basically. Light entertainment, enjoyable, with memorable scenes. If you watched the old series I suppose you'll like it even more.
  • The Zero Theorem: -0. Surprisingly boring for something so bizarre.
  • X-Men: Apocalypse: +0. More of the same, but I liked the way they interweave all the stories in the "X-Men chronology" and explain how it all came together; it would be nice if at some point they did something like that with the Tolkien universe.
As I was saying, a whole lot of newly noted ones...

  • Blind (2017; Drama, Romance) Bestselling novelist, Bill Oakland loses his wife and his sight in a vicious car crash. Five years later Socialite Suzanne Dutchman is forced to read to Bill in an intimate room three times a week as a plea bargain for being associated with her husband's insider trading. A passionate affair ensues, forcing them both to question whether or not it's ever too late to find true love. But when Suzanne's husband is let out on a technicality, she is forced to choose between the man she loves and the man she built a life with. [D: Michael Mailer; A: Demi Moore, Alec Baldwin, Dylan McDermott]
  • Casi leyendas (2017; Comedy, Drama, Music) Three estranged friends reunite and reluctantly reform a rock band that in their youth was about to be famous, but for mysterious reasons, they never succeeded. [D: Gabriel Nesci; A: Florencia Bertotti, Claudia Fontán, Leandro Juarez]
  • Deep Burial (2017; Sci-Fi, Thriller) In the near future, when communications go offline at a remote nuclear power plant isolated in the desert, a young safety inspector, Abby Dixon, is forced to fly out to bring them back online. Once inside the facility, mysterious clues and strange behaviors cause Abby to have doubts about the sanity, and perhaps identities, of the two employees onsite. [D: Dagen Merrill; A: Tom Sizemore, Sarah Habel, Dominic Monaghan]
  • Julie & Julia (2009; Biography, Drama, Romance) Julia Child and Julie Powell - both of whom wrote memoirs - find their lives intertwined. Though separated by time and space, both women are at loose ends... until they discover that with the right combination of passion, fearlessness and butter, anything is possible. [D: Nora Ephron; A: Meryl Streep, Amy Adams, Stanley Tucci]
  • La Sangre del Gallo (2015; Thriller) Damian is a 26 year old that wakes up one morning all beaten up, bound, hooded and alone in an unfamiliar place. He doesn't know how he got there, or why. He doesn't even remember his name. A man arrives; He is clearly not the one who captured him, however he attends him. Damian now remembers an accident where his mother and brother died, he was driving. He remembers a discussion that reveals secrets from his past. The path that led him to hit rock bottom keeps going through his head. He starts a special relationship with his captor, who will form part of the puzzle that Damian must complete. [D: Mariano Dawidson; A: Santiago Pedrero, Eduardo Sapac, Emiliano Carrazzone]
  • La vache (2016; Adventure, Comedy, Drama) An Algerian man's life-long dream finally comes true when he receives an invitation to take his cow Jacqueline to the Paris International Agriculture Fair. [D: Mohamed Hamidi; A: Fatsah Bouyahmed, Lambert Wilson, Jamel Debbouze]
  • Mecánica Popular (2015; Comedy, Drama) After devoting his life to publish philosophy, history and psychoanalysis, the editor Mario Zavadikner, discontented with the social and intellectual reality, decides to shoot himself at the office of his publishing house. An unexpected presence stops his attempt: Silvia Beltran, aspiring writer who threatens to commit suicide if Zavadikner refuses to publish her novel. [D: Alejandro Agresti; A: Alejandro Awada, Patricio Contreras, Marina Glezer]
  • Murder on the Orient Express (2017; Crime, Drama, Mystery) A lavish train ride unfolds into a stylish & suspenseful mystery. From the novel by Agatha Christie, Murder on the Orient Express tells of thirteen stranded strangers & one man's race to solve the puzzle before the murderer strikes again. [D: Kenneth Branagh; A: Johnny Depp, Michelle Pfeiffer, Daisy Ridley]
  • Nieve negra (2017; Crime, Drama, Mystery, Thriller) Accused of killing his brother during adolescence, Salvador lives isolated in the middle of Patagonia. After several decades without seeing, his brother Marcos and his sister-in-law Laura, come to convince him to sell the lands that they share by inheritance. The crossing, in the middle of a lonely and inaccessible place, revives the duel where the roles of victim and murderer are transformed over and over again. [D: Martin Hodara; A: Laia Costa, Ricardo Darín, Dolores Fonzi]
  • Seven Sisters (2017; Sci-Fi, Thriller) In a not so distant future, where overpopulation and famine have forced governments to undertake a drastic One-Child Policy, seven identical sisters (all of them portrayed by Noomi Rapace) live a hide-and-seek existence pursued by the Child Allocation Bureau. The Bureau, directed by the fierce Nicolette Cayman (Glenn Close), enforces a strict family-planning agenda that the sisters outwit by taking turns assuming the identity of one person: Karen Settman. Taught by their grandfather (Willem Dafoe) who raised and named them - Monday, Tuesday, Wednesday, Thursday, Friday, Saturday and Sunday - each can go outside once a week as their common identity, but are only free to be themselves in the prison of their own apartment. That is until, one day, Monday does not come home. [D: Tommy Wirkola; A: Noomi Rapace, Willem Dafoe, Glenn Close]
  • The Assignment (2016; Action, Crime, Thriller) Following an ace assassin who is double crossed by gangsters and falls into the hands of rogue surgeon known as The Doctor who turns him into a woman. The hitman now a hitwoman sets out for revenge, aided by a nurse named Johnnie who also has secrets. [D: Walter Hill; A: Michelle Rodriguez, Sigourney Weaver, Anthony LaPaglia]
  • Unlocked (2017; Action, Thriller) A CIA interrogator is lured into a ruse that puts London at risk of a biological attack. [D: Michael Apted; A: Orlando Bloom, Noomi Rapace, Toni Collette]
  • Absolutely Anything (2015; Comedy, Sci-Fi) When some aliens, who travel from planet to planet to see what kind of species inhabit them, come to Earth. And if they are, according to their standards, decent, they are welcomed to be their friend. And if not the planet is destroyed. To find out they choose one inhabitant and give that person the power to do whatever he/she wants. And they choose Neil Clarke, a teacher who teaches the special kids. He is constantly being berated by the headmaster and is attracted to his neighbor, Catherine but doesn't have the guts to approach her. But now he can do anything he wants but has to be careful. [D: Terry Jones; A: Simon Pegg, Kate Beckinsale, Sanjeev Bhaskar]
  • Atomic Blonde (2017; Action, Mystery, Thriller) The crown jewel of Her Majesty's Secret Intelligence Service, Agent Lorraine Broughton (Theron) is equal parts spycraft, sensuality and savagery, willing to deploy any of her skills to stay alive on her impossible mission. Sent alone into Berlin to deliver a priceless dossier out of the destabilized city, she partners with embedded station chief David Percival (James McAvoy) to navigate her way through the deadliest game of spies. [D: David Leitch; A: Sofia Boutella, Charlize Theron, James McAvoy]
  • El faro de las orcas (2016; Drama, Romance) Beto is a lonely man who works as Ranger of the isolated Peninsula Valdes' National Park (Chubut, Argentina). Lover of the nature and animals, the peace of his days watching orcas, seals and sea lions in the sea ends after the arrival of Lola, a Spanish mother who travels there from Madrid with his autistic 11 years old son Tristán looking for Beto after both watch him in a documentary about whales. Desperate, Lola asks help Beto in order to make a therapy for Tristán, hoping that his isolation caused by the autism can be overcome. Reluctant at the beginning, Beto agrees to help Tristán, sailing by the cost in a boat to meet orcas (defying the rules that prevent touching them and swimming them), the only one that causes emotional responses in Tristán. As days go by, Tristán starts slowly to express emotions, in the same way that Beto's boss tries to fire him in the belief that orcas are a dangerous killers whales, Lola realizes about a familiar trouble in Spain and that they Lola and Beto learns about the feelings between them... [D: Gerardo Olivares; A: Maribel Verdú, Joaquín Furriel, Joaquín Rapalini]
  • La tortue rouge (2016; Animation, Fantasy) Surrounded by the immense and furious ocean, a shipwrecked mariner battles all alone for his life with the relentless towering waves. Right on the brink of his demise, the man set adrift by the raging tempest washes ashore on a small and deserted tropical island of sandy beaches, timid animal inhabitants and a slender but graceful swaying bamboo forest. Alone, famished, yet, determined to break free from his Eden-like prison, after foraging for food and fresh water and encouraged by the dense forest, the stranded sailor builds a raft and sets off to the wide sea, however, an indistinguishable adversary prevents him from escaping. Each day, the exhausted man never giving up hope will attempt to make a new, more improved raft, but the sea is vast with wonderful and mysterious creatures and the island's only red turtle won't let the weary survivor escape that easily. Is this the heartless enemy? [D: Michael Dudok de Wit; A: Emmanuel Garijo, Tom Hudson, Baptiste Goy]
  • Star Wars: The Last Jedi (2017; Action, Adventure, Fantasy, Sci-Fi) Having taken her first steps into a larger world in [D: Rian Johnson; A: Tom Hardy, Daisy Ridley, Billie Lourd]
  • The Autopsy of Jane Doe (2016; Horror, Mystery, Thriller) Cox and Hirsch play father and son coroners who receive a mysterious homicide victim with no apparent cause of death. As they attempt to identify the beautiful young "Jane Doe," they discover increasingly bizarre clues that hold the key to her terrifying secrets. [D: André Øvredal; A: Brian Cox, Emile Hirsch, Ophelia Lovibond]
  • The Circle (2017; Drama, Sci-Fi, Thriller) When Mae is hired to work for the world's largest and most powerful tech and social media company, she sees it as an opportunity of a lifetime. As she rises through the ranks, she is encouraged by the company's founder, Eamon Bailey, to engage in a groundbreaking experiment that pushes the boundaries of privacy, ethics and ultimately her personal freedom. Her participation in the experiment, and every decision she makes, begin to affect the lives and future of her friends, family and that of humanity. [D: James Ponsoldt; A: Emma Watson, Ellar Coltrane, Glenne Headly]
  • The Dark Tower (2017; Action, Adventure, Fantasy, Horror, Sci-Fi, Western) The Gunslinger, Roland Deschain, roams an Old West-like landscape where "the world has moved on" in pursuit of the man in black. Also searching for the fabled Dark Tower, in the hopes that reaching it will preserve his dying world. [D: Nikolaj Arcel; A: Katheryn Winnick, Matthew McConaughey, Idris Elba]
  • The Little Hours (2017; Comedy, Romance) A young servant fleeing from his master takes refuge at a convent full of emotionally unstable nuns in the Middle Ages. Introduced as a deaf blind man, he must fight to hold his cover as the nuns try to resist temptation. [D: Jeff Baena; A: Alison Brie, Dave Franco, Kate Micucci]
  • The Recall (2017; Horror, Sci-Fi, Thriller) When five friends vacation at a remote lake house they expect nothing less than a good time, unaware that planet Earth is under an alien invasion and mass-abduction. [D: Mauro Borrelli; A: Wesley Snipes, RJ Mitte, Jedidiah Goodacre]
  • Thor: Ragnarök (2017; Action, Adventure, Fantasy, Sci-Fi) Thor is imprisoned on the other side of the universe and finds himself in a race against time to get back to Asgard to stop Ragnarok, the destruction of his homeworld and the end of Asgardian civilization, at the hands of an all-powerful new threat, the ruthless Hela. [D: Taika Waititi; A: Benedict Cumberbatch, Idris Elba, Tom Hiddleston]

Finally, the count of pending ones by date:

(Aug-2011)    4
(Jan-2012)   11   3
(Jul-2012)   14  11
(Nov-2012)   11  11   6
(Feb-2013)   14  14   8   2
(Jun-2013)   15  15  15  11   2
(Sep-2013)   18  18  17  16   8
(Dec-2013)   14  12  12  12  12   4
(Apr-2014)    9   9   8   8   8   3
(Jul-2014)       10  10  10  10  10   5   1
(Nov-2014)           24  22  22  22  22   7
(Feb-2015)               13  13  13  13  10
(Jun-2015)                   16  16  15  13  11   1
(Dec-2015)                       21  19  19  18   6
(May-2016)                           26  25  23  21
(Sep-2016)                               19  19  18
(Feb-2017)                                   26  25
(Jun-2017)                                       23
Total:      110 103 100  94  91  89 100  94  97  94

Read more
Alan Griffiths

Mir: the new order

The Past

The Mir project has always been about how best to develop a shell for the modern desktop. It was about addressing concerns like a security model for desktop environments; convergence (which has implications for app lifecycles); and making efficient use of modern hardware. It has never been only about Unity8; that was just the first of (hopefully) many shells written using Mir. To that end, the Mir developers have tried to ensure that the code wasn’t too tightly coupled to Unity8 (e.g. by developing demo servers with alternative behaviors).

There have been many reasons why no other shells have used Mir, but to tackle some of them I started a “hobby project” (MirAL) last year. MirAL aimed to make it easier to build shells other than Unity8 with Mir, and one of the examples I produced, miral-kiosk, proved important to Canonical’s support for graphics for the “Internet of Things”. Even on the Internet of Things, Mir is more than just a way of getting pixels onscreen; it also fits the security model needed. That secures a future for Mir at Canonical.

The Present

In Canonical the principal target for Mir is now Ubuntu Core, which is currently based on the 16.04 LTS series. We’ve recently released Mir 0.26.3 to this series and will upgrade it to Mir 1.0 when that comes out.

Outside Canonical there are other projects that are making use of Mir.

UBports is taking forward the work begun by Canonical to support phones and tablets. Their current stack is based on an old release of Mir (0.17) but they are migrating to Ubuntu 16.04LTS and will get the latest release of Mir with that.

Yunit is taking forward the work begun by Canonical on “convergence”. They are still in an exploratory phase, but the debs they’ve created for Debian Unstable use the current Mir 0.26 release.

As reported elsewhere, there have been discussions with other projects that are interested in using Mir. It remains to be seen whether, and in what way, those discussions develop.

The Future

For all of these projects the Mir team must be more responsive than it has been to the needs of those outside Canonical. I think, with the work done on MirAL, much of the technical side of that has been addressed. But we still have to prove ourselves in other ways.

There’s a new (0.27) release of Mir undergoing testing for release to the Ubuntu 17.10 series. This delivers a lot of the work that was “in progress” before Canonical’s focus shifted from Unity8 to miral-kiosk and marks the point of departure from our previous plans for a Mir 1.0. Mir 0.27 will not be released to other series, as we expect to ship Mir 1.0 in 17.10.

The other thing 0.27 offers is based on the efforts we’ve seen in UBports (and a PR from a former Mir developer who has joined that project): we’ve published the APIs needed to develop a “Mir platform” out of the Mir source tree. That means, for example, that developing a mir-on-wayland platform doesn’t require forking the whole of Mir. One specific platform that is now out-of-tree is the “mir-on-android” platform – from my experience with MirAL I know that having a real out-of-tree project helps prove things really work.

In addition, while we won’t be releasing Mir 0.27 to 17.04, I’ve included testing there, along with the Unity8 desktop to ensure that all the features required by a “real shell” remain intact.

Beyond Mir 0.27 the plan towards 1.0 diverges from past discussions. We are no longer planning to remove deprecated functions from the libmirclient ABI, instead we are working towards supporting Wayland clients directly.

Read more
Leo Arias

Travis CI offers a great continuous integration service for the projects hosted on GitHub. With it, you can run tests, deliver artifacts and deploy applications every time you push a commit, on pull requests, after they are merged, or with some other frequency.

Last week Travis CI updated the Ubuntu 14.04 (Trusty) machines that run your tests and deployment steps. This update came with a nice surprise for everybody working to deliver software to Linux users, because it is now possible to install snaps in Travis!

I've been excited all week telling people about all the doors that this opens; but if you have been following my adventures in the Ubuntu world, by now you can probably guess that I'm mostly thinking about all the potential this has for automated testing: the automation of user acceptance tests.

User acceptance tests are executed from the point of view of the user, with your software presented to them as a black box. The tests can only interact with the software through the entry points you define for your users. If it's a CLI application, then the tests will call commands and subcommands and check the outputs. If it's a website or a desktop application, the tests will click things, enter text and check the changes on this GUI. If it's a service with an HTTP API, the tests will make requests and check the responses. In these tests, the closer you can get to simulating the environment and behaviour of your real users, the better.

Snaps are great for the automation of user acceptance tests because they are immutable and they bundle all their dependencies. With this we can make sure that your snap will work the same on any of the operating systems and architectures that support snaps. The snapd service takes care of hiding the differences and presenting a consistent execution environment for the snap. So, getting a green execution of these tests in the Trusty machine of Travis is a pretty good indication that it will work on all the active releases of Ubuntu, Debian, Fedora and even on a Raspberry Pi.

Let me show you an example of what I'm talking about, obviously using my favourite snap called IPFS. There is more information about IPFS in my previous post.

Check below the packaging metadata for the IPFS snap, a single snapcraft.yaml file:

name: ipfs
version: master
summary: global, versioned, peer-to-peer filesystem
description: |
  IPFS combines good ideas from Git, BitTorrent, Kademlia, SFS, and the Web.
  It is like a single bittorrent swarm, exchanging git objects. IPFS provides
  an interface as simple as the HTTP web, but with permanence built in. You
  can also mount the world at /ipfs.
confinement: strict

apps:
  ipfs:
    command: ipfs
    plugs: [home, network, network-bind]

parts:
  ipfs:
    source: https://github.com/ipfs/go-ipfs.git
    plugin: nil
    build-packages: [make, wget]
    prepare: |
      mkdir -p ../go/src/
      cp -R . ../go/src/
    build: |
      env GOPATH=$(pwd)/../go make -C ../go/src/ install
    install: |
      mv ../go/bin/ipfs $SNAPCRAFT_PART_INSTALL/bin/
    after: [go]
  go:
    source-tag: go1.7.5

It's not the simplest snap, because they use their own build tool to fetch the Go dependencies and compile; but it's not too complex either. If you are new to snaps and want to understand every detail of this file, or you want to package your own project, the tutorial to create your first snap is a good place to start.

What's important here is that if you run snapcraft using the snapcraft.yaml file above, you will get the IPFS snap. If you install that snap, then you can test it from the point of view of the user. And if the tests work well, you can push it to the edge channel of the Ubuntu store to start the crowdtesting with your community.

We can automate all of this with Travis. The snapcraft.yaml for the project must already be in the GitHub repository, and we will add a .travis.yml file there. Travis has good docs on preparing your account. First, let's see what's required to build the snap:

sudo: required
services: [docker]

script:
  - docker run -v $(pwd):$(pwd) -w $(pwd) snapcore/snapcraft sh -c "apt update && snapcraft"

For now, we build the snap in a docker container to keep things simple. We have work in progress to make snapcraft installable as a snap in Trusty, so soon this will be even nicer, with everything running directly in the Travis machine.

This previous step will leave the packaged .snap file in the current directory, so we can install it by adding a few more steps to the Travis script:


script:
  - docker [...]
  - sudo apt install --yes snapd
  - sudo snap install *.snap --dangerous

And once the snap is installed, we can run it and check that it works as expected. Those checks are our automated user acceptance tests. IPFS has a CLI client, so we can just run commands and verify outputs with grep. Or we can get fancier using shunit2 or bats. But the basic idea is to add something like this to the Travis script:


script:
  - /snap/bin/ipfs init
  - /snap/bin/ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme | grep -z "^Hello and Welcome to IPFS!.*$"
  - [...]
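Outside of Travis, the same grep-based checks can live in a small standalone script that any CI system can run. A sketch of the idea (the `check` helper and the stand-in `echo` commands are illustrative only; in a real suite the commands would be the `/snap/bin/ipfs` invocations shown above):

```shell
#!/bin/sh
# Fail fast: the first check that doesn't match aborts with a non-zero
# exit status, which is what CI systems key off.
set -e

# check DESCRIPTION PATTERN COMMAND...
# Runs COMMAND and greps its output for PATTERN.
check() {
    desc="$1"; shift
    pattern="$1"; shift
    "$@" | grep -q "$pattern"
    echo "ok: $desc"
}

# Stand-in commands keep this sketch self-contained; swap in real
# /snap/bin/ipfs calls once the snap is installed.
check "welcome text is served" "Hello and Welcome to IPFS!" \
    echo "Hello and Welcome to IPFS!"
check "version looks sane" "^v[0-9]" echo "v0.4.8"

echo "all checks passed"
```

If any check fails, grep exits non-zero and `set -e` aborts the script, so the CI run is marked as failed.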

If one of those checks fails, Travis will mark the execution as failed and stop our release process until we fix it. If instead all of the checks pass, then this version is good enough to put into the store, where people can take it and run exploratory tests to try to find problems caused by weird scenarios that we missed in the automation. To help with that we have the snapcraft enable-ci travis command, and a tutorial to guide you step by step setting up the continuous delivery from Travis CI.

For the IPFS snap we have long had a manual smoke suite that our amazing community of testers has been executing over and over again, every time we want to publish a new release. I've turned it into a simple bash script that from now on will be executed frequently by Travis, and will tell us if there's something wrong before anybody tries it manually. With this, our community of testers will have more time to run new and interesting scenarios, trying to break the application in clever ways, instead of repeating the same steps many times.

Thanks to Travis and snapcraft we no longer have to worry about a big part of our release process. Continuous integration and delivery can be fully automated, and we will have to take a look only when something breaks.

As for IPFS, it will keep being my guinea pig to guide new features for snapcraft and showcase them when ready. It has many more commands that have to be added to the automated test suite, and it also has a web UI and an HTTP API. Lots of things to play with! If you would like to help, and on the way learn about snaps, automation and the decentralized web, please let me know. You can take a look at my IPFS snap repo for more details about testing snaps in Travis, and other tricks for the build and deployment.

Screenshot of the IPFS smoke test running in Travis

Read more


  • Transition to Git in Launchpad
    The MAAS team is happy to announce that we have moved our code repositories away from Bazaar. We are now using Git in Launchpad.[1]

MAAS 2.3 (current development release)

This week, the team has worked on the following features and improvements:

  • Codebase transition from bzr to git – This week the team has focused efforts on updating all processes to the upcoming transition to Git. The progress involved:
    • Updated Jenkins job configuration to run CI tests from Git instead of bzr.
    • Created new Jenkins jobs to test older releases via Git instead of bzr.
    • Updated the Jenkins job triggering mechanism from using Tarmac to using the Jenkins Git plugin.
    • Replaced the maas code lander (based on tarmac) with a Jenkins job to automatically land approved branches.
      • This also includes a mechanism to automatically set milestones and close Launchpad bugs.
    • Updated Snap building recipe to build from Git. 
  • Removal of ‘tgt’ as a dependency behind a feature flag – This week we have landed the ability to load ephemeral images via HTTP from the initrd, instead of via iSCSI (served by ‘tgt’). While the use of ‘tgt’ is still the default, the ability to avoid it is hidden behind a feature flag (http_boot). This is only available in trunk. 
  • Django 1.11 transition – We are down to the latest items of the transition, and we are targeting it to be completed by the upcoming week. 
  • Network Beaconing & better network discovery – The team is continuing to make progress on beacons. Following a thorough review, the beaconing packet format has been optimized; beacon packets are now simpler and more compact. We are targeting rack registration improvements for next week, so that newly-registered rack controllers do not create new fabrics if an interface can be determined to be on an existing fabric.

Bug Fixes

The following issues have been fixed and backported to MAAS 2.2 branch. This will be available in the next point release of MAAS 2.2 (2.2.1). The MAAS team is currently targeting a new 2.2.1 release for the upcoming week.

  • LP #1687305 – Fix virsh pods reporting wrong storage
  • LP #1699479 – A couple of unstable tests failing when using IPv6 in LXC containers


Read more
Colin Ian King

New features in forkstat V0.02.00

The forkstat mascot
Forkstat is a tiny utility I wrote a while ago to monitor process activity via the process events connector. Recently I was sent a patch from Philipp Gesang adding a new -l option to switch to line-buffered output, reducing the delay on output when redirecting stdout, which is a useful addition to the tool.  During some spare time I looked at the original code and noticed that I had overlooked some of the lesser used process event types:
  • STAT_PTRC - ptrace attach/detach events
  • STAT_UID - UID (and GID) change events
  • STAT_SID - SID change events
    I've now added support for these events too.

    I've also added some extra per-process information on each event. The new -x "extra info" option will now also display the UID of the process and, where possible, the TTY it is associated with.  This allows one to easily detect who is responsible for generating the process events.

    The following example shows forkstat being used to detect when a process is being traced using ptrace:

     sudo ./forkstat -x -e ptrce  
    Time Event PID UID TTY Info Duration Process
    11:42:31 ptrce 17376 0 pts/15 attach strace -p 17350
    11:42:31 ptrce 17350 1000 pts/13 attach top
    11:42:37 ptrce 17350 1000 pts/13 detach

    Process 17376 runs strace on process 17350 (top). We can see the ptrace attach event on the process and then, a few seconds later, the detach event.  We can see that strace was being run from pts/15 by root.  Using forkstat we can now snoop on users who are snooping on other users' processes.

    I use forkstat mainly to capture busy process fork/exec/exit activity that tools such as ps and top cannot see because of the very short duration of some processes or threads. Sometimes processes are created so rapidly that one needs to run forkstat with a high priority to capture all the events, so the new -r option will run forkstat with a high real-time scheduling priority to try to capture all the events.

    These new features landed in forkstat V0.02.00 for Ubuntu 17.10 Artful Aardvark.

    Read more
    Alan Griffiths


    In order to trace a problem[1] in the Mir stack I needed to build mesa to facilitate debugging. As the options needed were not entirely obvious I’m blogging the recipe here so I can find it again next time.

    $ apt source libgl1-mesa-dri
    $ cd mesa-17.0.3
    $ QUILT_PATCHES=debian/patches/ quilt push -a
    $ sudo mk-build-deps -i
    $ ./configure --with-gallium-drivers= --with-egl-platforms=mir,drm,rs
    $ make -j6
    $ sudo make install && sudo ldconfig

    [1] The stack breaking with EGL clients when the Mir server is running on a second host Mir server that is using the mesa-kms plugin and mesa is using the intel i965 driver. (LP: #1699484)

    Read more

    The cloud at home

    I finally have a project running that started a long time ago and came together bit by bit, rather slowly. Even so, it is still not 100% finished, but there is not much left either.

    Have you ever read the phrase "there is no cloud, it's just someone else's computer"? Well, this project was based on buying a small computer and putting it at home, :)

    The cloud

    What for? Basically, to run two tasks...

    One is Magicicada (the server side), the file synchronisation service reborn from the ashes of Ubuntu One. So I have both my desktop computer and my laptop with a couple of directories synchronised between them, whether I am at home or away, which is very useful. It also serves as a backup for a great many files (even ones I do not sync to the laptop, like photos and videos).

    The other job I set running on my "personal cloud" is cdpetron, the automatic CDPedia generator. It is a process that takes many days to finish and also makes fairly intensive use of the disk, so having it running on my desktop is quite annoying.

    What hardware did I get to run all this? If you are imagining a datacenter, nothing could be further from the truth. A mini computer: the Gigabyte Brix GB-BXBT-1900.


    As you can see in the specifications it is fairly modest: a Celeron, room for one memory DIMM and one 2.5" disk (neither included), and a few ports, such as Ethernet (which I obviously use all the time), HDMI and USB (which I used only during installation) and a couple more I did not use at all.

    I fitted this little machine with 8 GB of RAM (which holds up fine even with everything running at once) and a classic spinning-platter hard disk of 750 GB, which should give me room to work for a good while.

    Why at home and not on a remote server or something more "cloudy"? Basically, the cost.

    Renting a VPS is relatively cheap, with a decent disk, one or two cores and good memory. That is how I run my blog, the linkode server, and a few other things. But once you start growing in disk space, it becomes very expensive. At one point I was renting a VPS with enough disk to build the CDPedia there, but it cost me a lot of money, and when that disk also became too small, I cancelled it. And on top of that, add the synchronised and backup files I keep in Magicicada, which come to more than 200 GB.

    The maths is easy: all the hardware I bought (the little computer, disk, memory, a couple of cables) cost me less than paying for one year of the "piece of cloud" I would need...

    Does keeping this at home have any downsides? It takes up some space and consumes electricity, but it is small, and since it has no fans it makes no noise.

    But there is one factor that clearly is a disadvantage: it does not give me an "offsite backup". That is, if something happens that affects all the computers in my house (a fire, lightning, a burglary, whatever), this backup is affected too. To mitigate this problem I am thinking of freezing periodic backups.

    Read more
    Colin Ian King

    The stress-ng logo
    The latest release of stress-ng contains a mechanism to measure latencies via a cyclic latency test.  Essentially this is just a loop that performs high precision sleeps and measures the extra latency incurred by each sleep compared to the expected time.  This loop runs with either the Round-Robin (rr) or First-In-First-Out (fifo) real time scheduling policy.

    The cyclic test can be configured to specify the sleep time (in nanoseconds), the scheduling type (rr or fifo),  the scheduling priority (1 to 100) and also the sleep method (explained later).

    The first 10,000 latency measurements are used to compute various latency statistics:
    • mean latency (aka the 'average')
    • modal latency (the most 'popular' latency)
    • minimum latency
    • maximum latency
    • standard deviation
    • latency percentiles (25%, 50%, 75%, 90%, 95.40%, 99.0%, 99.5%, 99.9% and 99.99%)
    • latency distribution (enabled with the --cyclic-dist option)
    The latency percentiles indicate the latency at or below which a given percentage of the samples fall.  For example, the 99% percentile for the 10,000 samples is the latency at or below which 9,900 of the samples lie.
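    That rank-based definition is easy to reproduce over a file of raw samples with standard tools. A sketch (the ten sample values are made up; for N sorted samples, the P-th percentile is the value at rank ceil(N*P/100)):

```shell
#!/bin/sh
# Ten fake latency samples (ns), standing in for the 10,000 real measurements.
printf '%s\n' 3050 4880 4881 5191 5261 5368 6857 8942 9821 44818 > samples.txt

# percentile P FILE -- the value at or below which P% of the samples fall.
percentile() {
    p="$1"; file="$2"
    n=$(wc -l < "$file")
    rank=$(( (n * p + 99) / 100 ))   # ceil(n * p / 100)
    sort -n "$file" | sed -n "${rank}p"
}

percentile 50 samples.txt    # 5th of the 10 sorted samples: 5261
percentile 99 samples.txt    # effectively the maximum here: 44818
```

    With only ten samples the percentiles are coarse; over 10,000 samples the same rule gives the figures reported by stress-ng.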

    The latency distribution is shown when the --cyclic-dist option is used; one has to specify the distribution interval in nanoseconds and up to the first 100 values in the distribution are output.

    For an idle machine, one can invoke just the cyclic measurements with stress-ng as follows:

     sudo stress-ng --cyclic 1 --cyclic-policy fifo \
    --cyclic-prio 100 --cyclic-method clock_ns \
    --cyclic-sleep 20000 --cyclic-dist 1000 -t 5
    stress-ng: info: [27594] dispatching hogs: 1 cyclic
    stress-ng: info: [27595] stress-ng-cyclic: sched SCHED_FIFO: 20000 ns delay, 10000 samples
    stress-ng: info: [27595] stress-ng-cyclic: mean: 5242.86 ns, mode: 4880 ns
    stress-ng: info: [27595] stress-ng-cyclic: min: 3050 ns, max: 44818 ns, std.dev. 1142.92
    stress-ng: info: [27595] stress-ng-cyclic: latency percentiles:
    stress-ng: info: [27595] stress-ng-cyclic: 25.00%: 4881 ns
    stress-ng: info: [27595] stress-ng-cyclic: 50.00%: 5191 ns
    stress-ng: info: [27595] stress-ng-cyclic: 75.00%: 5261 ns
    stress-ng: info: [27595] stress-ng-cyclic: 90.00%: 5368 ns
    stress-ng: info: [27595] stress-ng-cyclic: 95.40%: 6857 ns
    stress-ng: info: [27595] stress-ng-cyclic: 99.00%: 8942 ns
    stress-ng: info: [27595] stress-ng-cyclic: 99.50%: 9821 ns
    stress-ng: info: [27595] stress-ng-cyclic: 99.90%: 22210 ns
    stress-ng: info: [27595] stress-ng-cyclic: 99.99%: 36074 ns
    stress-ng: info: [27595] stress-ng-cyclic: latency distribution (1000 ns intervals):
    stress-ng: info: [27595] stress-ng-cyclic: latency (ns) frequency
    stress-ng: info: [27595] stress-ng-cyclic: 0 0
    stress-ng: info: [27595] stress-ng-cyclic: 1000 0
    stress-ng: info: [27595] stress-ng-cyclic: 2000 0
    stress-ng: info: [27595] stress-ng-cyclic: 3000 82
    stress-ng: info: [27595] stress-ng-cyclic: 4000 3342
    stress-ng: info: [27595] stress-ng-cyclic: 5000 5974
    stress-ng: info: [27595] stress-ng-cyclic: 6000 197
    stress-ng: info: [27595] stress-ng-cyclic: 7000 209
    stress-ng: info: [27595] stress-ng-cyclic: 8000 100
    stress-ng: info: [27595] stress-ng-cyclic: 9000 50
    stress-ng: info: [27595] stress-ng-cyclic: 10000 10
    stress-ng: info: [27595] stress-ng-cyclic: 11000 9
    stress-ng: info: [27595] stress-ng-cyclic: 12000 2
    stress-ng: info: [27595] stress-ng-cyclic: 13000 2
    stress-ng: info: [27595] stress-ng-cyclic: 14000 1
    stress-ng: info: [27595] stress-ng-cyclic: 15000 9
    stress-ng: info: [27595] stress-ng-cyclic: 16000 1
    stress-ng: info: [27595] stress-ng-cyclic: 17000 1
    stress-ng: info: [27595] stress-ng-cyclic: 18000 0
    stress-ng: info: [27595] stress-ng-cyclic: 19000 0
    stress-ng: info: [27595] stress-ng-cyclic: 20000 0
    stress-ng: info: [27595] stress-ng-cyclic: 21000 1
    stress-ng: info: [27595] stress-ng-cyclic: 22000 1
    stress-ng: info: [27595] stress-ng-cyclic: 23000 0
    stress-ng: info: [27595] stress-ng-cyclic: 24000 1
    stress-ng: info: [27595] stress-ng-cyclic: 25000 2
    stress-ng: info: [27595] stress-ng-cyclic: 26000 0
    stress-ng: info: [27595] stress-ng-cyclic: 27000 1
    stress-ng: info: [27595] stress-ng-cyclic: 28000 1
    stress-ng: info: [27595] stress-ng-cyclic: 29000 2
    stress-ng: info: [27595] stress-ng-cyclic: 30000 0
    stress-ng: info: [27595] stress-ng-cyclic: 31000 0
    stress-ng: info: [27595] stress-ng-cyclic: 32000 0
    stress-ng: info: [27595] stress-ng-cyclic: 33000 0
    stress-ng: info: [27595] stress-ng-cyclic: 34000 0
    stress-ng: info: [27595] stress-ng-cyclic: 35000 0
    stress-ng: info: [27595] stress-ng-cyclic: 36000 1
    stress-ng: info: [27595] stress-ng-cyclic: 37000 0
    stress-ng: info: [27595] stress-ng-cyclic: 38000 0
    stress-ng: info: [27595] stress-ng-cyclic: 39000 0
    stress-ng: info: [27595] stress-ng-cyclic: 40000 0
    stress-ng: info: [27595] stress-ng-cyclic: 41000 0
    stress-ng: info: [27595] stress-ng-cyclic: 42000 0
    stress-ng: info: [27595] stress-ng-cyclic: 43000 0
    stress-ng: info: [27595] stress-ng-cyclic: 44000 1
    stress-ng: info: [27594] successful run completed in 5.00s

    Note that stress-ng needs to be invoked using sudo to enable the Real Time FIFO scheduling for the cyclic measurements.

    The above example uses the following options:

    • --cyclic 1
      • starts one instance of the cyclic measurements (1 is always recommended)
    • --cyclic-policy fifo 
      • use the real time First-In-First-Out scheduling for the cyclic measurements
    • --cyclic-prio 100 
      • use the maximum scheduling priority  
    • --cyclic-method clock_ns
      • use the clock_nanosleep(2) system call to perform the high precision duration sleep
    • --cyclic-sleep 20000 
      • sleep for 20000 nanoseconds per cyclic iteration
    • --cyclic-dist 1000 
      • enable latency distribution statistics with an interval of 1000 nanoseconds between each data point.
    • -t 5
      • run for just 5 seconds
    From the run above, we can see that 99.5% of latencies were less than 9821 nanoseconds and most clustered around the 4880 nanosecond modal point. The distribution data shows some clustering around the 5000 nanosecond point, with the samples tailing off in a fairly long tail.

    Now for the interesting part. Since stress-ng is packed with many different stressors, we can run these while performing the cyclic measurements. For example, we can tell stress-ng to run *all* the virtual memory related stress tests and see how this affects the latency distribution using the following:

     sudo stress-ng --cyclic 1 --cyclic-policy fifo \  
    --cyclic-prio 100 --cyclic-method clock_ns \
    --cyclic-sleep 20000 --cyclic-dist 1000 \
    --class vm --all 1 -t 60s

    ..the above invokes all the vm class of stressors to run all at the same time (with just one instance of each stressor) for 60 seconds.

    The --cyclic-method option specifies the delay mechanism used on each of the 10,000 cyclic iterations.  The default (and recommended) method is clock_ns, using a high precision delay.  The available cyclic delay methods are:
    • clock_ns (use the clock_nanosleep(2) high precision sleep)
    • posix_ns (use the POSIX nanosleep(2) sleep)
    • itimer (use a high precision clock timer and pause to wait for a signal to measure latency)
    • poll (busy spin-wait on clock_gettime() to eat cycles for a delay)
    All the delay mechanisms use the CLOCK_REALTIME system clock for timing.
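    The principle of the cyclic test can be illustrated in shell, though only crudely: request a fixed sleep, time how long it actually took, and call the difference the latency. This is just a sketch; a shell loop adds milliseconds of overhead of its own, which is why stress-ng does this in C with clock_nanosleep(2):

```shell
#!/bin/sh
# Request a 20 ms sleep and measure the wall-clock time actually taken.
requested_ms=20
for i in 1 2 3 4 5; do
    start=$(date +%s%N)          # GNU date: nanoseconds since the epoch
    sleep 0.020
    end=$(date +%s%N)
    actual_ms=$(( (end - start) / 1000000 ))
    echo "iteration $i: requested ${requested_ms} ms," \
         "actual ${actual_ms} ms, latency $(( actual_ms - requested_ms )) ms"
done
```

    Run under load, the "latency" column grows; that growth is exactly what the cyclic stressor quantifies with nanosecond resolution.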

    I hope this provides plenty of cyclic measurement functionality for generating useful latency benchmarks against various kernel components when running some or a mix of the stress-ng stressors.  Let me know if I am missing any other cyclic measurement options and I will see if I can add them.

    Keep stressing and measuring those systems!

    Read more
    Dustin Kirkland

    Thank you to Oracle Cloud for inviting me to speak at this month's CloudAustin Meetup hosted by Rackspace.

    I very much enjoyed deploying Canonical Kubernetes on Ubuntu in the Oracle Cloud, and then exploring Kubernetes a bit, how it works, the architecture, and a simple workload within.  I'm happy to share my slides below, and you can download a PDF here:

    If you're interested in learning more, check out:
    It was a great audience, with plenty of good questions, pizza, and networking!



    Read more

    The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS and bugs fixes in release MAAS versions.

    MAAS Sprint

    The Canonical MAAS team sprinted at Canonical’s London offices this week. The purpose was to review the previous development cycle & release (MAAS 2.2), as well as discuss and finalize the plans and goals for the next development release cycle (MAAS 2.3).

    MAAS 2.3 (current development release)

    The team has been working on the following features and improvements:

    • New Feature – support for ‘upstream’ proxy (API only) – Support for upstream proxies has landed in trunk. This iteration contains API-only support. The team continues to work on the matching UI support for this feature.
    • Codebase transition from bzr to git – This week the team has focused efforts on updating all processes to the upcoming transition to Git. The progress so far is:
      • Prepared the MAAS CI infrastructure to fully support Git once the transition is complete.
      • Started working on creating new processes for PR’s auto-testing and landing.
    • Django 1.11 transition – The team continues to work through the Django 1.11 transition; we’re down to 130 unittest failures!
    • Network Beaconing & better network discovery – Prototype beacons have now been sent and received! The next steps will be to work on the full protocol implementation, followed by making use of beaconing to enhance rack registration. This will provide a better out-of-the-box experience for MAAS; interfaces which share network connectivity will no longer be assumed to be on separate fabrics.
    • Started the removal of ‘tgt’ as a dependency – We have started the removal of ‘tgt’ as a dependency. This simplifies the boot process by not loading ephemeral images from tgt; instead, the initrd downloads and loads the ephemeral environment.
    • UI Improvements
      • Performance Improvements – Improved the loading of elements in the Device Discovery, Node listing and Events page, which greatly improve UI performance.
      • LP #1695312 – The button to edit dynamic range says ‘Edit’ while it should say ‘Edit reserved range’
      • Remove auto-save on blur for the Fabric details summary row. Applied static content when not in edit mode.

    Bug Fixes

    The following issues have been fixed and backported to MAAS 2.2 branch. This will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

    • LP: #1678339 – allow physical (and bond) interfaces to be placed on VLANs with a known 802.1q tag.
    • LP: #1652298 – Improve loading of elements in the device discovery page

    Read more

    Note: Community TFTP documentation is on the Ubuntu Wiki but this short guide adds extra steps to help secure and safeguard your TFTP server.

    Every Data Centre Engineer should have a TFTP server somewhere on their network, whether it be running on a production host or on their own notebook for disaster recovery. And since TFTP is lightweight and has no user authentication, care should be taken to prevent access to, or overwriting of, critical files.

    The following example is similar to the configuration I run on my personal Ubuntu notebook and home Ubuntu servers. This allows me to do switch firmware upgrades and backup configuration files regardless of environment since my notebook is always with me.

    Step 1: Install TFTP and TFTP server

    $ sudo apt update; sudo apt install tftp-hpa tftpd-hpa

    Step 2: Configure TFTP server

    The default configuration below allows switches and other devices to download files but, if you have predictable filenames, then anyone can download those files if you configure TFTP Server on your notebook. This can lead to dissemination of copyrighted firmware images or config files that may contain passwords and other sensitive information.

    # /etc/default/tftpd-hpa
    TFTP_USERNAME="tftp"
    TFTP_DIRECTORY="/var/lib/tftpboot"
    TFTP_ADDRESS=":69"
    TFTP_OPTIONS="--secure"

    Instead of keeping any files directly in the /var/lib/tftpboot base directory I’ll use mktemp to create incoming and outgoing directories with hard-to-guess names. This prevents guessing common filenames.

    First create an outgoing directory owned by root mode 755. Files in this directory should be owned by root to prevent unauthorized or accidental overwriting. You wouldn’t want your expensive Cisco IOS firmware image accidentally or maliciously overwritten.

    $ cd /var/lib/tftpboot
    $ sudo chmod 755 $(sudo mktemp -d XXXXXXXXXX --suffix=-outgoing)

    Next create an incoming directory owned by tftp, mode 700. This allows tftpd-hpa to create files in this directory when configured to do so.

    $ sudo chown tftp:tftp $(sudo mktemp -d XXXXXXXXXX --suffix=-incoming)
    $ ls -1

    Configure tftpd-hpa to allow creation of new files by adding --create to TFTP_OPTIONS in /etc/default/tftpd-hpa.

    # /etc/default/tftpd-hpa
    TFTP_OPTIONS="--secure --create"

    And lastly restart tftpd-hpa.

    $ sudo /etc/init.d/tftpd-hpa restart
    [ ok ] Restarting tftpd-hpa (via systemctl): tftpd-hpa.service.

    Step 3: Firewall rules

    If you have a software firewall enabled you’ll need to allow access to port 69/udp. Either add this rule to your firewall scripts if you manually configure iptables or run the following UFW command:

    $ sudo ufw allow tftp

    Step 4: Transfer files

    Before doing a firmware upgrade or other possibly destructive maintenance I always backup my switch config and firmware.

    cisco-switch#copy running-config tftp://
    Address or name of remote host []? 
    Destination filename [UHiI443eTG-incoming/config-cisco-switch]? 
    3554 bytes copied in 0.388 secs (9160 bytes/sec)
    cisco-switch#copy flash:?
    flash:c1900-universalk9-mz.SPA.156-3.M2.bin flash:ccpexp flash:cpconfig-19xx.cfg flash:home.shtml
    cisco-switch#copy flash:c1900-universalk9-mz.SPA.156-3.M2.bin tftp:// 
    Address or name of remote host []? 
    Destination filename [UHiI443eTG-incoming/c1900-universalk9-mz.SPA.156-3.M2.bin]? 
    85258084 bytes copied in 172.692 secs (493700 bytes/sec)

    Files in incoming will be owned by tftp mode 666 (world writable) by default. Remember to move those files to your outgoing directory and change ownership to root mode 644 for safe keeping.
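    The cleanup described above can be sketched as follows. I’m demonstrating on a throwaway copy of the tftpboot layout so the commands are safe to run anywhere; the incoming directory name comes from the session above, while the outgoing name is a made-up example (yours will be whatever mktemp generated). On the real tree, run the mv and chmod with sudo and also chown the file to root.

```shell
# Work on a temporary copy of the layout; on a real server this would be
# /var/lib/tftpboot with sudo, plus "chown root:root" on the moved file.
base=$(mktemp -d)
mkdir "$base/UHiI443eTG-incoming" "$base/GyQ2ZfJQxk-outgoing"

# Simulate a config backup as tftpd-hpa --create leaves it: mode 666.
touch "$base/UHiI443eTG-incoming/config-cisco-switch"
chmod 666 "$base/UHiI443eTG-incoming/config-cisco-switch"

# Move the backup out of the world-writable incoming directory and
# tighten its permissions for safe keeping.
mv "$base/UHiI443eTG-incoming/config-cisco-switch" "$base/GyQ2ZfJQxk-outgoing/"
chmod 644 "$base/GyQ2ZfJQxk-outgoing/config-cisco-switch"

stat -c '%a' "$base/GyQ2ZfJQxk-outgoing/config-cisco-switch"   # prints 644
```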

    Once you’re sure your switch config and firmware is safely backed up it’s safe to copy new firmware to flash or do any other required destructive maintenance.

    Step 5: Prevent TFTP access

    It’s good practice on a notebook to deny services when not actively in-use. Assuming you have a software firewall be sure to deny access to your TFTP server when on the road or when connected to hostile networks.

    $ sudo ufw deny tftp
    Rule updated
    Rule updated (v6)
    $ sudo ufw status
    Status: active
    To Action From
    -- ------ ----
    CUPS ALLOW Anywhere 
    OpenSSH DENY Anywhere 
    69/udp DENY Anywhere 
    CUPS (v6) ALLOW Anywhere (v6) 
    OpenSSH (v6) DENY Anywhere (v6) 
    69/udp (v6) DENY Anywhere (v6)

    Read more

    A couple of weeks ago the filing finally cleared the Inspección General de Justicia with nothing left to review or amend... the Asociación Civil Python Argentina is now legally constituted!

    The bylaws, fully stamped

    We are now working flat out to obtain our CUIT from AFIP, which will let us open a bank account. That way the folks organizing PyCon will be able to give sponsors the green light to send money.

    Beyond helping with the organization of PyCon and other events, there are four things we want to push during our first couple of years:

    • Travel grants: because we believe there is a lot of value in people meeting face to face, we will try to help people travel to events organized around the country
    • Interpreting at events: if prominent speakers who don’t speak Spanish come to visit, do whatever we can so that most attendees can understand them
    • Course discounts: we are weighing a couple of approaches
    • The PyAr website and other infrastructure: we need to take a leap in how seriously we maintain the various services the group provides

    For all that (and for operating costs) we will basically need money :) The Association will fund itself in two main ways...

    One is membership dues. The idea is that members, who benefit both directly and indirectly from the Asociación Civil, chip in a small amount each month to help make things happen.

    The other mechanism is direct contributions from companies (from which we expect a somewhat larger, possibly annual, contribution).

    We will let you know the exact mechanisms, amounts, and so on. Stay tuned!

    Read more
    Alan Griffiths

    Mir release 0.26.3

    Mir 0.26.3 for all!

    By itself Mir 0.26.3 isn’t a very interesting release, just a few minor bugfixes: []

    The significant thing with Mir 0.26.3 is that we are making this version available across the latest releases of Ubuntu as well as 17.10 (Artful Aardvark). That is: Ubuntu 17.04 (Zesty Zapus), Ubuntu 16.10 (Yakkety Yak) and, last but not least, Ubuntu 16.04LTS (Xenial Xerus).

    This is important to those developing Mir based snaps. Having Mir 0.26 in the 16.04LTS archive removes the need to build Mir based snaps using the “stable-phone-overlay” PPA.

    Read more
    Stéphane Graber

    LXD logo


    As you may know, LXD uses unprivileged containers by default.
    The difference between an unprivileged container and a privileged one is whether the root user in the container is the “real” root user (uid 0 at the kernel level).

    The way unprivileged containers are created is by taking a set of normal UIDs and GIDs from the host, usually at least 65536 of each (to be POSIX compliant) and mapping those into the container.

    The most common example and what most LXD users will end up with by default is a map of 65536 UIDs and GIDs, with a host base id of 100000. This means that root in the container (uid 0) will be mapped to the host uid 100000 and uid 65535 in the container will be mapped to uid 165535 on the host. UID/GID 65536 and higher in the container aren’t mapped and will return an error if you attempt to use them.
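    The arithmetic above is simple enough to sketch in shell (assuming the default base of 100000 and size of 65536):

```shell
# Which host uid does a container uid land on under the default LXD map?
base=100000   # host id where the container's id range starts
size=65536    # number of ids mapped into the container
map() {
  if [ "$1" -lt "$size" ]; then
    echo $((base + $1))
  else
    echo unmapped   # ids at or above the map size are not usable
  fi
}
map 0       # container root  -> 100000
map 65535   # last mapped uid -> 165535
map 65536   # beyond the map  -> unmapped
```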

    From a security point of view, that means that anything which is not owned by the users and groups mapped into the container will be inaccessible. Any such resource will show up as being owned by uid/gid “-1” (rendered as 65534 or nobody/nogroup in userspace). It also means that should there be a way to escape the container, even root in the container would find itself with just as much privileges on the host as a nobody user.

    LXD does offer a number of options related to unprivileged configuration:

    • Increasing the size of the default uid/gid map
    • Setting up per-container maps
    • Punching holes into the map to expose host users and groups

    Increasing the size of the default map

    As mentioned above, in most cases, LXD will have a default map that’s made of 65536 uids/gids.

    In most cases you won’t have to change that. There are however a few cases where you may have to:

    • You need access to uid/gid higher than 65535.
      This is most common when using network authentication inside of your containers.
    • You want to use per-container maps.
      In which case you’ll need 65536 available uid/gid per container.
    • You want to punch some holes in your container’s map and need access to host uids/gids.

    The default map is usually controlled by the “shadow” set of utilities and files. On systems where that’s the case, the “/etc/subuid” and “/etc/subgid” files are used to configure those maps.

    On systems that do not have a recent enough version of the “shadow” package, LXD will assume it doesn’t have to share uid/gid ranges with anything else and will therefore assume control of a billion uids and gids, starting at host uid/gid 100000.

    But the common case is a system with a recent version of shadow.
    An example of what the configuration may look like is:

    stgraber@castiana:~$ cat /etc/subuid
    stgraber@castiana:~$ cat /etc/subgid

    The maps for “lxd” and “root” should always be kept in sync. LXD itself is restricted by the “root” allocation. The “lxd” entry is used to track what needs to be removed if LXD is uninstalled.
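    As a sketch of what such an allocation looks like (written to a temp file here so it is safe to run; the real files are /etc/subuid and /etc/subgid, and the values are illustrative):

```shell
# Each line is name:first_id:count. The "lxd" and "root" entries must match.
subuid=$(mktemp)
cat > "$subuid" <<'EOF'
lxd:100000:65536
root:100000:65536
EOF

# LXD itself is restricted by the "root" allocation; extract its size:
awk -F: '$1 == "root" { print $3 }' "$subuid"   # prints 65536
```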

    Now, if you want to increase the size of the map available to LXD, simply edit both files and bump the last value from 65536 to whatever size you need. I tend to bump it to a billion just so I never have to think about it again:

    stgraber@castiana:~$ cat /etc/subuid
    stgraber@castiana:~$ cat /etc/subgid

    After altering those files, you need to restart LXD to have it detect the new map:

    root@vorash:~# systemctl restart lxd
    root@vorash:~# cat /var/log/lxd/lxd.log
    lvl=info msg="LXD 2.14 is starting in normal mode" path=/var/lib/lxd t=2017-06-14T21:21:13+0000
    lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-06-14T21:21:13+0000
    lvl=info msg="Kernel uid/gid map:" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - u 0 0 4294967295" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - g 0 0 4294967295" t=2017-06-14T21:21:13+0000
    lvl=info msg="Configured LXD uid/gid map:" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - u 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - g 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
    lvl=info msg="Connecting to a remote simplestreams server" t=2017-06-14T21:21:13+0000
    lvl=info msg="Expiring log files" t=2017-06-14T21:21:13+0000
    lvl=info msg="Done expiring log files" t=2017-06-14T21:21:13+0000
    lvl=info msg="Starting /dev/lxd handler" t=2017-06-14T21:21:13+0000
    lvl=info msg="LXD is socket activated" t=2017-06-14T21:21:13+0000
    lvl=info msg="REST API daemon:" t=2017-06-14T21:21:13+0000
    lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket t=2017-06-14T21:21:13+0000
    lvl=info msg=" - binding TCP socket" socket=[::]:8443 t=2017-06-14T21:21:13+0000
    lvl=info msg="Pruning expired images" t=2017-06-14T21:21:13+0000
    lvl=info msg="Updating images" t=2017-06-14T21:21:13+0000
    lvl=info msg="Done pruning expired images" t=2017-06-14T21:21:13+0000
    lvl=info msg="Done updating images" t=2017-06-14T21:21:13+0000

    As you can see, the configured map is logged at LXD startup and can be used to confirm that the reconfiguration worked as expected.

    You’ll then need to restart your containers to have them start using your newly expanded map.

    Per container maps

    Provided that you have a sufficient amount of uid/gid allocated to LXD, you can configure your containers to use their own, non-overlapping allocation of uids and gids.

    This can be useful for two reasons:

    1. You are running software which alters kernel resource ulimits.
      Those user-specific limits are tied to a kernel uid and will cross container boundaries leading to hard to debug issues where one container can perform an action but all others are then unable to do the same.
    2. You want to know that should there be a way for someone in one of your containers to somehow get access to the host that they still won’t be able to access or interact with any of the other containers.

    The main downsides to using this feature are:

    • It’s somewhat wasteful, consuming 65536 uids and gids per container.
      That being said, you’d still be able to run over 60000 isolated containers before running out of system uids and gids.
    • It’s effectively impossible to share storage between two isolated containers as everything written by one will be seen as -1 by the other. There is ongoing work around virtual filesystems in the kernel that will eventually let us get rid of that limitation.

    To have a container use its own distinct map, simply run:

    stgraber@castiana:~$ lxc config set test security.idmap.isolated true
    stgraber@castiana:~$ lxc restart test
    stgraber@castiana:~$ lxc config get test volatile.last_state.idmap

    The restart step is needed to have LXD remap the entire filesystem of the container to its new map.
    Note that this step will take a varying amount of time depending on the number of files in the container and the speed of your storage.

    As can be seen above, after restart, the container is shown to have its own map of 65536 uids/gids.

    If you want LXD to allocate more than the default 65536 uids/gids to an isolated container, you can bump the size of the allocation with:

    stgraber@castiana:~$ lxc config set test security.idmap.size 200000
    stgraber@castiana:~$ lxc restart test
    stgraber@castiana:~$ lxc config get test volatile.last_state.idmap

    If you’re trying to allocate more uids/gids than are left in LXD’s allocation, LXD will let you know:

    stgraber@castiana:~$ lxc config set test security.idmap.size 2000000000
    error: Not enough uid/gid available for the container.

    Direct user/group mapping

    The fact that all uids/gids in an unprivileged container are mapped to a normally unused range on the host means that sharing of data between host and container is effectively impossible.

    Now, what if you want to share your user’s home directory with a container?

    The obvious answer to that is to define a new “disk” entry in LXD which passes your home directory to the container:

    stgraber@castiana:~$ lxc config device add test home disk source=/home/stgraber path=/home/ubuntu
    Device home added to test

    So that was pretty easy, but did it work?

    stgraber@castiana:~$ lxc exec test -- bash
    root@test:~# ls -lh /home/
    total 529K
    drwx--x--x 45 nobody nogroup 84 Jun 14 20:06 ubuntu

    No. The mount is clearly there, but it’s completely inaccessible to the container.
    To fix that, we need to take a few extra steps:

    • Allow LXD’s use of our user uid and gid
    • Restart LXD to have it load the new map
    • Set a custom map for our container
    • Restart the container to have the new map apply

    stgraber@castiana:~$ printf "lxd:$(id -u):1\nroot:$(id -u):1\n" | sudo tee -a /etc/subuid
    stgraber@castiana:~$ printf "lxd:$(id -g):1\nroot:$(id -g):1\n" | sudo tee -a /etc/subgid
    stgraber@castiana:~$ sudo systemctl restart lxd
    stgraber@castiana:~$ printf "uid $(id -u) 1000\ngid $(id -g) 1000" | lxc config set test raw.idmap -
    stgraber@castiana:~$ lxc restart test
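    For what it’s worth, each raw.idmap line in the printf above has, as far as I can tell, the form “<uid|gid|both> <host-id> <container-id>”. This standalone sketch just generates that two-line payload for the current user, assuming the container side should be uid/gid 1000 (the default “ubuntu” user):

```shell
# Emit the raw.idmap payload mapping the current host user and group onto
# container uid/gid 1000. Format (assumed): "<uid|gid|both> <host-id> <container-id>".
printf 'uid %d 1000\ngid %d 1000\n' "$(id -u)" "$(id -g)"
```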

    At which point, things should be working in the container:

    stgraber@castiana:~$ lxc exec test -- su ubuntu -l
    ubuntu@test:~$ ls -lh
    total 119K
    drwxr-xr-x 5  ubuntu ubuntu 8 Feb 18 2016 data
    drwxr-x--- 4  ubuntu ubuntu 6 Jun 13 17:05 Desktop
    drwxr-xr-x 3  ubuntu ubuntu 28 Jun 13 20:09 Downloads
    drwx------ 84 ubuntu ubuntu 84 Sep 14 2016 Maildir
    drwxr-xr-x 4  ubuntu ubuntu 4 May 20 15:38 snap


    User namespaces, the kernel feature that makes those uid/gid mappings possible, is a very powerful tool that finally made containers on Linux safe by design. It is, however, not the easiest thing to wrap your head around, and all of that uid/gid map math can quickly become a major headache.

    In LXD we’ve tried to expose just enough of those underlying features to be useful to our users while doing the actual mapping math internally. This makes things like the direct user/group mapping above significantly easier than it otherwise would be.

    Going forward, we’re very interested in some of the work around uid/gid remapping at the filesystem level; this would let us decouple the on-disk user/group map from that used for processes, making it possible to share data between differently mapped containers and alter the various maps without needing to also remap the entire filesystem.

    Extra information

    The main LXD website is at:
    Development happens on Github at:
    Discussion forum:
    Mailing-list support happens on:
    IRC support happens in: #lxcontainers on
    Try LXD online:

    Read more


    In this post we discuss our efforts to set up a federation of on-prem Kubernetes clusters using CoreDNS. The Kubernetes cluster version used is 1.6.2. We use Juju to deliver clusters on AWS, yet the clusters should be considered on-prem since they do not integrate with any of the cloud’s features. The steps described here are repeatable on any pool of resources you may have available (cloud or bare metal). Note that this is still a work in progress and should not be used in a production environment.

    What is Federation

    Cluster Federation (Fig. from

    Cluster federation allows you to “connect” your Kubernetes clusters and manage certain functionality from a central point. The need for central management arises when you have more than one cluster, possibly spread across different locations. You may need a globally distributed infrastructure for performance, availability or even legal reasons. You might want isolated clusters satisfying the needs of different teams. Whatever the case may be, federation can assist with some of the ops tasks. You can find more on the benefits of federation in the official documentation.

    Why Cluster Federation with Juju

    Juju makes delivery of Kubernetes clusters dead simple! Juju builds an abstraction over the pool of resources you have, allowing you to deliver Kubernetes to all the main public and private clouds (AWS, Azure, GCE, OpenStack, Oracle, etc.) and to bare-metal physical clusters. Juju also enables you to experiment with small-scale deployments stood up on your local machine using lxc containers.

    Having the option to deploy a cluster with the ease Juju offers, federation comes as a natural next step. Deployed clusters should be considered on-prem clusters as they are exactly the same as if they were deployed on bare metal. This is simply because even if you deploy on a cloud Juju will provision machines (virtual machines in this case) and deploy Kubernetes as if it were a local on-prem deployment. You see, Juju is not magic after all!

    There is an excellent blog post showing how you can place Juju clusters under a federation hosting cluster set up on GCE. Here we focus on a federation that has no dependencies on a specific cloud provider. To this end, we go with CoreDNS as the federation DNS provider.

    Let’s Federate

    Deploy clusters

    Our setup is made of 3 clusters:

    • f1: will host the federation control plane
    • c1, c2: federated clusters

    At the time of this writing (2nd of June 2017) cluster federation is in beta and is not yet integrated into the official Kubernetes charms. Therefore we will be using a patched version of charms and bundles (source of charms and cdk-addons, charms master and worker, and bundle).

    We assume you already have Juju installed and a controller in place. If not have a look at the Juju installation instructions.

    Let’s deploy these three clusters.

    #> juju add-model f1
    #> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2
    #> juju add-model c1
    #> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2
    #> juju add-model c2
    #> juju deploy cs:~kos.tsakalozos/bundle/federated-kubernetes-core-2

    Here is what the bundle.yaml looks like:

    Let’s copy and merge the kubeconfig files.

    #> juju switch f1
    #> juju scp kubernetes-master/0:config ~/.kube/config
    #> juju switch c1
    #> juju scp kubernetes-master/0:config ~/.kube/config.c1
    #> juju switch c2
    #> juju scp kubernetes-master/0:config ~/.kube/config.c2

    Using this script you can merge the configs

    #> ./ config.c1 c1
    #> ./ config.c2 c2

    To enable CoreDNS on f1 you just need to:

    #> juju switch f1
    #> juju config kubernetes-master enable-coredns=True

    Have a look at your three contexts:

    #> kubectl config get-contexts
    f1 f1 ubuntu
    c1 c1-cluster c1-user
    c2 c2-cluster c2-user

    Create the federation

    At this point you are ready to deploy the federation control plane on f1. To do so you will need a call to kubefed init. Kubefed init will deploy the required services and will also perform a health-check ping to verify that everything is ready. During this last check kubefed init will try to reach a service on a port that is not open. At the time of this writing we have a patch waiting upstream to make the port configurable, so you can open it before triggering the federation creation. If you do not want to wait for the patch to be merged, you can call kubefed init with -v 9 to see which port the services were assigned and expose them through Juju while keeping kubefed running. This will become clear shortly.

    Create a coredns-provider.conf file with the etcd endpoint of f1 and your zone. You can look at the Juju status to find out the IP of the etcd machine:

    #> cat ./coredns-provider.conf 
    etcd-endpoints =
    zones =

    Call kubefed

    #> juju switch f1
    #> kubefed init federation --host-cluster-context=f1 --dns-provider="coredns" --dns-zone-name="" --dns-provider-config=coredns-provider.conf --etcd-persistent-storage=false --api-server-service-type="NodePort" -v 9 --api-server-advertise-address=

    Note here that api-server-advertise-address is the IP of the worker on your f1 cluster. You selected NodePort, so the federation service is available from all the worker nodes under a random port.

    Kubefed will end up trying to do a health check. You should see on your console something similar to this: “GET”. The port 30229 is randomly selected and will differ in your case. While keeping kubefed init running, go ahead and open this port from a second terminal:

    #> juju run --application kubernetes-worker "open-port 30229"

    The federation context should be present in the list of contexts you have.

    #> kubectl config get-contexts
    f1 f1 ubuntu
    federation federation federation
    c1 c1-cluster c1-user
    c2 c2-cluster c2-user

    Now you are ready to have other clusters join the federation:

    #> kubectl config use-context federation
    #> kubefed join c1 --host-cluster-context=f1
    #> kubefed join c2 --host-cluster-context=f1

    Two clusters should soon appear ready:

    #> kubectl get clusters
    c1 Ready 3m
    c2 Ready 1m

    Congratulations! You are all set, your federation is ready to use.

    What works

    Setup your default namespace:

    #> kubectl --context=federation create ns default

    Get this replica set and create it.

    #> kubectl --context=federation create -f nginx-rs.yaml

    See your replica set created:

    #> kubectl --context=federation describe rs/nginx
    Name: nginx
    Namespace: default
    Selector: app=nginx
    Labels: app=nginx
    Annotations: <none>
    Replicas: 1 current / 1 desired
    Pods Status: error in fetching pods: the server could not find the requested resource
    Pod Template:
    Labels: app=nginx
    Image: nginx
    Port: 80/TCP
    Environment: <none>
    Mounts: <none>
    Volumes: <none>
    FirstSeen LastSeen Count From SubObjectPath Type Reason Message
    --------- -------- ----- ---- ------------- ------ ------ -------
    26s 26s 1 federated-replicaset-controller Normal CreateInCluster Creating replicaset in cluster c2

    Create a service.

    #> kubectl — context=federation create -f beacon.yaml

    Scale and open ports so the service is available

    #> kubectl --context=federation scale rs/nginx --replicas 3

    What does not work yet

    Federation does not seem to handle NodePort services. To see this, deploy a busybox pod and try to resolve the name of the service you deployed above.

    Use this yaml.

    #> kubectl --context=c2 create -f bbox-4-test.yaml
    #> kubectl --context=c2 exec -it busybox -- nslookup beacon

    This returns a local cluster IP which is rather unexpected.

    There are some configuration options that did not have the effect I was expecting. First, we would want to expose the CoreDNS service as described in the docs. Assigning the same NodePort to both UDP and TCP had no effect; this is rather unfortunate since the issue has already been addressed. The configuration described here and here (although it should be applied automatically, I thought I would give it a try) did not seem to have any effect.

    Open issues:


    Federation of on-prem clusters has come a long way and is getting significant improvements with every release. Some of the issues described here have already been addressed, and I am eagerly awaiting the upcoming v1.7.0 release to see the fixes in action.

    Do not hesitate to reach out to the sig-federation group, but most importantly I would be extremely happy and grateful if you pointed out some bug I have on my configs/code/thoughts ;)

    References and Links

    1. Official federations docs,
    2. Juju docs,
    3. Canonical Distribution of Kubernetes,
    4. Federation with host on GCE,
    5. CDK-addons source,
    6. Kubernetes charms,
    7. Kubernetes master charm,
    8. Kubernetes worker charm,
    9. Kubernetes bundle used for federation,
    10. Juju getting started,
    11. Patch kubefed init,
    12. CoreDNS setup instructions
    13. CoreDNS yaml file
    14. Nodeport UDP/TCP issue,
    15. Federation DNS on-prem configuration

    Read more