Canonical Voices

Alan Griffiths

Mir release 0.28

[reposted from https://community.ubuntu.com/t/mir-release-0-28/545 comments are enabled there, and disabled here]

Mir 0.28

We are pleased to announce that Mir 0.28 has been released and is available in Ubuntu 17.10 (Artful).

As the content (and even the name) of this release has changed over the time we’ve been working towards it, now is probably a good time to reflect on what it is, and what it isn’t.

What is in Mir 0.28?

There is now a stable server ABI

This simplifies the use of “Mir snaps” making it possible to release new library versions without breaking servers.

One of the barriers to the adoption of Mir has been the potential for Mir releases to break downstream projects that depend on a stable ABI. A stable ABI is now provided by libmiral, which is part of Mir.

The Yunit project which uses Mir as part of its graphics stack has already started migrating code to use libmiral for this reason.

The start of Wayland support

The desktop community has adopted Wayland as the client-server protocol of choice for replacing X11. This is already supported by several server implementations (Weston, KWin and Mutter are the best known). By providing Wayland support we will make Mir servers compatible with the various toolkits and libraries that already have Wayland backends.

Our MVP goal for Mir 0.28 has been to support a Wayland client with a single fullscreen surface. We have slightly exceeded this goal in 0.28 as you can see from this short video:

https://www.youtube.com/watch?v=sfcZrpkc2NU

We will continue to expand on this Wayland support in future releases.

The MirAL shells are part of Mir

The miral-kiosk shell is used by the mir-kiosk-snap to provide graphics support on UbuntuCore. This release includes a number of improvements to miral-kiosk based on feedback from potential users.

The miral-shell is now the canonical example of writing a Mir server. We’ve dropped several older examples that used other APIs (and reworked mir_demo_server to fit the new server APIs).

There is a (currently unstable) API for “graphics platform” plugins

We had planned to stabilize the graphics platform API and ABI before publishing it, but we had to change that plan. Canonical no longer has the infrastructure to test and maintain the “android platform”. However, there was interest from UBports both in continuing to support the “android platform” and in developing a “Wayland platform”.

What is NOT in Mir 0.28?

We have not upstreamed Mesa distro patches

These patches support “Mir EGL”, which forms part of the Mir client API. With the adoption of Wayland (and Wayland EGL) it looks likely that these will not have a long-term future, and upstreaming them would be a wasted effort.

This should also become less of an issue as it only affects EGL clients using the legacy Mir client APIs. Software rendering, Xmir and Xwayland clients will work without these patches.

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 Beta 2 has now been released and it is currently available in PPA and as a snap.
PPA Availability
For those running Ubuntu Xenial who would like to use beta 2, please use the following PPA:
ppa:maas/next
Snap Availability
For those running from the snap, or who would like to test the snap, please use the Beta channel on the default track:
sudo snap install maas --devmode --beta
 

MAAS 2.3.0 (beta 2)

Issues fixed in this release

https://launchpad.net/maas/+milestone/2.3.0beta2

  • LP: #1711760    [2.3] resolv.conf is not set (during commissioning or testing)

  • LP: #1721108    [2.3, UI, HWTv2] Machine details cards – Don’t show “see results” when no tests have been run on a machine

  • LP: #1721111    [2.3, UI, HWTv2] Machine details cards – Storage card doesn’t match CPU/Memory one

  • LP: #1721548    [2.3] Failure on controller refresh seem to be causing version to not get updated

  • LP: #1710092    [2.3, HWTv2] Hardware Tests have a short timeout

  • LP: #1721113    [2.3, UI, HWTv2] Machine details cards – Storage – If multiple disks, condense the card instead of showing all disks

  • LP: #1721524    [2.3, UI, HWTv2] When upgrading from older MAAS, Storage HW tests are not mapped to the disks

  • LP: #1721587    [2.3, UI, HWTv2] Commissioning logs (and those of v2 HW Tests) are not being shown

  • LP: #1719015    $TTL in zone definition is not updated

  • LP: #1721276    [2.3, UI, HWTv2] Hardware Test tab – Table alignment for the results doesn’t align with titles

  • LP: #1721525    [2.3, UI, HWTv2] Storage card on machine details page missing red bar on top if there are failed tests

  • LP: #1722589    syslog full of “topology hint” logs

  • LP: #1719353    [2.3a3, Machine listing] Improve the information presentation of the exact tasks MAAS is running when running hardware testing

  • LP: #1719361    [2.3 alpha 3, HWTv2] On machine listing page, remove success icons for components that passed the tests

  • LP: #1721105    [2.3, UI, HWTv2] Remove green success icon from Machine listing page

  • LP: #1721273    [2.3, UI, HWTv2] Storage section on Hardware Test tab does not describe each disk to match the design

Read more
Alan Griffiths

that which we call a rose…

For the last six months we’ve been working on, and talking about “Mir 1.0”.

But today’s Mir release is not going to be called “1.0”; it will be called “0.28”. The code is the same, only the name has changed.

Read more
Francesca Granato

During our user testing sessions on ubuntu.com, we often receive feedback from users about content on the site (“I can’t find this”, “I’d like more of that” or “I want to know this”). Accumulated feedback like this contributed to our decision here on the Web team to find a more standardised way of designing our product landing pages. We have two main motivations for doing this work:

1) To make our users’ lives easier. The www.ubuntu.com site has a long legacy of bespoke page design, which has resulted in an inconsistent content strategy across some of our pages. In order to evaluate and compare our products effectively, our users need consistent information delivered in a consistent way.

2) To make our lives easier. Here at Canonical, we don’t have huge teams to write copy, make videos or create content for our websites. Because of this, our product pages need to be quick and easy to design, build and maintain – which they will be if they all follow a standardised set of guidelines.

After a process of auditing the current site content, researching competitors, and refining a few different design routes, we reached a template that we all agreed was, in most cases, better than what we currently had. Here are some annotated photos of the process.

Web pages printed out with post-it notes

First we completed a thorough content audit of existing ubuntu.com product pages. Here the coloured post-it notes denote different types of content.

Flip-chart of hand-written list of components for a product page

Our audit of the site resulted in this unprioritised ‘short-list’ of possible types of content to be included on a product page.

Early wireframe sketches 1, 2 and 3

Some examples of early wireframe sketches.

Here is an illustrated wireframe of the new template. I use this illustrated wireframe as a guideline for our stakeholders, designers and developers to follow when considering creating new or enhancing existing product pages.

Diagram of a product page template for ubuntu.com

We have begun rolling out this new template across our product pages –  e.g. our server-provisioning page. Our plan is to continue to test, watch and measure the pages using this template and then to iterate on the design accordingly. In the meantime, it’s already making our lives here on the Web Team easier!

Read more
facundo

Using Go from Python


Have you ever needed to use Go code from Python? I have, and here I describe what I did.

First of all, a bit of background so the exercise isn't too theoretical: at work we have to validate the licenses included in a .snap, and although the format they come in is supposedly standard (SPDX), a boundary condition is to use the same parser/validator used in snapd, to be 107% sure the behaviour will be the same even in the corner cases or bugs.

The catch is that snapd is written in Go, and the server is written in Python. So I have to compile that Go code and use it from Python... hence this post, of course.

It is easier than it seems, since the Go compiler can build to a "shared library", and from there using it from Python is almost trivial ("almost", because we have to write a bit of C code).

To be clearer: if we want to call "the SPDX lib written in Go" from our Python, we need to put in place two components, which work pretty much as adapters:

  • A small piece of C code that exposes, "as a module", a little function that receives and returns Python objects, translates them into the "C world", and calls another function in Go.
  • A small piece of Go code that translates the parameters coming from C and calls the corresponding SPDX library.


Python-to-C adapter

The complete file is spdx.c; I'll go through it, noting first that it is written for Python 2 (which is what we run in that service today), but for Python 3 it would be very similar (the idea is the same, some names change; check here).

First of all, include the Python lib:

    #include <Python.h>

We are going to call a Go function, so we need to declare what it receives (a byte string, which at the C level is a pointer to chars) and what it returns (a number, which we will interpret as a bool):

    long IsValid(char *);

We define the function that we will call from Python... it is simple because it is generic: it receives self and the arguments, and returns a Python object:

    static PyObject *
    is_valid(PyObject *self, PyObject *args)

The body of the function is simple too. First we declare 'source' (the string with the license to validate) and 'res' (the result); then we call PyArg_ParseTuple, which parses 'args' looking for a string ('s') and puts it into 'source' (and if anything goes wrong we bail out right away; that's what the surrounding 'if' is for).

    {
        char * source;
        long res;

        if (!PyArg_ParseTuple(args, "s", &source))
            return NULL;

Finally we call IsValid (the Go function), and convert that result into a Python bool object, which is what we actually return:

        res = IsValid(source);
        return PyBool_FromLong(res);
    }

Now that we have our useful function, we need to put it into a module, for which we have to define what that module will contain. So we build the following structure, with two entries: the first describes our function, the last one is a sentinel so it knows where the structure ends.

    static PyMethodDef SPDXMethods[] = {
        {"is_valid", is_valid, METH_VARARGS, "Check if the given license is valid."},
        {NULL, NULL, 0, NULL}
    };

In the useful entry we have:

  • "is_valid": es el nombre de la función que vamos a usar desde afuera del módulo
  • is_valid: es una referencia a la función que tenemos definida arriba (para que sepa qué ejecutar cuando llamamos a "is_valid" desde afuera del módulo.
  • METH_VARARGS: la forma en que recibe los argumentos (fuertemente atado a como luego los parseamos con PyArg_ParseTuple arriba.
  • "Check ...": el docstring de la función.

To finish off this code, we have the module initializer, with a predetermined name ("init" + module name), and the initialization proper, passing the module name and the structure we just defined above:

    PyMODINIT_FUNC
    initspdx(void)
    {
        (void) Py_InitModule("spdx", SPDXMethods);
    }


C-to-Go adapter

The complete file is spdxlib.go.

We have to put the code in a 'main' package:

    package main

We import the SPDX code from snapd (you need to fetch it first with go get github.com/snapcore/snapd/spdx):

    import "github.com/snapcore/snapd/spdx"

We import the adapters from/to C, indicating that when we build we are going to use it from Python 2:

    // #cgo pkg-config: python2
    import "C"

The function itself, where we state that we receive a pointer to a C char and return a bool:

    //export IsValid
    func IsValid(license *C.char) bool {

The body is again simple: we call SPDX's ValidateLicense (first converting the string to Go), and then check the result to know whether the license is valid or not:

        res := spdx.ValidateLicense(C.GoString(license))
        if res == nil {
            return true
        } else {
            return false
        }
    }

We close with the mandatory definition of main:

    func main() {}


Using it

First step: build it (I have Go 1.6; I think you need 1.5 or later to be able to build the C shared library directly, but I'm not sure):

    go build -buildmode=c-shared -o spdx.so

Second step: profit!

    $ python2
    >>> import spdx
    >>> spdx.is_valid("GPL-3.0")
    True
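
On the server side you would typically wrap that call in a small helper. The following is a minimal, hypothetical sketch (the helper name and the sample licenses are made up for illustration; only spdx.is_valid comes from the module built above):

    # wrapper.py - assumes spdx.so is importable (e.g. sitting in the same directory)
    import spdx

    def check_snap_license(license_expression):
        """Return True if the given SPDX license expression is valid."""
        # is_valid() takes a string and returns a Python bool (see spdx.c above).
        return spdx.is_valid(license_expression)

    if __name__ == "__main__":
        for expression in ["GPL-3.0", "MIT", "not-a-real-license"]:
            print("{}: {}".format(expression, check_snap_license(expression)))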

Read more
facundo

Working in New York


All of last week I was travelling again, although not for pleasure but for work. A sprint, that is, like so many other times.

This time it was New York, a fairly big and well-known city, but one I had never been to. Even so, for all that it's a cosmopolitan city and one of the most "important" in the world (note the quotes), I didn't have high expectations for the trip.

The thing is, as I said before, it was for work. So you don't normally line up places to visit and walk around, since there usually isn't much time. In this case I was lucky that working hours were 8:30-17:30, and, with autumn just starting, there was daylight for quite a while after finishing each day, so I got to see more than I expected.

Adults only

Old taxi

Many tracks in the subway

As I like to do, I walked a whole lot, going from one place to another, watching people, and so on. The Sunday I arrived, right off the bat, I walked for an hour just to get to the restaurant where we were having lunch with Naty, Matias, and a couple of friends of theirs.

On the other hand, I didn't stray toooooo far from where the hotel was. That is, a few kilometres this way, a few kilometres that way, but I (almost) never left the island of Manhattan, which is sort of the prettiest part of New York.

Typical corner

Central Station

My first impressions were... digital. No, seriously: my first impression was that there were too many people and too much noise. Then I realised that the city smells bad, everywhere, all the time. And it's expensive, and there's little light.

In other words, I didn't like New York. Not everything is bad, mind you. It has a fantastic park (see below), a great cultural life, decent food (which is saying a lot, for the United States), good bars, and a couple of other details, but overall it's a city I didn't enjoy the way I have others.

None of that stopped me from walking around and getting to know it.

Contrast between two buildings

One of the afternoons I went for a walk around Chinatown (I'll stick with the ones in London and Buenos Aires), which sits right next to an area called "Little Italy", which has an interesting variety of places to eat Italian food. I didn't stay around there, because my plan was to have dinner at a classic New York institution: Katz's.

Although it's a restaurant famous for hosting scenes from several movies (the most famous perhaps being the scandalous orgasm faked by Meg Ryan in When Harry Met Sally), my intention in going there was that it's one of the best places to eat pastrami.

Chinatown

Pastramiiiiiiiiiii

The pastrami didn't blow me away. If I have to describe it, think of a rib cap of beef, smoked and cooked very slowly, so slowly that it falls completely apart, with a flavour like a tasty cured meat. It was something totally new for me, food-wise: mission accomplished.

Something I did quite like was Central Park. A huge green space, right there in the middle of the city, of the skyscrapers and the avenues. Like the Bosques de Palermo, you might think... well, to put it in perspective, Central Park is EIGHT times bigger than the Bosques de Palermo.

On Friday we stopped work at 4, and I took advantage of the extra daylight and headed for the park. I got there quickly (it was some 10 or 15 blocks away) and walked until it got dark. I crossed it widthwise, and didn't even make it halfway lengthwise, but I liked everything I saw: a forest, basically, with big trails, trails and little trails, well lit and well kept.

Central Park

New York Public Library

On Saturday I had several hours to wander. With Ricardo and Maxi we took the subway down to the south of Manhattan, and from there a ferry to the island opposite, a short, pleasant ride to take some photos of the Statue of Liberty. When we got back we took a walk around the financial district (Wall Street and all that), went up to the Brooklyn Bridge (which we started to walk across, but didn't cross), and then headed off to do a couple more laps around Chinatown and Little Italy, where we had an afternoon snack in a very good café (I tried a cappuccino and a Sicilian cannolo, both impeccable).

In your face, bull

Manhattan, from the ferry

After that, not much more. Subway back to the hotel, grab the suitcases, gather a small group and head to the NYC Airporter stop, the bus that took us to the airport, check-in, waiting, a loooooong flight, and home :).

All the photos, here.

Read more
admin

MAAS 2.3.0 (beta1)

New Features & Improvements

Hardware Testing

MAAS 2.3 beta overhauls and improves the visibility of hardware test results and information. This includes various changes across MAAS:

  • Machine Listing page
    • Surface progress and failures of hardware tests, actively showing when a test is pending, running, successful or failed.
  • Machine Details page
    • Summary tab – Provide hardware testing information about the different components (CPU, Memory, Storage)
    • Hardware Tests tab – Completely re-design of the Hardware Test tab. It now shows a list of test results per component. Adds the ability to view more details about the test itself.
    • Storage tab – Ability to view tests results per storage component.

UI Improvements

Machines, Devices, Controllers

MAAS 2.3 beta 1 introduces a new design for the node summary pages:

  • “Summary tab” now only shows information of the machine, in a complete new design.
  • “Settings tab” has been introduced. It now includes the ability to edit the node.
  • “Logs tab” now consolidates the commissioning output and the installation log output.

Other UI improvements

Other UI improvements that have been made for MAAS 2.3 beta 1 include:

  • Add DHCP status column on the ‘Subnets’ tab.
  • Add architecture filters
  • Update VLAN and Space details page to no longer allow inline editing.
  • Update VLAN page to include the IP ranges tables.
  • Convert the Zones page into AngularJS (away from YUI).
  • Add warnings when changing a Subnet’s mode (Unmanaged or Managed).

Rack Controller Deployment

MAAS 2.3 beta 1 now adds the ability to deploy any machine as a rack controller, which is only available via the API.

API Improvements

MAAS 2.3 beta 1 adds the volume_groups, raids, cache_sets, and bcaches fields to the output of the machines API endpoint.

Known issues:

The following is a list of known UI issues affecting hardware testing:

Issues fixed in this release

https://launchpad.net/maas/+milestone/2.3.0beta1

  • #1711320    [2.3, UI] Can’t ‘Save changes’ and ‘Cancel’ on machine/device details page
  • #1696270    [2.3] Toggling Subnet from Managed to Unmanaged doesn’t warn the user that behavior changes
  • #1717287    maas-enlist doesn’t work when provided with serverurl with IPv6 address
  • #1718209    PXE configuration for dhcpv6 is wrong
  • #1718270    [2.3] MAAS improperly determines the version of some installs
  • #1718686    [2.3, master] Machine lists shows green checks on components even when no tests have been run
  • #1507712    cli: maas logout causes KeyError for other profiles
  • #1684085    [2.x, Accessibility] Inconsistent save states for fabric/subnet/vlan/space editing
  • #1718294    [packaging] dpkg-reconfigure for region controller refers to an incorrect network topology assumption

Read more
facundo

CDPedia: release and plans


New version

About ten days ago I finished building the images for the new version of CDPedia, 0.8.4 with content updated to June 2017, and last week I made the corresponding announcements everywhere except here.

You already know what CDPedia is, but I'll insist once more: download it, share it, spread the word, since it helps as many people as possible get access to Wikipedia and the knowledge it makes available.

Go to the official page to see how to download it and for other info.

This version doesn't bring many new features beyond the updated content (which is already quite a lot), but I also refreshed the content of the home page, and made several improvements to the generation of discs and tarballs, as well as to the quality of the code.

A little candle at a jazz festival in Baradero


Working group

On another note, I want to combine two things that have been going around in my head: the idea of getting more people to work on CDPedia, and that of doing some mentoring so that newer people learn to program (not just at the language level, but also good practices, etc.).

So I have the idea of putting together a working group for CDPedia: find three or four newbies, plus maybe someone with experience to help me, and get to work (in a relaxed, remote, but more or less steady way) on CDPedia.

At the project level there are several things within reach, from modernizing the code and making it more robust in various situations, to improving and normalizing the logs, getting everything ready for Python 3, fixing bugs, etc.

Of course, the idea also revolves around the people who take part in the project, who would learn Python at a basic or intermediate level, and gain experience working in a group remotely (and a bit in person). There would also be a focus on using tools like version control, issue/bug tracking, and modern development practices in general.

In some ways it is similar to what at another time was called Adopt a Newbie, but in this case there is no one-to-one relationship between mentor and participant; instead it would be a group where every member can help the others, all mentored or guided by me (and, as I said above, maybe someone else), all in a friendly and "safe" environment.

Fountain on Cerro Santa Lucía, Santiago de Chile

I still have to finish rounding out the idea, especially on the operational side: the composition of the group (I would do a call/offer and then a selection), means of communication, in-person meetings?, etc. I also want to define the duration of the experience: I want it to be finite, and then somehow present the results with the group at some conference.

I'll keep you posted.

Read more
admin

Hello MAASters!

This past week, the MAAS team met face to face in NYC! The week was concentrated on finalizing the improvements that users will be able to enjoy in MAAS 2.3 and preparing for the first beta release. While MAAS 2.3.0 beta 1 will be announced separately, we wanted to bring you an update of the work the team has been doing over the past couple of weeks.

MAAS 2.3 (current development release)

  • Hardware Testing Phase 2
    • Backend work to support the new UX changes for hardware testing. This includes websockets handlers, directives and triggers.
    • UI – Add ability to upload custom hardware tests scripts.
    • UI – Update the machine listing page to show hardware status. This shows status of hardware testing while pending, running, failed, degraded, timed out, etc.
    • UI – Implement new designs for Hardware Testing:
      • Add cards (new design) on node details pages that include metrics (if tests have been run) and hardware test information.
      • Add a new Hardware Test tab that better surfaces status of hardware tests per component
      • Add a more detailed log view of hardware test results.
      • Surface hardware test results per storage device on each of the block devices (on the machines details page).
      • Add ability to view all test results performed on each of the components over time.
  • Switch Support
    • Add actions to switch listing page (still under a feature flag)
    • Fetch Wedge 100 switch metadata using the FRUID API endpoint on the BMC.
    • UI – Add websockets and triggers to support the UI changes for switches.
    • UI – Update the UI to display the vendor and model on the switch listing page (behind feature flag)

  • UI improvements
    • Add DHCP status column on the ‘Subnets’ tab.
    • Add architecture filters
    • Implement a new design for node details page:
      • Consolidate all of machine, devices, controllers, switches Summary tab into cards.
      • Add a new Settings tab, combined with the Power tab to allow editing different components of machines, devices, controllers, etc.
      • Consolidate commissioning output and installation logs in a “Log” tab.
    • Update VLAN and Space details page to no longer allow inline editing.
    • Update VLAN page to include the IP ranges tables.
    • Convert the Zones page into AngularJS (away from YUI).
    • Add warnings when changing a Subnet’s mode (Unmanaged or Managed).

  • Rack controller deployment
    • Add ability to deploy any machine as a rack controller via the API.

  • API changes:
    • Add volume_groups, raids, cache_sets, and bcaches field to the Machine API output.

  • Issues fixed:
    • #1711320    [2.3, UI] Can’t ‘Save changes’ and ‘Cancel’ on machine/device details page
    • #1696270    [2.3] Toggling Subnet from Managed to Unmanaged doesn’t warn the user that behavior changes
    • #1717287    maas-enlist doesn’t work when provided with serverurl with IPv6 address
    • #1718209    PXE configuration for dhcpv6 is wrong
    • #1718270    [2.3] MAAS improperly determines the version of some installs
    • #1718686    [2.3, master] Machine lists shows green checks on components even when no tests have been run
    • #1507712    cli: maas logout causes KeyError for other profiles
    • #1684085    [2.x, Accessibility] Inconsistent save states for fabric/subnet/vlan/space editing
    • #1718294    [packaging] dpkg-reconfigure for region controller refers to an incorrect network topology assumption

Libmaas

We have improved the library to allow the managing of block devices and partitions.

  • Add ability to list machine’s block devices.
  • Add ability to update, create and delete block devices.
  • Add ability to list machine’s partitions.
  • Add ability to update, create and delete partitions.
  • Add ability to format/unformat partitions and block devices.
  • Add ability to mount/unmount partitions and block devices.

The release of a new version of libmaas will be announced separately.

CLI

MAAS has been working on a new CLI that is based on (and uses) MAAS’ Python client library. The work that has been done includes:

  • Add ability to log in/log out via user and password.
  • Add ability to switch between profiles.
  • Add support for interactive login.
  • Add help command.
  • Ability to list nodes, machines, devices, controllers.
  • Ability to list all components in the networking model (subnets, vlans, spaces, fabrics).
  • Ability to obtain details on machines, devices and controllers.
  • Ability to obtain details on subnets, vlans, spaces, fabrics.
  • Ability to perform actions on machines (with the exception of testing and rescue mode).
  • Add ability to perform actions for multiple nodes
  • Add a ‘maas ssh’ command.
  • When listing, add support for automatic paging.
  • Add ability to view output in different formats (pretty, plain, json, yaml, csv).
  • Show progress indication on actions that are synchronous or blocking.

The release of the new CLI will be announced separately.

Read more
Alan Griffiths

Custom compositing in Mir servers

Although Mir 1.0 hasn’t quite shipped yet my thoughts are turning to what we do next. One of the work items that got paused was removing the mirserver dependencies from QtMir.

While Canonical isn’t using QtMir, the purpose of the affected code is still relevant. It enables custom compositing for transitions and other effects and this is of continuing, wider, interest.

Clearly Qt isn’t the only framework that might want to customize the way the server composites the scene and Mir ought to support a range of options through a stable, well thought out API.

We need to start somewhere. A few conversations over the past few days have identified a couple of options: GDK4 and Clutter.

I don’t know enough yet to prioritise or plan this work, but input from anyone interested in helping get one of these toolkits working would be great.

Read more
jdstrand

This may be totally obvious to many but I thought I would point out a problem I had with slow boot and how I solved it to highlight a neat feature of systemd to investigate the problem.

I noticed that I was having terrible boot times, so I used the `systemd-analyze critical-chain` command to see what the problem was. This showed me that `network-online.target` was taking 2 minutes. Looking at the logs (and confirming with `systemd-analyze blame`), I found it was timing out because `systemd-networkd` brought interfaces neither online nor to a failed state (I was in an area where I had not connected to the existing wifi before, and the wireless interface was scanning instead of failing). I looked around and found that I had no configuration for systemd-networkd (/etc/systemd/network was empty) or for netplan (/etc/netplan was empty), so I simply ran `sudo systemctl disable systemd-networkd` (leaving NetworkManager enabled) and I then had a very fast boot.
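
If you want to script the same check, a tiny illustrative Python snippet (not from the bug report, just an assumption of how one might automate it) could shell out to systemd-analyze and print the worst offenders:

    # show_slow_units.py - print the slowest units according to systemd-analyze
    import subprocess

    # `systemd-analyze blame` lists units ordered by initialization time (slowest first).
    output = subprocess.check_output(
        ["systemd-analyze", "blame"], universal_newlines=True)

    # Print the five slowest units.
    for line in output.splitlines()[:5]:
        print(line)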

I need to file a bug on the cause of the problem, but I found the `systemd-analyze` command so helpful, I wanted to share. :)

UPDATE: this bug was reported as https://launchpad.net/bugs/1714301 and fixed in systemd 234-2ubuntu11.


Filed under: ubuntu

Read more
K.Tsakalozos

In our last post we discussed the steps required to build the Canonical Distribution of Kubernetes (CDK). That post should give you a good picture of the components coming together to form a release. Some of these components are architecture agnostic, some not. Here we will update CDK to support IBM’s s390x architecture.

The CDK bundle is made of charms written in Python, running on Ubuntu. That means we are already in pretty good shape in terms of running on multiple architectures.

However, charms deploy binaries that are architecture specific. These binaries are of two types:

  1. snaps and
  2. juju resources

Snap packages are cross-distribution but unfortunately they are not cross-architecture. We need to build snaps for the architecture we are targeting. Snapped binaries include Kubernetes with its addons as well as etcd.

There are a few other architecture specific binaries that are not snapped yet. Flannel and CNI plugins are consumed by the charms as Juju resources.

Build snaps for s390x

CDK uses snaps to deploy a) Kubernetes binaries, b) add-on services, and c) etcd binaries.

Snaps with Kubernetes binaries

Kubernetes snaps are built from the branch at https://github.com/juju-solutions/release/tree/rye/snaps/snap. You will need to log in to your s390x machine, clone the repository and check out the right branch:

On your s390x machine:
> git clone https://github.com/juju-solutions/release.git
> cd release
> git checkout rye/snaps
> cd snap

At this point we can build the Kubernetes snaps:

On your s390x machine:
> make KUBE_ARCH=s390x KUBE_VERSION=v1.7.4 kubectl kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubeadm kubefed

The above will fetch the released Kubernetes binaries from upstream and package them in snaps. You should see a bunch of *_1.7.4_s390x.snap files in the snap directory.

Snap Kubernetes addons

Apart from the Kubernetes binaries, CDK also packages the Kubernetes addons as snaps. The process of building the cdk-addons_1.7.4_s390x.snap is almost identical to that of the Kubernetes snaps. Clone the cdk-addons repository and make the addons based on the Kubernetes version:

On your s390x machine:
> git clone https://github.com/juju-solutions/cdk-addons.git
> cd cdk-addons
> make KUBE_VERSION=v1.7.4 KUBE_ARCH=s390x

Snap with etcd binaries

The last snap we will need is the snap for etcd. This time we do not have fancy Makefiles, but the build process is still very simple:

> git clone https://github.com/tvansteenburgh/etcd-snaps
> cd etcd-snaps/etcd-2.3
> snapcraft --target-arch s390x

At this point you should have etcd_2.3.8_s390x.snap. This snap, as well as the addons one and one snap for each of kubectl, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, kubeadm and kubefed, will be attached to the Kubernetes charms upon release.

Make sure you go through this great post on using Kubernetes snaps.

Build juju resources for s390x

CDK needs to deploy the binaries for CNI and Flannel. These two binaries are not packaged as snaps (they might be in the future), instead they are provided as Juju resources. As these two binaries are architecture specific we need to build them for s390x.

CNI resource

For CNI you need to clone the respective repository, check out the version you need and compile using a docker environment:

> git clone https://github.com/containernetworking/cni.git cni 
> cd cni
> git checkout -f v0.5.1
> docker run --rm -e "GOOS=linux" -e "GOARCH=s390x" -v ${PWD}/cni:/cni golang /bin/bash -c "cd /cni && ./build"

Have a look at how CI creates the tarball needed with the CNI binaries: https://github.com/juju-solutions/kubernetes-jenkins/blob/master/resources/build-cni.sh

Flannel resource

Support for s390x was added in version v0.8.0. Here is the usual cloning and building of the repository:

> git clone https://github.com/coreos/flannel.git flannel
> cd flannel
> git checkout -f v0.8.0
> make dist/flanneld-s390x

If you look at the CI script for the Flannel resource you will see that the tarball also contains the CNI binaries and the etcd client: https://github.com/juju-solutions/kubernetes-jenkins/blob/master/resources/build-flannel.sh

Building etcd:

> git clone https://github.com/coreos/etcd.git etcd 
> cd etcd
> git checkout -f v2.3.7
> docker run --rm -e "GOOS=linux" -e "GOARCH=s390x" -v ${PWD}/etcd:/etcd golang /bin/bash -c "cd /etcd && ./build"

Update charms

As we already mentioned, charms are written in python so they run on any platform/architecture. We only need to make sure they are fed the right architecture specific binaries to deploy and manage.

Kubernetes charms have a configuration option called channel. Channel points to a fallback snap channel from which snaps are fetched. It is a fallback because snaps are also attached to the charms when releasing them, and priority is given to those attached snaps. Let me explain how this mechanism works. When releasing a Kubernetes charm you need to provide a resource file for each of the snaps the charm will deploy. If you upload a zero-sized file (.snap), the charm will consider this a dummy snap and try to snap-install the respective snap from the official snap repository. Grabbing the snaps from the official repository works towards a single multi-arch charm, since the arch-specific repository is always available.
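
A rough illustration of that fallback (a simplified sketch, not the actual charm code; the function name and flags are assumptions):

    import os
    import subprocess

    def install_snap(snap_name, resource_path, channel):
        """Prefer the snap attached as a charm resource, else use the store channel."""
        if resource_path and os.path.getsize(resource_path) > 0:
            # A real snap file was attached to the charm: install the local file.
            # (--dangerous is needed for unasserted local snaps.)
            subprocess.check_call(["snap", "install", "--dangerous", resource_path])
        else:
            # Zero-sized dummy resource: fall back to the snap store, using the
            # channel from the charm's 'channel' configuration option.
            subprocess.check_call(["snap", "install", snap_name, "--channel", channel])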

For the non-snapped binaries we built above (cni and flannel) we need to patch the charms to make sure those binaries are available as Juju resources. In this pull request we add support for multi-arch non-snapped resources. We add an additional Juju resource for cni built for the s390x arch.

# In the metadata.yaml file of the kubernetes-worker charm
cni-amd64:
  type: file
  filename: cni.tgz
  description: CNI plugins for amd64
cni-s390x:
  type: file
  filename: cni.tgz
  description: CNI plugins for s390x

The charm will concatenate “cni-” and the architecture of the system to form the name of the right cni resource. On an amd64 the resource to be used is cni-amd64, on an s390x the cni-s390x is used.
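
A tiny hypothetical helper (not taken from the charm source) showing that name lookup:

    import platform

    # Map Python's machine names to the labels used in the resource names.
    ARCH_ALIASES = {"x86_64": "amd64", "aarch64": "arm64"}

    def cni_resource_name():
        """Return e.g. 'cni-amd64' on amd64 or 'cni-s390x' on s390x."""
        machine = platform.machine()
        return "cni-" + ARCH_ALIASES.get(machine, machine)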

We follow the same approach for the flannel charm. Have a look at the respective pull request.

Build and release

We would need to build the charms, push them to the store, attach the resources and release them. Let's trace these steps for kubernetes-worker:

> cd <path_to_kubernetes_worker_layer>
> charm build
> cd <output_directory_probably_$JUJU_REPOSITORY/builds/kubernetes-worker>
> charm push . cs:~kos.tsakalozos/kubernetes-worker-s390x

You will get a revision number for your charm. In my case it was 0. Let's list the resources we have for that charm:

> charm list-resources cs:~kos.tsakalozos/kubernetes-worker-s390x-0

And attach the resources to the uploaded charm:

> cd <where_your_resources_are>
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kube-proxy=./kube-proxy.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kubectl=./kubectl.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 kubelet=./kubelet.snap
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 cni-s390x=./cni-s390x.tgz
> charm attach ~kos.tsakalozos/kubernetes-worker-s390x-0 cni-amd64=./cni-amd64.tgz

Notice how we have to attach one resource per architecture for the cni non-snapped resource. You could provide a valid amd64 build for cni, but since we are not building a multi-arch charm it wouldn't matter, as it will not be used on the targeted s390x platform.

Now let’s release and grant read visibility to everyone:

> charm release cs:~kos.tsakalozos/kubernetes-worker-s390x-0 --channel edge -r cni-s390x-0 -r cni-amd64-0 -r kube-proxy-0 -r kubectl-0 -r kubelet-0
> charm grant cs:~kos.tsakalozos/kubernetes-worker-s390x-0 everyone

We omit building and releasing the rest of the charms for brevity.

Summing up

Things have lined up really nicely for CDK to support multiple hardware architectures. The architecture-specific binaries are well contained in Juju resources and most of them are now shipped as snaps. Building each binary is probably the toughest part of the process outlined above. Each binary has its own dependencies and build process, but the scripts linked here reduce the complexity a lot.

In the future we plan to support other architectures. We should end up with a single bundle that deploys on any architecture via a juju deploy canonical-kubernetes. You can follow our progress on our trello board. Do not hesitate to reach out to suggest improvements and tell us what architecture you think we should target next.

Read more
facundo

Wandering around Chile


Last weekend Moni and I went off to Chile for a bit of sightseeing.

It is the first trip we've done without kids since this week (if that one even counts, given that back then we were still just the two of us anyway).

The big advantage of going without kids is that we basically didn't have to look after anyone but ourselves :D. We went wherever we wanted, ate whenever we wanted, got up and went to bed whenever we wanted, etc. What a couple without kids does, really...

On our own, on the piano staircase

We walked around like crazy. We based ourselves in Santiago, but we spent a whole day in Valparaíso.

We went to and from Valparaíso by bus. When we arrived, the first thing we did was take a local bus to La Sebastiana, the house Neruda had in that city. We toured it with the help of an audio guide: excellent! We loved it.

La Sebastiana seen from outside

Afterwards we went out and walked a lot, strolling around, stopping for lunch, then walking some more, touring the hills, going up in a funicular... lots of stairs in general, lots of climbs and descents; our calves were THIS swollen...

We ended up at the port, where we wandered around a market for a while and had an afternoon snack. Then a trolleybus to the city centre, and the bus back to Santiago.

A random corner of Valparaíso

The shoe staircase, with a great phrase

Valparaíso seen from above

We spent the other three days in Santiago, although they weren't full days because of the travel to and from Chile.

We walked around quite a bit there too, and not only through shopping malls, as seems to be the custom of the Argentinians who go over there (?). The whole city was very much shaped by the Fiestas Patrias, which is something very important over there.

The mountain range, always present

Mural next to La Piojera

We strolled around the centre for a while, but it happened to be Saturday and almost everything was closed. We also went to the Mercado Central, where we ate tasty dishes based on fish and seafood.

We also climbed Cerro Santa Lucía, a nice walk we didn't quite complete because we didn't have enough time left to go up and tour the whole castle at the top. But we wandered around the whole area and its surroundings, as well as the Bellavista neighbourhood, where we had lunch at the Cervecería Kunstmann, highly recommended!

That outing had started at La Chascona, Neruda's house in Santiago. The visit (and the tour with the audio guide) is, here too, very much worth it. This house, like the one I mentioned before, and the one at Isla Negra that we still owe ourselves, are maintained and opened to the public by the Fundación Neruda, and they do an excellent job.

Details of La Chascona

If I have to point out something ugly about Santiago/Valparaíso, besides the former being a "big city" (which I don't like to begin with), it's that the tangle of cables on the street poles is astonishing; you can't understand how there can be so many cables or what they are for. Apparently it's because they get put up and then, when they stop being used, nobody takes them down; it also seems it will eventually get sorted out.

The other thing I noticed is that everything is scrawled over, graffiti/vandalism-wise. And not just walls. Everything. Buses, windows of private houses, businesses small and large. Everything.

The photos of the whole trip, here. Moni and I have to do more of these getaways; they're restorative :)

Read more
jdstrand

Finding your VMs and containers via DNS resolution so you can ssh into them can be tricky. I was talking with Stéphane Graber today about this and he reminded me of his excellent article: Easily ssh to your containers and VMs on Ubuntu 12.04.

These days, libvirt has the `virsh domifaddr` command and LXD has a slightly different way of finding the IP address.

Here is an updated `~/.ssh/config` that I’m now using (thank you Stéphane for the update for LXD):

Host *.lxd
    #User ubuntu
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(lxc list -c s4 $(echo %h | sed "s/\.lxd//g") %h | grep RUNNING | cut -d' ' -f4) %p
 
Host *.vm
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(virsh domifaddr $(echo %h | sed "s/\.vm//g") | awk -F'[ /]+' '{if (NR>2 && $5) print $5}') %p

You may want to uncomment `StrictHostKeyChecking` and `UserKnownHostsFile` depending on your environment (see `man ssh_config` for details).

With the above, I can ssh in with:

$ ssh foo.vm uptime
16:37:26 up 50 min, 0 users, load average: 0.00, 0.00, 0.00
$ ssh bar.lxd uptime
21:37:35 up 12:39, 2 users, load average: 0.55, 0.73, 0.66

Enjoy!


Filed under: canonical, ubuntu, ubuntu-server

Read more
Alan Griffiths

Mir support for Wayland

I’ve seen some confusion about how Mir is supporting Wayland clients on the Phoronix forums. What we are doing is teaching the Mir server library to talk Wayland in addition to its original client-server protocol. That’s analogous to me learning to speak another language (such as Dutch).

This is not anything like XMir or XWayland. Those are both implementations of an X11 server as a client of a Mir or Wayland server. (Xmir is a client of a Mir server and XWayland is a client of a Wayland server.) They both introduce a third process that acts as a “translator” between the client and server.

The Wayland support is directly in the Mir server and doesn’t rely on a translator. Mir’s understanding of Wayland is going to start pretty limited (like my Dutch). At present it understands enough “conversational Wayland” for a client to render content and for the server to composite it as a window. We need to teach it more “verbs” (e.g. support for the majority of window management requests) but there is a limited range of things that do work.

Once Mir’s support for Wayland clients is on a par with the support for “native” Mir clients we will likely phase out support for the latter.

We’re still testing things prior to the Mir 1.0 release, and Mir 1.0 will not support “everything Wayland”. If you are curious you can install a preview of the current development version from the “Mir Staging” PPA.

Read more
admin

MAAS 2.3.0 (alpha3)

New Features & Improvements

Hardware Testing (backend only)

MAAS has now introduced an improved hardware testing framework. This new framework allows MAAS to test individual components of a single machine, as well as providing better feedback to the user for each of those tests. This feature has introduced:

  • Ability to define a custom testing script with a YAML definition – Each custom test can be defined with YAML that will provide information about the test. This information includes the script name, description, required packages, and other metadata about what information the script will gather. This information can then be displayed in the UI.

  • Ability to pass parameters – Adds the ability to pass specific parameters to the scripts. For example, in upcoming beta releases, users would be able to select which disks they want to test if they don’t want to test all disks.

  • Running tests individually – Improves how hardware tests are run per component. This allows MAAS to run tests against any individual component (such as a single disk).

  • Adding additional performance tests

    • Added a CPU performance test with 7z.

    • Added a storage performance test with fio.

Please note that individual results for each of the components are currently only available over the API. Upcoming beta releases will include various UI improvements that will allow the user to better surface and interface with these new features.

Rack Controller Deployment in Whitebox Switches

MAAS now has the ability to install and configure a MAAS rack controller once a machine has been deployed. As of today, this feature is only available when MAAS detects that the machine is a whitebox switch. As such, all MAAS certified whitebox switches will be deployed with a MAAS rack controller. Currently certified switches include the Wedge 100 and the Wedge 40.

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, the machine will need access to the internet to be able to install the MAAS snap.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

UI – Controller Versions & Notifications

MAAS now surfaces the version of each running controller, and notifies the users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading their MAAS on a multi-node MAAS cluster, such as an HA setup.

Issues fixed in this release

  • #1702703    Cannot run maas-regiond without /bin/maas-rack
  • #1711414    [2.3, snap] Cannot delete a rack controller running from the snap
  • #1712450    [2.3] 500 error when uploading a new commissioning script
  • #1714273    [2.3, snap] Rack Controller from the snap fails to power manage on IPMI
  • #1715634    ‘tags machines’ takes 30+ seconds to respond with list of 9 nodes
  • #1676992    [2.2] Zesty ISO install fails on region controller due to postgresql not running
  • #1703035    MAAS should warn on version skew between controllers
  • #1708512    [2.3, UI] DNS and Description Labels misaligned on subnet details page
  • #1711700    [2.x] MAAS should avoid updating DNS if nothing changed
  • #1712422    [2.3] MAAS does not report form errors on script upload
  • #1712423    [2.3] 500 error when clicking the ‘Upload’ button with no script selected.
  • #1684094    [2.2.0rc2, UI, Subnets] Make the contextual menu language consistent across MAAS
  • #1688066    [2.2] VNC/SPICE graphical console for debugging purpose on libvirt pod created VMs
  • #1707850    [2.2] MAAS doesn’t report cloud-init failures post-deployment
  • #1711714    [2.3] cloud-init reporting not configured for deployed ubuntu core systems
  • #1681801    [2.2, UI] Device discovery – Tooltip misspelled
  • #1686246    [CLI help] set-storage-layout says Allocated when it should say Ready
  • #1621175    BMC acc setup during auto-enlistment fails on Huawei model RH1288 V3

For full details please visit:

https://launchpad.net/maas/+milestone/2.3.0alpha3

Read more
Robin Winslow

I’ve been thinking about the usability of command-line terminals a lot recently.

Command-line interfaces remain mystifying to many people. Usability hobbyists seem as inclined to ask why the terminal exists, as how to optimise it. I’ve also had it suggested to me that the discipline of User Experience (UX) has little to offer the Command-Line Interface (CLI), because the habits of terminal users are too inherent or instinctive to be defined and optimised by usability experts.

As an experienced terminal user with a keen interest in usability, I disagree that usability has little to offer the CLI experience. I believe that the experience can be improved through the application of usability principles just as much as for more graphical domains.

Steps to learn a new CLI tool

To help demystify the command-line experience, I’m going to lay out some of the patterns of thinking and behaviour that define my use of the CLI.

New CLI tools I’ve learned recently include snap, kubectl and nghttp2, and I’ve also dabbled in writing command-line tools myself.

Below I’ll map out an example of the steps I might go through when discovering a new command-line tool, as a basis for exploring how these tools could be optimised for CLI users.

  1. Install the tool
    • First, I might try apt install {tool} (or brew install {tool} on a mac)
    • If that fails, I’ll probably search the internet for “Install {tool}” and hope to find the official documentation
  2. Check it is installed, and if tab-complete works
    • Type the first few characters of the command name (sna for snap) followed by <tab> <tab>, to see if the command name auto-completes, signifying that the system is aware of its existence
    • Hit space, and then <tab> <tab> again, to see if it shows me a list of available sub-commands, indicating that tab completion is set up correctly for the tool
  3. Try my first command
    • I’m probably following some documentation at this point, which will be telling me the first command to run (e.g. snap install {something}), so I’ll try that out and expect prompt succinct feedback to show me that it’s working
    • For basic tools, this may complete my initial interaction with the tool. For more complex tools like kubectl or git I may continue playing with it
  4. Try to do something more complex
    • Now I’m likely no longer following a tutorial, instead I’m experimenting on my own, trying to discover more about the tool
    • If what I want to do seems complex, I’ll straight away search the internet for how to do it
    • If it seems more simple, I’ll start looking for a list of subcommands to achieve my goal
    • I start with {tool} <tab> <tab> to see if it gives me a list of subcommands, in case it will be obvious what to do next from that list
    • If that fails I’ll try, in order, {tool} <enter>, {tool} -h, {tool} --help, {tool} help or {tool} /?
    • If none of those work then I’ll try man {tool}, looking for a Unix manual entry
    • If that fails then I’ll fall back to searching the internet again

UX recommendations

Considering my own experience of CLI tools, I am reasonably confident the following recommendations make good general practice guidelines:

  • Always implement a --help option on the main command and all subcommands, and if appropriate print out some help when no options are provided ({tool} <enter>)
  • Provide both short- (e.g. -h) and long- (e.g. --help) form options, and make them guessable
  • Carefully consider the naming of all subcommands and options, use familiar words where possible (e.g. help, clean, create)
  • Be consistent with your naming – have a clear philosophy behind your use of subcommands vs options, verbs vs nouns etc.
  • Provide helpful, readable output at all times – especially when there’s an error (npm I’m looking at you)
  • Use long-form options in documentation, to make commands more self-explanatory
  • Make the tool easy to install with common software management systems (snap, apt, Homebrew, or sometimes NPM or pip)
  • Provide tab-completion. If it can’t be installed with the tool, make it easy to install and document how to set it up in your installation guide
  • Command outputs should use the appropriate output streams (STDOUT and STDERR) and should be as user-friendly and succinct as possible, and ideally make use of terminal colours
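
To make a few of these recommendations concrete, here is a minimal Python sketch (my own illustration using argparse, not any particular tool's real interface) that provides -h/--help for free, uses a familiar verb as a subcommand, offers short and long option forms, and prints help when run with no arguments:

    import argparse
    import sys

    def main():
        # argparse gives us -h/--help on the main command and all subcommands.
        parser = argparse.ArgumentParser(prog="mytool", description="Example CLI layout.")
        subparsers = parser.add_subparsers(dest="command", metavar="<command>")

        # A familiar verb as a subcommand, with short and long option forms.
        create = subparsers.add_parser("create", help="Create a new thing.")
        create.add_argument("name", help="Name of the thing to create.")
        create.add_argument("-v", "--verbose", action="store_true",
                            help="Print extra progress information.")

        args = parser.parse_args()
        if args.command is None:
            # No arguments given: print the help rather than failing silently.
            parser.print_help()
            sys.exit(1)
        if args.command == "create":
            print("Creating {}...".format(args.name))

    if __name__ == "__main__":
        main()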

Some of these recommendations are easier to implement than others. Ideally every tool should have carefully considered subcommands and options, and implement --help. But writing auto-complete scripts is a significant undertaking.

Similarly, packaging your tool as a snap is significantly easier than, for example, adding software to the official Ubuntu software sources.

Although I believe all of the above to be good general advice, I would very much welcome research to highlight the relative importance of addressing each concern.

Outstanding questions

There are a number of further questions for which the answers don’t seem obvious to me, but I’d love to somehow find out the answers:

  • Once users have learned the short-form options (e.g. -h) do they ever use the long-form (e.g. --help)?
  • Do users prefer subcommands (mytool create {something}) or options (mytool --create {something})?
  • For multi-level commands, do users prefer {tool} {object} {verb} (e.g. git remote add {remote_name}), or {tool} {verb} {object} (e.g. kubectl get pod {pod_name}), or perhaps {tool} {verb}-{object} (e.g. juju remove-application {app_name})?
  • What patterns exist for formatting command output? What’s the optimal length for users to read, and what types of formatting do users find easiest to understand?

If you know of either authoritative recommendations or existing research on these topics, please let me know in the comments below.

I’ll try to write a more in-depth follow-up to this post when I’ve explored a bit further on some of these topics.

Read more
Anthony Dillon

Webteam development summary

Iteration 6

covering the 14th to the 25th of August

This iteration saw a lot of work on tutorials.ubuntu.com and on the migration of design.ubuntu.com from WordPress to a fresh new Jekyll site project. Continued research and planning into the new snapcraft.io site, with some beginnings of the development framework.

Vanilla Framework placed a lot of emphasis on polishing the existing components and porting the old theme concept patterns into the code base.

Websites issues: 66 closed, 33 opened (551 in total)

Some highlights include:
– Fixing content of card touching card edge in tutorials – https://github.com/canonical-websites/tutorials.ubuntu.com/issues/312
– Migrate canonical.com to Vanilla: Polish and custom patterns – https://github.com/canonical-websites/www.canonical.com/issues/172
– Prepare for deploy of design.ubuntu.com – https://github.com/canonical-websites/design.ubuntu.com/issues/54
– Redirect from https://www.ubuntu.com/usn/ to https://usn.ubuntu.com/usn were broken – https://github.com/canonical-websites/www.ubuntu.com/issues/2128
– design.ubuntu.com/web-style-guide: build page and then hide pages – https://github.com/canonical-websites/design.ubuntu.com/issues/66
– Snapcraft prototype: Snap page – https://github.com/canonical-websites/snapcraft.io/issues/346
– Create Flask skeleton application – https://github.com/canonical-websites/snapcraft-flask/issues/2

Vanilla Framework issues: 24 closed, 16 opened (43 in total)

Some highlights include:
– Combine the entire suite of brochure theme patterns to Vanilla’s code base – https://github.com/vanilla-framework/vanilla-framework/issues/1177
– Many improvements to the documentation theme – https://github.com/vanilla-framework/vanilla-docs-theme/issues/45
– External link icon seems stretched – https://github.com/vanilla-framework/vanilla-framework/issues/1058
– .p-heading–icon pattern remove text color – https://github.com/vanilla-framework/vanilla-framework/issues/1272
– Remove margin rules on card content – https://github.com/vanilla-framework/vanilla-framework/issues/1277

All of these projects are open source, so please file issues if you find any bugs or, even better, propose a pull request. See you in two weeks for the next update from the web team here at Canonical.

Read more
K.Tsakalozos

Patch CDK #1: Build & Release

It happens all the time. You come across a super cool open source project you would gladly contribute to, but setting up the development environment and learning how to patch and release your fixes puts you off. The Canonical Distribution of Kubernetes (CDK) is no exception. This set of blog posts will shed some light on the darkest secrets of CDK.

Welcome to the CDK patch journey!

What is your Build & Release workflow? (Figure from xkcd)

Build CDK from source

Prerequisites

You will need to have Juju configured and ready for building charms; we will not be covering that in this blog post. Please follow the official documentation to set up your environment and build your own first charm with layers.
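
As a rough sketch (the package name and paths are assumptions; check the official documentation for your platform), preparing a build environment could look like this:

# charm-tools provides the `charm build` command; at the time of writing it
# was available as a classic snap.
sudo snap install charm --classic

# charm build composes charms into $JUJU_REPOSITORY/builds by default.
export JUJU_REPOSITORY=$HOME/workspace/charms
mkdir -p "$JUJU_REPOSITORY"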

Build the charms

CDK is made of a few charms, namely:

To build each charm you need to find the top-level charm layer and run `charm build` on it. The links in the above list will get you to the GitHub repository you need to clone and build. Let's try this out for easyrsa:

> git clone https://github.com/juju-solutions/layer-easyrsa
Cloning into ‘layer-easyrsa’…
remote: Counting objects: 55, done.
remote: Total 55 (delta 0), reused 0 (delta 0), pack-reused 55
Unpacking objects: 100% (55/55), done.
Checking connectivity… done.
> cd ./layer-easyrsa/
> charm build
build: Composing into /home/jackal/workspace/charms
build: Destination charm directory: /home/jackal/workspace/charms/builds/easyrsa
build: Processing layer: layer:basic
build: Processing layer: layer:leadership
build: Processing layer: easyrsa (from .)
build: Processing interface: tls-certificates
proof: OK!

The above builds the easyrsa charm and prints the output directory (/home/jackal/workspace/charms/builds/easyrsa in this case).

Building the kubernetes-* charms is slightly different. As you might already know, the kubernetes charm layers are already upstream under cluster/juju/layers. Building the respective charms requires you to clone the kubernetes repository and pass the path of each layer to your invocation of charm build. Let's build the kubernetes-worker layer here:

> git clone https://github.com/kubernetes/kubernetes
Cloning into ‘kubernetes’…
remote: Counting objects: 602553, done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 602553 (delta 18), reused 20 (delta 15), pack-reused 602481
Receiving objects: 100% (602553/602553), 456.97 MiB | 2.91 MiB/s, done.
Resolving deltas: 100% (409190/409190), done.
Checking connectivity… done.
> cd ./kubernetes/
> charm build cluster/juju/layers/kubernetes-worker/
build: Composing into /home/jackal/workspace/charms
build: Destination charm directory: /home/jackal/workspace/charms/builds/kubernetes-worker
build: Processing layer: layer:basic
build: Processing layer: layer:debug
build: Processing layer: layer:snap
build: Processing layer: layer:nagios
build: Processing layer: layer:docker (from ../../../workspace/charms/layers/layer-docker)
build: Processing layer: layer:metrics
build: Processing layer: layer:tls-client
build: Processing layer: layer:nvidia-cuda (from ../../../workspace/charms/layers/nvidia-cuda)
build: Processing layer: kubernetes-worker (from cluster/juju/layers/kubernetes-worker)
build: Processing interface: nrpe-external-master
build: Processing interface: dockerhost
build: Processing interface: sdn-plugin
build: Processing interface: tls-certificates
build: Processing interface: http
build: Processing interface: kubernetes-cni
build: Processing interface: kube-dns
build: Processing interface: kube-control
proof: OK!

During charm build, all the layers and interfaces referenced recursively, starting from the top charm layer, are fetched and merged to form your charm. The layers needed to build a charm are specified in a layer.yaml file at the root of the charm's directory. For example, looking at cluster/juju/layers/kubernetes-worker/layer.yaml we see that the kubernetes-worker charm uses the following layers and interfaces:

- 'layer:basic'
- 'layer:debug'
- 'layer:snap'
- 'layer:docker'
- 'layer:metrics'
- 'layer:nagios'
- 'layer:tls-client'
- 'layer:nvidia-cuda'
- 'interface:http'
- 'interface:kubernetes-cni'
- 'interface:kube-dns'
- 'interface:kube-control'

Layers are an awesome way to share operational logic among charms. For instance, the maintainers of the nagios layer have a better understanding of the operational needs of nagios, but that does not mean the authors of the kubernetes charms cannot make use of their work.

charm build will recursively look up each layer and interface at http://interfaces.juju.solutions/ to figure out where the source is. Each repository is fetched locally and squashed with all the other layers to form a single package, the charm. Go ahead and do a charm build with “-l debug” to see how and when each layer is fetched. It is important to know that if you already have a local copy of a layer under $JUJU_REPOSITORY/layers, or an interface under $JUJU_REPOSITORY/interfaces, charm build will use those local forks instead of fetching them from the registered repositories. This enables charm authors to work on cross-layer patches. Note that you might need to rename the directory of your local copy to match exactly the name of the layer or interface.
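
As a small sketch of that workflow (the paths, the placeholder clone URL and the layer name are purely illustrative):

# charm build prefers local copies found under $JUJU_REPOSITORY/layers
# and $JUJU_REPOSITORY/interfaces.
export JUJU_REPOSITORY=$HOME/workspace/charms
mkdir -p "$JUJU_REPOSITORY/layers"

# The directory name must match the layer name exactly ("basic" for layer:basic).
git clone <your-fork-of-layer-basic> "$JUJU_REPOSITORY/layers/basic"

# Build with debug logging to see where every layer and interface is fetched from.
cd ./kubernetes
charm build -l debug cluster/juju/layers/kubernetes-worker/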

Building Resources

The charms will install Kubernetes, but to do so they need the Kubernetes binaries. We package these binaries in snaps so that they are self-contained and can be deployed on any Linux distribution. Building these binaries is pretty straightforward as long as you know where to find them :)

Here is the repository holding the Kubernetes snaps: https://github.com/juju-solutions/release.git. The branch we want is rye/snaps:

> git clone https://github.com/juju-solutions/release.git
Cloning into ‘release’…
remote: Counting objects: 1602, done.
remote: Total 1602 (delta 0), reused 0 (delta 0), pack-reused 1602
Receiving objects: 100% (1602/1602), 384.69 KiB | 236.00 KiB/s, done.
Resolving deltas: 100% (908/908), done.
Checking connectivity… done.
> cd release
> git checkout rye/snaps
Branch rye/snaps set up to track remote branch rye/snaps from origin.
Switched to a new branch ‘rye/snaps’

Have a look at the README.md inside the snap directory to see how to build the snaps:

> cd snap/
> ./docker-build.sh KUBE_VERSION=v1.7.4

A number of .snap files should be available after the build.
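
A quick sanity check of the output could look like this (the exact file names depend on the version you built):

# List the snaps produced by the build.
ls -lh *.snap

# Snaps are squashfs images, so you can peek inside one without installing it.
unsquashfs -l ./kubectl_*.snap | head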

In a similar fashion you can build the snap package holding the Kubernetes addons. We refer to this package as cdk-addons and it can be found at: https://github.com/juju-solutions/cdk-addons.git

> git clone https://github.com/juju-solutions/cdk-addons.git
Cloning into ‘cdk-addons’…
remote: Counting objects: 408, done.
remote: Total 408 (delta 0), reused 0 (delta 0), pack-reused 408
Receiving objects: 100% (408/408), 51.16 KiB | 0 bytes/s, done.
Resolving deltas: 100% (210/210), done.
Checking connectivity… done.
> cd cdk-addons/
> make

The last resource you will need (which is not packaged as a snap) is the container network interface (cni). Let's grab the repository and check out a release tag:

> git clone https://github.com/containernetworking/cni.git
Cloning into ‘cni’…
remote: Counting objects: 4048, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 4048 (delta 0), reused 2 (delta 0), pack-reused 4043
Receiving objects: 100% (4048/4048), 1.76 MiB | 613.00 KiB/s, done.
Resolving deltas: 100% (1978/1978), done.
Checking connectivity… done.
> cd cni
> git checkout -f v0.5.1

Build and package the cni resource:

> docker run --rm -e "GOOS=linux" -e "GOARCH=amd64" -v `pwd`:/cni golang /bin/bash -c "cd /cni && ./build"
Building API
Building reference CLI
Building plugins
flannel
tuning
bridge
ipvlan
loopback
macvlan
ptp
dhcp
host-local
noop
> cd ./bin
> tar -cvzf ../cni.tgz *
bridge
cnitool
dhcp
flannel
host-local
ipvlan
loopback
macvlan
noop
ptp
tuning

You should now have a cni.tgz in the root folder of the cni repository.

Two things to note here:
- We do have a CI for building, testing and releasing charms and bundles. In case you want to follow each step of the build process, you can find our CI scripts here: https://github.com/juju-solutions/kubernetes-jenkins
- You do not need to build all resources yourself. You can grab the resources used in CDK from the Juju store. Starting from the canonical-kubernetes bundle you can navigate to any of the charms it ships. Select one from the very end of the bundle page and then look for the “resources” sidebar on the right. Download any of them, rename it appropriately, and you are ready to use it in your release, as shown in the sketch below.
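
For instance, a sketch of deploying a locally built charm with pre-built resources attached (the resource names are assumptions and must match those declared in the charm's metadata.yaml):

# Deploy the locally built charm, attaching pre-built resources.
juju deploy ~/workspace/charms/builds/kubernetes-worker \
    --resource cni=./cni.tgz \
    --resource kubectl=./kubectl.snap \
    --resource kubelet=./kubelet.snap \
    --resource kube-proxy=./kube-proxy.snap

# Resources can also be swapped on an already deployed application.
juju attach kubernetes-worker cni=./cni.tgz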

Releasing Your Charms

After patching the charms to match your needs, please consider submitting a pull request to tell us what you have been up to. Contrary to many other projects, you do not need to wait for your PR to be accepted before you can make your work public. You can immediately release your work under your own namespace on the store; this is described in detail in the official charm authors documentation. The development team often uses personal namespaces to test PoCs and new features. The main namespace CDK is released from is “containers”.
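
A rough sketch of that flow with the charm command (the namespace, revision number and channel below are placeholders):

# Push the locally built charm to your own namespace on the store.
charm push ~/workspace/charms/builds/easyrsa cs:~my-namespace/easyrsa

# charm push prints the new revision, e.g. cs:~my-namespace/easyrsa-0;
# release it to a channel so it can be deployed.
charm release cs:~my-namespace/easyrsa-0 --channel edge

# Deploy straight from your namespace to try it out.
juju deploy cs:~my-namespace/easyrsa --channel edge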

Yet, there is one feature you need to be aware of when attaching snaps to your charms. Snaps have their own release cycle and repositories. If you want to use the officially released snaps instead of attaching your own to the charms, you can use a dummy zero-sized file with the correct extension (.snap) in place of each snap resource. The snap layer will see that the resource is empty and will grab the snap from the official repositories instead. Using the official snaps is recommended; however, in network-restricted environments you might need to attach your own snaps while you deploy the charms.
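
A minimal sketch of that trick (again, the resource names are assumptions):

# Empty placeholder files make the snap layer fall back to the snap store
# instead of sideloading the resources.
touch kubectl.snap kubelet.snap kube-proxy.snap

juju deploy ~/workspace/charms/builds/kubernetes-worker \
    --resource kubectl=./kubectl.snap \
    --resource kubelet=./kubelet.snap \
    --resource kube-proxy=./kube-proxy.snap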

Why is it this way?

Building CDK is of average difficulty as long as you know where to look. It is not perfect by any standard, and it will probably stay that way, because there are opposing forces shaping the build process. This should come as no surprise. As Kubernetes changes rapidly and constantly expands, the build and release process has to be flexible enough to include any new artefacts. Consider, for example, the switch from the flannel cni to calico: in our case that is a resource and a charm that need to be updated. A monolithic build script would have looked more “elegant” to outsiders (e.g. make CDK), but we would have been hiding a lot under the carpet. CI should be part of the culture of any team and should be owned by that team, or else you get disconnected from the end product, causing delays and friction.

Our build and release process might look a bit “dirty” with a lot of moving parts, but it really is not that bad! I managed to highlight the build and release steps in a single blog post. Positive feedback also comes from our field engineers. Most of the time CDK deploys out of the box; when our field engineers are called in, it is either because our customers have a special requirement from the software or because they have an “unconventional” environment in which Kubernetes needs to be deployed. Having such a simple and flexible build and release process enables our people to solve a problem on-site and release the fix to the Juju store within a couple of hours.

Next steps

This blog post serves as foundation work for what is coming up. The plan is to go over some easy patches so we can further demystify how CDK works. Funny thing: this post was originally titled “Saying goodbye to my job security”. Cheers!

Read more
admin

Hello MAASters! This is the development summary for the past couple of weeks:

MAAS 2.3 (current development release)

  • Hardware Testing Phase 2
    • Added parameters form for script parameters validation.
    • Accept and validate results from nodes.
    • Added hardware testing 7zip CPU benchmarking builtin script.
    • WIP – ability to send parameters to test scripts and process results of individual components (e.g. this will provide the ability for users to select which disk they want to test, and capture results accordingly).
    • WIP – disk benchmark test via Fio.
  • Network beaconing & better network discovery
    • MAAS controllers now send out beacon advertisements every 30 seconds, regardless of whether or not any solicitations were received.
  • Switch Support
    • Backend changes to automatically detect switches (during commissioning) and make use of the new switch model.
    • Introduce base infrastructure for NOS drivers, similar to the power management one.
    • Install the Rack Controller when deploying a supported Switch (Wedge 40, Wedge 100)
    • UI – Add a switch listing tab behind a feature flag.
  • Minor UI improvements
    • The version of MAAS installed on each controller is now reported on the controller details page.
  • python-libmaas
    • Added ability to power on, power off, and query the power state of a machine.
    • Added PowerState enum to make it easy to check the current power state of a machine.
    • Added ability to reference the children and parent interfaces of an interface.
    • Added ability to reference the owner of a node.
    • Added base level `Node` object that `Machine`, `Device`, `RackController`, and `RegionController` extend from.
    • Added `as_machine`, `as_device`, `as_rack_controller`, and `as_region_controller` to the `Node` object, allowing you to convert a `Node` into the type you need to perform an action on.
  • Bug fixes:
    • LP: #1676992 – force Postgresql restart on maas-region-controller installation.
    • LP: #1708512 – Fix DNS & Description misalignment
    • LP: #1711714 – Add cloud-init reporting for deployed Ubuntu Core systems
    • LP: #1684094 – Make context menu language consistent for IP ranges.
    • LP: #1686246 – Fix docstring for set-storage-layout operation
    • LP: #1681801 – Device discovery – Tooltip misspelled
    • LP: #1688066 – Add Spice graphical console to pod created VM’s
    • LP: #1711700 – Improve DNS reloading so it happens only when required.
    • LP: #1712423, #1712450, #1712422 – Properly handle a ScriptForm being sent an empty file.
    • LP: #1621175 – Generate password for BMC’s with non-spec compliant password policy
    • LP: #1711414 – Fix deleting a rack when it is installed via the snap
    • LP: #1702703 – Can’t run region controller without a rack controller installed.

Read more