Canonical Voices

Sergio Schvezov

UbuConLA 2017 Summit Summary

These are my notes about UbuconLA: a bit on the social activities, and my thoughts on the talks related to the snappy ecosystem.

Arriving

I left late on Thursday for a planned night arrival in Lima; Leo was going to arrive around two hours earlier than me, flying in from Costa Rica. Once the plane I was on from Córdoba, Argentina landed in Lima, I got onto an ad-hoc Telegram group we had created, mentioned that I had landed, and discovered Leo was still trying to Uber out of there.

Read more
admin

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 Alpha 2 has now been released. It is currently available in Ubuntu Artful, as a PPA for Xenial, and as a snap.
PPA Availability
For those running Ubuntu Xenial who would like to use Alpha 2, please use the following PPA:
Snap Availability
For those running from the snap, or would like to test the snap, please use the Beta channel on the default track:
sudo snap install maas --devmode --beta

MAAS 2.3.0 (alpha2)

Important announcements

Advanced Network for CentOS & Windows

The MAAS team is happy to announce that MAAS 2.3 now supports network configuration for CentOS and Windows. The network configuration is performed via cloud-init. MAAS CentOS images now use the latest available version of cloud-init that includes these features.

New Features & Improvements

CentOS Networking support

MAAS can now perform machine network configuration for CentOS, giving CentOS networking feature parity with Ubuntu. The following can now be configured for MAAS deployed CentOS images:

  • Static network configuration.

  • Bonds, VLAN and bridge interfaces.

Thanks to the cloud-init team for improving the network configuration support for CentOS.

Windows Networking support

MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information (https://maas.io/contact-us).

Network Discovery & Beaconing

MAAS now sends out encrypted beacons to facilitate network discovery and monitoring. Beacons are sent using IPv4 and IPv6 multicast (and unicast) to UDP port 5240. When registering a new controller, MAAS uses the information gathered from the beaconing protocol to ensure that newly registered interfaces on each controller are associated with existing known networks in MAAS.
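
The transport described above (UDP datagrams to port 5240, unicast or multicast) can be sketched in a few lines of Python. This is only an illustration of the transport: real MAAS beacons are encrypted and carry their own structured payload, and the multicast group address mentioned in the comment is a placeholder, not MAAS's.

```python
import socket

MAAS_BEACON_PORT = 5240  # UDP port named in the announcement

def send_beacon(payload: bytes, dest: str, multicast: bool = False) -> int:
    """Send one beacon datagram to dest:5240; returns bytes sent.

    Real MAAS beacons are encrypted and structured; the payload here is
    an opaque placeholder.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        if multicast:
            # Keep multicast beacons on the local segment, as discovery should be
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        return sock.sendto(payload, (dest, MAAS_BEACON_PORT))

# Unicast beacon to localhost; a real multicast send would instead target a
# group address (something in 224.0.0.0/4) with multicast=True.
send_beacon(b"example-beacon", "127.0.0.1")
```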

UI improvements

The following minor UI improvements have been made:

  • Renamed “Device Discovery” to “Network Discovery”.

  • Discovered devices where MAAS cannot determine the hostname now show the hostname as “unknown” and greyed out instead of using the MAC address manufacturer as the hostname.

Issues fixed in this release

Issues fixed in this release are detailed at:

https://launchpad.net/maas/+milestone/2.3.0alpha1

Read more
Anthony Dillon

Cookie notification component

We’ve all seen the annoying cookie notification which website owners are legally obliged to include on their sites. We can’t avoid them, so let’s have some fun with them.

Previously, for Canonical’s sites, the cookie notification was a shared CSS file and JavaScript file that we injected into the head of each site. This resulted in a cookie policy notification appearing at the bottom of the site.

Job done, I hear you say. But the styling of the notification was becoming dated and after the redesign of the Vanilla notification pattern we wanted to align them. So we decided to create a small service/component to serve cookie notifications in a consistent and more manageable way.

The JavaScript

We started by developing a small JavaScript function in ES6 that took an optional object to set two options: a custom notification message and an auto destroy timeout value in milliseconds.

To initialise the cookie notification component you simply add the following to your site’s footer.

var options = {
  'content': 'We use cookies to improve your experience. By your continued use of this site you accept such use. This notice will disappear by itself. To change your settings please see our policy.',
  'duration': 3000
};
ubuntu.cookiePolicy.setup(options);

 

The styling

We wanted to use Vanilla’s notification styling as the base for the cookie message. We could simply copy the SCSS or even the compiled CSS into the component and call it a day. But that would get out of date quickly.

To avoid this, we made Vanilla framework an npm dependency of the cookie project and include just the notification styling. Importantly, this pinned a specific version of Vanilla, so we could tell when the package needed updating and what an update would change. We could also update the project’s styling simply by running npm update.

Serving the cookie notification

We decided to deliver this project with Bower as our package manager of choice, as we wanted to dictate where the Bower components are installed, which is defined in our .bowerrc file.

One tradeoff of using Bower as a package manager is that the built CSS and JS need to be committed into the GitHub repository. This is due to the way in which Bower installs packages: it effectively clones the project into the components directory. Therefore the pull requests are not as clean, but this can be worked around by noting which files to ignore in the pull request description.

This means we don’t need the target site or app to support Sass or ES6 compilation. Simply embed the built CSS and JS files into the head of the site or app.

<link rel="stylesheet" type="text/css" media="screen" href="/bower-components/canonical-cookie-policy/build/css/cookie-policy.css" />
<script src="/bower-components/canonical-cookie-policy/build/js/cookie-policy.js"></script>

 

Conclusion

This process has helped make our small components much more manageable: each is split into its own small project, with a home and a test HTML file that can be run locally to develop the component in isolation.

We plan to continue splitting components into their own projects when it makes sense, and delivering them in this fashion.

Read more
Dustin Kirkland

Earlier this month, I spoke at ContainerDays, part of the excellent DevOpsDays series of conferences -- this one in lovely Portland, Oregon.

I gave a live demo of Kubernetes running directly on bare metal.  I was running it on an 11-node Ubuntu Orange Box -- but I used the exact same tools Canonical's world class consulting team uses to deploy Kubernetes onto racks of physical machines.
You see, the ability to run Kubernetes on bare metal, behind your firewall, is essential to the yin-yang duality of Cloud Native computing.  Sometimes, what you need is actually a Native Cloud.
Deploying Kubernetes into virtual machines in the cloud is rather easy and straightforward, with dozens of tools now that can handle it.

But there's only one tool today that can deploy the exact same Kubernetes to AWS, Azure, and GCE, as well as VMware, OpenStack, and bare metal machines.  That tool is conjure-up, which acts as a command-line front end to several essential Ubuntu tools: MAAS, LXD, and Juju.

I don't know if the presentation was recorded, but I'm happy to share with you my slides for download, and embedded here below.  There are a few screenshots within that help convey the demo.




Cheers,
Dustin

Read more
admin

Hello MAASters! This is the development summary for the past couple of weeks:

MAAS 2.3 (current development release)

The team is preparing and testing the next official release, MAAS 2.3 alpha2. It is currently undergoing a heavy round of testing and will be announced separately at the beginning of the upcoming week. In the past three weeks, the team has worked on the following:

  • Support for CentOS Network configuration
    We have completed the work to support CentOS Advanced Networking, which provides the ability for users to configure VLAN, bond and bridge interfaces, bringing it feature parity with Ubuntu. This will be available in MAAS 2.3 alpha 2.
  • Support for Windows Network configuration
    MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information [1].
  • Hardware Testing Phase 2

    • Testing scripts now define a type field that tells MAAS which component will be tested and where the resulting metrics will apply. This may be node, cpu, memory, or storage, and defaults to node.
    • Completed work to support the definition and parsing of a YAML-based description for custom test scripts. This allows the user to define the test’s title, description, and the metrics the test will output, which MAAS can then parse and eventually display in the UI/API.
  • Network beaconing & better network discovery

    • Beaconing is now fully functional for controller registration and interface updates!
    • When registering or updating a new controller (either the first standalone controller, or a secondary/HA controller), new interfaces that have been determined to be on an existing VLAN will not cause a new fabric to be created in MAAS.
  • Switch modeling
    • The basic database model for the new switching model has been implemented.
    • On-going progress of presenting switches in the node listing is under way.
    • Work is in-progress to allow MAAS to deploy a rack controller which will be utilized when deploying a new switch with MAAS.
  • Minor UI improvements
    • Renamed “Device Discovery” to “Network Discovery”.
    • Discovered devices where MAAS cannot determine the hostname now just show the hostname as “unknown” and grayed out instead of using the MAC address manufacturer as the hostname.
  • Bug fixes:
    • LP: #1704444 – MAAS API returns 500 internal server error instead of raising actual error.
    • LP: #1705501 – django warning on install
    • LP: #1707971 – MAAS becomes unstable after rack controller restarts
    • LP: #1708052 – Quick erase doesn’t remove md superblock
    • LP: #1710681 – Cannot delete an Ubuntu image, “Update Selection” is disabled
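
As a rough illustration of the YAML-based test description above, here is what the parsed structure might look like in Python. Only title, description, type, and metrics come from the summary; the metric sub-fields and example values are assumptions, not MAAS's actual schema.

```python
# Hypothetical parsed form of a custom test script's metadata, based only on
# the fields the summary names (title, description, type, metrics). The real
# MAAS schema may differ; the metric sub-fields here are illustrative.
test_metadata = {
    "title": "Disk throughput",
    "description": "Sequential read benchmark for each disk.",
    "type": "storage",  # one of: node, cpu, memory, storage (default: node)
    "metrics": [
        {"name": "read_rate", "unit": "MB/s"},
    ],
}

VALID_TYPES = {"node", "cpu", "memory", "storage"}
assert test_metadata.get("type", "node") in VALID_TYPES
```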

MAAS 2.2.2 Released in the Ubuntu Archive!

MAAS 2.2.2 has now also been released in the Ubuntu Archive. For more details on MAAS 2.2.2, please see [2].

 

[1]: https://maas.io/contact-us

[2]: https://lists.ubuntu.com/archives/maas-devel/2017-August/002663.html

Read more
facundo

Two new experiences


On the one hand, last Wednesday the 9th we held a PyAr event (at the Onapsis offices) that we had never done before: a Consultorio Python.

Co-organized by Nico and me, the idea was to replicate in person a bit of what happens in the IRC channel day to day: someone comes along with a doubt or question, and a few people who know more or have more experience think it over for a while to resolve it.

Part of the charm was not only solving the problem, but also letting newer people see that doing so "is not magic": we all search the internet, we all reason our way around, we all build up the answer bit by bit, and so on...

It was really good. People dared to participate, we worked through various problems that had nothing to do with one another, etc. All while eating little sandwiches and drinking some beers, courtesy of Onapsis (thanks!). It was fun, and the feedback from people was good too; they liked it.

A little photo that Chechu tweeted:

Consultorio Python

On the other hand, last Sunday at the PASO (primary elections) I was a polling-station official for the first time. Presiding officer of the table, no less.

I arrived at the school at half past seven, we went in, and they gave me the ballot box with things inside, a bag with ballots, a pouch with forms, and some cold/dry meal boxes.

My table was on the first floor, so I went up, and right away the party watchers started arriving: most of them general watchers, plus one from Cambiemos who was with me at my table all day. The substitute assigned to my table ended up presiding over one of the tables downstairs, whose two officials were absent (well, both officials of both downstairs tables were absent!).

I opened the ballot box, which contained a box with supplies and various forms. I prepared the voting room, rearranging the tables and sorting the ballots by list number. Some watchers helped, but in general they left me alone (I preferred that).

Then I prepared the main voter roll, plus another copy for checking, and gave one checking copy to the watcher. I filled in the opening certificate at eight in the morning, and voted :). It was quite funny; I signed my own voting receipt :p.

My receipt showing I voted, signed by myself :p

Then people started coming, slowly. Around nine, more started to arrive. And right then we got saturated: from 9:30 to 15:00 we never stopped. Luckily the watcher and I (plus another who came to replace her for half an hour while she went to vote elsewhere) had a good dynamic and no dead time (the waits were largely down to people taking their time inside the voting room).

Also, once an hour the general watchers came by and we checked the voting room to see that everything was in order. I went in every so often to check, too. It was necessary; people get up to mischief: at one point the Cambiemos ballots were stolen, at another the Frente de Izquierda ballots were stolen, and it also happened that someone had put the 1Pais ballots on top of the Unidad Ciudadana ones, covering them.

From three o'clock it was calmer. That lasted until about a quarter past five, when for half an hour a few more people came; but at the end nobody showed up, to the point that I started tidying everything up ten minutes before six. We counted the votes we had registered (a couple of times; we had some differences, but we sorted them out), and then closed the table.

I grabbed everything, went into the voting room, and the count began. I opened the ballot box, and we counted all the envelopes (I counted them in tens and passed them to the watcher, who validated there were ten in each batch, as I had asked). Once we saw we had the same number of envelopes as registered votes, I started opening them.

At this point several watchers came over wanting to help me open the envelopes, which I kept refusing. I opened each envelope, looked at what was inside, and then passed the contents to the usual watcher, or to another who helped a bit (there were a couple more watchers, but they stayed out of it), and they piled them up by list (in two big groups: full tickets and split tickets). I also set aside incomplete or odd envelopes: blank ones, partial votes, or ones that looked void.

With all the envelopes open, I started counting all the ballots list by list (twice), and we noted everything down on a poster-sized tally sheet brought by the Cambiemos watchers (that poster was a tremendous help!). We noted down the full tickets and the splits, party by party, then the partial votes (with offices left blank), then the other blanks, and finally we examined the void votes.

To finish, we added everything up, cell by cell (a watcher with a calculator, though I finished first in my head and checked it was right). And we summed all the columns. And it came out perfectly balanced :D.

Sheet with the vote totals

Then everyone started filling in their forms. So did I: the scrutiny record, the scrutiny certificate, and the telegram. And I started packing everything up (it is handed over to the postal service in a very prescribed way). Then signatures everywhere (me, as presiding officer, on everything I did and everything the watchers did, and they also signed among themselves and on what I filled in), and we finally left.

I left the school at nine at night. Dead tired, but happy; it was a great experience.

I must admit I did not see a single suspicious move from the watchers at any point. That said, I always kept them in check, and did not let them get involved in things I felt I had to handle myself and whose result I had to be able to trust. Even if that made us slower (especially during the vote count).

In October I will do it again, in the "real" election this time, making the most of all the experience I gained :D

Read more
Inayaili de León Persson

Vanilla Framework has a new website

We’re happy to announce the long overdue Vanilla Framework website, where you can find all the relevant links and resources needed to start using Vanilla.

 

The homepage of vanillaframework.io

 

When you visit the new site, you will also see the new Vanilla logo, which follows the same visual principles as other logos within the Ubuntu family, like Juju, MAAS and Landscape.

We wanted to make sure that the Vanilla site showcased what you can do with the framework, so we kept it nice and clean, using only Vanilla patterns.

We plan on extending it to include more information about Vanilla and how it works, how to prototype using Vanilla, and the design principles behind it, so stay tuned.

And remember you can follow Vanilla on Twitter, ask questions on Slack and file new pattern proposals and issues on GitHub.

Read more
Alan Griffiths

Mir related ppas

Mir staging ppa

A few weeks ago I was reminded of the “Mir staging ppa” and saw that, with the discontinuation of Unity8, it had fallen into disuse. After considering deleting it, I eventually removed some Ubuntu series that are no longer supported and added MirAL trunk.

That means that for Xenial, Zesty and Artful you can get the latest development version of Mir and MirAL like this:

sudo add-apt-repository ppa:mir-team/staging
sudo apt-get update
sudo apt-get upgrade

miral-release ppa

Now for the Internet of Things we are using UbuntuCore16 and snaps, but occasionally it is useful to have debs too. For Mir it is easy to use the underlying 16.04LTS (Xenial) archive, but MirAL isn’t in that archive.

To make up for that I’ve created a “miral-release” ppa containing the latest release of MirAL built for both Xenial and Zesty. Viz:

sudo add-apt-repository ppa:alan-griffiths/miral-release
sudo apt-get update
sudo apt-get upgrade

 

Read more
facundo

Choosing heroes


Two great players, two great plays.

The first (in chronological order) is this superb goal by Messi that let Barça beat Real Madrid (their arch-rivals) in stoppage time, in a match in April of this year; click for the video:

Messi's play

The second play is this block by Manu Ginóbili on one of the star players of the moment (James Harden), which allowed the San Antonio Spurs to win the fifth game against Houston in the Western Conference Semifinals, NBA post-season, in May of this year; click for the video:

Manu's play

Watch both plays; they are fantastic. Now watch Messi's and Manu's celebrations.

You know what bugs me? That Messi goes and makes the sign of the cross.

Sorry, but I choose my heroes secular.

Read more
admin

Hello MAASters! The MAAS development summaries are back!

Over the past three weeks the team has made good progress on three main areas: the development of 2.3, maintenance for 2.2, and our new and improved Python library (libmaas).

MAAS 2.3 (current development release)

The first official MAAS 2.3 release has been prepared. It is currently undergoing a heavy round of testing and will be announced separately once completed. In the past three weeks, the team has:

  • Completed Upstream Proxy UI
    • Improved the UI to better configure the different proxy modes.
    • Added the ability to configure an upstream proxy.
  • Network beaconing & better network discovery
  • Started Hardware Testing Phase 2
      • UX team has completed the initial wireframes and gathered feedback.
      • Started changes to collect and gather better test results.
  • Started Switch modeling
      • Started changes to support switch and switch port modeling.
  • Bug fixes
    • LP: #1703403 – regiond workers can use too many postgres connections
    • LP: #1651165 – Unable to change disk name using maas gui
    • LP: #1702690 – [2.2] Commissioning a machine prefers minimum kernel over commissioning global
    • LP: #1700802 – [2.x] maas cli allocate interfaces=<label>:ip=<ADDRESS> errors with Unknown interfaces constraint
    • LP: #1703713 – [2.3] Devices don’t have a link from the DNS page
    • LP: #1702976 – Cavium ThunderX lacks power settings after enlistment apparently due to missing kernel
    • LP: #1664822 – Enable IPMI over LAN if disabled
    • LP: #1703713 – Fix missing link on domain details page
    • LP: #1702669 – Add index on family(ip) for each StaticIPAddress to improve execution time of the maasserver_routable_pairs view.
    • LP: #1703845 – Set the re-check interval for rack to region RPC connections to the lowest value when a RPC connection is closed or lost.

MAAS 2.2 (current stable release)

  • Last week, MAAS 2.2 was SRU’d into the Ubuntu Archives and to our latest LTS release, Ubuntu Xenial, replacing the MAAS 2.1 series.
  • This week, a new MAAS 2.2 point release has also been prepared. It is currently undergoing heavy testing. Once testing is completed, it will be released in a separate announcement.

Libmaas

Last week, the team worked on extending libmaas's capabilities:

  • Added ability to create machines.
  • Added ability to commission machines.
  • Added ability to manage MAAS networking definitions, including subnets, fabrics, spaces, VLANs, IP ranges, static routes and DHCP.

Read more
facundo

In your face, round planet


A Python exercise. The goal is to get a series of timestamps, based on a cron-style record that indicates periodicity, from a starting point up to "now".

The problem is that "now" is Buenos Aires time, while the server is in the Netherlands (or could be anywhere).

We solve it with pytz and croniter. Let's see...

Let's start an interactive interpreter inside a virtualenv with the two libs I just mentioned (importing them too, along with datetime):

    $ fades -d pytz -d croniter
    *** fades ***  2017-07-26 18:27:20,009  INFO     Hi! This is fades 6.0, automatically managing your dependencies
    *** fades ***  2017-07-26 18:27:20,009  INFO     Need to install a dependency with pip, but no builtin, doing it manually...
    *** fades ***  2017-07-26 18:27:22,979  INFO     Installing dependency: 'pytz'
    *** fades ***  2017-07-26 18:27:24,431  INFO     Installing dependency: 'croniter'
    Python 3.5.2 (default, Nov 17 2016, 17:05:23)
    [GCC 5.4.0 20160609] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import croniter
    >>> import pytz
    >>> import datetime

Let's see that the server has "complicated" times (at the time of writing, here in Buenos Aires it's 18:09):

    >>> datetime.datetime.now()
    datetime.datetime(2017, 7, 26, 23, 9, 51, 476140)
    >>> datetime.datetime.utcnow()
    datetime.datetime(2017, 7, 26, 21, 9, 56, 707279)

Let's instantiate croniter, setting repetitions every day at 20:00 (on purpose, so that when we iterate from a week ago up to "now" it should reach yesterday, because it's just past 18:00 here now, but UTC or server time is already past 20:00...):

    >>> cron = croniter.croniter("0 20 * * * ", datetime.datetime(year=2017, month=7, day=20))

Let's get the current UTC time, adding metadata stating that it is, precisely, UTC:

    >>> utc_now = pytz.utc.localize(datetime.datetime.utcnow())
    >>> utc_now
    datetime.datetime(2017, 7, 26, 21, 15, 27, 508732, tzinfo=<UTC>)

Let's get a timezone for Buenos Aires, and the "now" from before but computed for this part of the planet:

    >>> bsas_tz = pytz.timezone("America/Buenos_Aires")
    >>> bsas_now = utc_now.astimezone(bsas_tz)
    >>> bsas_now
    datetime.datetime(2017, 7, 26, 18, 15, 27, 508732, tzinfo=<DstTzInfo 'America/Buenos_Aires' -03-1 day, 21:00:00 STD>)

Now let's loop, asking cron for dates and showing them, as long as they are not later than now (note that to compare them we have to localize them with the same timezone):

    >>> while True:
    ...     next_ts = cron.get_next(datetime.datetime)
    ...     bsas_next_ts = bsas_tz.localize(next_ts)
    ...     if bsas_next_ts > bsas_now:
    ...         break
    ...     print(bsas_next_ts)
    ...
    2017-07-20 20:00:00-03:00
    2017-07-21 20:00:00-03:00
    2017-07-22 20:00:00-03:00
    2017-07-23 20:00:00-03:00
    2017-07-24 20:00:00-03:00
    2017-07-25 20:00:00-03:00

We see we got dates starting July 20, and we have "several days at 20:00" up to yesterday, because it's not yet "today at 20:00". Done!
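
The interactive session above can be condensed into a self-contained script. To keep it standard-library only, this sketch replaces croniter with a hand-rolled daily iterator and models Buenos Aires as a fixed UTC-3 offset (a simplification of pytz's America/Buenos_Aires zone), so the calls differ from the pytz/croniter originals:

```python
from datetime import datetime, timedelta, timezone

# Buenos Aires as a fixed UTC-3 offset (a simplification: pytz's
# America/Buenos_Aires zone handles historical offsets too).
BSAS = timezone(timedelta(hours=-3))

def daily_at(hour, start, now):
    """Yield timezone-aware timestamps at `hour` local time from start to now,
    mimicking the croniter "0 20 * * *" schedule for this fixed daily case."""
    ts = start.replace(hour=hour, minute=0, second=0, microsecond=0)
    if ts < start:
        ts += timedelta(days=1)
    while ts <= now:
        yield ts
        ts += timedelta(days=1)

start = datetime(2017, 7, 20, tzinfo=BSAS)
now = datetime(2017, 7, 26, 18, 15, tzinfo=BSAS)  # the "now" from the session
stamps = list(daily_at(20, start, now))
# Six hits: July 20 through July 25 at 20:00-03:00, matching the loop above
# (July 26 at 20:00 is still in the future relative to `now`).
```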

Read more
Dustin Kirkland

Back in March, we asked the HackerNews community, “What do you want to see in Ubuntu 17.10?”: https://ubu.one/AskHN

A passionate discussion ensued, the results of which are distilled into this post: http://ubu.one/thankHN

In fact, you can check that link, http://ubu.one/thankHN and see our progress so far this cycle.  We already have beta code in 17.10 available for your testing for several of those:

And several others have excellent work in progress, and will be complete by 17.10:

In summary -- your feedback matters!  There are hundreds of engineers and designers working for *you* to continue making Ubuntu amazing!

Along with the switch from Unity to GNOME, we’re also reviewing some of the desktop applications we package and ship in Ubuntu.  We’re looking to crowdsource input on your favorite Linux applications across a broad set of classic desktop functionality.

We invite you to contribute by listing the applications you find most useful in Linux in order of preference. To help us parse your input, please copy and paste the following bullets with your preferred apps in Linux desktop environments.  You’re welcome to suggest multiple apps, please just order them prioritized (e.g. Web Browser: Firefox, Chrome, Chromium).  If some of your functionality has moved entirely to the web, please note that too (e.g. Email Client: Gmail web, Office Suite: Office 365 web).  If the software isn’t free/open source, please note that (e.g. Music Player: Spotify client non-free).  If I’ve missed a category, please add it in the same format.  If your favorites aren’t packaged for Ubuntu yet, please let us know, as we’re creating hundreds of new snap packages for Ubuntu desktop applications, and we’re keen to learn what key snaps we’re missing.

  • Web Browser: ???
  • Email Client: ???
  • Terminal: ???
  • IDE: ???
  • File manager: ???
  • Basic Text Editor: ???
  • IRC/Messaging Client: ???
  • PDF Reader: ???
  • Office Suite: ???
  • Calendar: ???
  • Video Player: ???
  • Music Player: ???
  • Photo Viewer: ???
  • Screen recording: ???

In the interest of opening this survey as widely as possible, we’ve cross-posted this thread to HackerNews, Reddit, and Slashdot.  We very much look forward to another friendly, energetic, collaborative discussion.

Or, you can fill out the survey here: https://ubu.one/apps1804

Thank you!
On behalf of @Canonical and @Ubuntu

Read more
facundo


On the last day of Nerdear.la I put together a survey I called "¿Cuál evento?" ("Which event?"), and started spreading it around the social networks.

The main question was: Which event would you be interested in attending?

  • PyCon: National Python conference, free, with international-level talks. It lasts 2 or 3 days and happens once a year somewhere in the country (the next one is in Córdoba).
  • PyDay: Local Python conference, free, with more introductory talks. It lasts one day, there can be several in a year, and it can be in your city or nearby.
  • PyCamp: A hacking space over four days doing Python or whatever you feel like. It is paid (since the venue and all meals are included). It happens once a year somewhere in the country (the next one is in Baradero).
  • Consultorio Python: A nerdy after-office where you can bring your Python problem and we solve it together. It takes place somewhere with a projector or TV, where we can chat and have a drink. It can be held several times a year, in any city.
  • Reunión Social: We meet in a bar and chat about anything (including Python and PyAr, especially if there is group news or things to decide). It can be held several times a year, in any city.
  • Short meetup: A mini-conference of a couple of hours to see two or three short talks/presentations, with some social space too. It can be held several times a year, in any city.
  • Long meetup: A mix of mini-conference and sprint. It normally happens on a Saturday, with some talks in the morning and a work/hacking space in the afternoon. It can be held several times a year, in any city.
  • Sprint: A work space (on one or more projects), normally on a Saturday so as to have several hours. It can be held several times a year, in any city.


It then also asked: Which city do you live in?, Which other nearby cities would you go to for an event lasting a day or an afternoon?, and it left a field for comments.

I got 169 responses. Nice.

The events people mostly want to attend are conference-style: first PyCon, then PyDay. After those come events that mix coding and talks (PyCamp, meetups, sprints).

Notably, the "Consultorio Python" model received the fewest votes; even so, we want to try it at least once, to see how it goes...

The following chart is the one the form gives me to view the results, cropped to exclude options that were not in the original survey (in any case, here is the spreadsheet with all the data):

Which event

The distribution of voters is what you would expect given the (regrettable) centralization of our country: many from CABA and Buenos Aires, quite a few from other large cities (Córdoba, Rosario, Santa Fe, La Plata), and some from elsewhere. In general, people are willing to travel for events.

As for comments, the most notable ones are reproduced here...

    I think it would be good to use the internet to shorten distances; those
    of us who are far away know that everything happens in Bs.As. or nearby
    (which is logical given the number of attendees). I would love there to
    be an online workshop, or a PyDayOnline, or something like that.

    Some events I did not select because of my level of knowledge; later on
    I would like to participate. The one I like most is PyCamp (this grey
    text cannot be seen, heh).

    The absence of meetups like in other communities is notable. Venue: a
    company; drinks and food provided; two talks; lightning talks... A
    PyCamp is too immersive and, for me, usually too far away. The good
    thing about meetups is that they follow the agile "two feet" rule: at
    any moment, you can walk away :-)

    It would be great to have more talks and workshops, and a scientific
    Python hackathon.

    PyCon should be held on weekdays and/or not coincide with a long
    weekend. Many people use Python at work and could attend. A weekend
    (even more so a long one), on the other hand, clashes with the necessary
    rest and space for one's personal life.

    Those of us from the interior would make better use of events that last
    several days.

    I prefer events where everyone speaks over events with predefined talks,
    since that way we all exchange ideas and many ideas and opinions can be
    heard.

    I would like there to be information about available flights, hotels
    near the venue, and minimum bus, train, and taxi costs for getting
    around in the places where the events are held.

Anyway, here is all the data if you want to do more analysis or look up a specific figure.

Read more
Colin Ian King

The latest release of stress-ng V0.08.09 incorporates new stressors and a handful of bug fixes. So what is new in this release?

  • memrate stressor to exercise and measure memory read/write throughput
  • matrix yx option to swap order of matrix operations
  • matrix stressor size can now be 8192 x 8192 in size
  • radixsort stressor (using the BSD library radixsort) to exercise CPU and memory
  • improved job script parsing and error reporting
  • faster termination of rmap stressor (this was slow inside VMs)
  • icache stressor now calls cacheflush()
  • anonymous memory mappings are now private allowing hugepage madvise
  • fcntl stressor exercises the 4.13 kernel F_GET_FILE_RW_HINT and F_SET_FILE_RW_HINT
  • stream and vm stressors have new madvise options
The new memrate stressor performs 64/32/16/8 bit reads and writes to a large memory region.  It will attempt to get some statistics on the memory bandwidth for these simple reads and writes.  One can also specify the read/write rates in terms of MB/sec using the --memrate-rd-mbs and --memrate-wr-mbs options, for example:

 stress-ng --memrate 1 --memrate-bytes 1G \  
--memrate-rd-mbs 1000 --memrate-wr-mbs 2000 -t 60
stress-ng: info: [22880] dispatching hogs: 1 memrate
stress-ng: info: [22881] stress-ng-memrate: write64: 1998.96 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read64: 998.61 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write32: 1999.68 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read32: 998.80 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write16: 1999.39 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read16: 999.66 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write8: 1841.04 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read8: 999.94 MB/sec
stress-ng: info: [22880] successful run completed in 60.00s (1 min, 0.00 secs)

...the memrate stressor will attempt to limit the memory rates, but due to scheduling jitter and other memory activity it may not be 100% accurate.  By carefully setting the size of the memory being exercised with the --memrate-bytes option, one can exercise the L1/L2/L3 caches and/or the entire memory.

By default, the matrix stressor will perform matrix operations with an optimal memory access pattern.  The new --matrix-yx option will instead perform matrix operations in y, x rather than x, y order, causing more cache stalls on larger matrices.  This can be useful for exercising cache misses.
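The idea behind --matrix-yx can be sketched in Python (illustrative only; stress-ng itself is written in C). Both orders touch the same elements and produce the same result, but the y, x order jumps a whole row between consecutive accesses instead of walking memory contiguously:

```python
# Illustrative sketch of x, y vs y, x traversal order.
n = 256
m = [[1] * n for _ in range(n)]

# x, y order: each row is walked contiguously (cache friendly).
total_xy = sum(m[x][y] for x in range(n) for y in range(n))

# y, x order: consecutive accesses are a whole row apart,
# touching a different cache line almost every time.
total_yx = sum(m[x][y] for y in range(n) for x in range(n))

# Same result, very different memory access pattern.
assert total_xy == total_yx == n * n
```

On matrices larger than the cache, the second traversal is measurably slower for exactly the reason --matrix-yx exploits.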

To complement the heapsort, mergesort and qsort memory/CPU exercising sort stressors, I've added a radixsort stressor (using the BSD library radixsort) to exercise sorting of hundreds of thousands of small text strings.

Finally, while exercising various hugepage kernel configuration options I was inspired to make stress-ng's mmaps work better with hugepage madvise hints, so where possible all anonymous memory mappings are now private to allow hugepage madvise to work.  The stream and vm stressors also have new madvise options to allow one to choose hugepage, nohugepage or normal hints.
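The private-anonymous-mapping-plus-madvise pattern can be sketched in Python's mmap module (illustrative; stress-ng does this in C with the raw syscalls, and MADV_HUGEPAGE is Linux-specific, hence the guards):

```python
import mmap

length = 4 * 1024 * 1024

# A private anonymous mapping, as stress-ng's mappings now are.
mm = mmap.mmap(-1, length,
               flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)

# MADV_HUGEPAGE only applies to private anonymous mappings, which is
# why the mappings were switched from shared to private.
if hasattr(mm, "madvise") and hasattr(mmap, "MADV_HUGEPAGE"):
    mm.madvise(mmap.MADV_HUGEPAGE)

mm[0:5] = b"hello"
assert mm[0:5] == b"hello"
mm.close()
```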

As per normal, no big changes, just small incremental improvements to this all-purpose stress tool.

Read more
facundo


The last few weeks showed clear progress on five totally dissimilar fronts. And one mishap.

The most time-pressed one is the Python Seminar. Of course, July arrived and the seminar started; it's nothing more than a course with a lot of people :)

Audience, first day

Showing things on the screen, second day

It went very well. Almost everyone came, the questions were interesting, and the timing worked out as I had planned. Also, holding it at the Onapsis offices was a success: we had a nice breakfast and were very comfortable (thanks!).

The other project where the passing weeks were pressing us is the Python Argentina Civil Association. Lately it has been a lot of paperwork: a document certified by a notary stating that Devecoop lends us its physical address (thanks!), papers filed at the downtown AFIP office, lots of papers filed at Credicoop so they open our account, going to the AFIP office corresponding to my fiscal address to register my biometric data, and so on.

The next steps are: hold a meeting (in person or remote, we'll see) to finish defining membership for individuals and companies; wait for the bank to open our account (let's hope they don't give us too much of a runaround); and, now that we have a CUIT, put the internet domain names under our own name. I'll keep you posted.

Before continuing with the progress, let's get to the mishap.

The other day I grabbed my old Dell XPS 1330, which Moni has been using for the last 4 or 5 years, because it's the only machine at home with an optical drive, and I wanted to rip the Eruca Sativa DVD "Huellas digitales" into a normal video file. I took the machine upstairs, turned it on and inserted the DVD (it slurped it in just fine), but the machine never booted. After going around in circles, I realized it was signaling an error code with its lights: "The memory is believed to be good, but it's about to be exercised. Such as shadowing the BIOS and zeroing all the memory.". And there was no way to get the DVD out :( (being a slot-loading drive, it has no little hole where you can stick a paperclip to eject the disc manually).

So I opened the whole machine up. I gutted it until I got the optical drive out, which I also disassembled to extract the DVD; all good there. Then I reassembled the machine. In the end, it kept giving me the same error code. If I remove both memory modules it reports "No SODIMM installed", and if I put either of the two modules in either slot it reports "SPD data reports all SODIMMS are unusable".

The laptop fully opened

The best I can determine is that something memory-related on the motherboard died. Kaput. A pity this machine is gone, but it really held up: I've had it for 8.5 years, since I joined Canonical, and when I stopped using it Moni took over. Side note: this is one of those cases where a video really is better than a text document.

Moving on to software projects, there is progress in the three I've been pushing lately: Encuentro, Fades and CDPedia.

Regarding Encuentro, the best possible news: it's back! I hadn't touched it in a long time, and several things had changed in the backends. Well, mainly CDA (which disappeared) and Encuentro itself (which totally revamped its site). But Diego Mascialino gave me a hand: we renewed the scrapers, improved a few more little things, and it's almost ready for a release. Bonus track: a revamped website, you'll see.

About Fades: Nicolás Demarchi and I have been putting some energy into it after the release, and we started down the (hopefully short) road to version 7. The biggest news on this front is that Michael Kennedy and Brian Okken talked about the project in this episode of the Python Bytes podcast (which both Nico and I listen to every week); we are very happy.

Cheeses (because of the fades logo, long story)

Finally, regarding CDPedia, I made progress on this project too. I'm planning a full release in the coming weeks, and for that I put together a beta version (after fixing several things over the last weeks/months to adapt it to changes in Wikipedia), so please download it, check it out, and let me know if you find anything.

More news soon on this channel :)

Read more
Dustin Kirkland


I met up with the excellent hosts of The Changelog podcast at OSCON in Austin a few weeks back, and joined them for a short segment.

That podcast recording is now live!  Enjoy!


The Changelog 256: Ubuntu Snaps and Bash on Windows Server with Dustin Kirkland
Listen on Changelog.com



Cheers,
Dustin

Read more
Christian Brauner

 

containers

For a long time LXD has supported multiple storage drivers. Users could choose between zfs, btrfs, lvm, or plain directory storage pools but they could only ever use a single storage pool. A frequent feature request was to support not just a single storage pool but multiple storage pools. This way users would for example be able to maintain a zfs storage pool backed by an SSD to be used by very I/O intensive containers and another simple directory based storage pool for other containers. Luckily, this is now possible since LXD gained its own storage management API a few versions back.

Creating storage pools

A new LXD installation comes without any storage pool defined. If you run lxd init LXD will offer to create a storage pool for you. The storage pool created by lxd init will be the default storage pool on which containers are created.

asciicast

Creating further storage pools

Our client tool makes it really simple to create additional storage pools. In order to create and administer new storage pools you can use the lxc storage command. So if you wanted to create an additional btrfs storage pool on a block device /dev/sdb you would simply use lxc storage create my-btrfs btrfs source=/dev/sdb. But let’s take a look:

asciicast

Creating containers on the default storage pool

If you started from a fresh install of LXD and created a storage pool via lxd init LXD will use this pool as the default storage pool. That means if you’re doing a lxc launch images:ubuntu/xenial xen1 LXD will create a storage volume for the container’s root filesystem on this storage pool. In our examples we’ve been using my-first-zfs-pool as our default storage pool:

asciicast

Creating containers on a specific storage pool

But you can also tell lxc launch and lxc init to create a container on a specific storage pool by simply passing the -s argument. For example, if you wanted to create a new container on the my-btrfs storage pool you would do lxc launch images:ubuntu/xenial xen-on-my-btrfs -s my-btrfs:

asciicast

Creating custom storage volumes

If you need additional space for one of your containers, for example to store additional data, the new storage API will let you create storage volumes that can be attached to a container. This is as simple as doing lxc storage volume create my-btrfs my-custom-volume:

asciicast

Attaching custom storage volumes to containers

Of course this feature is only helpful because the storage API lets you attach those storage volumes to containers. To attach a storage volume to a container you can use lxc storage volume attach my-btrfs my-custom-volume xen1 data /opt/my/data:

asciicast

Sharing custom storage volumes between containers

By default LXD will make an attached storage volume writable by the container it is attached to. This means it will change the ownership of the storage volume to the container’s id mapping. But storage volumes can also be attached to multiple containers at the same time, which is great for sharing data among them. However, this comes with a few restrictions: for a storage volume to be attached to multiple containers, they must all share the same id mapping. Let’s create an additional container xen-isolated with an isolated id mapping. This means its id mapping is unique in this LXD instance, so no other container has the same one. Attaching the same storage volume my-custom-volume to this container will now fail:

asciicast

But let’s make xen-isolated have the same mapping as xen1 and let’s also rename it to xen2 to reflect that change. Now we can attach my-custom-volume to both xen1 and xen2 without a problem:

asciicast

Summary

The storage API is a very powerful addition to LXD. It provides a set of essential features that help deal with a variety of problems when using containers at scale. This short introduction hopefully gave you an impression of what you can do with it. There will be more to come in the future.


Read more
Leo Arias

I'm a Quality Assurance Engineer. A big part of my job is to find problems, then make sure that they are fixed and automated so they don't regress. If I do my job well, then our process will identify new and potential problems early without manual intervention from anybody in the team. It's like trying to automate myself, every day, until I'm no longer needed and have to jump to another project.

However, as we work on the project, it's unavoidable that many small manual tasks accumulate on my hands. This happens because I set up the continuous integration infrastructure, so I'm the one who knows the most about it and has the easiest access; or because I'm the one who requested access to the build farm, so I'm the one with the password; or because I configured the staging environment and I'm the only one who knows the details. This is a great way to achieve job security, but it doesn't lead us to higher quality. It's a job half done, and it's terribly boring to be a bottleneck and a silo of information about testing and the release process. All of these tasks should be shared by the whole team, just like all the other tasks in the project.

There are two problems. First, most of these tasks involve delicate credentials that shouldn't be freely shared with everybody. Second, even if the task itself is simple and quick to execute, it's not simple to document how to set up the environment to execute it, nor how to make sure that the right task is executed at the right moment.

Chatops is how I like to solve all of this. The idea is that every task that requires manual intervention is implemented in a script that can be executed by a bot. This bot joins the communication channel where the entire team is present, and it will execute the tasks and report their results, either in response to external events that happen somewhere in the project infrastructure, or in response to the direct request of a team member in the channel. The credentials are kept safe: they only have to be shared with the bot, and the permissions can be handled with access control lists or membership of the channel. And the operational knowledge is shared with the whole team, because they are all listening in the same channel as the bot. This means that anybody can execute the tasks, and the bot assists them to make it simple.

In snapcraft we started writing our bot not so long ago. It's called snappy-m-o (Microbe Obliterator), and it's written in Python with errbot. We, of course, packaged it as a snap, so we have automated delivery every time we change its source code, and the bot is also auto-updated on the server, so in the chat we are always interacting with the latest and greatest.

Let me show you how we started it, in case you want to get your own. But let's call this one Baymax, and let's make a virtual environment with errbot, to experiment.

drawing of the Baymax bot

$ mkdir -p ~/workspace/baymax
$ cd ~/workspace/baymax
$ sudo apt install python3-venv
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install errbot
$ errbot --init

The last command will initialize this bot with a super simple plugin, and will configure it to work in text mode. This means that the bot won't be listening on any channel, you can just interact with it through the command line (the ops, without the chat). Let's try it:

$ errbot
[...]
>>> !help
All commands
[...]
!tryme - Execute to check if Errbot responds to command.
[...]
>>> !tryme
It works !
>>> !shutdown --confirm

tryme is the command provided by the example plugin that errbot --init created. Take a look at the file plugins/err-example/example.py, errbot is just lovely. In order to define your own plugin you will just need a class that inherits from errbot.BotPlugin, and the commands are methods decorated with @errbot.botcmd. I won't dig into how to write plugins, because they have an amazing documentation about Plugin development. You can also read the plugins we have in our snappy-m-o, one for triggering autopkgtests on GitHub pull requests, and the other for subscribing to the results of the pull requests tests.
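The plugin mechanism boils down to marking methods and collecting them later. Here is a toy sketch of that registration pattern in plain Python (illustrative only; errbot's real BotPlugin and @botcmd are more involved, and these names are just stand-ins):

```python
# Toy sketch of the command-registration pattern behind @errbot.botcmd
# (hypothetical simplification; not errbot's actual implementation).
def botcmd(func):
    func._is_command = True  # mark the method as a chat command
    return func

class BotPlugin:
    def commands(self):
        # Collect every method marked with @botcmd.
        return {name: getattr(self, name)
                for name in dir(self)
                if getattr(getattr(self, name), "_is_command", False)}

class Example(BotPlugin):
    @botcmd
    def tryme(self):
        return "It works !"

bot = Example()
print(bot.commands()["tryme"]())  # prints: It works !
```

When a message like !tryme arrives, the bot looks up the command name in that registry and calls the method, replying with whatever it returns.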

Let's change the config of Baymax to put it in an IRC chat:

$ pip install irc

And in the config.py file, set the following values:

BACKEND = 'IRC'
BOT_IDENTITY = {
    'nickname' : 'baymax-elopio',  # Nicknames need to be unique, so append your own.
                                   # Remember to replace 'elopio' with your nick everywhere
                                   # from now on.
    'server' : 'irc.freenode.net',
}
CHATROOM_PRESENCE = ('#snappy',)

Run it again with the errbot command, but this time join the #snappy channel on irc.freenode.net, and write !tryme in there. It works ! :)

screenshot of errbot on IRC

So, this is very simple, but let's package it now to start with the good practice of continuous delivery before it gets more complicated. As usual, it just requires a snapcraft.yaml file with all the packaging info and metadata:

name: baymax-elopio
version: '0.1-dev'
summary: A test bot with errbot.
description: Chat ops bot for my team.
grade: stable
confinement: strict

apps:
  baymax-elopio:
    command: env LC_ALL=C.UTF-8 errbot -c $SNAP/config.py
    plugs: [home, network, network-bind]

parts:
  errbot:
    plugin: python
    python-packages: [errbot, irc]
  baymax:
    source: .
    plugin: dump
    stage:
      - config.py
      - plugins
    after: [errbot]

And we need to change a few more values in config.py to make sure that the bot is relocatable, that we can run it in the isolated snap environment, and that we can add plugins after it has been installed:

import os

BOT_DATA_DIR = os.environ.get('SNAP_USER_DATA')
BOT_EXTRA_PLUGIN_DIR = os.path.join(os.environ.get('SNAP'), 'plugins')
BOT_LOG_FILE = BOT_DATA_DIR + '/err.log'

One final try, this time from the snap:

$ sudo apt install snapcraft
$ snapcraft
$ sudo snap install baymax*.snap --dangerous
$ baymax-elopio

And go back to IRC to check.

The last thing would be to push the source code we have just written to a GitHub repo, and enable continuous delivery in build.snapcraft.io. Then go to your server and install the bot with sudo snap install baymax-elopio --edge. Now every time somebody from your team makes a change in the master repo in GitHub, the bot on your server will be automatically updated with those changes within a few hours, without any work on your side.

If you are into chatops, make sure that every time you do a manual task, you also plan for some time to turn that task into a script that can be executed by your bot. And get ready to enjoy tons and tons of free time, or just keep going through those 400 open bugs, whichever you prefer :)

Read more
Leo Arias

I love playing with my prototyping boards. Here at Ubuntu we are designing the core operating system to support every single-board computer, and to keep it safe, updated and simple. I've learned a lot about physical computing, but I always have a big problem when my prototype is done and I want to deploy it. I am working with a Raspberry Pi, a DragonBoard, and a BeagleBone. They are all very different, with different architectures, pins, onboard capabilities and peripherals, and they can run different operating systems. When I started learning about this, I had to write three very different programs if I wanted to try my prototype on all my boards.

picture of the three different SBCs

Then I found Gobot, a framework for robotics and IoT that supports my three boards, and many more. With the added benefit that you can write all the software in the lovely and clean Go language. The Ubuntu store supports all their architectures too, and packaging Go projects with snapcraft is super simple. So we can combine all of this to make a single snap package that with the help of Gobot will work on every board, and deploy it to all the users of these boards through the snaps store.

Let's dig into the code with a very simple example to blink an LED, first for the Raspberry Pi only.

package main

import (
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  adaptor := raspi.NewAdaptor()
  led := gpio.NewLedDriver(adaptor, "7")

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

In there you will see some of the Gobot concepts. There's an adaptor for the board, a driver for the specific device (in this case the LED), and a robot to control everything. In this program, there are only two things specific to the Raspberry Pi: the adaptor and the name of the GPIO pin ("7").

picture of the Raspberry Pi prototype

It works nicely in one of the boards, but let's extend the code a little to support the other two.

package main

import (
  "log"
  "os/exec"
  "strings"
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/beaglebone"
  "gobot.io/x/gobot/platforms/dragonboard"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  out, err := exec.Command("uname", "-r").Output()
  if err != nil {
    log.Fatal(err)
  }
  var adaptor gobot.Adaptor
  var pin string
  kernelRelease := string(out)
  if strings.Contains(kernelRelease, "raspi2") {
    adaptor = raspi.NewAdaptor()
    pin = "7"
  } else if strings.Contains(kernelRelease, "snapdragon") {
    adaptor = dragonboard.NewAdaptor()
    pin = "GPIO_A"
  } else {
    adaptor = beaglebone.NewAdaptor()
    pin = "P8_7"
  }
  digitalWriter, ok := adaptor.(gpio.DigitalWriter)
  if !ok {
    log.Fatal("Invalid adaptor")
  }
  led := gpio.NewLedDriver(digitalWriter, pin)

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

We are basically adding a block to select the right adaptor and pin, depending on which board the code is running on. Now we can compile this program, copy the binary to the board, and give it a try.

picture of the Dragonboard prototype

But we can do better. If we package this in a snap, anybody with one of the boards and an operating system that supports snaps can easily install it. We also open the door to continuous delivery and crowd testing. And as I said before, super simple, just put this in the snapcraft.yaml file:

name: gobot-blink-elopio
version: master
summary:  Blink snap for the Raspberry Pi with Gobot
description: |
  This is a simple example to blink an LED in the Raspberry Pi
  using the Gobot framework.

confinement: devmode

apps:
  gobot-blink-elopio:
    command: gobot-blink

parts:
  gobot-blink:
    source: .
    plugin: go
    go-importpath: github.com/elopio/gobot-blink

To build the snap, here is a cool trick thanks to the work that kalikiana recently added to snapcraft. I'm writing this code on my development machine, which is amd64. But the Raspberry Pi and BeagleBone are armhf, and the DragonBoard is arm64; so I need to cross-compile the code to get binaries for all the architectures:

snapcraft --target-arch=armhf
snapcraft clean
snapcraft --target-arch=arm64

That will leave two .snap files in my working directory that I can then upload to the store with snapcraft push. Or I can just push the code to GitHub and let build.snapcraft.io take care of building and pushing for me.

Here is the source code for this simple example: https://github.com/elopio/gobot-blink

Of course, Gobot supports many more devices that will let you build complex robots. Just take a look at the documentation in the Gobot site, and at the guide about deployable packages with Gobot and snapcraft.

picture of the BeagleBone prototype

If you have one of the boards I'm using here to play, give it a try:

sudo snap install gobot-blink-elopio --edge --devmode
sudo gobot-blink-elopio

Now my experiments will be to try to make the snap more secure, with strict confinement. If you have any questions or want to help, we have a topic in the forum.

Read more
admin

Hello MAASters!

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work in progress for the next release of MAAS, and bug fixes in released MAAS versions.

MAAS 2.3 (current development release)

  • Completed Django 1.11 transition
      • MAAS 2.3 snap will use Django 1.11 by default.
      • Ubuntu package will use Django 1.11 in Artful+
  • Network beaconing & better network discovery
      • MAAS now listens for [unicast and multicast] beacons on UDP port 5240. Beacons are encrypted and authenticated using a key derived from the MAAS shared secret. Upon receiving certain types of beacons, MAAS will reply, confirming to the sender that an existing MAAS on the network has the same shared key. In addition, records are kept about which interface each beacon was received on, and what VLAN tag (if any) was in use on that interface. This allows MAAS to determine which interfaces observed the same beacon (and thus must be on the same fabric). This information can also determine if [what would previously have been assumed to be] a separate fabric is actually an alternate VLAN in an existing fabric.
      • The maas-rack send-beacons command is now available to test the beacon protocol. (This command is intended for testing and support, not general use.) The MAAS shared secret must be installed before the command can be used. By default, it will send multicast beacons out all possible interfaces, but it can also be used in unicast mode.
      • Note that while IPv6 support is planned, support for receiving IPv6 beacons in MAAS is not yet available. The maas-rack send-beacons command, however, is already capable of sending IPv6 beacons. (Full IPv6 support is expected to make beacons more flexible, since IPv6 multicast can be sent out on interfaces without a specific IP address assignment, and without resorting to raw sockets.)
      • Improvements to rack registration are now under development, so that users will see a more accurate representation of fabrics upon initial installation or registration of a MAAS rack controller.
  • Bug fixes
    • LP: #1701056: Show correct information for a device details page as a normal user
    • LP: #1701052: Do not show the controllers tab as a normal user
    • LP: #1683765: Fix format when devices/controllers are selected to match those of machines
    • LP: #1684216: Update button label from ‘Save selection’ to ‘Update selection’
    • LP: #1682489: Fix Cancel button on the add-user dialog, which caused the user to be added anyway
    • LP: #1682387: ‘Unassigned’ should be ‘(Unassigned)’
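The beacon authentication described above, a key derived from the MAAS shared secret used to authenticate messages, follows a standard pattern that can be sketched generically in Python. This is an illustrative HMAC construction with placeholder secret and derivation label, not MAAS's actual implementation:

```python
import hashlib
import hmac

# Hypothetical sketch of shared-secret beacon authentication
# (not MAAS's real code; names and the KDF label are placeholders).
def derive_beacon_key(shared_secret: bytes) -> bytes:
    # Derive a dedicated beacon key from the shared secret.
    return hashlib.sha256(b"beacon:" + shared_secret).digest()

def sign_beacon(key: bytes, payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_beacon(key: bytes, payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_beacon(key, payload), tag)

key = derive_beacon_key(b"example-shared-secret")
tag = sign_beacon(key, b"beacon payload")
assert verify_beacon(key, b"beacon payload", tag)
assert not verify_beacon(key, b"tampered payload", tag)
```

A receiver holding the same shared secret derives the same key, so a valid tag proves the sender shares the secret, which is exactly the "same shared key" confirmation described above.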

MAAS 2.2.1

Over the past week the team also focused on preparing and QA’ing the new MAAS 2.2.1 point release, which was released on Friday, June 30th. For more information about the bug fixes, please visit https://launchpad.net/maas/+milestone/2.2.1 .

MAAS 2.2.1 is available in:

  • ppa:maas/stable

Read more