Canonical Voices

facundo

Choosing heroes


Two great players, two great plays.

The first (in chronological order) is this great goal by Messi that let Barça beat Real Madrid (its arch-rival) in stoppage time, in a match in April of this year; click for the video:

Messi's play

The second play is this block by Manu Ginóbili on one of the star players of the moment (James Harden), which let the San Antonio Spurs win game five against Houston in the Western Conference Semifinals of the NBA playoffs, in May of this year; click for the video:

Manu's play

Watch both plays: they're fantastic. Now watch Messi's and Manu's celebrations.

You know what bugs me? That Messi goes and makes the sign of the cross.

Sorry, but I choose my heroes secular.

Read more
admin

Hello MAASters! The MAAS development summaries are back!

Over the past three weeks the team has made good progress in three main areas: the development of 2.3, maintenance of 2.2, and our new and improved Python library (libmaas).

MAAS 2.3 (current development release)

The first official MAAS 2.3 release has been prepared. It is currently undergoing a heavy round of testing and will be announced separately once completed. In the past three weeks, the team has:

  • Completed Upstream Proxy UI
      • Improved the UI to better configure the different proxy modes.
      • Added the ability to configure an upstream proxy.
  • Network beaconing & better network discovery
  • Started Hardware Testing Phase 2
      • UX team has completed the initial wireframes and gathered feedback.
      • Started changes to collect and gather better test results.
  • Started Switch modeling
      • Started changes to support switch and switch port modeling.
  • Bug fixes
    • LP: #1703403 – regiond workers can use too many postgres connections
    • LP: #1651165 – Unable to change disk name using maas gui
    • LP: #1702690 – [2.2] Commissioning a machine prefers minimum kernel over commissioning global
    • LP: #1700802 – [2.x] maas cli allocate interfaces=<label>:ip=<ADDRESS> errors with Unknown interfaces constraint
    • LP: #1703713 – [2.3] Devices don’t have a link from the DNS page
    • LP: #1702976 – Cavium ThunderX lacks power settings after enlistment apparently due to missing kernel
    • LP: #1664822 – Enable IPMI over LAN if disabled
    • LP: #1703713 – Fix missing link on domain details page
    • LP: #1702669 – Add index on family(ip) for each StaticIPAddress to improve execution time of the maasserver_routable_pairs view.
    • LP: #1703845 – Set the re-check interval for rack to region RPC connections to the lowest value when a RPC connection is closed or lost.

MAAS 2.2 (current stable release)

  • Last week, MAAS 2.2 was SRU’d into the Ubuntu archive for our latest LTS release, Ubuntu 16.04 LTS (Xenial), replacing the MAAS 2.1 series.
  • This week, a new MAAS 2.2 point release has also been prepared. It is currently undergoing heavy testing. Once testing is completed, it will be released in a separate announcement.

Libmaas

Last week, the team worked on increasing the level of functionality of libmaas:

  • Added ability to create machines.
  • Added ability to commission machines.
  • Added ability to manage MAAS networking definitions, including subnets, fabrics, spaces, VLANs, IP ranges, static routes, and DHCP.
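
For a sense of what driving MAAS from this library looks like, here is a minimal, hypothetical libmaas session; the URL and credentials are placeholders, and the exact method names should be checked against the library's documentation:

# A minimal, hypothetical python-libmaas session: connect to a MAAS
# server and list its machines. URL and credentials are placeholders.
from maas.client import login

client = login(
    "http://localhost:5240/MAAS/",
    username="admin",
    password="secret",
)

for machine in client.machines.list():
    print(machine.hostname, machine.status)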

Read more
facundo

In your face, round planet


A Python exercise. The goal is to produce a series of timestamps, based on a cron-style record that specifies a periodicity, from a starting point up to "now".

The problem is that "now" means Buenos Aires time, while the server is in the Netherlands (or could be anywhere).

We solve it with pytz and croniter. Let's take a look...

Let's start an interactive interpreter inside a virtualenv with the two libs I just mentioned (and import them, along with datetime):

    $ fades -d pytz -d croniter
    *** fades ***  2017-07-26 18:27:20,009  INFO     Hi! This is fades 6.0, automatically managing your dependencies
    *** fades ***  2017-07-26 18:27:20,009  INFO     Need to install a dependency with pip, but no builtin, doing it manually...
    *** fades ***  2017-07-26 18:27:22,979  INFO     Installing dependency: 'pytz'
    *** fades ***  2017-07-26 18:27:24,431  INFO     Installing dependency: 'croniter'
    Python 3.5.2 (default, Nov 17 2016, 17:05:23)
    [GCC 5.4.0 20160609] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import croniter
    >>> import pytz
    >>> import datetime

Note that the server keeps "complicated" times (at the moment of writing this, it's 18:09 here in Buenos Aires):

    >>> datetime.datetime.now()
    datetime.datetime(2017, 7, 26, 23, 9, 51, 476140)
    >>> datetime.datetime.utcnow()
    datetime.datetime(2017, 7, 26, 21, 9, 56, 707279)

Let's instantiate croniter, telling it to repeat every day at 20:00 (on purpose, so that when we iterate from a week ago until "now" it should only reach yesterday: right now it's a bit past 18:00 here, but UTC, i.e. server time, is already past 20:00...):

    >>> cron = croniter.croniter("0 20 * * * ", datetime.datetime(year=2017, month=7, day=20))

Let's get the current UTC time, adding metadata stating that it is, indeed, UTC:

    >>> utc_now = pytz.utc.localize(datetime.datetime.utcnow())
    >>> utc_now
    datetime.datetime(2017, 7, 26, 21, 15, 27, 508732, tzinfo=<UTC>)

Let's get a timezone object for Buenos Aires, and the "now" from before but computed for this part of the planet:

    >>> bsas_tz = pytz.timezone("America/Buenos_Aires")
    >>> bsas_now = utc_now.astimezone(bsas_tz)
    >>> bsas_now
    datetime.datetime(2017, 7, 26, 18, 15, 27, 508732, tzinfo=<DstTzInfo 'America/Buenos_Aires' -03-1 day, 21:00:00 STD>)

Now let's loop, asking cron for dates and printing them, as long as they're not later than "now" (note that, to compare them, we have to localize them to the same timezone).

    >>> while True:
    ...     next_ts = cron.get_next(datetime.datetime)
    ...     bsas_next_ts = bsas_tz.localize(next_ts)
    ...     if bsas_next_ts > bsas_now:
    ...         break
    ...     print(bsas_next_ts)
    ...
    2017-07-20 20:00:00-03:00
    2017-07-21 20:00:00-03:00
    2017-07-22 20:00:00-03:00
    2017-07-23 20:00:00-03:00
    2017-07-24 20:00:00-03:00
    2017-07-25 20:00:00-03:00

We got dates starting on July 20, and "several days at 20:00" up to yesterday, because it's not "today at 20:00" yet. Done!
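
All the pieces together, as a single consolidated sketch (same cron expression and start date as in the session above):

    import datetime

    import croniter
    import pytz

    # "now" in UTC, then converted to Buenos Aires time
    bsas_tz = pytz.timezone("America/Buenos_Aires")
    utc_now = pytz.utc.localize(datetime.datetime.utcnow())
    bsas_now = utc_now.astimezone(bsas_tz)

    # daily at 20:00, starting July 20
    cron = croniter.croniter(
        "0 20 * * *", datetime.datetime(year=2017, month=7, day=20))

    while True:
        next_ts = cron.get_next(datetime.datetime)
        # croniter returns naive datetimes; treat them as Buenos Aires times
        bsas_next_ts = bsas_tz.localize(next_ts)
        if bsas_next_ts > bsas_now:
            break
        print(bsas_next_ts)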

Read more
Dustin Kirkland

Back in March, we asked the HackerNews community, “What do you want to see in Ubuntu 17.10?”: https://ubu.one/AskHN

A passionate discussion ensued, the results of which are distilled into this post: http://ubu.one/thankHN

In fact, you can check that link, http://ubu.one/thankHN, and see our progress so far this cycle.  We already have beta code in 17.10 available for your testing for several of those.

And several others have excellent work in progress that will be complete by 17.10.

In summary -- your feedback matters!  There are hundreds of engineers and designers working for *you* to continue making Ubuntu amazing!

Along with the switch from Unity to GNOME, we’re also reviewing some of the desktop applications we package and ship in Ubuntu.  We’re looking to crowdsource input on your favorite Linux applications across a broad set of classic desktop functionality.

We invite you to contribute by listing the applications you find most useful in Linux, in order of preference. To help us parse your input, please copy and paste the following bullets with your preferred apps in Linux desktop environments.  You’re welcome to suggest multiple apps; please just order them by priority (e.g. Web Browser: Firefox, Chrome, Chromium).  If some of your functionality has moved entirely to the web, please note that too (e.g. Email Client: Gmail web, Office Suite: Office365 web).  If the software isn’t free/open source, please note that (e.g. Music Player: Spotify client, non-free).  If I’ve missed a category, please add it in the same format.  If your favorites aren’t packaged for Ubuntu yet, please let us know, as we’re creating hundreds of new snap packages for Ubuntu desktop applications, and we’re keen to learn what key snaps we’re missing.

  • Web Browser: ???
  • Email Client: ???
  • Terminal: ???
  • IDE: ???
  • File manager: ???
  • Basic Text Editor: ???
  • IRC/Messaging Client: ???
  • PDF Reader: ???
  • Office Suite: ???
  • Calendar: ???
  • Video Player: ???
  • Music Player: ???
  • Photo Viewer: ???
  • Screen recording: ???

In the interest of opening this survey as widely as possible, we’ve cross-posted this thread to HackerNews, Reddit, and Slashdot.  We very much look forward to another friendly, energetic, collaborative discussion.

Or, you can fill out the survey here: https://ubu.one/apps1804

Thank you!
On behalf of @Canonical and @Ubuntu

Read more
facundo


On the last day of Nerdear.la I put together a survey I called "¿Cuál evento?" ("Which event?"), and started spreading it around the social networks.

The main question was: Which event would you be interested in attending?

  • PyCon: The national Python conference, free, with international-level talks. It lasts 2 or 3 days and happens once a year somewhere in the country (the next one is in Córdoba).
  • PyDay: A local Python conference, free, with more introductory talks. It lasts one day; there can be several in a year, and one could be in your city or nearby.
  • PyCamp: A hacking space, four days of doing Python or whatever you feel like. It's paid (since the venue and all meals are included). It happens once a year somewhere in the country (the next one is in Baradero).
  • Consultorio Python ("Python clinic"): A nerd after-office where you bring your Python problem and we solve it together. Held anywhere with a projector or TV, where we can chat and have a drink. It can happen several times a year, in any city.
  • Reunión Social (social gathering): We meet at a bar and chat about anything (including Python and PyAr, especially if there's group news or things to decide). It can happen several times a year, in any city.
  • Short meetup: A mini-conference of a couple of hours, with two or three short talks/presentations and some social space as well. It can happen several times a year, in any city.
  • Long meetup: A mix of mini-conference and sprint. Usually held on a Saturday, with some talks in the morning and a work/hacking space in the afternoon. It can happen several times a year, in any city.
  • Sprint: A work space (on one or more projects), usually on a Saturday so there are several hours available. It can happen several times a year, in any city.


It then also asked: Which city do you live in? Which nearby cities would you travel to for an event lasting a day or an afternoon? And it left a field for comments.

I got 169 responses. Nice.

The events people mostly want to attend are conference-style: first PyCon, then PyDay. After those come events mixing coding and talks (PyCamp, meetups, sprints).

Notably, the "Python clinic" model got the fewest votes; still, we want to try it at least once, to see how it goes...

The following chart is the one the form gives me to view the results; it's cropped to exclude options that weren't in the original survey (in any case, here's the spreadsheet with all the data):

Which event

The distribution of voters is what you'd expect given the (unfortunate) centralization of our country: many from CABA and Buenos Aires province, quite a few from other big cities (Córdoba, Rosario, Santa Fe, La Plata), and some from elsewhere. In general, people are willing to travel for events.

As for the comments, the most notable ones are reproduced here...

    I think it would be good to use the internet to shorten distances; those
    of us who are far away know everything happens in Buenos Aires or nearby
    (which is logical, given the number of attendees). I'd love for there to
    be an online workshop, or an online PyDay, or something like that.

    Some events I didn't select because of my level of knowledge; further
    along I would like to participate. The one I like best is PyCamp (this
    gray text can't be seen, heh)

    The absence of meetups like other communities have is notable; venue: a
    company, contributions of food and drinks, two talks, lightning talks...
    A PyCamp is too immersive and, for me, usually too far away. The good
    thing about meetups is that they follow the agile "two feet" rule: at
    any moment, you can leave, walking :-)

    It would be great to have more talks, sessions, a scientific-Python
    hackathon

    PyCon should be held on weekdays and/or not overlap with a long weekend.
    Many people use Python at work and can attend. A weekend (all the more a
    long one) clashes with much-needed rest and space for one's personal
    life

    Those of us from the interior would get more out of events that span
    more days.

    I like events where everyone speaks better than events with predefined
    talks, since that way we all exchange ideas and many ideas and opinions
    can be heard.

    I'd like there to be information about available flights, hotels near
    the venue, and minimum bus, train, and taxi costs for getting around
    where the events take place

Anyway, here's all the data, in case you want to do more analysis or look up some specific detail.

Read more
Colin Ian King

The latest release of stress-ng V0.08.09 incorporates new stressors and a handful of bug fixes. So what is new in this release?

  • memrate stressor to exercise and measure memory read/write throughput
  • matrix yx option to swap order of matrix operations
  • matrix stressor size can now be up to 8192 x 8192
  • radixsort stressor (using the BSD library radixsort) to exercise CPU and memory
  • improved job script parsing and error reporting
  • faster termination of rmap stressor (this was slow inside VMs)
  • icache stressor now calls cacheflush()
  • anonymous memory mappings are now private allowing hugepage madvise
  • fcntl stressor exercises the 4.13 kernel F_GET_FILE_RW_HINT and F_SET_FILE_RW_HINT
  • stream and vm stressors have new madvise options

The new memrate stressor performs 64/32/16/8 bit reads and writes to a large memory region.  It will attempt to gather statistics on the memory bandwidth of these simple reads and writes.  One can also specify the read/write rates in terms of MB/sec using the --memrate-rd-mbs and --memrate-wr-mbs options, for example:

 stress-ng --memrate 1 --memrate-bytes 1G \  
--memrate-rd-mbs 1000 --memrate-wr-mbs 2000 -t 60
stress-ng: info: [22880] dispatching hogs: 1 memrate
stress-ng: info: [22881] stress-ng-memrate: write64: 1998.96 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read64: 998.61 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write32: 1999.68 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read32: 998.80 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write16: 1999.39 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read16: 999.66 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write8: 1841.04 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read8: 999.94 MB/sec
stress-ng: info: [22880] successful run completed in 60.00s (1 min, 0.00 secs)

...the memrate stressor will attempt to limit the memory rates, but due to scheduling jitter and other memory activity it may not be 100% accurate.  By carefully setting the size of the memory being exercised with the --memrate-bytes option, one can exercise the L1/L2/L3 caches and/or the entire memory.

By default, the matrix stressor performs matrix operations with optimal memory access patterns.  The new --matrix-yx option will instead perform matrix operations in a y, x rather than an x, y order, causing more cache stalls on larger matrices.  This can be useful for exercising cache misses.
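
The effect is easy to demonstrate outside stress-ng too. The following is a rough Python sketch of the same idea (nothing stress-ng-specific, and pure-Python lists blunt the effect compared to C, but the difference is usually still visible): walking a large 2-D structure against its layout in memory defeats cache prefetching.

# Time x,y (row-major) vs y,x (column-major) traversal of a 2-D list.
import time

N = 4096
matrix = [[0] * N for _ in range(N)]

def walk(row_major):
    total = 0
    start = time.perf_counter()
    if row_major:
        for i in range(N):      # x, y: consecutive cells in each row
            for j in range(N):
                total += matrix[i][j]
    else:
        for j in range(N):      # y, x: stride of N cells per access
            for i in range(N):
                total += matrix[i][j]
    return time.perf_counter() - start

print("x,y order: {:.2f}s".format(walk(True)))
print("y,x order: {:.2f}s".format(walk(False)))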

To complement the heapsort, mergesort and qsort memory/CPU exercising sort stressors I've added the BSD library radixsort stressor to exercise sorting of hundreds of thousands of small text strings.

Finally, while exercising various hugepage kernel configuration options I was inspired to make stress-ng's mmaps work better with hugepage madvise hints, so where possible all anonymous memory mappings are now private to allow hugepage madvise to work.  The stream and vm stressors also have new madvise options to allow one to choose hugepage, nohugepage or normal hints.

No big changes, as per normal; just small incremental improvements to this all-purpose stress tool.

Read more
facundo


Recent weeks showed clear progress on five totally dissimilar fronts. And one mishap.

The most time-pressed one is the Python Seminar. July arrived and the seminar started; it's really just a course with a lot of people :)

Audience, first day

Showing things on the screen, second day

It went very well. Almost everybody came, the questions were interesting, and the time was enough, just as I had planned. Also, holding it at the Onapsis offices was a success: we had a nice breakfast and we were very comfortable (thanks!).

The project that the passing weeks have also been pressing us on is the Python Argentina Civil Association. Lately it's been all paperwork: a document, with a notary present, certifying that Devecoop lends us their physical address (thanks!); papers filed at the downtown AFIP office; lots of papers submitted to Credicoop so they open our bank account; going to the AFIP office for my fiscal address to register my biometric data; and so on.

The next steps are: holding a meeting (in person or remote, we'll see) to finish settling the membership question, for individuals and companies; waiting for the bank to open our account (hopefully without too much run-around); and, now that we have a CUIT, putting the internet domain names in our own name. I'll keep you posted.

Before continuing with the progress, let's get to the mishap.

The other day I grabbed my dear old Dell XPS 1330, which Moni has been using for the last 4 or 5 years, because it's the only machine in the house with an optical drive, and I wanted to rip the "Huellas digitales" DVD by Eruca Sativa to a regular video file. I took the machine upstairs, turned it on and inserted the DVD (it slurped it in just fine), but the machine never booted. After a lot of fiddling I realized it was signaling an error code with its lights: "The memory is believed to be good, but it's about to be exercised. Such as shadowing the BIOS and zeroing all the memory.". And there was no way to get the DVD out :( (being slot-loading, there's no little hole to stick a paper clip in and eject the disc manually).

So I opened up the whole machine. I gutted it until I got the optical drive out, which I also disassembled to extract the DVD; that went fine. Then I reassembled the machine. In the end, it kept giving me the same error code. If I remove both memory modules it reports "No SODIMM installed", and if I put either of the two in either slot it reports "SPD data reports all SODIMMS are unusable".

The laptop, fully opened

The best I can figure is that something memory-related on the motherboard died. Kaput. A shame this machine is gone, but it truly held up: I'd had it for 8.5 years, since I joined Canonical, and when I stopped using it Moni started. Side note: this is one of those cases where a video really is better than a text document.

Now on to software projects: there's progress on the three I've been pushing lately: Encuentro, fades, and CDPedia.

Regarding Encuentro, the best possible news: it's coming back! I hadn't touched it in a long time, and several things had changed backend-wise: mainly CDA (which disappeared) and Encuentro itself (which completely renewed its site). But Diego Mascialino gave me a hand, we renewed the scrapers, improved a few more little things, and it's almost ready for a release. Bonus track: a renewed site, you'll see.

About fades: Nicolás Demarchi and I have been putting some effort into it after the release, and we've started down the (hopefully short) road to version 7. The biggest news on this front is that Michael Kennedy and Brian Okken talked about the project in this episode of the Python Bytes podcast (which both Nico and I listen to every week); we're very happy.

Cheeses (after the fades logo, long story)

Finally, regarding CDPedia, I made progress on this project too. I'm planning to do a full release in the coming weeks, and to that end I put together a beta version (after fixing several things in recent weeks/months to adapt it to changes in Wikipedia), so please download it, give it a look, and let me know if you find anything.

More news soon on this channel :)

Read more
Dustin Kirkland


I met up with the excellent hosts of The Changelog podcast at OSCON in Austin a few weeks back, and joined them for a short segment.

That podcast recording is now live!  Enjoy!


The Changelog 256: Ubuntu Snaps and Bash on Windows Server with Dustin Kirkland
Listen on Changelog.com



Cheers,
Dustin

Read more
Christian Brauner

 


For a long time LXD has supported multiple storage drivers. Users could choose between zfs, btrfs, lvm, or plain directory storage pools but they could only ever use a single storage pool. A frequent feature request was to support not just a single storage pool but multiple storage pools. This way users would for example be able to maintain a zfs storage pool backed by an SSD to be used by very I/O intensive containers and another simple directory based storage pool for other containers. Luckily, this is now possible since LXD gained its own storage management API a few versions back.

Creating storage pools

A new LXD installation comes without any storage pool defined. If you run lxd init LXD will offer to create a storage pool for you. The storage pool created by lxd init will be the default storage pool on which containers are created.

(asciicast demo)

Creating further storage pools

Our client tool makes it really simple to create additional storage pools. In order to create and administer new storage pools you can use the lxc storage command. So if you wanted to create an additional btrfs storage pool on a block device /dev/sdb you would simply use lxc storage create my-btrfs btrfs source=/dev/sdb. But let’s take a look:

(asciicast demo)

Creating containers on the default storage pool

If you started from a fresh install of LXD and created a storage pool via lxd init LXD will use this pool as the default storage pool. That means if you’re doing a lxc launch images:ubuntu/xenial xen1 LXD will create a storage volume for the container’s root filesystem on this storage pool. In our examples we’ve been using my-first-zfs-pool as our default storage pool:

(asciicast demo)

Creating containers on a specific storage pool

But you can also tell lxc launch and lxc init to create a container on a specific storage pool by simply passing the -s argument. For example, if you wanted to create a new container on the my-btrfs storage pool you would do lxc launch images:ubuntu/xenial xen-on-my-btrfs -s my-btrfs:

(asciicast demo)

Creating custom storage volumes

If you need additional space for one of your containers to for example store additional data the new storage API will let you create storage volumes that can be attached to a container. This is as simple as doing lxc storage volume create my-btrfs my-custom-volume:

(asciicast demo)

Attaching custom storage volumes to containers

Of course this feature is only helpful because the storage API lets you attach those storage volumes to containers. To attach a storage volume to a container you can use lxc storage volume attach my-btrfs my-custom-volume xen1 data /opt/my/data:

(asciicast demo)

Sharing custom storage volumes between containers

By default LXD will make an attached storage volume writable by the container it is attached to. This means it will change the ownership of the storage volume to the container’s id mapping. But storage volumes can also be attached to multiple containers at the same time. This is great for sharing data among multiple containers. However, it comes with a few restrictions. In order for a storage volume to be attached to multiple containers, they must all share the same id mapping. Let’s create an additional container xen-isolated that has an isolated id mapping. This means its id mapping will be unique in this LXD instance, such that no other container has the same id mapping. Attaching the same storage volume my-custom-volume to this container will now fail:

(asciicast demo)

But let’s make xen-isolated have the same mapping as xen1 and let’s also rename it to xen2 to reflect that change. Now we can attach my-custom-volume to both xen1 and xen2 without a problem:

(asciicast demo)

Summary

The storage API is a very powerful addition to LXD. It provides a set of essential features that are helpful in dealing with a variety of problems when using containers at scale. This short introduction hopefully gave you an impression of what you can do with it. There will be more to come in the future.


Read more
Leo Arias

I'm a Quality Assurance Engineer. A big part of my job is to find problems, then make sure that they are fixed and automated so they don't regress. If I do my job well, then our process will identify new and potential problems early without manual intervention from anybody in the team. It's like trying to automate myself, everyday, until I'm no longer needed and have to jump to another project.

However, as we work on the project, it's unavoidable that many small manual tasks accumulate in my hands. This happens because I set up the continuous integration infrastructure, so I'm the one who knows the most about it and has the easiest access; or because I'm the one who requested access to the build farm, so I'm the one with the password; or because I configured the staging environment and I'm the only one who knows the details. This is a great way to achieve job security, but it doesn't lead us to higher quality. It's a job half done, and it's terribly boring to be a bottleneck and a silo of information about testing and the release process. All of these tasks should be shared by the whole team, as with all the other tasks in the project.

There are two problems. First, most of these tasks involve delicate credentials that shouldn't be freely shared with everybody. Second, even if the task itself is simple and quick to execute, it's not very simple to document how to set up the environment to be able to execute them, nor how to make sure that the right task is executed in the right moment.

Chatops is how I like to solve all of this. The idea is that every task that requires manual intervention is implemented in a script that can be executed by a bot. This bot joins the communication channel where the entire team is present, and it will execute the tasks and report about their results as a response to external events that happen somewhere in the project infrastructure, or as a response to the direct request of a team member in the channel. The credentials are kept safe, they only have to be shared with the bot and the permissions can be handled with access control lists or membership to the channel. And the operative knowledge is shared with all the team, because they are all listening in the same channel with the bot. This means that anybody can execute the tasks, and the bot assists them to make it simple.

In snapcraft we started writing our bot not so long ago. It's called snappy-m-o (Microbe Obliterator), and it's written in Python with errbot. We, of course, packaged it as a snap so we have automated delivery every time we change its source code, and the bot is also auto-updated on the server, so in the chat we are always interacting with the latest and greatest.

Let me show you how we started it, in case you want to get your own. But let's call this one Baymax, and let's make a virtual environment with errbot, to experiment.

drawing of the Baymax bot

$ mkdir -p ~/workspace/baymax
$ cd ~/workspace/baymax
$ sudo apt install python3-venv
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install errbot
$ errbot --init

The last command will initialize this bot with a super simple plugin and configure it to work in text mode. This means the bot won't be listening on any channel; you can just interact with it through the command line (the ops, without the chat). Let's try it:

$ errbot
[...]
>>> !help
All commands
[...]
!tryme - Execute to check if Errbot responds to command.
[...]
>>> !tryme
It works !
>>> !shutdown --confirm

tryme is the command provided by the example plugin that errbot --init created. Take a look at the file plugins/err-example/example.py; errbot is just lovely. In order to define your own plugin you will just need a class that inherits from errbot.BotPlugin, with the commands as methods decorated with @errbot.botcmd. I won't dig into how to write plugins, because they have amazing documentation about Plugin development. You can also read the plugins we have in our snappy-m-o: one for triggering autopkgtests on GitHub pull requests, and another for subscribing to the results of the pull request tests.
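
To give an idea of a plugin's shape, here is a minimal, hypothetical one; the file name and the command are made up, while the BotPlugin base class and the botcmd decorator are the real errbot API:

# plugins/err-hello/hello.py: a minimal, hypothetical errbot plugin.
from errbot import BotPlugin, botcmd

class Hello(BotPlugin):
    """Replies to !hello, just to show the plugin structure."""

    @botcmd
    def hello(self, msg, args):
        # msg is the incoming message; args is the text after the command.
        return "Hello, {}!".format(msg.frm)

errbot also expects a small .plug metadata file next to the module; the plugins/err-example directory that errbot --init generates shows the format.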

Let's change the config of Baymax to put it in an IRC chat:

$ pip install irc

And in the config.py file, set the following values:

BACKEND = 'IRC'
BOT_IDENTITY = {
    'nickname' : 'baymax-elopio',  # Nicknames need to be unique, so append your own.
                                   # Remember to replace 'elopio' with your nick everywhere
                                   # from now on.
    'server' : 'irc.freenode.net',
}
CHATROOM_PRESENCE = ('#snappy',)

Run it again with the errbot command, but this time join the #snappy channel on irc.freenode.net, and write !tryme in there. It works ! :)

screenshot of errbot on IRC

So, this is very simple, but let's package it now to start with the good practice of continuous delivery before it gets more complicated. As usual, it just requires a snapcraft.yaml file with all the packaging info and metadata:

name: baymax-elopio
version: '0.1-dev'
summary: A test bot with errbot.
description: Chat ops bot for my team.
grade: stable
confinement: strict

apps:
  baymax-elopio:
    command: env LC_ALL=C.UTF-8 errbot -c $SNAP/config.py
    plugs: [home, network, network-bind]

parts:
  errbot:
    plugin: python
    python-packages: [errbot, irc]
  baymax:
    source: .
    plugin: dump
    stage:
      - config.py
      - plugins
    after: [errbot]

And we need to change a few more values in config.py to make sure that the bot is relocatable, that we can run it in the isolated snap environment, and that we can add plugins after it has been installed:

import os

BOT_DATA_DIR = os.environ.get('SNAP_USER_DATA')
BOT_EXTRA_PLUGIN_DIR = os.path.join(os.environ.get('SNAP'), 'plugins')
BOT_LOG_FILE = BOT_DATA_DIR + '/err.log'

One final try, this time from the snap:

$ sudo apt install snapcraft
$ snapcraft
$ sudo snap install baymax*.snap --dangerous
$ baymax-elopio

And go back to IRC to check.

Last thing would be to push the source code we have just written to a GitHub repo, and enable continuous delivery in build.snapcraft.io. Go to your server and install the bot with sudo snap install baymax-elopio --edge. Now every time somebody from your team makes a change in the master repo in GitHub, the bot in your server will be automatically updated to get those changes within a few hours, without any work from your side.

If you are into chatops, make sure that every time you do a manual task, you also plan for some time to turn that task into a script that can be executed by your bot. And get ready to enjoy tons and tons of free time, or just keep going through those 400 open bugs, whichever you prefer :)

Read more
Leo Arias

I love playing with my prototyping boards. Here at Ubuntu we are designing the core operating system to support every single-board computer, and to keep it safe, updated and simple. I've learned a lot about physical computing, but I always have a big problem when my prototype is done and I want to deploy it. I am working with a Raspberry Pi, a DragonBoard, and a BeagleBone. They are all very different, with different architectures, different pins, different onboard capabilities and peripherals, and they can run different operating systems. When I started learning about this, I had to write 3 very different programs if I wanted to try my prototype on all my boards.

picture of the three different SBCs

Then I found Gobot, a framework for robotics and IoT that supports my three boards, and many more. With the added benefit that you can write all the software in the lovely and clean Go language. The Ubuntu store supports all their architectures too, and packaging Go projects with snapcraft is super simple. So we can combine all of this to make a single snap package that with the help of Gobot will work on every board, and deploy it to all the users of these boards through the snaps store.

Let's dig into the code with a very simple example to blink an LED, first for the Raspberry Pi only.

package main

import (
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  adaptor := raspi.NewAdaptor()
  led := gpio.NewLedDriver(adaptor, "7")

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

In there you will see some of the Gobot concepts. There's an adaptor for the board, a driver for the specific device (in this case the LED), and a robot to control everything. In this program, there are only two things specific to the Raspberry Pi: the adaptor and the name of the GPIO pin ("7").

picture of the Raspberry Pi prototype

It works nicely in one of the boards, but let's extend the code a little to support the other two.

package main

import (
  "log"
  "os/exec"
  "strings"
  "time"

  "gobot.io/x/gobot"
  "gobot.io/x/gobot/drivers/gpio"
  "gobot.io/x/gobot/platforms/beaglebone"
  "gobot.io/x/gobot/platforms/dragonboard"
  "gobot.io/x/gobot/platforms/raspi"
)

func main() {
  out, err := exec.Command("uname", "-r").Output()
  if err != nil {
    log.Fatal(err)
  }
  var adaptor gobot.Adaptor
  var pin string
  kernelRelease := string(out)
  if strings.Contains(kernelRelease, "raspi2") {
    adaptor = raspi.NewAdaptor()
    pin = "7"
  } else if strings.Contains(kernelRelease, "snapdragon") {
    adaptor = dragonboard.NewAdaptor()
    pin = "GPIO_A"
  } else {
    adaptor = beaglebone.NewAdaptor()
    pin = "P8_7"
  }
  digitalWriter, ok := adaptor.(gpio.DigitalWriter)
  if !ok {
    log.Fatal("Invalid adaptor")
  }
  led := gpio.NewLedDriver(digitalWriter, pin)

  work := func() {
    gobot.Every(1*time.Second, func() {
      led.Toggle()
    })
  }

  robot := gobot.NewRobot("snapbot",
    []gobot.Connection{adaptor},
    []gobot.Device{led},
    work,
  )

  robot.Start()
}

We are basically adding a block to select the right adaptor and pin, depending on which board the code is running on. Now we can compile this program, throw the binary on the board, and give it a try.

picture of the Dragonboard prototype

But we can do better. If we package this in a snap, anybody with one of the boards and an operating system that supports snaps can easily install it. We also open the door to continuous delivery and crowd testing. And as I said before, it's super simple; just put this in the snapcraft.yaml file:

name: gobot-blink-elopio
version: master
summary:  Blink snap for the Raspberry Pi with Gobot
description: |
  This is a simple example to blink an LED in the Raspberry Pi
  using the Gobot framework.

confinement: devmode

apps:
  gobot-blink-elopio:
    command: gobot-blink

parts:
  gobot-blink:
    source: .
    plugin: go
    go-importpath: github.com/elopio/gobot-blink

To build the snap, here is a cool trick thanks to the work that kalikiana recently added to snapcraft. I'm writing this code on my development machine, which is amd64. But the Raspberry Pi and BeagleBone are armhf, and the DragonBoard is arm64; so I need to cross-compile the code to get binaries for all the architectures:

snapcraft --target-arch=armhf
snapcraft clean
snapcraft --target-arch=arm64

That will leave two .snap files in my working directory that I can then upload to the store with snapcraft push. Or I can just push the code to GitHub and let build.snapcraft.io take care of building and pushing for me.

Here is the source code for this simple example: https://github.com/elopio/gobot-blink

Of course, Gobot supports many more devices that will let you build complex robots. Just take a look at the documentation on the Gobot site, and at the guide about deployable packages with Gobot and snapcraft.

picture of the BeagleBone prototype

If you have one of the boards I'm using here to play, give it a try:

sudo snap install gobot-blink-elopio --edge --devmode
sudo gobot-blink-elopio

Now my experiments will be to try to make the snap more secure, with strict confinement. If you have any questions or want to help, we have a topic in the forum.

Read more
admin

Hello MAASters!

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS, and bug fixes in released MAAS versions.

MAAS 2.3 (current development release)

  • Completed Django 1.11 transition
      • MAAS 2.3 snap will use Django 1.11 by default.
      • Ubuntu package will use Django 1.11 in Artful+
  • Network beaconing & better network discovery
      • MAAS now listens for [unicast and multicast] beacons on UDP port 5240. Beacons are encrypted and authenticated using a key derived from the MAAS shared secret. Upon receiving certain types of beacons, MAAS will reply, confirming to the sender that an existing MAAS on the network has the same shared key. In addition, records are kept about which interface each beacon was received on, and which VLAN tag (if any) was in use on that interface. This allows MAAS to determine which interfaces observed the same beacon (and thus must be on the same fabric). This information can also determine whether [what would previously have been assumed to be] a separate fabric is actually an alternate VLAN in an existing fabric.
      • The maas-rack send-beacons command is now available to test the beacon protocol. (This command is intended for testing and support, not general use.) The MAAS shared secret must be installed before the command can be used. By default, it will send multicast beacons out all possible interfaces, but it can also be used in unicast mode.
      • Note that while IPv6 support is planned, support for receiving IPv6 beacons in MAAS is not yet available. The maas-rack send-beacons command, however, is already capable of sending IPv6 beacons. (Full IPv6 support is expected to make beacons more flexible, since IPv6 multicast can be sent out on interfaces without a specific IP address assignment, and without resorting to raw sockets.)
      • Improvements to rack registration are now under development, so that users will see a more accurate representation of fabrics upon initial installation or registration of a MAAS rack controller.
  • Bug fixes
    • LP: #1701056: Show correct information for a device details page as a normal user
    • LP: #1701052: Do not show the controllers tab as a normal user
    • LP: #1683765: Fix format when devices/controllers are selected to match those of machines
    • LP: #1684216 – Update button label from ‘Save selection’ to ‘Update selection’
    • LP: #1682489 – Fix Cancel button on add user dialog, which caused the user to be added anyway
    • LP: #1682387 – Unassigned should be (Unassigned)

MAAS 2.2.1

The past week the team was also focused on preparing and QA’ing the new MAAS 2.2.1 point release, which was released on Friday, June 30th. For more information about the bug fixes, please visit https://launchpad.net/maas/+milestone/2.2.1.

MAAS 2.2.1 is available in:

  • ppa:maas/stable

Read more
Alan Griffiths

Mir release 0.27

Mir release 0.27/MirAL release 1.4

This is an interim development release of Mir and MirAL for Ubuntu 17.10 (Artful) that delivers many of the features that were work-in-progress when we needed to restructure the project. The Mir release notes are here: https://launchpad.net/mir/0.27/0.27.0.

The MirAL 1.4 release exposes a few new features and removes support for Mir versions that are no longer supported:

  • Support for passing messages to enable Drag & Drop
  • Support for client requested move
  • Port to the undeprecated Mir APIs
  • Added “--cursor-theme” option when configuring a cursor theme
  • Drop support for Mir versions before 0.26

There will be further Mir releases culminating in a Mir 1.0 release before the Ubuntu 17.10 (Artful) feature freeze in August.

Read more
Christian Brauner

Storage Tools

Having implemented, or at least rewritten, most storage backends in LXC as well as LXD has left me with the impression that most storage tools suck. Most advanced storage drivers provide a set of tools that allow userspace to administer storage without having to link against an external library. This is a huge advantage if one wants to keep the number of external dependencies to a minimum; it is a policy to which LXC and LXD always try to adhere. One of the most crucial features such tools should provide is the ability to retrieve each property of each storage entity they administer in a predictable and machine-readable way. As far as I can tell, only the ZFS and LVM tools allow one to do this. For example

zfs get -H -p -o "value" <key> <storage-entity>

will let you retrieve (nearly) all properties. The RBD and BTRFS tools lack this ability, which makes them inconvenient to use at times.
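
That predictability is exactly what makes a tool easy to drive from other programs. As a small illustration (the dataset name is hypothetical), parsing the zfs invocation above needs nothing more than Python's standard library:

# Query one ZFS property in machine-readable form, mirroring
# `zfs get -H -p -o value <key> <storage-entity>` from the text.
import subprocess

def zfs_get(key, entity):
    """Return the raw value of a single ZFS property as a string."""
    out = subprocess.check_output(
        ["zfs", "get", "-H", "-p", "-o", "value", key, entity])
    return out.decode().strip()

print(zfs_get("used", "mypool/containers"))  # dataset name is made up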


Read more
Robin Winslow

Canonical’s webteam manage over 18 websites as well as many supporting projects and frameworks. These projects are built with any combination of Python, Ruby, NodeJS, Go, PostgreSQL, MongoDB or OpenStack Swift.

We have 9 full-time developers – half the number of websites we have. And naturally some of our projects get a lot of time spent on them (like www.ubuntu.com), and others only get worked on once every few months. Most devs will touch most projects at some point, and some may work on a few of them in any given day.

Before any developer can start a new piece of work, they need to get the project running on their computer. These computers may be running any flavour of Linux or macOS (thankfully we don’t yet need to support Windows).

A focus on tooling

If you’ve ever tried to get up and running on a new software project, you’ll certainly appreciate how difficult that can be. Sometimes developers can spend days simply working out how to install a dependency.

XKCD 1742: Will it work?

Given the number and diversity of our projects, and how often we switch between them, this is a delay we simply cannot afford.

This is why we’ve invested a lot of time into refining and standardising the local development tooling, making it as easy as possible for any of our devs, or any contributors, to get up and running as simply as possible.

The standard interface

We needed a simple, standardised set of commands that could be run across all projects, to achieve predictable results. We didn’t want our developers to have to dig into the README or other documentation every time they wanted to get a new project running.

This is the standard interface we chose to implement in all our projects, to cover the basic functions common to almost all of them:

./run        # An alias for "./run serve"
./run serve  # Prepare the project and run the local server
./run build  # Build the project, ready for distribution or release
./run watch  # Watch local files for changes and rebuild as necessary
./run test   # Check code syntax and run unit tests
./run clean  # Remove any temporary or built files or local databases

We decided on a single run executable as the entry-point into all our projects only after trying and eventually rejecting a number of alternatives:

  • A Makefile: The syntax can be confusing. Makefiles are really made for compiling system binaries, which doesn’t usually apply to our projects
  • gulp, or NPM scripts: Not all our projects need NodeJS, and NodeJS isn’t always available on a developer’s system
  • docker-compose: Although we do ultimately run everything through Docker (see below), the docker-compose entrypoint alone wasn’t powerful enough to achieve everything we needed

In contrast to all these options, the run script allows us to perform whatever actions we choose, using any interpreter that’s available on the local system. The script is currently written in Bash because it’s available on all Linux and macOS systems. As an additional bonus, ./run is quicker to type than the other options, saving our devs crucial nanoseconds.

The single dependency that developers need to install to run the script is Docker, for reasons outlined below.

Knowing we can run or build our projects through this standard interface is not only useful for humans, but also for supporting services – like our build jobs and automated tests. We can write general solutions, and know they’ll be able to work with any of our projects.

Using ./run is optional

All our website projects are openly available on GitHub. While we believe the ./run script offers a nice easy way of running our projects, we are mindful that people from outside our team may want to run the project without installing Docker, want to have more fine-grained control over how the project is run, or just not trust our script.

For this reason, we have tried to keep the addition of the ./run script from affecting the wider shape of our projects. It remains possible to run each of our projects using standard methods, without ever knowing or caring about the ./run script or Docker.

  • Django projects can still be run with pip install -r requirements.txt; ./manage.py runserver
  • Jekyll projects can still be run with bundle install; bundle exec jekyll serve
  • NodeJS projects can still be run with npm install; npm run serve

While the documentation in our READMEs recommends the ./run script, we also try to mention the alternatives, e.g. in www.ubuntu.com’s HACKING.md.

Using Docker for encapsulation

Although we strive to keep our projects as simple as possible, every software project relies on dependent libraries and programs. These dependencies pose 2 problems for us:

  • We need to install and run these dependencies in a predictable way – which may be difficult in some operating systems
  • We must keep these dependencies from affecting the developer’s wider system – there’s nothing worse than having a project break your computer

For a while now, developers have been solving this problem by running applications within virtual machines running Linux (e.g. with VirtualBox and Vagrant), which is a great way of encapsulating software within a predictable environment.

Linux containers offer light-weight encapsulation

More recently, containers have entered the scene.


A container is a part of the existing system with carefully controlled permissions and an encapsulated filesystem, to make it appear and behave like a separate operating system. Containers are much lighter and quicker to run than a full virtual machine, and yet provide similar benefits.

The easiest and most direct way to run containers is probably LXD, but unfortunately there’s no easy way to run LXD on macOS. By contrast, Docker CE is trivial to install and use on macOS, and so this became our container manager of choice. When it becomes easier to run LXD on macOS, we’ll revisit this decision.

Each project uses a number of Docker images


Running containers through Docker helps us to carefully manage our projects’ dependencies, by:

  • Keeping all our software, from Python modules to databases, from affecting the wider system
  • Logically grouping our dependencies into separate light-weight containers: one for the database, and a separate one for each technology stack (Python, Ruby, Node etc.)
  • Easily cleaning up a project by simply deleting its associated containers

So the ./run script in each project starts the project by running the relevant commands inside the relevant Docker images; partners.ubuntu.com’s ./run, for example, works exactly this way.

Docker is the only dependency

By using Docker images in this way, the developer doesn’t need to install any of the project dependencies on their local system (NodeJS, Python, PostgreSQL etc.). Docker – which should be trivial to install on both Linux and macOS – is the single dependency they need to run any of our projects.

Keeping the ./run script up-to-date across projects

A key feature of our solution is that it provides a consistent interface across all of our projects. However, the script itself will vary between projects, as different projects have different requirements. So we needed a way of sharing relevant parts of the script while keeping the ability to customise it locally.

It is also important that we don’t add significant bloat to the project’s dependencies. This script is just meant to be a useful shorthand way of running the project, but we don’t want it to affect the shape of the project at large, or add too much extra complexity.

However, we still need a way of making improvements to the script in a centralised way and easily updating the script in existing projects.

A yeoman generator

To achieve these goals, we maintain a yeoman generator called canonical-webteam. This generator contains a few ways of adding the ./run architecture, for some common types of projects we use:

$ yo canonical-webteam:run            # Add ./run for a basic node-only project
$ yo canonical-webteam:run-django     # Add ./run for a databaseless Django project
$ yo canonical-webteam:run-django-db  # Add ./run for a Django project with a database
$ yo canonical-webteam:run-jekyll     # Add ./run for a Jekyll project

These generator scripts can be used either to add the ./run script to a project that doesn’t have it, or to replace an existing ./run script with the latest version. It will also optionally update .gitignore and package.json with some of our standard settings for our projects.

Try it out!

To see this ./run tooling in action, first install Docker by following the official instructions.

Run the www.ubuntu.com website

You should now be able to run a version of the www.ubuntu.com website on your computer:

  • Download the www.ubuntu.com codebase, e.g.:

    curl -L https://github.com/canonical-websites/www.ubuntu.com/archive/master.zip > www.ubuntu.com-master.zip
    unzip www.ubuntu.com-master.zip
    cd www.ubuntu.com-master
    
  • Run the site!

    $ ./run
    # Wait a while (the first time) for it to download and install dependencies. Until:
    Starting development server at http://0.0.0.0:8001/
    Quit the server with CONTROL-C.
    
  • Visit http://127.0.0.1:8001 in your browser, and you should see the latest version of the https://www.ubuntu.com website.

Forking or improving our work

We have documented this standard interface in our team practices repository, and we keep the central code in our canonical-webteam Yeoman generator.

Feel free to fork our code, or if you’d like to suggest improvements please submit an issue or pull-request against either repository.


Also published on Medium.

Read more
Matthew Paul Thomas

Designing build.snapcraft.io

In January, I was presented with a design challenge. Many open-source software developers use GitHub. Let’s make it as easy as possible for them to build and release their code automatically, as a snap software package for Ubuntu and other Linux systems. The result is now available to the world: build.snapcraft.io.

My first task was to interview project stakeholders, getting an understanding of the data and technical constraints involved. That gave me enough information to draw a basic user flow diagram for the app.

This included a front page, a “Dashboard”, a settings page, repo and build pages, and steps for adding repos, adding YAML, and registering a name, which would require Ubuntu One sign-in.

Next, I worked with visual designer Jamie Young to produce a “competitor analysis” of software CI (continuous integration) services, such as Travis, AppVeyor, and CircleCI. These are not actually competitors — our app works alongside whichever CI service you use, and we use Travis ourselves. But we weren’t aware of any existing service doing auto-building and distribution together. And CI services were useful comparisons because they have many of the same user flows.

Our summary of good and not-so-good details in those services became the basis for a design workshop in February, where designers, engineers, and managers worked together on sketching the pages we’d need.

My design colleague Paty Davila distilled these sketches into initial wireframes. I then drew detailed wireframes that included marketing and instructional text. Whether wireframing and copywriting are done by the same person or different people, doing them in tandem can reveal ways to shorten or eliminate text by improving layout or visual elements. I also wrote a functional specification describing the presence, contents, and behavior of each element in the site.

A sketch of the front page, one of several produced during the workshop.
My minimal wireframe, including text.
Several iterations later, a mockup from Jamie Young.
The front page as it is today.

The design patterns in Canonical’s Vanilla CSS framework, for basic components like headings and buttons, made it possible for engineers to lay out pages based directly on the wireframes and functional spec with little need for visual design. But in a few cases, visual designers produced mockups where we had good reason to diverge from existing patterns. And the front page in particular benefited from illustrations by graphics whiz Matthieu James.

The most challenging part of designing this service has been that it communicates with four external systems: not just GitHub, but also the Launchpad build service, the snap store, and the Ubuntu One sign-on service. This has required special emphasis on handling error cases — where any of the external sites behave unexpectedly or provide incomplete information — and communicating progress through the overall flow.

Since launching the site, we’ve added the ability to build organization repos, making the service more useful for teams of developers. Once a repo admin adds the repo to build.snapcraft.io, it will show up automatically for their colleagues as well.

I continue maintaining the specification, designing planned features and addressing new constraints. As well as improving design consistency, the spec helps smooth the on-ramp for new developers joining the project. Engineers are working on integrating the site better with the Vanilla framework. And at the same time, we’re planning a project that will use build.snapcraft.io as the foundation of something much bigger. Good design never sleeps.

Meanwhile, if you have code on GitHub yourself, and this snap thing sounds intriguing, try it out.

Read more
facundo


Tons of movies (using a measure of weight rather than a count is for dramatic effect (?)) since last time, but I think that's only because so much time went by, since there are also quintillions of newly noted ones (now using a number for the quantity is just to be internally inconsistent).

  • A Perfect Day: +1. Great story (great actors), and I liked how it shows a universe I don't know and that doesn't touch me directly.
  • Air: -0. Some interesting details, but nothing new.
  • Amy: -1. I didn't like at all how the documentary was put together; it was like the video version of a tabloid. I couldn't finish it, even though some parts were interesting.
  • Anesthesia: -0. Has some VERY interesting parts, but is generally boring through more than the first half, never quite pulling you into the situations, and then it ends... abruptly, leaving most things unresolved.
  • Black Mass: -0. The story has its interesting details, but the same thing happened to me as with other movies based on a true story: it lacks backbone, it lacks the dynamics of a movie, I don't know... it neither starts nor ends; it's weak in that sense.
  • Chloe & Theo: +1. Beautiful movie, pure life lessons.
  • Crimson Peak: +0. A ghost movie, but well made... which is why I'd rather call it a movie WITH ghosts, not ABOUT them.
  • Deadpool: +0. A superhero satire; I had a lot of fun.
  • Doctor Strange: +1. Fun, interesting, good performances and effects. I also liked the character himself (I didn't know him before).
  • Experimenter: -0. The background info is good, but I didn't like the movie itself at all; I'd rather watch a documentary about that person and his work than something this boring
  • Ghost in the Shell: -0. Not worth it. If you want to see the story well told, watch Kôkaku Kidôtai (the original manga), and if you want to see Scarlett Johansson act, watch Lost in Translation (and if you want to see her naked, watch Under the Skin).
  • Hotel Transylvania 2: +0. Everything you'd expect from a kids' movie.
  • Jane Got a Gun: -0. The movie isn't bad, but in the end it leaves you with nothing.
  • Jason Bourne: -0. Fast-paced and gripping, but there's nothing new in the story. "More of the same" at its fullest. Never again.
  • Momentum: -1. Boring in lots of parts (which for an action movie is saying a lot), but the last straw was the crude way it leaves the story "pending" for a sequel.
  • Now You See Me 2: +1. Very entertaining, although it lacks a bit of substance, just like the original.
  • Point Break: -0. It has some neat lessons, mountains, and beautiful landscapes... but the rest is more motocross scenes than script :/
  • Regression: +0. Interesting for its subject and for how it keeps leading you along without your fully understanding what's going on; the ending is a bit weak, but it gets by.
  • Rogue One: A Star Wars Story: +0. It's good, but mostly because of what it tells and the universe it's embedded in; beyond that, the movie has many flaws. I suspect (fear?) they'll start churning out a thousand satellite movies to the main story, with ever-lower quality (like they're doing with the superhero ones)...
  • Space Station 76: -0. A mediocre movie set in a (very interesting and fun, granted) "70s sci-fi" aesthetic.
  • Spectre: +0. Fast-paced, relentless, with good photography, but it doesn't escape being "just another James Bond movie".
  • Star Trek Beyond: +1. It still works; it keeps that spirit of the original series which, to my mind and taste, is what makes these worthwhile.
  • The Gunman: +0. The typical story of the guy who was bad, then good, then kills all the bad guys. But it's well executed, shows one face of multinational corporations in third-world countries, and it's easy to watch.
  • The Man from U.N.C.L.E.: +0. A '60s/'70s spy movie. The U.N.C.L.E. agents, that is. Light, enjoyable, with memorable scenes. If you used to watch the old series I suppose you'll like it even more.
  • The Zero Theorem: -0. Surprisingly boring for something so bizarre.
  • X-Men: Apocalypse: +0. More of the same, but I liked the way they weave all the stories into the "X-Men chronology" and explain how everything came to be; it would be nice if someday they did something like that with the Tolkien universe.


As I was saying, a whole lot of newly noted ones...

  • Blind (2017; Drama, Romance) Bestselling novelist, Bill Oakland loses his wife and his sight in a vicious car crash. Five years later Socialite Suzanne Dutchman is forced to read to Bill in an intimate room three times a week as a plea bargain for being associated with her husband's insider trading. A passionate affair ensues, forcing them both to question whether or not it's ever too late to find true love. But when Suzanne's husband is let out on a technicality, she is forced to choose between the man she loves and the man she built a life with. [D: Michael Mailer; A: Demi Moore, Alec Baldwin, Dylan McDermott]
  • Casi leyendas (2017; Comedy, Drama, Music) Three estranged friends reunite and reluctantly reform a rock band that in their youth was about to be famous, but for mysterious reasons, they never succeeded. [D: Gabriel Nesci; A: Florencia Bertotti, Claudia Fontán, Leandro Juarez]
  • Deep Burial (2017; Sci-Fi, Thriller) In the near future, when communications go offline at a remote nuclear power plant isolated in the desert, a young safety inspector, Abby Dixon, is forced to fly out to bring them back online. Once inside the facility, mysterious clues and strange behaviors cause Abby to have doubts about the sanity, and perhaps identities, of the two employees onsite. [D: Dagen Merrill; A: Tom Sizemore, Sarah Habel, Dominic Monaghan]
  • Julie & Julia (2009; Biography, Drama, Romance) Julia Child and Julie Powell - both of whom wrote memoirs - find their lives intertwined. Though separated by time and space, both women are at loose ends... until they discover that with the right combination of passion, fearlessness and butter, anything is possible. [D: Nora Ephron; A: Meryl Streep, Amy Adams, Stanley Tucci]
  • La Sangre del Gallo (2015; Thriller) Damian is a 26-year-old who wakes up one morning beaten up, bound, hooded and alone in an unfamiliar place. He doesn't know how he got there, or why; he doesn't even remember his name. A man arrives; he is clearly not the one who captured him, yet he looks after him. Damian now remembers an accident in which his mother and brother died, with him driving. He remembers an argument that reveals secrets from his past. The path that led him to hit rock bottom keeps running through his head. He starts a special relationship with his captor, who will form part of the puzzle that Damian must complete. [D: Mariano Dawidson; A: Santiago Pedrero, Eduardo Sapac, Emiliano Carrazzone]
  • La vache (2016; Adventure, Comedy, Drama) An Algerian man's life-long dream finally comes true when he receives an invitation to take his cow Jacqueline to the Paris International Agriculture Fair. [D: Mohamed Hamidi; A: Fatsah Bouyahmed, Lambert Wilson, Jamel Debbouze]
  • Mecánica Popular (2015; Comedy, Drama) After devoting his life to publish philosophy, history and psychoanalysis, the editor Mario Zavadikner, discontented with the social and intellectual reality, decides to shoot himself at the office of his publishing house. An unexpected presence stops his attempt: Silvia Beltran, aspiring writer who threatens to commit suicide if Zavadikner refuses to publish her novel. [D: Alejandro Agresti; A: Alejandro Awada, Patricio Contreras, Marina Glezer]
  • Murder on the Orient Express (2017; Crime, Drama, Mystery) A lavish train ride unfolds into a stylish & suspenseful mystery. From the novel by Agatha Christie, Murder on the Orient Express tells of thirteen stranded strangers & one man's race to solve the puzzle before the murderer strikes again. [D: Kenneth Branagh; A: Johnny Depp, Michelle Pfeiffer, Daisy Ridley]
  • Nieve negra (2017; Crime, Drama, Mystery, Thriller) Accused of killing his brother during adolescence, Salvador lives isolated in the middle of Patagonia. After several decades without seeing each other, his brother Marcos and his sister-in-law Laura come to convince him to sell the lands they share by inheritance. The journey, in the middle of a lonely and inaccessible place, revives an old duel in which the roles of victim and murderer swap over and over again. [D: Martin Hodara; A: Laia Costa, Ricardo Darín, Dolores Fonzi]
  • Seven Sisters (2017; Sci-Fi, Thriller) In a not so distant future, where overpopulation and famine have forced governments to undertake a drastic One-Child Policy, seven identical sisters (all of them portrayed by Noomi Rapace) live a hide-and-seek existence pursued by the Child Allocation Bureau. The Bureau, directed by the fierce Nicolette Cayman (Glenn Close), enforces a strict family-planning agenda that the sisters outwit by taking turns assuming the identity of one person: Karen Settman. Taught by their grandfather (Willem Dafoe) who raised and named them - Monday, Tuesday, Wednesday, Thursday, Friday, Saturday and Sunday - each can go outside once a week as their common identity, but are only free to be themselves in the prison of their own apartment. That is until, one day, Monday does not come home. [D: Tommy Wirkola; A: Noomi Rapace, Willem Dafoe, Glenn Close]
  • The Assignment (2016; Action, Crime, Thriller) An ace assassin is double-crossed by gangsters and falls into the hands of a rogue surgeon known as The Doctor, who turns him into a woman. The hitman, now a hitwoman, sets out for revenge, aided by a nurse named Johnnie who also has secrets. [D: Walter Hill; A: Michelle Rodriguez, Sigourney Weaver, Anthony LaPaglia]
  • Unlocked (2017; Action, Thriller) A CIA interrogator is lured into a ruse that puts London at risk of a biological attack. [D: Michael Apted; A: Orlando Bloom, Noomi Rapace, Toni Collette]
  • Absolutely Anything (2015; Comedy, Sci-Fi) Some aliens, who travel from planet to planet to see what kind of species inhabit them, come to Earth. If a species is, by their standards, decent, it is welcomed as their friend; if not, the planet is destroyed. To find out, they choose one inhabitant and give that person the power to do whatever he or she wants. They choose Neil Clarke, a teacher of special-needs kids. He is constantly berated by the headmaster and is attracted to his neighbour, Catherine, but doesn't have the guts to approach her. Now he can do anything he wants, but he has to be careful. [D: Terry Jones; A: Simon Pegg, Kate Beckinsale, Sanjeev Bhaskar]
  • Atomic Blonde (2017; Action, Mystery, Thriller) The crown jewel of Her Majesty's Secret Intelligence Service, Agent Lorraine Broughton (Theron) is equal parts spycraft, sensuality and savagery, willing to deploy any of her skills to stay alive on her impossible mission. Sent alone into Berlin to deliver a priceless dossier out of the destabilized city, she partners with embedded station chief David Percival (James McAvoy) to navigate her way through the deadliest game of spies. [D: David Leitch; A: Sofia Boutella, Charlize Theron, James McAvoy]
  • El faro de las orcas (2016; Drama, Romance) Beto is a lonely man who works as a ranger in the isolated Peninsula Valdes National Park (Chubut, Argentina). A lover of nature and animals, the peace of his days watching orcas, seals and sea lions ends with the arrival of Lola, a Spanish mother who travels from Madrid with her autistic 11-year-old son Tristán, looking for Beto after they both see him in a documentary about whales. Desperate, Lola asks Beto for help with a therapy for Tristán, hoping the isolation caused by his autism can be overcome. Reluctant at first, Beto agrees to help, sailing along the coast in a boat to meet the orcas (defying the rules that forbid touching or swimming with them), the only thing that draws emotional responses from Tristán. As the days go by, Tristán slowly starts to express emotions, while Beto's boss tries to fire him in the belief that orcas are dangerous killer whales, Lola deals with a family problem back in Spain, and Lola and Beto discover their feelings for each other... [D: Gerardo Olivares; A: Maribel Verdú, Joaquín Furriel, Joaquín Rapalini]
  • La tortue rouge (2016; Animation, Fantasy) Surrounded by the immense and furious ocean, a shipwrecked mariner battles all alone for his life with the relentless towering waves. Right on the brink of his demise, the man set adrift by the raging tempest washes ashore on a small and deserted tropical island of sandy beaches, timid animal inhabitants and a slender but graceful swaying bamboo forest. Alone, famished, yet, determined to break free from his Eden-like prison, after foraging for food and fresh water and encouraged by the dense forest, the stranded sailor builds a raft and sets off to the wide sea, however, an indistinguishable adversary prevents him from escaping. Each day, the exhausted man never giving up hope will attempt to make a new, more improved raft, but the sea is vast with wonderful and mysterious creatures and the island's only red turtle won't let the weary survivor escape that easily. Is this the heartless enemy? [D: Michael Dudok de Wit; A: Emmanuel Garijo, Tom Hudson, Baptiste Goy]
  • Star Wars: The Last Jedi (2017; Action, Adventure, Fantasy, Sci-Fi) Having taken her first steps into a larger world in [D: Rian Johnson; A: Tom Hardy, Daisy Ridley, Billie Lourd]
  • The Autopsy of Jane Doe (2016; Horror, Mystery, Thriller) Cox and Hirsch play father and son coroners who receive a mysterious homicide victim with no apparent cause of death. As they attempt to identify the beautiful young "Jane Doe," they discover increasingly bizarre clues that hold the key to her terrifying secrets. [D: André Øvredal; A: Brian Cox, Emile Hirsch, Ophelia Lovibond]
  • The Circle (2017; Drama, Sci-Fi, Thriller) When Mae is hired to work for the world's largest and most powerful tech and social media company, she sees it as an opportunity of a lifetime. As she rises through the ranks, she is encouraged by the company's founder, Eamon Bailey, to engage in a groundbreaking experiment that pushes the boundaries of privacy, ethics and ultimately her personal freedom. Her participation in the experiment, and every decision she makes, begin to affect the lives and future of her friends, family and that of humanity. [D: James Ponsoldt; A: Emma Watson, Ellar Coltrane, Glenne Headly]
  • The Dark Tower (2017; Action, Adventure, Fantasy, Horror, Sci-Fi, Western) The Gunslinger, Roland Deschain, roams an Old West-like landscape where "the world has moved on" in pursuit of the man in black. Also searching for the fabled Dark Tower, in the hopes that reaching it will preserve his dying world. [D: Nikolaj Arcel; A: Katheryn Winnick, Matthew McConaughey, Idris Elba]
  • The Little Hours (2017; Comedy, Romance) A young servant fleeing from his master takes refuge at a convent full of emotionally unstable nuns in the Middle Ages. Introduced as a deaf blind man, he must fight to hold his cover as the nuns try to resist temptation. [D: Jeff Baena; A: Alison Brie, Dave Franco, Kate Micucci]
  • The Recall (2017; Horror, Sci-Fi, Thriller) When five friends vacation at a remote lake house they expect nothing less than a good time, unaware that planet Earth is under an alien invasion and mass-abduction. [D: Mauro Borrelli; A: Wesley Snipes, RJ Mitte, Jedidiah Goodacre]
  • Thor: Ragnarök (2017; Action, Adventure, Fantasy, Sci-Fi) Thor is imprisoned on the other side of the universe and finds himself in a race against time to get back to Asgard to stop Ragnarok, the destruction of his homeworld and the end of Asgardian civilization, at the hands of an all-powerful new threat, the ruthless Hela. [D: Taika Waititi; A: Benedict Cumberbatch, Idris Elba, Tom Hiddleston]


Finally, the count of pending ones by date:

(Ago-2011)    4
(Ene-2012)   11   3
(Jul-2012)   14  11
(Nov-2012)   11  11   6
(Feb-2013)   14  14   8   2
(Jun-2013)   15  15  15  11   2
(Sep-2013)   18  18  17  16   8
(Dic-2013)   14  12  12  12  12   4
(Abr-2014)    9   9   8   8   8   3
(Jul-2014)       10  10  10  10  10   5   1
(Nov-2014)           24  22  22  22  22   7
(Feb-2015)               13  13  13  13  10
(Jun-2015)                   16  16  15  13  11   1
(Dic-2015)                       21  19  19  18   6
(May-2016)                           26  25  23  21
(Sep-2016)                               19  19  18
(Feb-2017)                                   26  25
(Jun-2017)                                       23
Total:      110 103 100  94  91  89 100  94  97  94

Read more
Alan Griffiths

Mir: the new order

The Past

The Mir project has always been about how best to develop a shell for the modern desktop. It was about addressing concerns like a security model for desktop environments; convergence (which has implications for app lifecycles); and making efficient use of modern hardware. It has never been only about Unity8; that was just the first of (hopefully) many shells written using Mir. To that end, the Mir developers have tried to ensure that the code wasn't too tightly coupled to Unity8 (e.g. by developing demo servers with alternative behaviors).

There have been many reasons why no other shells used Mir, but to tackle some of them I started a "hobby project" (MirAL) last year. MirAL aimed to make it easier to build shells other than Unity8 with Mir, and one of the examples I produced, miral-kiosk, proved important to Canonical's support for graphics on the "Internet of Things". Even on the Internet of Things, Mir is more than just a way of getting pixels onscreen; it also fits the security model needed. That secures a future for Mir at Canonical.

The Present

In Canonical the principal target for Mir is now Ubuntu Core, which is currently based on the 16.04 LTS. We've recently released Mir 0.26.3 to this series and will be upgrading it to Mir 1.0 when that comes out.

Outside Canonical there are other projects that are making use of Mir.

UBports is taking forward the work begun by Canonical to support phones and tablets. Their current stack is based on an old release of Mir (0.17), but they are migrating to Ubuntu 16.04 LTS and will get the latest release of Mir with that.

Yunit is taking forward the work begun by Canonical on “convergence”. They are still in an exploratory phase, but the debs they’ve created for Debian Unstable use the current Mir 0.26 release.

As reported elsewhere, there have been discussions with other projects that are interested in using Mir. It remains to be seen whether, and in what way, those discussions develop.

The Future

For all of these projects the Mir team must be more responsive than it has been to the needs of those outside Canonical. I think, with the work done on MirAL, much of the technical side of that has been addressed. But we still have to prove ourselves in other ways.

There’s a new (0.27) release of Mir undergoing testing for release to the Ubuntu 17.10 series. This delivers a lot of the work that was “in progress” before Canonical’s focus shifted from Unity8 to miral-kiosk and marks the point of departure from our previous plans for a Mir 1.0. Mir 0.27 will not be released to other series, as we expect to ship Mir 1.0 in 17.10.

The other thing 0.27 offers is based on the efforts we've seen in UBports (and a PR from a former Mir developer who has joined the project): we've published the APIs needed to develop a "mir platform" out of the Mir source tree. That means, for example, that developing a mir-on-wayland platform doesn't require forking the whole of Mir. One specific platform that is now out-of-tree is the "mir-on-android" platform; from my experience with MirAL I know that having a real out-of-tree project helps prove things really work.

In addition, while we won't be releasing Mir 0.27 to 17.04, I've included testing there, along with the Unity8 desktop, to ensure that all the features required by a "real shell" remain intact.

Beyond Mir 0.27 the plan towards 1.0 diverges from past discussions. We are no longer planning to remove deprecated functions from the libmirclient ABI, instead we are working towards supporting Wayland clients directly.

Read more
Leo Arias

Travis CI offers a great continuous integration service for the projects hosted on GitHub. With it, you can run tests, deliver artifacts and deploy applications every time you push a commit, on pull requests, after they are merged, or with some other frequency.

Last week Travis CI updated the Ubuntu 14.04 (Trusty) machines that run your tests and deployment steps. This update came with a nice surprise for everybody working to deliver software to Linux users, because it is now possible to install snaps in Travis!

I've been excited all week telling people about all the doors this opens; but if you have been following my adventures in the Ubuntu world, by now you can probably guess that I'm mostly thinking about all the potential it has for automated testing, and in particular for the automation of user acceptance tests.

User acceptance tests are executed from the point of view of the user, with your software presented to them as a black box. The tests can only interact with the software through the entry points you define for your users. If it's a CLI application, the tests will call commands and subcommands and check the outputs. If it's a website or a desktop application, the tests will click things, enter text and check the changes in the GUI. If it's a service with an HTTP API, the tests will make requests and check the responses. In these tests, the closer you can get to simulating the environment and behaviour of your real users, the better.

Snaps are great for the automation of user acceptance tests because they are immutable and they bundle all their dependencies. With this we can make sure that your snap will work the same on any of the operating systems and architectures that support snaps. The snapd service takes care of hiding the differences and presenting a consistent execution environment for the snap. So, getting a green execution of these tests in the Trusty machine of Travis is a pretty good indication that it will work on all the active releases of Ubuntu, Debian, Fedora and even on a Raspberry Pi.
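In practice that consistency shows in the commands themselves. A quick sketch, assuming the snap has already been published to the store under the name ipfs and that snapd's bin directory is on your PATH:

sudo snap install ipfs    # one artifact for Ubuntu, Debian, Fedora, even a Raspberry Pi
ipfs --help               # the same command and the same behaviour everywhere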

Let me show you an example of what I'm talking about, obviously using my favourite snap called IPFS. There is more information about IPFS in my previous post.

Check below the packaging metadata for the IPFS snap, a single snapcraft.yaml file:

name: ipfs
version: master
summary: global, versioned, peer-to-peer filesystem
description: |
  IPFS combines good ideas from Git, BitTorrent, Kademlia, SFS, and the Web.
  It is like a single bittorrent swarm, exchanging git objects. IPFS provides
  an interface as simple as the HTTP web, but with permanence built in. You
  can also mount the world at /ipfs.
confinement: strict

apps:
  ipfs:
    command: ipfs
    plugs: [home, network, network-bind]

parts:
  ipfs:
    source: https://github.com/ipfs/go-ipfs.git
    plugin: nil
    build-packages: [make, wget]
    prepare: |
      # Lay the sources out in the GOPATH-style tree that the go tooling expects
      mkdir -p ../go/src/github.com/ipfs/go-ipfs
      cp -R . ../go/src/github.com/ipfs/go-ipfs
    build: |
      # Build with IPFS' own Makefile, pointing GOPATH at the tree created above
      env GOPATH=$(pwd)/../go make -C ../go/src/github.com/ipfs/go-ipfs install
    install: |
      # Copy the resulting binary into the snap's staging area
      mkdir $SNAPCRAFT_PART_INSTALL/bin
      mv ../go/bin/ipfs $SNAPCRAFT_PART_INSTALL/bin/
    after: [go]
  go:
    # a shared remote part; source-tag pins the Go toolchain used for the build
    source-tag: go1.7.5

It's not the simplest snap, because they use their own build tool to fetch the go dependencies and compile; but it's also not too complex. If you are new to snaps and want to understand every detail of this file, or you want to package your own project, the tutorial to create your first snap is a good place to start.

What's important here is that if you run snapcraft using the snapcraft.yaml file above, you will get the IPFS snap. If you install that snap, then you can test it from the point of view of the user. And if the tests work well, you can push it to the edge channel of the Ubuntu store to start the crowdtesting with your community.
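For reference, that local loop is short. Here's a minimal sketch, assuming you run it from the repository root and are logged into the store for the final step (the exact .snap file name depends on the version and architecture):

snapcraft                                   # build the snap described by snapcraft.yaml
sudo snap install ipfs_*.snap --dangerous   # install the local, unsigned build
/snap/bin/ipfs version                      # a quick check from the user's point of view
snapcraft push ipfs_*.snap --release=edge   # publish to the edge channel of the store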

We can automate all of this with Travis. The snapcraft.yaml for the project must already be in the GitHub repository, and we will add a .travis.yml file there. Travis has good docs on preparing your account. First, let's see what's required to build the snap:

sudo: required
services: [docker]

script:
  - docker run -v $(pwd):$(pwd) -w $(pwd) snapcore/snapcraft sh -c "apt update && snapcraft"

For now, we build the snap in a docker container to keep things simple. We have work in progress to be able to install snapcraft in Trusty as a snap, so soon this will be even nicer, running everything directly in the Travis machine.

This previous step will leave the packaged .snap file in the current directory, so we can install it by adding a few more steps to the Travis script:

[...]

script:
  - docker [...]
  - sudo apt install --yes snapd
  - sudo snap install *.snap --dangerous

And once the snap is installed, we can run it and check that it works as expected. Those checks are our automated user acceptance tests. IPFS has a CLI client, so we can just run commands and verify outputs with grep. Or we can get fancier using shunit2 or bats. But the basic idea would be to add to the Travis script something like this:

[...]

script:
  [...]
  - /snap/bin/ipfs init
  - /snap/bin/ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme | grep -z "^Hello and Welcome to IPFS!.*$"
  - [...]

If one of those checks fails, Travis will mark the execution as failed and stop our release process until we fix it. If instead all of the checks pass, then this version is good enough to put into the store, where people can take it and run exploratory tests to try to find problems caused by weird scenarios that we missed in the automation. To help with that we have the snapcraft enable-ci travis command, and a tutorial to guide you step by step setting up the continuous delivery from Travis CI.
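In essence, that continuous delivery is one more step that pushes the snap to the edge channel when the build is green. The following is a hand-written sketch, not the exact output of snapcraft enable-ci travis; it reuses the docker pattern from above and assumes that store credentials have already been made available to snapcraft inside the container (the tutorial covers encrypting them for Travis):

after_success:
  - docker run -v $(pwd):$(pwd) -w $(pwd) snapcore/snapcraft sh -c "snapcraft push *.snap --release=edge"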

For the IPFS snap we had, for a long time, a manual smoke suite that our amazing community of testers has been executing over and over again, every time we want to publish a new release. I've turned it into a simple bash script that from now on will be executed frequently by Travis, and will tell us if there's something wrong before anybody gives it a try manually. With this, our community of testers will have more time to run new and interesting scenarios, trying to break the application in clever ways, instead of running the same repetitive steps many times.
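To give an idea of the shape of such a script, here's a minimal sketch (not the actual suite; it reuses the readme check from the Travis snippet above and assumes a machine where ipfs has not been initialized yet):

#!/bin/sh
# Minimal smoke suite for the IPFS snap; the first failing command aborts the run.
set -e

IPFS=/snap/bin/ipfs

$IPFS init
$IPFS cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme | grep "Hello and Welcome to IPFS!"

echo "IPFS smoke suite passed"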

Thanks to Travis and snapcraft we no longer have to worry about a big part of our release process. Continuous integration and delivery can be fully automated, and we will have to take a look only when something breaks.

As for IPFS, it will keep being my guinea pig to guide new features for snapcraft and to showcase them when ready. It has many more commands that have to be added to the automated test suite, and it also has a web UI and an HTTP API. Lots of things to play with! If you would like to help, and along the way learn about snaps, automation and the decentralized web, please let me know. You can take a look at my IPFS snap repo for more details about testing snaps in Travis, and other tricks for building and deploying.

[Screenshot: the IPFS smoke test running in Travis]

Read more