Canonical Voices


Cost-benefit ratio

I just reached a conclusion, one of those hurried ones, where the certainty is reinforced by the not-so-thorough scientific analysis that a hunch implies.

The food that gives me the most benefit relative to its cost (price of the ingredients, complexity of the work, and cooking time) is oven-roasted chicken with onion.

The process of putting the dish together is, obviously, simple.

Buy a couple of leg quarters at your friendly poultry shop. If it's free-range chicken, even better. In Italy I used to make this with turkey, and it was just as good. Buy onions. It's assumed that at home you have neutral oil (sunflower, say), salt, and pepper.

(screeching brakes are heard) Wait, what? Nothing else? No, nothing else.

Grab a baking dish, put a little oil on the bottom (just enough so nothing sticks), and arrange the chicken pieces any which way. Peel the onions and cut them into large chunks (if they're small onions, in halves; if medium, in quarters; if large, in eighths; and so on). Toss the onions into the dish any which way, in the gaps left by the chicken.

Put it in the oven, already hot. One hour. Turn everything over a bit. Season with salt and pepper. Leave it a while longer until it's golden, nicely golden. When in doubt, leave it a while longer.


It takes less work to make this meal for four than it took me to write this post in that silly verb tense, that pseudo present-imperative-in-the-third-person whose name I don't know.

Read more
Colin Watson

PPAs for ppc64el

Personal package archives on Launchpad only build for the amd64 and i386 architectures by default, which meets most people’s needs.  Anyone with an e-mail address can have a PPA, so they have to be securely virtualised, but that’s been feasible on x86 for a long time.  Dealing with the other architectures that Ubuntu supports (currently arm64, armhf, powerpc, and ppc64el) in a robust and scalable way has been harder.  Until recently, all of those architectures were handled either by running one builder per machine on bare metal, or in some cases by running builders on a small number of manually-maintained persistent virtual machines per physical machine.  Neither of those approaches scales to the level required to support PPAs, and we need to make sure that any malicious code run by a given build is strictly confined to that build.  (We support virtualised armhf PPAs, but only by using qemu-user-static in an amd64 virtual machine, which is very fragile and there are many builds that it simply can’t handle at all.)

We’ve been working with our sysadmins for several months to extend ScalingStack to non-x86 architectures, and at the start of Ubuntu’s 16.04 development cycle we were finally able to switch all ppc64el builds over to this system.  Rather than four builders, we now have 30, each of which is reset to a clean virtual machine instance between each build.  Since that’s more than enough to support Ubuntu’s needs, we’ve now “unrestricted” the architecture so that it can be used for PPAs as well, and PPA owners can enable it at will.  To do this, visit the main web page for your PPA (which will look something like “<person-name>/+archive/ubuntu/<ppa-name>”) and follow the “Change details” link; you’ll see a list of checkboxes under “Processors”, and you can enable or disable any that aren’t greyed out.  This also means that you can disable amd64 or i386 builds for your PPA if you want to.

We’re working to extend this to all the existing Ubuntu architectures at the moment.  arm64 is up and running but we’re still making sure it’s sufficiently robust; armhf will run on arm64 guests, and just needs a kernel patch to set its uname correctly; and powerpc builds will run in different guests on the same POWER8 compute nodes as ppc64el once we have suitable cloud images available.  We’ll post further announcements when further architectures are unrestricted.

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20151027 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:


Status: Xenial Development Kernel

Our Xenial kernel is open for development. The repos have been opened
in LP:
Our Xenial master branch is still tracking Wily's v4.2-based kernel.
However, Xenial master-next is currently rebased to v4.3-rc7.
Important upcoming dates:

    Thurs Dec 31 – Alpha 1 (~ weeks away)

Status: CVEs

The current CVE status can be reviewed at the following link:


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/lts-utopic/Vivid/Wily

Status for the main kernels, until today:

  • Precise – Testing and Verification
  • Trusty – Testing and Verification
  • lts-Utopic – Testing and Verification
  • Vivid – Testing and Verification
  • Wily – Testing and Verification

    Current opened tracking bugs details:

    For SRUs, SRU report is a good source of information:


    Current cycle: 18-Oct through 07-Nov
    16-Oct Last day for kernel commits for this cycle
    18-Oct – 24-Oct Kernel prep week.
    25-Oct – 31-Oct Bug verification & Regression testing.
    01-Nov – 07-Nov Regression testing & Release to -updates.

    Next cycle: 08-Nov through 28-Nov
    06-Nov Last day for kernel commits for this cycle
    08-Nov – 14-Nov Kernel prep week.
    15-Nov – 21-Nov Bug verification & Regression testing.
    22-Nov – 28-Nov Regression testing & Release to -updates.

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more

2ping 3.0.0 has been released. It is a total rewrite, with the following features:

  • Total rewrite from Perl to Python.
  • Multiple hostnames/addresses may be specified in client mode, and will be pinged in parallel.
  • Improved IPv6 support:
    • In most cases, specifying -4 or -6 is unnecessary. You should be able to specify IPv4 and/or IPv6 addresses and it will "just work".
    • IPv6 addresses may be specified without needing to add -6.
    • If a hostname is given in client mode and the hostname provides both AAAA and A records, the AAAA record will be chosen. This can be forced to one or another with -4 or -6.
    • If a hostname is given in listener mode with -I, it will be resolved to addresses to bind as. If the hostname provides both AAAA and A records, they will both be bound. Again, -4 or -6 can be used to restrict the bind.
    • IPv6 scope IDs (e.g. fe80::213:3bff:fe0e:8c08%eth0) may be used as bind addresses or destinations.
  • Better Windows compatibility.
  • ping(8)-compatible superuser restrictions (e.g. flood ping) have been removed, as 2ping is a scripted program using unprivileged sockets, and restrictions would be trivial to bypass. Also, the concept of a "superuser" is rather muddied these days.
  • Better timing support, preferring high-resolution monotonic clocks whenever possible instead of gettimeofday(). On Windows and OS X, monotonic clocks should always be available. On other Unix platforms, monotonic clocks should be available when using Python 2.7.
  • Long option names for ping(8)-compatible options (e.g. adaptive mode can be called as --adaptive in addition to -A). See 2ping --help for a full option list.
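
The clock preference described above can be sketched as follows; this is a minimal illustration rather than 2ping's actual code, and `time.monotonic` (Python 3.3+) stands in for whatever monotonic source the platform provides:

```python
import time

def best_clock():
    """Pick an interval-timing clock, preferring a monotonic one.

    Sketch only (not 2ping's implementation): a monotonic clock is immune
    to wall-clock adjustments, so an RTT measured across a time step is
    still meaningful; time.time() is the gettimeofday()-style fallback.
    """
    return getattr(time, "monotonic", time.time)

clock = best_clock()
start = clock()
# ... send the ping and wait for the reply here ...
elapsed = clock() - start
```

On an interpreter without `time.monotonic`, the `getattr` falls back to wall-clock time, mirroring the caveat above about older Unix platforms.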

Because of the IPv6 improvements, there is a small breaking functionality change. Previously, to listen on both IPv4 and IPv6 addresses, you needed to specify -6, e.g. 2ping --listen -6 -I -I ::1. Now that -6 restricts binds to IPv6 addresses, that invocation will just listen on ::1. Simply remove -6 to listen on both IPv4 and IPv6 addresses.

This is a total rewrite in Python, and the original Perl code was not used as a basis, instead writing the new version from the 2ping protocol specification. (The original Perl version was a bit of a mess, and I didn't want to pick up any of its habits.) As a result of rewriting from the specification, I discovered the Perl version's implementation of the checksum algorithm was not even close to the specification (and when it comes to checksums, "almost" is the same as "not even close"). As the Perl version is the only known 2ping implementation in the wild which computes/verifies checksums, I made a decision to amend the specification with the "incorrect" algorithm described in pseudocode. The Python version's checksum algorithm matches this in order to maintain backwards compatibility.

This release also marks the five year anniversary of 2ping 1.0, which was released on October 20, 2010.

Read more
David Planella

I am thrilled to announce the next big event in the Ubuntu calendar: the UbuCon Summit, taking place in Pasadena, CA, in the US, from the 21st to the 22nd of January 2016, hosted at SCALE and with Mark Shuttleworth delivering the opening keynote.

Taking UbuCons to the next level

UbuCons are a remarkable achievement from the Ubuntu community: a network of conferences across the globe, organized by volunteers passionate about Open Source and about collaborating, contributing, and socializing around Ubuntu. The UbuCon at SCALE has been one of the most successful ones, and this year we are kicking it up a notch.

Enter the UbuCon Summit. In discussions with the Community Council, and after the participation of some Ubuntu team members at the Community Leadership Summit a few months ago, one of the challenges we identified our community as facing was the lack of a global event to meet face to face after the UDS era. While UbuCons continue to thrive as regional conferences, one of the conclusions we reached was that we needed a way to bring everyone together in a bigger setting to complement the UbuCon fabric: the Summit.

The Summit is the expansion of the traditional UbuCon: more content and at a bigger scale, while at the same time maintaining the grass-roots spirit and the community-driven organization that have made these events successful.

Two days and two tracks of content

During these two days, the event will be structured as a traditional conference with presentations, demos and plenaries on the first day and as an unconference for the second one. The idea behind the unconference is simple: participants will propose a set of topics in situ, each one of which will be scheduled as a session. For each session the goal is to have a discussion and reach a set of conclusions and actions to address the topics. Some of you will be familiar with the setting :)

We will also have two tracks to group sessions by theme: Users, for those interested in learning about the non-tech, day-to-day side of using Ubuntu, but also covering how to contribute to Ubuntu as an advocate. The Developers track will cover the sessions for the technically minded, including app development, IoT, convergence, cloud and more. One of the exciting things about our community is that there is so much overlap between these themes that both tracks will be interesting to everyone.

All in all, the idea is to provide a space to showcase, learn about and discuss the latest Ubuntu technologies, but also to focus on new and vibrant parts of the community and talk about the challenges (and opportunities!) we are facing as a project.

A first-class team

In addition to the support and guidance from the Community Council, the true heroes of the story are Richard Gaskin, Nathan Haines and the Ubuntu California LoCo. Through the years, they have been the engines behind the UbuCon at SCALE in LA, and this time around they were quick again to jump and drive the Summit wagon too.

This wouldn’t have been possible without the SCALE team either: an excellent host to UbuCon in the past and again on this occasion. In particular Gareth Greenaway and Ilan Rabinovitch, who are helping us with the logistics and organization all along the way. If you are joining the Summit, I very much recommend staying for SCALE as well!

More Summit news coming soon

Over the next few weeks we’ll be sharing more details about the Summit, revamping the global UbuCon site and updating the SCALE schedule with all the relevant information.

Stay tuned for more, including the session about the UbuCon Summit at the next Ubuntu Online Summit in two weeks.

Looking forward to seeing some known and new faces at the UbuCon Summit in January!

Picture from an original by cm-t arudy

The post Announcing the UbuCon Summit appeared first on David Planella.

Read more
April Wang

Phone update: OTA-7



- Social app improvements: "Like" and repost are now supported


- Added search history
- The improved context menu now has a download-link option
- HTTP basic authentication support


- SVG format support
- The SoundCloud web app can play in the background


- Fixed the test.mmrow exploit
- Fixed UI freezes (FD leaks)
- Crash reports are no longer published on the stable channel by default
- Fixed the QML cache, restoring consistent app startup times
- The browser uses less memory by default and avoids showing blank pages for web apps
- The proximity sensor is now used to automatically turn off the phone backlight

Read more
April Wang


Original author: Richard Collins


A smartphone only truly fulfils the dual role of mobile phone and personal computer when it can offer users the same experience as the computer they normally use. This is our starting point for true smartphone convergence: giving the many thousands of users who are already familiar with the Ubuntu desktop the same Ubuntu PC experience, through a smartphone. In short, the experience users expect from a personal computer must also be available on their smartphone. This includes:

- Easy multitasking and multi-window management
- A full set of desktop applications supporting mobility and productivity, plus thin client support
- Integrated services with desktop notifications
- The ability to manage applications and conveniently launch frequently used ones
- Easy browsing of documents, and creating and managing document folders
- Responsive applications developed for both touch and pointer input, which adapt their UI presentation to the device context
- Comprehensive system controls, with access to the underlying operating system when needed
- A unified app store including a range of compatible third-party services
- Communication through the phone's calling and SMS apps from the desktop interface

The road to OS convergence began with Unity 8. Unity 8 is Ubuntu's own user interface and presentation framework; it is expected to run on all Ubuntu devices, based on the same underlying code base, supporting a common development infrastructure for applications and services. Unity 8's goal is to run as the primary presentation framework on any Ubuntu smart product.

This means applications get something no other operating system offers: a single visual framework and a set of tools that let an application run on any type of Ubuntu smart device. Applications developed for mobile can easily be extended to a desktop presentation, with support for pointer-based input. Our SDK gives mobile app developers the tools to create the desktop versions of their apps. Likewise, desktop application developers can use our SDK to extend and enhance their applications for mobile. Convergence brings developers a whole new set of scenarios, and our SDK provides the basic tools for letting their applications run easily on any interface.

The same application you see and use on an (Ubuntu) phone and on an (Ubuntu) desktop will be running from exactly the same code. Ubuntu does not need to distinguish whether an app was written for mobile or for the desktop; the app automatically invokes the appropriate interaction interface based on the environment of the device it is running on. Third-party developers only need to write their code once for Ubuntu, and the app can then run across the different Ubuntu interfaces.

I have long argued that the evolution of the smartphone into a converged form factor offering a personal computer experience is a real industry need. But a smartphone or tablet that is truly converged, designed to combine mobile and desktop productivity, can only genuinely be realized with an operating system built on a single, fully controlled code base.

Read more

Before closing out the year I put together a new Python course, not as part of a closed group for a company or institution, but open to the general public. This time, intensive (many hours in just three classes).

It will be an Introductory Python Course, aimed at those who know nothing about this language, or who know a bit but want to deepen or formalize their knowledge, and it will also include a potpourri of devops-oriented topics... all of us developers end up being part-time sysadmins sometimes, and it's good to know how to use a few of those tools.

The level is introductory, meaning that many of the language's concepts will be covered in depth, but neither advanced topics nor those peripheral to Python itself will be touched, with the intention that attendees gain solid knowledge that later lets them explore the rest at their own pace. To get the most out of the course you need prior programming experience (but you don't need to be an advanced programmer). In detail, the course will cover the following items:

  • Introduction: What is Python?; First steps; Resources
  • Data types: Making numbers, and more numbers; Text strings; Tuples and lists; Sets; Dictionaries
  • Flow control: if/elif/else; while and for loops; Exceptions
  • Encapsulating code: Functions; Classes; Modules; Namespaces
  • Other topics: Files; Serialization; Networking; External execution; Multithreading/multiprocessing

The course will be in person, in a classroom-style setting with a blackboard and projector, but not slide-based: fully dynamic and adaptive. There is a special focus on teacher-attendee interaction, so everyone's questions get resolved and deeper learning is achieved in the same amount of time. Accordingly, enrollment is limited, with a maximum of around seven attendees.

A certificate of attendance will be provided as part of the course. You don't need to bring a computer to the course, but you may bring laptops/netbooks if you wish (wifi internet access and power outlets will be available).

The course is 18 hours in total, split into three classes, on Wednesdays November 4, 11 and 18. Hours are 10 to 17, including an hour for lunch. Lunch will be provided as part of the course: the idea is to have that taken care of, so we can eat something light, rest a while, and keep working, because if we went out for lunch somewhere else we wouldn't have enough time. The venue is Borges 2145, Olivos.

The total cost of the course (lunch included) is $2720; you need to pay at least 50% to reserve a spot (remember, as noted above, there is a maximum number of places available), with the remaining balance due on the first day of class.

To reserve, send me an email to confirm availability and I'll reply with the necessary details.

Read more
Nicholas Skaggs

Show and Tell: Xenial Edition

It's show and tell time again. Yes, yes, remember my story about growing up in school? It's time for us to gather together as a community again and talk, plan, and share with each other about what's happening in Ubuntu.

UOS is the Ubuntu Online Summit we hold each cycle to talk about what's happening in ubuntu. The next summit is called UOS 15.11 and will be on November 3rd - 5th, 2015. That's coming up very soon!

So what should I do?
First, plan to attend. Register to do so even. Second, consider proposing a session for the 'Show and Tell' track. Sessions are open to everyone as a platform for sharing interesting and unique things with the rest of the community. A typical session may last 5-15 minutes, with time for questions. It's a great way to spend a few minutes talking about something you made, work on, or find interesting.

What type of things can I show off?

Demos, quick talks, and 'show and tell' type things. Your demo can be unscripted and informal. This does not have to be a technical talk or demo, though those are certainly welcomed. Please feel free to show off design work, documentation, translation, interesting user tricks or anything else that tickles your fancy!

Got an example?
Yes, we do. Last cycle we had developers talking about new APIs, flavors teams doing Q and A sessions and demos, users sharing tricks, and even a live hacking session where we collectively worked on an application for the phone. Check them out. I'd love to see an even greater representation this time around.

Ok, I'm convinced
Great. Propose the session here. If you need help, check out the wiki page. If you are still stuck, feel free to simply contact me for help.

I'm afraid I don't have a demo, but I'd like to see them!
Awesome, sessions need an audience as well. Mark your calendar for November 3rd - 5th and watch the 'Show and Tell' track page for sessions as they appear.

Thanks for your help making UOS amazing. I'll see you there!

Read more
Daniel Holbach


This morning I chatted with Laura Czajkowski and we quickly figured out that wily is our 23rd Ubuntu release. Crazy in a way – 23 releases, who would’ve thought? But on the other hand, Ubuntu is a constant evolution of great stuff becoming even better. Even after 11 years of Ubuntu I can still easily get excited about what’s new in Ubuntu and what is getting better. If you have read any of my recent blog entries you will know that snappy and snapcraft are a combination too good to be true. Shipping software on Ubuntu has never been that easy and I can’t wait for snappy and snapcraft to reach into further parts of Ubuntu. The 16.04 (‘xenial’) cycle is going to deliver much more of this. Awesome!

But for now: enjoy the great work wrapped up in our wily 15.10 package. Take it, install it, give it to friends and family and spread great open source software in the world. :-)

When you download it, please consider making a donation. And if you do, please consider donating to “Community projects“. This is what allows us to help LoCos with events, fly people to conferences and do all kinds of other great things. We have docs online which explain who can apply for funding for which purposes and what exactly each penny was spent on previously.

Community donations

Read more
Inayaili de León Persson

Ubuntu 15.10 is here!

And has a brand new homepage too!

The new homepage gives a better overview and links to all Ubuntu products. We also wanted to give visitors easy access to the latest Ubuntu news, so we’ve included a ‘latest news’ strip right below the big intro hero with links to featured articles on Ubuntu Insights.

We’ve also improved the content and flow of the cloud section, and brought the phone and desktop sections up to date.

Let us know what you think. And try the new Ubuntu 15.10!

[Screenshots: the homepage before and after release]

Read more
Hardik Dalwadi

Recently I wrote a blog post about assembling the Ubuntu Orange Matchbox (Ubuntu-branded Pibow for Raspberry Pi 2 & PiGlow) and demonstrating Snappy Ubuntu Core with it.

Now, with the Make-Me-Glow LP Project, we have built an application for controlling the PiGlow from an Ubuntu Phone. The PiGlow is a small add-on board for the Raspberry Pi that provides 18 individually controllable LEDs. Recently Victor Tuson Palau released glowapi for Snappy Ubuntu Core, which allows us to control the PiGlow over HTTP.

We decided to build a quick Ubuntu Phone application to control the PiGlow during the Ubuntu Hackathon India, using glowapi for Snappy Ubuntu Core. As a result, we came up with the Make-Me-Glow LP Project. Big thanks!


What It Does:

It allows you to control any LED of the PiGlow from an Ubuntu Phone. You have to download the “Make Me Glow” application from the Ubuntu Phone Store. For example, if you want to turn the orange LED on leg 1 of the PiGlow on or off at a specific intensity, the Make Me Glow UI lets you do exactly that. Just make sure you enter the correct IP address of your Ubuntu Orange Matchbox (Ubuntu-branded Pibow for Raspberry Pi 2 & PiGlow). We assume you have already installed glowapi for Snappy Ubuntu Core on it.

[Screenshots: installing Make Me Glow from the Ubuntu Store on an Ubuntu Phone, launching it, and entering input to control the PiGlow LEDs]

How It Does:

As I said earlier, we are using glowapi for Snappy Ubuntu Core, which operates the PiGlow in response to HTTP POST requests. The Make Me Glow Ubuntu Phone application issues a POST request over HTTP according to the user's configuration, and the PiGlow operates accordingly. We have written a simple QML function for this, which requests a URL using POST over HTTP according to the user's input. It is very, very easy to develop an Ubuntu Phone application using the Ubuntu SDK. Big thanks to XiaoGuo Liu, without whom it would not have been possible to execute this project.

function request(url) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', url, true);
    xhr.send();
}

Future Roadmap:

We are working on a few animations. In fact, I have already made a proof of concept with bash scripting, such as Fan and FadeOut animations. If you want to contribute to this, please visit the Make-Me-Glow LP Project.


The goal behind this demonstration is to show the immense possibilities of Snappy Ubuntu Core for controlling devices remotely within the Ubuntu ecosystem. If you are planning to dive into IoT-based solutions, Snappy Ubuntu Core is a great start. After this demonstration, I am planning to control my home lights, connected to a device running Snappy Ubuntu Core, through an Ubuntu Phone. Stay tuned…

Read more

Sprint in Boston

A week of intensive work.

Very intensive, as sprints are, because you work from 9 to 18 without a break, but you also socialize from 8 to 23. Give or take, but you're with coworkers all the time, and most of the time speaking English.


But it's good, it breaks the routine, you do different things. This sprint was in the USA; it had been a while since I'd been there. It was in the Boston area, so I took the chance to visit my friend Nico Cesar.

I arrived on Sunday, before noon, dropped my things at the hotel and took a couple of buses to his house. We went out for a walk, had clam chowder for lunch (delicious), and after crossing the Harvard campus and watching a street show for a while, we went sailing for a bit, on a sailboat.

Nico on the sailboat

I had never been on a sailboat before, and this time I wasn't just a tourist: we took one of the big ones, which has a sail at the front (besides the main one), and I was in charge of it (as well as of unhooking and hooking the boat when leaving and returning). It was good, I learned a ton of things :)

Beyond that, I love running into Nico. We can spend hours chatting about a thousand silly things, walking, strolling, having a drink.

That same Sunday my boss Bret made a "lobster bake"... lobster rolls, basically, but prepared the local way (toasting the buns with butter, with special sauces, etc.... he put in a ton of work, it was great :).

Wildflower

The rest of the week: a lot of work (as I said) and a lot of socializing. This was my second sprint with this team and it was much better than the first.

On Monday I wandered around for a while, but ended up having a drink and almost dinner with much of the team.

On Tuesday several of us from the group went downtown to a talk by Cory Doctorow, which was very good. There I ran into Nico again, and we all went to dinner together. Cory's talk was very good; the rest of the evening too.

Cory Doctorow

Wednesday was the team dinner... the formal dinner, let's say. We all went to a place where we ate very well, but the place itself wasn't good. Very modern, very noisy. It wasn't the best night; the place had no... swing... I don't know.

Thursday was much better. My boss and another colleague put together a "barbecue" in the hotel courtyard. They bought a thousand things, a lot of variety (and organic!), with very good beer to go with it. So we ate chicken, hamburgers, sausages, salmon, beef. Accompanied by a thousand and one vegetables. All very tasty.

Shopping for the barbecue

The barbecue deserves its own paragraph. Yes, it looks like an infernal machine, that is, a gas grill... but I must admit the ones there were very good. On one hand, the flames point downward, so they don't touch the food... and on the other, there's a lid that traps the heat. Again: it's not a real grill, but you end up with something half grill, half oven that's not bad at all.

The truth is the food was very good.

On Friday I was very tired, so I stayed in my room, sorted out a thousand things, got everything ready.

And on Saturday, almost on the way back, Guillo and I took a walk to the Best Buy, more as an excuse to walk than anything else. We got there at nine thirty... and it was closed! We saw it opened at 10, so we went to a nearby mall in the meantime. We got there however we could, because there are no footpaths, no crosswalks, etc. If you don't have a car, you're toast :/. We walked and poked around everywhere anyway. We walked quite a bit, really... when we checked, it had been 6.5 km!!

Autumn is very colorful

Then, the trip back... always too long a journey, and finally with the family again :)

Read more

Why did you do this to me?
I heard it from an acquaintance.

— Die Ärzte, Debil, Zu Spät

It’s been more than two years since the last Hackfest in Hamburg! So we are indeed much too late (German: Zu Spät) in repeating this wonderful event. Just a day after everyone updated his or her desktop to Wily Werewolf, we will meet for a weekend of happy hacking in Hamburg again!

Hamburg Hackfest 2013 – carelessly stolen from Eikes Retrospective

So now, we will meet again. You are invited to drop by this weekend: we will celebrate a bit on Friday evening (ignoring the German culinary advice in the song linked above about “Currywurst and Pommes Fritz” — I imagine we prefer Club Mate and pizza) and hack on LibreOffice on Saturday and Sunday. Curious new faces are more than welcome!

Read more

Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20151020 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:


Status: Wily Development Kernel

We release Wily 15.10 in 2 days, this Thurs Oct 22. Any kernel patches submitted for Wily will now be queued for SRU and must adhere to SRU policy.
Important upcoming dates:

    Thurs Oct 22 – 15.10 Release (~2 days away)

Status: CVEs

The current CVE status can be reviewed at the following link:


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/lts-utopic/Vivid

Status for the main kernels, until today:

  • Precise – Kernel Prep
  • Trusty – Kernel Prep
  • lts-Utopic – Kernel Prep
  • Vivid – Kernel Prep

    Current opened tracking bugs details:

    For SRUs, SRU report is a good source of information:


    Current cycle: 18-Oct through 07-Nov
    16-Oct Last day for kernel commits for this cycle
    18-Oct – 24-Oct Kernel prep week.
    25-Oct – 31-Oct Bug verification & Regression testing.
    01-Nov – 07-Nov Regression testing & Release to -updates.

    Note: Oct. 22 is release day

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Daniel Holbach

As announced earlier, we had an Ubuntu Snappy Core Clinic yesterday and we had a great time. Sergio Schvezov, Ted Gould and I talked about snapcraft in general, what’s new in the 0.3 release, and showed off a couple of examples of how to package software for Ubuntu Snappy Core. As you can see in the video, none of the snapcraft.yaml files exceeded 30 lines in length (and this file is all that’s required); compared to what packaging on various platforms usually looks like, that’s just beautiful.

We are going to have these clinics more regularly now. They will always revolve around the world of Snappy Ubuntu Core and there will always be room for questions, requests, feedback and whatever you want them to be.

ROS people might be interested in the next one: we are very likely going to talk about snapcraft’s catkin plugin.

If you have missed the show yesterday, here it is in full length:

You might be wondering why I’m posting two videos. Unfortunately I accidentally pressed the “stop broadcast” button when I was actually looking for “stop screensharing”. Once I hit the button, we couldn’t find a way to resume the broadcast and we had to start a new one. I’m sorry about that.

If anyone of you knows a browser plugin which shows a “are you sure you want to stop the broadcast” warning, that would be fantastic. I could imagine I’m not the only one who might have confused the two when they were busy doing a demo, getting feedback on IRC and were busy talking. :-)

Update: David Planella showed me the Youtube video editor, so here’s the merged video.

Read more

While testing the upcoming release of Ubuntu (15.10 Wily Werewolf), I ran into a bug that renders the kernel crash dump mechanism unusable by default:

LP: #1496317 : kexec fails with OOM killer with the current crashkernel=128 value

The root cause of this bug is that the initrd.img file used by kexec to reboot into a new kernel when the original one panics is getting bigger with kernel 4.2 on Ubuntu. Hence, it uses too much of the reserved crashkernel memory (default: 128MB). This triggers the Out Of Memory (OOM) killer, and the kernel dump capture cannot complete.

One workaround for this issue is to increase the amount of reserved memory to a higher value; 150MB seems to be sufficient. While one solution to this problem could be to increase the default crashkernel= value, that would only postpone the issue until we hit the new limit.

Reduce the size of initrd.img

update-initramfs has an option in its configuration file (/etc/initramfs-tools/initramfs.conf) that lets us modify the modules that are included in the initrd.img file. Our current default is to add most of the modules:

# MODULES: [ most | netboot | dep | list ]
# most - Add most filesystem and all harddrive drivers.
# dep - Try and guess which modules to load.
# netboot - Add the base modules, network modules, but skip block devices.
# list - Only include modules from the 'additional modules' list


By changing this configuration to MODULES=dep, we can significantly reduce the size of the initrd.img:

MODULES=most: initrd.img-4.2.0-16-generic = 30MB

MODULES=dep: initrd.img-4.2.0-16-generic = 12MB
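For the curious, the change boils down to a one-line edit followed by regenerating the image. This is a hedged sketch rather than an official procedure; try it on a test system first, since a too-minimal initrd can make a machine unbootable:

```shell
# Switch initramfs-tools from "most" to dependency-based module selection,
# then regenerate the initrd for the running kernel and check its size.
sudo sed -i 's/^MODULES=most/MODULES=dep/' /etc/initramfs-tools/initramfs.conf
sudo update-initramfs -u -k "$(uname -r)"
ls -lh "/boot/initrd.img-$(uname -r)"
```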

Identifying this led to a discussion with the Ubuntu Kernel team about using a custom crafted initrd.img for kdump. This would keep the file to a sensible size and avoid triggering the OOM killer.


The current implementation of kdump-tools already provides a mechanism to specify which vmlinuz and initrd.img files to use when setting up kexec (from /etc/default/kdump-tools):

# ---------------------------------------------------------------
# Kdump Kernel:
# KDUMP_KERNEL - A full pathname to a kdump kernel.
# KDUMP_INITRD - A full pathname to the kdump initrd (if used).
# If these are not set, kdump-config will try to use the current kernel
# and initrd if it is relocatable. Otherwise, you will need to specify 
# these manually.

If we use those variables, defined to point to a generic path that can be adapted according to the running kernel version, we have a way to specify a smaller initrd.img for kdump.

Building a smaller initrd.img

Kernel package hooks already exist in /etc/kernel/postinst.d and /etc/kernel/postrm.d to create the initrd.img. Using those as templates, we created new hooks that create smaller images in /var/lib/kdump and clean them up when the kernel version they pertain to is removed.

In order to create that smaller initrd.img, the content of the /etc/initramfs-tools directory needs to be replicated in /var/lib/kdump. This is done each time the hook is executed to ensure that the content matches the original source; otherwise, the two may diverge if the original directory is modified.

Each time a new kernel package is installed, the hook creates a kdump-specific initrd.img using MODULES=dep and stores it in /var/lib/kdump. When the kernel package is removed, the corresponding file is removed.
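The essence of such a postinst hook might look something like the following. This is an illustrative sketch only, not the actual kdump-tools implementation; the directory layout under /var/lib/kdump is an assumption:

```shell
# Hypothetical /etc/kernel/postinst.d hook: build a small kdump initrd
# for the kernel version passed in as the first argument.
version="$1"
[ -n "$version" ] || exit 0
mkdir -p /var/lib/kdump/initramfs-tools
# Replicate the initramfs-tools configuration, then force MODULES=dep
cp -a /etc/initramfs-tools/. /var/lib/kdump/initramfs-tools/
sed -i 's/^MODULES=.*/MODULES=dep/' /var/lib/kdump/initramfs-tools/initramfs.conf
# Build the smaller image with the alternate configuration directory
mkinitramfs -d /var/lib/kdump/initramfs-tools \
    -o "/var/lib/kdump/initrd.img-${version}" "$version"
```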

Using the smaller initrd.img

As we outlined previously, the /etc/default/kdump-tools file can be used to point to a specific initrd.img/vmlinuz pair. So we can do:
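The original snippet appears to be missing here; based on the symbolic-link paths used later in the post, a plausible configuration would be:

```shell
# In /etc/default/kdump-tools: generic paths, maintained as symlinks
# pointing at the files matching the running kernel version.
KDUMP_KERNEL=/var/lib/kdump/vmlinuz
KDUMP_INITRD=/var/lib/kdump/initrd.img
```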


When kexec is loaded by kdump-config, it will find the appropriate files and load them into memory for future use. But for that to happen, those new parameters need to point to the correct files. Here we use symbolic links to achieve our goal.

Linking to the smaller initrd.img

Using the hooks to create the proper symbolic links turned out to be overly complex. But since kdump-config runs at each boot, we can make this script responsible for symlink maintenance.

Symlink creation follows this simple flowchart:



This ensures that the symbolic links always point to the files matching the version of the running kernel.
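In shell terms, the symlink maintenance sketched by the flowchart could look like this. Again, this is an illustrative approximation, not the actual kdump-config code:

```shell
# At boot: if a kdump initrd exists for the running kernel, point the
# generic symlinks at it; otherwise leave the existing links untouched.
ver="$(uname -r)"
if [ -e "/var/lib/kdump/initrd.img-${ver}" ]; then
    ln -sf "/var/lib/kdump/initrd.img-${ver}" /var/lib/kdump/initrd.img
    ln -sf "/boot/vmlinuz-${ver}" /var/lib/kdump/vmlinuz
fi
```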

One drawback of this method is that, in the unlikely event that the running kernel breaks the kernel crash dump functionality, we cannot automatically revert to the previous kernel in order to use a known working configuration.

A future evolution of the kdump-config tool will add a function to specify which kernel version to use when creating the symbolic links. In the meantime, the links can be created manually with these simple commands:

$ export wanted_version="some version"
$ rm -f /var/lib/kdump/initrd.img
$ ln -s /var/lib/kdump/initrd.img-${wanted_version} /var/lib/kdump/initrd.img
$ rm -f /var/lib/kdump/vmlinuz
$ ln -s /boot/vmlinuz-${wanted_version} /var/lib/kdump/vmlinuz

For those of you interested in the nitty-gritty details, you can find the modifications in the following Git branch:

Update: New git branch with cleanup commit history

Read more
Nicholas Skaggs

Wily Final Image Testing!

Wily is almost here! The summer has passed us by (or is arriving for our Southern hemisphere friends). Thus, with the change of the seasons, it's time for another release of Ubuntu. Wily will release the final image this Thursday, 22 Oct 2015. It's time to find and squash any last-minute bugs in the installer.

How can I help? 
To help test, visit the iso tracker milestone page for the final image. The goal is to verify the images in preparation for the release. Find those bugs! The information at the top of the page will help you if you need help reporting a bug or understanding how to test. 

There's a first time for everything! Check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the qa community or myself for help.

How long is this going on?
The testing runs through Thursday, 22 Oct 2015, when the images for Wily will be released. 

Thanks and happy testing everyone!

Read more

A little before the summer vacation, I decided that it was a good time to get acquainted with writing Juju charms. Since I am heavily involved with the kernel crash dump tools, I thought that it would be a good start to allow Juju users to enable kernel crash dumps using a charm. Since it means acting on existing units, a subordinate charm is the answer.

Theory of operation

Enabling kernel crash dumps on Ubuntu and Debian involves the following:

  • Installing:
    • kexec-tools
    • makedumpfile
    • kdump-tools
    • crash
  • Adding the crashkernel= boot parameter
  • Enabling kdump-tools in /etc/default/kdump-tools
  • Rebooting

On Ubuntu, installing the linux-crashdump meta-package takes care of installing all of these packages.

The crashdump subordinate charm does just that: installing the packages, enabling the crashkernel= boot parameter as well as kdump-tools, and rebooting the unit.

Since this charm enables a kernel-specific service, it can only be used in a context where the kernel itself is accessible. This means that testing the charm using the local provider, which relies on LXC, will fail, since the charm needs to interact with the kernel. One solution to that restriction is to use KVM with the local provider, as outlined in the example.
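To give a feel for what the charm automates, here is a hypothetical sketch of the steps its install hook boils down to. This is illustrative only: the sed patterns and the crashkernel=384M-:128M value are assumptions, not the charm's actual code:

```shell
# Install the crash dump stack via the meta-package
apt-get install -y linux-crashdump
# Make sure kdump-tools is enabled
sed -i 's/^USE_KDUMP=0/USE_KDUMP=1/' /etc/default/kdump-tools
# Append the crashkernel= boot parameter to the GRUB command line
sed -i 's/\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 crashkernel=384M-:128M"/' \
    /etc/default/grub
update-grub
# The reserved memory only takes effect after a reboot
reboot
```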

Deploying the crashdump charm

The crashdump charm being a subordinate charm, it can only be used in a context where services are already running. For this example, we will deploy a simple Ubuntu service:

$ juju bootstrap
$ juju deploy ubuntu --to=kvm:0
$ juju deploy crashdump
$ juju add-relation ubuntu crashdump

This will install the required packages, rebuild the grub configuration file to use the crashkernel= value and reboot the unit.

To confirm that the charm has been deployed adequately, you can run:

$ juju ssh ubuntu/0 "kdump-config show"
Warning: Permanently added '' (ECDSA) to the list of known hosts.
USE_KDUMP:        1
KDUMP_SYSCTL:     kernel.panic_on_oops=1
KDUMP_COREDIR:    /var/crash
crashkernel addr: 0x17000000
current state:    ready to kdump
kexec command:
  /sbin/kexec -p --command-line="BOOT_IMAGE=/boot/vmlinuz-3.13.0-65-generic root=UUID=1ff353c2-3fed-48fb-acc0-6d18086d030b ro console=tty1 console=ttyS0 irqpoll maxcpus=1 nousb" --initrd=/boot/initrd.img-3.13.0-65-generic /boot/vmlinuz-3.13.0-65-generic

The next time a kernel panic occurs, you should find the kernel crash dump in /var/crash of the unit that failed.

Deploying from the GUI

As an alternate method for adding the crashdump subordinate charm to an existing service, you can use the Juju GUI.

In order to get the Juju GUI started, a few simple commands are needed, assuming that you do not have any environment bootstrapped:

$ juju bootstrap
$ juju deploy juju-gui
$ juju expose juju-gui
$ juju deploy ubuntu --to=kvm:0


Here are a few captures of the process, once the ubuntu service is started :

Juju environment with one service

Locate the crashdump charm

The charm is added to the environment

Add the relation between the charms

Crashdump is now enabled in your service

Do not hesitate to leave comments or questions. I’ll do my best to reply in a timely manner.

Read more