Canonical Voices

Posts tagged with 'python'

Robin Winslow

Last weekend I went to my first PyCon, my second conference in a fortnight.

The conference runs from Friday to Monday, with 3 days of talks followed by one day of “sprints”, which is basically a hack day.

PyCon has a code of conduct to discourage any form of othering:

Happily, PyCon UK is a diverse community who maintain a reputation as a friendly, welcoming and dynamic group.

We trust that attendees will treat each other in a way that reflects the widely held view that diversity and friendliness are strengths of our community to be celebrated and fostered.

And for me, the conference lived up to this, with a very friendly feel and a lot of diversity among its attendees. The friendly and informal atmosphere was impressive for such a large event of more than 450 people.

Unfortunately, the Monday sprint day was cut short by the discovery of an unexploded bomb.

Many keynotes, without much Python

There were a lot of “keynote” talks, with 2 on Friday, and one each on Saturday and Sunday. And interestingly none of them were really about Python, instead covering future technology, space travel and the psychology of power and impostor syndrome.

But of course there were plenty of Python talks throughout the rest of the day – you can read about them on my other post. And I think it was a good decision to have more abstract keynotes. It shows that the Python community really is more of a general community than just a special interest group.

Van Lindberg on data economics, Marx and the Internet of Things

In the opening keynote on Friday morning, the PSF chairman showed that total computing power is almost doubling every year, and that by 2020, the total processing power in portable devices will exceed that in PCs and servers.

He then used the fact that data can’t travel faster than 11.8 inches per nanosecond to argue that we will see a fundamental shift in the economics of data processing.
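That 11.8 inches is simply the speed of light expressed in convenient units; a quick back-of-the-envelope check (my own, not from the talk):

    # Light covers roughly 0.3 metres (about 11.8 inches) per nanosecond in a vacuum.
    speed_of_light_m_per_s = 299792458
    inches_per_nanosecond = speed_of_light_m_per_s * 1e-9 / 0.0254
    print(round(inches_per_nanosecond, 1))   # 11.8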

The big-data models of today’s tech giants will be challenged as it starts to be quicker and make more economic sense to process data at source, rather than transfer it to distant servers to be processed. Centralised servers will be relegated to mere aggregators of pre-processed data.

He likened this to Marx’s workers seizing the means of production: a movement which will empower users, as our portable Things start to hold the real information and choose who to share it with.

I really hope he’s right, and that the centralised data companies are doomed to fail to be replaced by the Internet of Autonomous Things, because the world of centralised data is not an equal world.

Does Python have a future on small processors? Isn’t it too inefficient?

In a world where all the interesting software is running on light-weight portable devices, processing efficiency becomes important once again. Van used this to argue that efforts to run Python effectively on low-powered devices, like MicroPython, will be essential for Python as a language to survive.

Daniele Procida: All I really want is power

The second keynote came just after lunch on Friday. Daniele Procida, organiser of DjangoCon Europe, openly admitted that what he really wanted out of life was power. He put forward the somewhat controversial idea that power and usefulness are the same thing, and that ideas without power are useless.

He made the very good point that power only comes to those who ask for it, or fight for it. And that if we want power not to be abused, we really need to talk about it a whole lot more, even though it makes people uncomfortable (try asking someone their salary). We should acknowledge who has the power, and what power we have, and watch where the power goes.

He suggested that, while in politics or industry power is very much a rivalrous good, in open source it is entirely non-rivalrous. The way you grab power in the open source community is by doing good for the community, by helping out. And so by wielding power you are actually increasing power for those around you.

I don’t agree with him on this final point. I think power can be and is hoarded and abused in the open source community as well. A lot of people use their power in the community to edge out others, or make others feel small, or to soak up influence through talks and presentations and then exert their will over the will of others. I am certainly somewhat guilty of this. Which is why we should definitely watch the power, especially our own power, to see what effect it’s having.

The takeaway maxim from this for me is that we should always make every effort to share power, as opposed to jealously guarding it. It’s not that sharing power in the open source community is inevitable or necessarily comes naturally, but at least in the open source community sharing power genuinely can help you gain respect, where I fear the same isn’t so true of politics or industry.

Dr Simon Sheridan: Landing on a comet: From planning to reality

Simon Sheridan was an incredibly humble and unassuming man, given his towering achievements. He is a world-class space scientist who was part of the European Space Agency team that landed Rosetta’s Philae probe on comet 67P.

Most of what he mentioned was basically covered in the news, but it was wonderful to hear it from his perspective.

Naomi Ceder: Confessions of a True Impostor

When, a short way into her Sunday morning keynote, Naomi Ceder asked the room:

How many of you would say that you have in some way or another suffered from imposter syndrome along with me?

Almost everybody put their hands up. This is why I think this was such an important talk.

She didn’t talk about this per se, but contributing to the open source community is hard. No-one talks about it much, but I certainly feel there’s a lot of pressure. Because of its very nature, your contributions will be open, to be seen by anyone and criticised by anyone. And let’s face it, your contributions are never going to be perfect. And the rules of the game aren’t written down anywhere, so the chances of being ridiculed seem pretty high. Open source may be a benevolent idea, but it’s damned scary to take part in.

I believe this is why less than 2% of open source contributors are female, compared with more like 25-30% women in software development in general. And, as with impostor syndrome, the same trend is true of other marginalised groups. It’s not surprising to me that people who are used to being criticised and discriminated against wouldn’t subject themselves to that willingly.

And, as Naomi’s question showed, it is not just marginalised people who feel this pressure, it’s all of us. And it’s a problem. As we know, confidence is no indicator of actual ability, meaning that many many talented people may be too scared to contribute to open source.

As Naomi pointed out, impostor syndrome is a socially created condition – when people are expected to do badly, they do badly. In fact I completely agree with her suggestion that the existing Wikipedia definition of impostor syndrome (at the time of writing) could be more sensitively phrased to define it as a “social condition” rather than a “psychological phenomenon”, as well as avoiding singling out women.

While Naomi chose to focus in her talk on how we personally can try to mitigate feelings of being an impostor, I think the really important message here is one for the community. It’s not our fault that open source is scary, that’s just the nature of openness. But we have to make it more welcoming. The success of the open source movement really does depend on it being diverse and accepting.

What I think is really interesting is that stereotype threat can be mitigated by reminding people of their values, of what’s important to them. And this is what I hope will save open source. The more we express our principles and passion for open source, the more we express our values, the easier it is to counter negative feelings, to be welcoming, to stop feeling like impostors.

A great conference

Overall, the conference was exhausting, but I’m very grateful that I got to attend. It was inspiring and informative, and a great example of how to maintain a great community.

If you want you can now go and read about the other talks.


Read more
Robin Winslow

The weekend before last, I went to PyCon UK 2015.

I already wrote about the keynotes, which were more abstract. Here I’m going to talk about the other talks I saw, which were generally more technical or at least had more to do with Python.


The talks I saw covered a whole range of topics – from testing through documentation and ways to achieve simplicity to leadership.

The talks

Following are slightly more in-depth summaries of the talks I thought were interesting.


Leadership of Technical Teams – Owen Campbell

There were two key points I took away from this talk. The first was Owen’s suggestion that leaders should take every opportunity to practice leading. Find opportunities in your personal life to lead teams of all sorts.

The second point was more complex. He suggested that all leaders exist on two spectra:

  • Amount of control: hands-off to dictatorial
  • Knowledge of the field: novice to expert

The less you know about a field the more hands-off you should be. And conversely, if you’re the only one who knows what you’re talking about, you should probably be more of a dictator.

He cautioned, though, that people tend to misjudge their own ability, and particularly when it comes to process (e.g. agile), people think they know more than they do. No-one is really an expert on process.

He suggested that leading technical teams is particularly challenging because you slide up and down the knowledge scale on a minute-to-minute basis sometimes, so you have to learn to be authoritative one moment and then permissive the next, as appropriate.

Document all the things – Kristian Glass

Kristian spoke about the importance, and difficulty, of good documentation.
Here are some particular points he made:

  • Document why a step is necessary, as well as what it is
  • Remember that error messages are documentation
  • Try pair documentation – novice sitting with expert
  • Checklists are great
  • Stop answering questions face-to-face. Always write it down instead.
  • Github pages are better than wikis (PRs, better tracking)

One of Kristian’s main points was that it goes against the grain to write documentation, ‘cos the person with the knowledge can’t see why it’s important, and the novice can’t write the documentation.

He suggested pair documentation as a solution, which sounds like a good idea, but I was also wondering whether a StackOverflow model might work, where users submit questions and the team treats them like bugs – staying on top of answering them. This answer base would then become the documentation.


Asking About Gender – the Whats, Whys and Hows – Claire Gowler

Claire spoke about how so many online forms expect people to be either simply “male” or “female”, when the truth can be much more complicated.

My main takeaway from this was the basic point that forms very often ask for much more information than they need, and make too many assumptions about their users. When it comes to asking someone’s name, try radically reducing the complexity by just having one text field called “name”. Or better yet, don’t even ask their name if you don’t need it.

I think this feeds into the whole field of simplicity very nicely. Very many apps try to do much more than they need to, and ask for much more information than they need. Thinking about how little you know about your user can help you realise what you actually don’t need to know about them.

Finding more bugs with less work – David R. MacIver

David MacIver is the author of the Hypothesis testing library.

Hypothesis is a Python library for creating unit tests which are simpler to write and more powerful when run, finding edge cases in your code you wouldn’t have thought to look for. It is stable, powerful and easy to add to any existing test suite.

When we write tests normally, we choose the input cases ourselves, and we often end up being really kind to our code. E.g.:
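(The snippet below is my own illustrative sketch, not code from the talk: a conventional test that hand-picks a friendly input.)

    import unittest

    class TestReverse(unittest.TestCase):
        def test_reverse_twice(self):
            values = [1, 2, 3]   # an input we were kind enough to choose ourselves
            self.assertEqual(list(reversed(list(reversed(values)))), values)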

What Hypothesis does is help us test with a much wider and more challenging range of values. E.g.:
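(Again my own sketch rather than the talk’s example; it assumes the hypothesis package is installed. Hypothesis generates the inputs itself, including empty lists, huge numbers and other edge cases we would rarely type in by hand.)

    from hypothesis import given, strategies as st

    @given(st.lists(st.integers()))
    def test_reverse_twice_gives_back_the_original(values):
        assert list(reversed(list(reversed(values)))) == values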

There are many cases where Hypothesis won’t be much use, but it’s certainly good to have in your toolkit.


Simplicity Is A Feature – Cory Benfield

Cory presented simplicity as the opposite of complexity – that is, the fewer options something gives you, the more simple and straightforward it is.

“Simplicity is about defaults”

To present as simple an interface as possible, the important thing is to have as many sensible defaults as possible, so the user has to make hardly any choices.

Cory was heavily involved in the Python Requests library, and presented it as an example of how to achieve apparent simplicity in a complex tool.
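As a rough illustration of that layering (my own sketch, not Cory’s code; the URL and User-Agent string are just placeholders):

    import requests

    # Outermost layer: one call, sensible defaults for everything else.
    response = requests.get("")

    # One layer down: a Session exposes more knobs when you need them.
    session = requests.Session()
    session.headers.update({"User-Agent": "my-app/1.0"})
    response = session.get("", timeout=5)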

“Simple things should be simple, complex things should be possible”

He suggested thinking of an “onion model”, where your application has layers, so everything is customisable at one of the layers, but the outermost layer is as simple as possible. He suggested that 3 layers is a good number:

  • Layer 1: Low-level – everything is customisable, even things that are just for weird edge-cases.
  • Layer 2: Features – a nicer, but still customisable interface for all the core features.
  • Layer 3: Simplicity – hardly any mandatory options, sensible defaults
    • People should always find this first
    • Support 80% of users 80% of the time
    • In the face of ambiguity do the right thing

He also mentioned that he likes README-driven development, which seems like an interesting approach.

How (not) to argue – a recipe for more productive tech conversations – Harry Percival

I think this one could be particularly useful for me.

Harry spoke about how many people (including him) have a very strong need to be right. Especially men. Especially those who went to boarding school. And software development tends to be full of these people.

Collaboration is particularly important in open source, and strongly disagreeing with people rarely leads to consensus, in fact it’s more likely to achieve the opposite. So it’s important that we learn how to get along.

He suggests various strategies to try out, for getting along with people better:

  • Try simply giving in, do it someone else’s way once in a while (hard to do graciously)
  • Socratic dialogue: Ask someone to explain their solution to you in simple terms
  • Dogfooding – try out your idea before arguing for its strength
  • Bide your time: Wait for the moment to see how it goes
  • Expose yourself to other social situations, where arguments are less acceptable

All of this comes down to stepping back, waiting and exercising humility. All of which is easier said than done, but all of which would be very valuable if I could only manage it.

FIDO – The dog ate my password – Alex Willmer

After covering fairly common ground of how and why passwords suck, Alex introduced the FIDO alliance.

The FIDO alliance’s goal is to standardise authentication methods and hopefully replace passwords. They have created two standards for device-based authentication to try to replace passwords:

  • UAF: First-factor passwordless biometric authentication
  • U2F: Second-factor device authentication

Browsers are just starting to support U2F, whereas support for UAF is farther off. Keep an eye out.

Data Visualisation with Python and Javascript – crafting a data-visualisation for the web – Kyran Dale

Kyran spoke about visualising data, and demoed using Scrapy and Pandas to retrieve the Nobel laureate data from Wikipedia, using Flask to serve it as a RESTful API, and then using D3 to create an interactive browser-based visualisation.


Read more

As almost always when going to Córdoba, I travelled there and back by bus (I generally sleep reasonably well on long-distance buses, so I make use of the night and don't lose half a day "just travelling"). This time, though, I went a day early, because I had some paperwork to do in Córdoba Capital, so I spent Thursday staying at Nati and Matías's place, working during the day and playing board games at night.

On Friday morning we made the trip to La Serranita with the guys. We arrived mid-morning, and then came the usual, much-anticipated moment: greeting and hugging old friends you don't get to see that often (plus four or five new people you haven't met yet :) ).

The freshly gathered group, chatting about the proposals

How the activities ended up being planned

The venue was very good. I could perhaps complain that the main room was a bit cramped, and that meals were at an inn four blocks away, but everything else was more than fine. Not only the facilities (rooms, park, barbecue area, and so on), but also the personal attention. A real treat.

We even had good internet at this PyCamp, since we were on the open community network that Nico Echaniz and friends set up in La Quintana and the surrounding towns. That said, we noticed that whenever we had connectivity problems, the issue was the router we were using (even though we ended up installing a very good one). We decided we should invest in somewhat more professional hardware (not "good home-grade" gear but something "professional")... we'll see how much it costs, but I think we'll spend some money there, since it will serve not only PyCamp but other events as well.

A terrace two levels above the work room

A small park on one of the levels

As for projects and Python: the usual... a thousand things get done, from working on stable projects to going wild with new ideas; things started earlier get improved, features get added, projects get started and finished within those four days, things get kicked off that then last a long time, etc.... but the most important part isn't there.

The core of the event is human. Chatting with people you've known forever, with whom you can brainstorm ideas and new projects, or simply talk. And meeting new people. Folks doing crazy things or not, with cool jobs or not, with interesting lives or not. But talking, sharing time, seeing how other people approach a project, what they contribute, how to help them, how to pass on experience.

Programming in Python is almost an excuse for everything else to happen. And I say "almost" because yes, of course, what gets programmed and built is great too :D

In the dining room, having lunch

In the main work room (it wasn't big, but it wasn't the only one)

In that regard, I mainly worked on two projects. On one hand the filesync server, recently released as open source, with a very big change that I started that very Thursday while at Nati's place and continued intermittently throughout the four days of PyCamp.

The other project I put a lot of time into is fades, which I develop mainly with Nico. A lot of people who liked what fades offers got involved and contributed a ton of great ideas. And not just ideas! Also code, and branches that we merged or still have to review. We'll get it all in, and we want to make a release in the coming weeks. Stay tuned, because fades now does things that will blow your mind :D

But I didn't only work on that. I also ported Tritcask to work with both Python 2 and Python 3 (I started this on my own, but 70% of the work was done together with SAn). We also looked into how to detect how much of a subtitle matches the voices in a video, so as to determine whether it is well synchronized or not. I wrote some asynchronous code using asyncio. I talked with SAn, DiegoM, Bruno and Nico Echaniz about a sort of Federated Repository of Educational Content. And I helped people work on Python itself during a short Python Bug Day (Jairo fixed an issue and a half!!).

On the way to the river

Walking along the riverbank, jumping from rock to rock

The best asado of any PyCamp, ever

I also sunbathed. I held a real sword in my hands for the first time. I walked along the river jumping from rock to rock. I ate a great asado (among the hundred or so kilos of food we put away per person per day). I got to know La Serranita. I chatted endlessly. I tried a virtual reality system. And I played lots of board games.

And I hugged friends.

Read more

Next week (practically right now) a new edition of the best programming event in the whole wide world kicks off.

This time it takes place in La Serranita, Córdoba.

There are a ton of proposals from several people. I in particular proposed putting together a kind of subtitle verifier (the idea is to check whether a subtitle matches the video... or rather, the audio... the basic step is to detect whether someone is speaking at the time of the subtitle, which already tells you whether the subtitle is in sync), working a bit on Encuentro and fades, and running a Python Bug Day (to spend a while working on Python itself and close some bug in the language proper... a lot of the language's code is C, but there is also plenty in Python itself, and some of the issues are simple ones).

I took the opportunity to prepare/update the "how to configure/initialize/get started with the project" instructions for both Encuentro and fades. For Python itself that isn't needed, since there are very clear instructions in the Python Developer's Guide :)

I'll report back on how it all went :)

Read more

In recent days, new versions of two projects I'm actively involved in were released.

At the beginning of the month I released Encuentro 3.1 (as you may know, this program lets you search, download and watch content from Canal Encuentro, Paka Paka, BACUA, and others).

Version 3.1 brings the following changes with respect to the previous version:

  • It works again after the backend changes in Encuentro and Conectate
  • CTRL-F now jumps straight to the filter field (thanks Emiliano)
  • The handling of the episode list was redone: viewing and filtering episodes is now much faster
  • Packaging improvements; it should work on many (all?) Debian/Ubuntu versions (thanks Adrián Alves).
  • Several improvements in finding new episodes from the different backends, and fixes in general.

More info, and how to download and install it, on the official page.

On the other hand, fades 3 was released yesterday (a project aimed at Python developers, unlike Encuentro, which is meant for end users); it is developed mainly by Nico Demarchi and me.

fades (FAst DEpendencies for Scripts) is a system that automatically handles virtualenvs in the simple cases you normally run into when writing scripts or small programs. It automatically creates a new virtualenv (or reuses a previously created one), installing the necessary dependencies and executing the script inside that virtualenv.

What's new in this release?

  • You can use different interpreter versions: just pass --python=python2 or whatever suits you.
  • Dependencies can be specified on the command line: no need to change the script for a quick test, just state the required dependency with --dependency.
  • Interactive mode: the fastest way to try out a new library. Just run fades -d <dependency> and it will open an interactive interpreter inside a venv with that dependency.
  • It supports taking arguments from the shebang, so you can create a script and put something like this at the top: #!/usr/bin/env fades -d <dependency> --python=python2.7 (see the example after this list).
  • It can parse requirements from a file. No change is needed if you already have a requirements.txt file: just point at it with --requirement.
  • If no repository is specified it defaults to PyPI, which results in cleaner, simpler code.
  • It has a built-in database of typical name conversions: that way you can mark an "import bs4" with fades even if that isn't the package name on PyPI.
  • Other minor changes and fixes.
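As a small hedged example of the shebang support mentioned above (the dependency, URL and script name here are placeholders, not part of the release notes):

    #!/usr/bin/env fades -d requests

    # Running ./ makes fades create (or reuse) a virtualenv with
    # "requests" installed, and then execute this script inside it.
    import requests

    print(requests.get("").status_code)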

All the info is on the project's PyPI page.

Read more

A while ago I told you about a tree I built to extract word prefixes.

At work I'm looking into how to build an autocompleter. So, after reading up a bit, I decided to try that tree I already had.

I had never thrown this much data at it, but the truth is it came out working like a charm.

On the other hand, there was one detail I needed to solve: I wanted the word search to tolerate spelling mistakes. That is, if you search for "maise", it should find "maizena".

I found a pretty wild paper, Efficient Error-tolerant Query Autocompletion, but it showed how to support errors when searching for complete words, not prefixes. Still, I applied ideas from it, and after a couple of days of work I got what I wanted. But when loading the million and a half records I have to load, it blew up on memory!

After some obvious optimizations, I came up with the idea of deduplicating the internal subtrees. What is deduplicating? Deduplication is the action by which, if I have an object A and then another object B that turns out to be equal to A, I can use A directly in both places, discard B (freeing memory), and be done with it.

Deduplicating dictionaries is not a trivial matter. I threw the problem at the PyAr mailing list, and within a few hours I had everything working correctly. Now it not only doesn't blow up, it actually uses fairly little memory!
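As a hedged sketch of the idea (not the actual tree code, which is more involved): identical sub-dictionaries get collapsed into a single shared instance, using a hashable fingerprint to detect equality.

    def deduplicate(node, seen):
        """Share identical sub-dicts under *node*; return a hashable fingerprint."""
        fingerprint = []
        for key, child in sorted(node.items()):
            child_fp = deduplicate(child, seen)
            if child_fp in seen:
                node[key] = seen[child_fp]   # reuse the already-seen, equal subtree
            else:
                seen[child_fp] = child
            fingerprint.append((key, child_fp))
        return tuple(fingerprint)

    tree = {"m": {"a": {"i": {}}, "o": {"i": {}}}}
    deduplicate(tree, {})
    assert tree["m"]["a"] is tree["m"]["o"]   # equal subtrees are now the same object

With deduplication in place, loading the full data set produced these numbers: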

    Memory usage after loading the tree: rss: +586 MB  vms: +586 MB
    Time to load the tree: 327190.99 msec
    <WordTree at 3068071276 [tau=1]: 1478347 words 30015540 (2201293) nodes (unique)>

A million and a half words, 30 million nodes (of which 2.2 million are unique), taking up around 590 MB of memory. Not bad, right? That it takes 5.5 minutes to build the whole structure is a problem; next week I'll take a proper look at that.

All the code, here.

Read more
Barry Warsaw


Snappy Ubuntu Core is a new edition of the Ubuntu you know and love, with some interesting new features, including atomic, transactional updates, and a much more lightweight application deployment story than traditional Debian/Ubuntu packaging.  Much of this work grew out of our development of a mobile/touch based version of Ubuntu for phones and tablets, but now Ubuntu Core is available for clouds and devices.

I find the transactional nature of upgrades to be very interesting.  While you still get a perfectly normal Ubuntu system, your root file system is read-only, so traditional apt-get based upgrades don't work.  Instead, your system version is image based; today you are running image 231 and tomorrow a new image is released to get you to 232.  When you upgrade to the new image, you get all the system changes.  We support both full and delta upgrades (the latter of which reduces bandwidth), and even phased updates so that we can roll out new upgrades and quickly pull them from the server side if we notice a problem.  Snappy devices even support rolling back upgrades on a single device, by using a dual-partition root file system.  Phones generally don't support this due to lack of available space on the device.

Of course, the other really interesting thing about Snappy is the lightweight, flexible approach to deploying applications.  I still remember my early days learning how to package software for Debian and Ubuntu, and now that I'm both an Ubuntu Core Developer and Debian Developer, I understand pretty well how to properly package things.  There's still plenty of black art involved, even for relatively easy upstream packages such as a distutils/setuptools-based Python package available on the Cheeseshop (er, PyPI).  The Snappy approach on Ubuntu Core is much more lightweight and easy, and it doesn't require the magical approval of the archive elves, or the vagaries of PPAs, to make your applications quickly available to all your users.  There's even a robust online store for publishing your apps.

There's lots more about Snappy apps and Ubuntu Core that I won't cover here, so I encourage you to follow the links for more information.  You might also want to stop now and take the tour of Ubuntu Core (hey, I'm a poet and I didn't even realize it).

In this post, I want to talk about building and deploying snappy Python applications.  Python itself is not an officially supported development framework, but we have a secret weapon.  The system image client upgrader -- i.e. the component on the devices that checks for, verifies, downloads, and applies atomic updates -- is written in Python 3.  So the core system provides us with a full-featured Python 3 environment we can utilize.

The question that came to mind is this: given a command-line application available on PyPI, how easy is it to turn into a snap and install it on an Ubuntu Core system?  With some caveats I'll explore later, it's actually pretty easy!

Basic approach

The basic idea is this: let's take a package on PyPI, which may have additional dependencies also on PyPI, download them locally, and build them into a snap that we can install on an Ubuntu Core system.

The first question is, how do we build a local version of a fully-contained Python application?  My initial thought was to build a virtual environment using virtualenv or pyvenv, and then somehow turn that virtual environment into a snap.  This turns out to be difficult in practice because virtual environments aren't really designed for this.  They have issues with being relocated, for example, and they can contain a lot of extraneous stuff that's great for development (a virtual environment's actual purpose) but is unnecessary baggage for our use case.

My second thought involved turning a Python application into a single file executable, and from there it would be fairly easy to snappify.  Python has a long tradition of such tools, many with varying degrees of cross platform portability and standalone-ishness.  After looking again at some oldies but goodies (e.g. cx_freeze) and some new offerings, I decided to start with pex.

pex is a nice tool developed by Brian Wickman and the Twitter folks which they use to deploy Python applications to their production environment.  pex takes advantage of modern Python's support for zip imports, and a clever trick of zip files.

Python supports direct imports (of pure Python modules) from zip files, and the python executable's -m option works even when the module is inside a zip file.  Further, the presence of a __main__.py file within a package can be used as shorthand for executing the package, e.g. python -m myapp will run myapp/__main__.py if it exists.

Zip files are interesting because their index is at the end of the file.  This allows you to put whatever you want at the front of the file and it will still be considered a zip file.  pex exploits this by putting a shebang in the first line of the file, e.g. #!/usr/bin/python3 and thus the entire zip file becomes a single file executable of Python code.
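A rough, self-contained illustration of that trick (the file names here are made up):

    import os
    import stat
    import zipfile

    # Build a zip whose __main__.py is what Python will execute.
    with zipfile.ZipFile("myapp.zip", "w") as zf:
        zf.writestr("__main__.py", "print('hello from inside the zip')\n")

    # Prepend a shebang: the zip index lives at the end, so the archive still works.
    with open("myapp.zip", "rb") as src, open("myapp", "wb") as out:
        out.write(b"#!/usr/bin/python3\n" +

    os.chmod("myapp", os.stat("myapp").st_mode | stat.S_IEXEC)
    # ./myapp now runs the embedded __main__.py directly.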

There are of course, plenty of caveats.  Probably the main one is that Python cannot import extension modules directly from the zip, because the dlopen() function call only takes a file system path.  pex handles this by marking the resulting file as not zip safe, so the zip is written out to a temporary directory first.

The other issue of course, is that the zip file must contain all the dependencies not present in the base Python.  pex is actually fairly smart here, in that it will chase dependencies, much like pip and it will include those dependencies in the zip file.  You can also specify any missed dependencies explicitly on the pex command line.

Once we have the pex file, we need to add the required snappy metadata and configuration files, and run the snappy command to generate the .snap file, which can then be installed into Ubuntu Core.  Since we can extract almost all of the minimal required snappy metadata from the Python package metadata, we only need just a little input from the user, and the rest of work can be automated.

We're also going to avail ourselves of a convenient cheat.  Because Python 3 and its standard library are already part of Ubuntu Core on a snappy device, we don't need to worry about any of those dependencies.  We're only going to support Python 3, so we get its full stdlib for free.  If we needed access to Python 2, or any external libraries or add-ons that can't be made part of the zip file, we would need to create a snappy framework for that, and then utilize that framework for our snappy app.  That's outside the scope of this article though.


To build Python snaps, you'll need to have a few things installed.  If you're using Ubuntu 15.04, just apt-get install the appropriate packages.  Otherwise, you can get any additional Python requirements by building a virtual environment and installing tools like pex and wheel into it, then invoking pex from that virtual environment.  But let's assume you have the Vivid Vervet (Ubuntu 15.04); here are the packages you need:
  •  python3
  •  python-pex-cli
  •  python3-wheel
  •  snappy-tools
  •  git
You'll also want a local git clone of the repository that provides a convenient helper script for automating the building of Python snaps.  We'll refer to this helper script extensively in the discussion below.

For extra credit, you might want to get a copy of Python 3.5 (unreleased as of this writing).  I'll show you how to do some interesting debugging with Python 3.5 later on.

From PyPI to snap in one easy step

Let's start with a simple example: world is a very simple script that can provide forward and reverse mappings of ISO 3166 two letter country codes (at least as of before ISO once again paywalled the database).  So if you get an email from an address ending in .py, you can find out where the BDFL has his secret lair:

$ world py
py originates from PARAGUAY

world is a pure-Python package with both a library and a command line interface. To get started with the script mentioned above, you need to create a minimal .ini file, such as:

name: world

verbose: true

Let's call this file world.ini.  (In fact, you'll find this very file under the examples directory in the snap git repository.)  What do the various sections and variables control?
  •  name is the name of the project on PyPI.  It's used to look up metadata about the project on PyPI via PyPI's JSON API (see the sketch after this list).
  •  verbose variable just defines whether to pass -v to the underlying pex command.
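The JSON API lookup itself is straightforward; here's a hedged sketch of what such a helper might do (the exact fields the real script reads are an assumption on my part):

    import json
    from urllib.request import urlopen

    def pypi_metadata(project_name):
        url = "{}/json".format(project_name)
        with urlopen(url) as response:
            return json.loads("utf-8"))

    info = pypi_metadata("world")["info"]
    print(info["version"], info["summary"])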
Now, to create the snap, just run:

$ ./ examples/world.ini

You'll see a few progress messages and a warning which you can ignore.  Then out spits a file called world_3.1.1_all.snap.  Because this is pure Python, it's architecture independent.  That's a good thing because the snap will run on any device, such as a local amd64 kvm instance, or an ARM-based Ubuntu Core-compatible Lava Lamp.

Armed with this new snap, we can just install it on our device (in this case, a local kvm instance) and then run it:

$ snappy-remote --url=ssh://localhost:8022 install world_3.1.1_all.snap
$ ssh -p 8022 ubuntu@localhost
ubuntu@localhost:~$ py
py originates from PARAGUAY

From git repository to snap in one easy step

Let's look at another example, this time using a stupid project that contains an extension module. This aptly named package just prints a yes for every -y argument, and no for every -n argument.

The difference here is that stupid isn't on PyPI; it's only available via git.  The helper is smart enough to know how to build snaps from git repositories.  Here's what the stupid.ini file looks like:

name: stupid
origin: git

verbose: yes

Notice that there's a [project]origin variable.  This just says that the origin of the package isn't PyPI, but instead a git repository, and then the public repo url is given.  The first word is just an arbitrary protocol tag; we could eventually extend this to handle other version control systems or origin types.  For now, only git is supported.

To build this snap:

$ ./ examples/stupid.ini

This clones the repository into a temporary directory, builds the Python package into a wheel, and stores that wheel in a local directory.  pex has the ability to build its pex file from local wheels without hitting PyPI, which we use here.  Out spits a file called stupid_1.1a1_all.snap, which we can install in the kvm instance using the snappy-remote command as above, and then run it after ssh'ing in:

ubuntu@localhost:~$ stupid.stupid -ynnyn

Watch out though, because this snap is really not architecture-independent. It contains an extension module which is compiled on the host platform, so it is not portable to different architectures.  It works on my local kvm instance, but sadly not on my Lava Lamp.

Entry points

pex currently requires you to explicitly name the entry point of your Python application.  This is the function which serves as your main and it's what runs by default when the pex zip file is executed.

Usually, a Python package will define its entry point in its setup.py file, like so:

        'console_scripts': ['stupid = stupid.__main__:main'],
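(For context, a minimal setup.py declaring such an entry point might look like the sketch below; the metadata values are illustrative, not taken from the real project.)

    from setuptools import setup, find_packages

    setup(
        name="stupid",
        version="1.1a1",
        packages=find_packages(),
        entry_points={
            "console_scripts": ["stupid = stupid.__main__:main"],
        },
    )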

And if you have a copy of the package, you can run a command to generate the various package metadata files:

$ python3 setup.py egg_info

If you look in the resulting stupid.egg_info/entry_points.txt file, you see the entry point clearly defined there.  Ideally, either pex or the helper script would just figure this out automatically.  As it turns out, there's already a feature request open on pex for this, but in the meantime, how can we auto-detect the entry point?

For the stupid example, it's pretty easy.  Once we've cloned its git repository, we just run the egg_info command and read the entry_points.txt file.  Later, we can build the project's binary wheel from the same git clone.

It's a bit more problematic with world though because the package isn't downloaded from PyPI until pex runs, but the pex command line requires that you specify the entry point before the download occurs.

We can handle this by supporting an entry_point variable in the snap's .ini file.  For example, here's the world.ini file with an explicit entry point setting:

name: world
entry_point: worldlib.__main__:main

verbose: true

What if we still wanted to auto-detect the entry point?  We could of course download the world package ourselves in the helper script and run the egg_info command over that.  But pex also wants to download world, and we don't want to have to download it twice.  Maybe we could download it once in the helper and then build a local wheel file for pex to consume.

As it turns out there's an easier way.

Unfortunately, package egg-info metadata is not available on PyPI, although arguably it should be.  Fortunately, Vinay Sajip runs an external service that does make the metadata available, such as the metadata for world.  The helper script makes the entry_point variable optional, and if it's missing, it will grab the package metadata from a link like that given above.  An error will be thrown if the file can't be found, in which case, for now, you'd just add the [project]entry_point variable to the .ini file.

A little more detail

The helper script is more or less a pure convenience wrapper around several independent tools: pex, of course, for creating the single executable zip file, but also the snappy command for building the .snap file.  It also utilizes python3 setup.py egg_info where possible to extract metadata and construct the snappy facade needed for the snappy build command.  Less typing for you!  In the case of a snap built from a git repository, it also performs the git cloning, and runs the python3 setup.py bdist_wheel command to create the wheel file that pex will consume.

There's one other important thing the helper script does: it fixes the resulting pex file's shebang line.  Because we're running these snaps on an Ubuntu Core system, we know that Python 3 will be available in /usr/bin/python3.  We want the pex file's shebang line to be exactly this.  While pex supports a --python option to specify the interpreter, it doesn't take the value literally.  Instead, it takes the last path component and passes it to /usr/bin/env so you end up with a shebang line like:

#!/usr/bin/env python3

That might work, but we don't want the pex file to be subject to the uncertainties of the $PATH environment variable.

One of the things the helper script does is repack the pex file.  Remember, it's just a zip file with some magic at the top (that magic is the shebang), so we just read the file that pex spits out, and rewrite it with the shebang we want.  Eventually, pex itself will handle this and we won't need to do that anymore.
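A minimal sketch of that repacking step (the function and file names here are hypothetical, not the helper's actual code):

    def fix_shebang(pex_path, out_path, interpreter=b"/usr/bin/python3"):
        with open(pex_path, "rb") as f:
            data =
        # Drop whatever shebang pex wrote (the first line) and keep the zip payload.
        payload = data.split(b"\n", 1)[1]
        with open(out_path, "wb") as f:
            f.write(b"#!" + interpreter + b"\n" + payload)

    fix_shebang("world.pex", "world_fixed.pex")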


While I was working out the code and techniques for this blog post, I ran into an interesting problem.  The world script would crash with some odd tracebacks.  I don't have the details anymore and they'd be superfluous, but suffice to say that the tracebacks really didn't help in figuring out the problem.  It would work in a local virtual environment build of world using either the (pip installed) PyPI package or run from the upstream git repository, but once the snap was installed in my kvm instance, it would traceback.  I didn't know if this was a bug in world, in the snap I built, or in the Ubuntu Core environment.  How could I figure that out?

Of course, the go-to tool for debugging any Python problem is pdb.  I'll just assume you already know this.  If not, stop everything and go learn how to use the debugger.

Okay, but how was I going to get a pdb breakpoint into my snap?  This is where Python 3.5 comes in!

PEP 441, which has already been accepted and implemented in what will be Python 3.5, aims to improve support for zip applications.  Apropos this blog post, the new zipapp module can be used to zip up a directory into a single executable file, with an argument to specify the shebang line, and a few other options.  It's related to what pex does, but without all the PyPI interactions and dependency chasing.  Here's how we can use it to debug a pex file.

Let's ignore snappy for the moment and just create a pex of the world application:

$ pex -r world -o world.pex -e worldlib.__main__:main
Now let's say we want to set a pdb breakpoint in the main() function so that we can debug the program, even when it's a single executable file.  We start by unzipping the pex:
$ mkdir world
$ cd world
$ unzip ../world.pex
If you poke around, you'll notice a __main__.py file in the current directory.  This is pex's own main entry point.  There are also two hidden directories, .bootstrap and .deps.  The former is more pex scaffolding, but inside the latter you'll see the unpacked wheel directories for world and its single dependency.

Drilling down a little farther, you'll see that inside the world wheel is the full source code for world itself.  Set a break point by visiting .deps/world-3.1.1-py2.py3-none-any.whl/worldlib/__main__.py in your editor.  Find the main() function and put this right after the def line:

import pdb; pdb.set_trace()

Save your changes and exit your editor.

At this point, you'll want to have Python 3.5 installed or available.  Let's assume that by the time you read this, Python 3.5 has been released and is the default Python 3 on your system.  If not, you can always download a pre-release of the source code, or just build Python 3.5 from its Mercurial repository.  I'll wait while you do this...

...and we're back!  Okay, now armed with Python 3.5, and still inside the world subdirectory you created above, just do this:

$ python3.5 -m zipapp . -p /usr/bin/python3 -o ../world.dbg

Now, before you can run ../world.dbg and watch the break point do its thing, you need to delete pex's own local cache, otherwise pex will execute the world dependency out of its cache, which won't have the break point set. This is a wart that might be worth reporting and fixing in pex itself.  For now:

$ rm -rf ~/.pex
$ ../world.dbg

And now you should be dropped into pdb almost immediately.

If you wanted to build this debugging pex into a snap, just use the snappy build command directly.  You'll need to add the minimal metadata yourself (since the helper script currently doesn't preserve it).  See the Snappy developer documentation for more details.

Summary and Caveats

There's a lot of interesting technology here; pex for building single file executables of Python applications, and Snappy Ubuntu Core for atomic, transactional system updates and lightweight application deployment to the cloud and things.  These allow you to get started doing some basic deployments of Python applications.  No doubt there are lots of loose ends to clean up, and caveats to be aware of.  Here are some known ones:

  • All of the above only works with Python 3.  I think that's a feature, but you might disagree. ;)   This works on Ubuntu Core for free because Python 3 is an essential piece of the base image.  Working out how to deploy Python 2 as a Snappy framework would be an interesting exercise.
  • When we build a snap from a git repository for an application that isn't on PyPI, I don't currently have a way to also grab some dependencies from PyPI.  The stupid example shown here doesn't have any additional dependencies so it wasn't a problem.  Fixing this should be a fairly simple matter of engineering on the wrapper (pull requests welcome!)
  • We don't really have a great story for cross-compilation of extension modules. Solving this is probably a fairly complex initiative involving the distros, setuptools and other packaging tools, and upstream Python.  For now, your best bet might be to actually build the snap on the actual target hardware.
  • Importing extension modules requires a file system cache because of limitations in the dlopen() API.  There have been rumors of extensions to glibc which would provide a dlopen()-from-memory type of API which could solve this, or upstream Python's zip support may want to grow native support for caching.
Even with these caveats, it's pretty easy to turn a Python application into a Snappy Ubuntu Core application, publish it to the world, and profit!  So what are you waiting for?  Snap to it!

Read more

They say that metaclasses make your head explode. They also say that if you're not absolutely sure what metaclasses are, then you don't need them.

And there you go, happily coding through life, jumping and singing in the meadow, until suddenly you get into a dark forest and find the most feared enemy: you realize that some magic needs to be done.

The necessity

Why might you need metaclasses? Let's look at a specific case: my particular (real-life) experience.

It happened that at work I have a script that verifies the remote scopes service for the Ubuntu Phone, checking that all is nice and crispy.

The test itself is simple. I won't put it here because it's not the point, but it's isolated in a method named _check, which receives the scope name and returns True if all is fine.

So, the first version of the script did this (comments and docstrings removed for brevity):

    class SuperTestCase(unittest.TestCase):

        def test_all_scopes(self):
            for scope in self._all_scopes:
                resp = self._check(scope)
                self.assertTrue(resp, scope)

The problem with this approach is that all the checks are inside the same test. If one check fails, the rest are not executed (because the test is interrupted there, and fails).

Here I found something very interesting: the subTest call (new in Python 3.4):

    class SuperTestCase(unittest.TestCase):

        def test_all_scopes(self):
            for scope in self._all_scopes:
                with self.subTest(scope=scope):
                    resp = self._check(scope)
                    self.assertTrue(resp, scope)

Now each "sub test" is internally executed independently of the others. So they are all executed (all checks are done) even if one or more fail.

Awesome, right? Well, no.

Why not? Because even if internally everything is handled as independent subtests, from an outside point of view it is still one single test.

This has several consequences. One is that the all-in-one test takes too long and you can't tell what is going on (note that each of these checks hits the network!), as the test runner only shows progress per test (not per subtest).

The other inconvenience is that there is no way to ask the script to run only one of those subtests... I can tell it to execute only the all-in-one test, but that would mean executing all the subtests... which, again, takes a lot of time.

So, what did I really need? Something that lets me express the check in a single place, but that in reality becomes several methods. I needed something that, starting from a single method, reproduces it so that the class actually has several of them. That is, writing code for a class that Python would see as something different. That is, metaclasses.

Metaclasses, but easy

Luckily, for a few years now Python has provided a simpler way to achieve much of what can be done with metaclasses: class decorators.

Class decorators, very similar to function decorators, receive the class defined below them, and their return value is taken by Python as the real definition of the class. If you're not familiar with the concept, you can read a little about decorators here, and a deeper article about decorators and metaclasses here, but it's not mandatory.
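If you've never seen one, here's a tiny made-up example of the mechanism: the decorator gets the class object, pokes at it, and whatever it returns is what the class name ends up bound to.

    def add_greeting(klass):
        klass.greet = lambda self: "hello from " + klass.__name__
        return klass

    @add_greeting
    class Foo:
        pass

    print(Foo().greet())   # hello from Foo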

So, I wrote the following class decorator (explained below):

    import inspect

    def test_multiplier(klass):
        """Multiply those multipliable tests."""
        for meth_name in (x for x in dir(klass) if x.startswith("test_")):
            meth = getattr(klass, meth_name)
            argspec = inspect.getfullargspec(meth)

            # only get those methods that are to be multiplied
            if len(argspec.args) == 2 and argspec.defaults and len(argspec.defaults) == 1:
                param_name = argspec.args[1]
                mult_values = argspec.defaults[0]

                # "move" the useful method to something not automatically executable
                delattr(klass, meth_name)
                new_meth_name = "_multiplied_" + meth_name
                assert not hasattr(klass, new_meth_name)
                setattr(klass, new_meth_name, meth)
                new_meth = getattr(klass, new_meth_name)

                # for each of the given values, create a new method which will call the
                # given method with only one value at a time (the current values are bound
                # as defaults so every generated method keeps its own copies)
                for multv in mult_values:
                    def f(self, multv=multv, new_meth=new_meth, param_name=param_name):
                        return new_meth(self, **{param_name: multv})

                    meth_mult_name = meth_name + "_" + multv.replace(" ", "_")[:30]
                    assert not hasattr(klass, meth_mult_name)
                    setattr(klass, meth_mult_name, f)

        return klass

The basics are: it receives a class and returns a slightly modified class ;). For each of the methods that start with "test_", I take those that have two args (not only 'self') and whose second argument has a default value.

So, it would actually get the method defined in the following structure and leave the rest alone:

    class SuperTestCase(unittest.TestCase):

        def test_all_scopes(self, scope=_all_scopes):
            resp = self.checker.hit_search(scope, '')

For that kind of method, the decorator will move it to something not named "test_*" (so we can call it, but it won't be picked up by the automatic test infrastructure), and then create, for each value in the _all_scopes default, a method (with a particular name which doesn't really matter, but needs to be unique and is nicer if it's informative to the user) that calls the original method, passing "scope" with that particular value.

So, for example, let's say that _all_scopes is ['foo', 'bar']. Then the decorator will rename test_all_scopes to _multiplied_test_all_scopes, and create two new methods like this:

    def test_all_scopes_foo(self, multv='foo'):
        return self._multiplied_test_all_scopes(scope=multv)

    def test_all_scopes_bar(self, multv='bar'):
        return self._multiplied_test_all_scopes(scope=multv)

The final effect is that the test infrastructure (internally and externally) finds those two methods (not the original one) and calls them: each one individually, reporting progress individually, with the user able to execute them individually, etc.

So, in the end: all gain, no loss, and a fun little piece of Python code :)

Read more

A few assorted loose notes.

On the project front, Nico and I have been putting quite a bit of work into fades. The truth is that version 2, which we released last week, is really neat... if you use virtualenvs, do take a look at it.

Another project I've been busy with is CDPedia... the internationalization part is in pretty decent shape, and that also led me to revamp the main page it shows when you open it, so I set a new Spanish image building, and a Portuguese one will follow (each image takes about a week!).

A little while ago I uploaded the Django Tutorial in Spanish to the Python Argentina tutorials page (thanks Matías Bordese for the material!). This tutorial used to live on a domain that has now expired, and we thought it would be nice to have everything in one place; it makes it easier for people to find.

Finally, I started organizing my Second Open Python Course. This time I want to hold it around the Palermo area or nearby (last time it was downtown). I haven't booked a venue yet, let alone set dates, but the format will be similar to the previous one. Regarding the venue: if anyone knows a good place to rent "classrooms", let me know :)

Read more
Dustin Kirkland

Gratuitous picture of my pets, the day after we rescued them
The PetName libraries (Shell, Python, Golang) can generate infinite combinations of human readable UUIDs

Some Background

In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal. You can't read them or pronounce them or remember them easily. I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb. From that perspective, they're not very friendly. Certainly not very Ubuntu.

We're not alone, in that respect. Amazon generates forgettable instance names like i-15a4417c, along with most virtual machine and container systems.

Meanwhile, there is a reasonably well-known concept -- Zooko's Triangle -- which says that names should be:
  • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable
  • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
  • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
And, of course we know what XKCD has to say on a somewhat similar matter :-)

So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's own beloved code-naming scheme -- Adjective Animal. To get the number of combinations high enough to model any reasonable MAAS user, though, we used Adjective Noun instead of Adjective Animal.

I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

For the most part, the feature served its purpose. We now get memorable, pronounceable names. However, we get a few oddballs in there from time to time. Most are humorous. But some combinations would prove, in fact, to be inappropriate, or perhaps even offensive to some people.

Accepting that, I started thinking about other solutions.

In the meantime, I realized that Docker had recently launched something similar, their NamesGenerator, which pairs an Adjective with a Famous Scientist's Last Name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).

Similarly, Github itself now also "suggests" random repo names.

I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

On the other hand, their approach is hard-coded into the Docker Golang source itself, and not usable or portable elsewhere, easily.

Moreover, there are only a few dozen Adjectives (57) and Names (76), yielding only about 4K combinations (4,332) -- which is not nearly enough for MAAS's purposes, where we're shooting for 16M+, with minimal collisions (i.e., covering a Class A network).

Introducing the PetName Libraries

I decided to scrap the Nouns list, and instead build a Names list. I started with Last Names (like Docker), but instead focused on First Names, and built a list of about 6,000 names from public census data.  I also built a new list of nearly 38,000 Adjectives.

The combination actually works pretty well! While smelly-Susan isn't particularly charming, it's certainly not an ad hominem attack targeted at any particular Susan! And 6,000 x 38,000 gives us about 228 million unique combinations!

Moreover, I also thought about how I could actually make it infinitely extensible... The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives.   How convenient!

So I built a word list of Adverbs (13,000) as well, and added support for specifying the "number" of words in a PetName.
  1. If you want 1, you get a random Name 
  2. If you want 2, you get a random Adjective followed by a Name 
  3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name 
Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".
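To make those rules concrete, here is a minimal sketch of the composition logic in plain Python; the tiny word lists are placeholders for illustration, not the real petname data:

    # Minimal sketch of the adverbs/adjective/name composition rule described
    # above; the word lists are tiny placeholders, not the real petname data.
    import random

    ADVERBS = ["happily", "quietly", "boldly"]
    ADJECTIVES = ["smelly", "fluffy", "brave"]
    NAMES = ["susan", "oliver", "tita"]

    def generate(words=2, separator="-"):
        """Return N-2 adverbs, then an adjective, then a name."""
        if words == 1:
            parts = [random.choice(NAMES)]
        else:
            parts = [random.choice(ADVERBS) for _ in range(words - 2)]
            parts += [random.choice(ADJECTIVES), random.choice(NAMES)]
        return separator.join(parts)

    print(generate(3))   # e.g. "boldly-fluffy-susan"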

In fact:
  • 2 words will generate over 221 million unique combinations, over 2^27 combinations
  • 3 words will generate over 2.8 trillion unique combinations, over 2^41 combinations (more than 32-bit space)
  • 4 words can generate over 2^55 combinations
  • 5 words can generate over 2^68 combinations (more than 64-bit space)
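Those figures check out with a quick back-of-the-envelope calculation using the approximate word counts quoted above (about 6,000 names, 38,000 adjectives and 13,000 adverbs); the exact totals depend on the final shipped lists, which is why they differ slightly from the figures above:

    # Back-of-the-envelope check of the combination counts, using the
    # approximate word-list sizes quoted above.
    import math

    names, adjectives, adverbs = 6000, 38000, 13000

    def combinations(words):
        """N-2 adverbs, one adjective, one name (just a name for words=1)."""
        if words == 1:
            return names
        return (adverbs ** (words - 2)) * adjectives * names

    for n in (2, 3, 4, 5, 10):
        print("%2d words: %d (over 2^%d)" % (n, combinations(n), math.log2(combinations(n))))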
Interestingly, you need 10 words to cover 128-bit space!  So it's





So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname, which are published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:petname/ppa
$ sudo apt-get update

$ sudo apt-get install petname
$ petname
$ petname -w 3
$ petname -s ":" -w 5


That's only really useful from the command line, though. In MAAS, we'd want this in a native Python library. So it was really easy to create python-petname, source now published at:
The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:python-petname/ppa
$ sudo apt-get update

$ sudo apt-get install python-petname
$ python-petname
$ python-petname -w 4
$ python-petname -s "" -w 2

Using it in your own Python code looks as simple as this:

$ python
>>> import petname
>>> foo = petname.Generate(3, "_")
>>> print(foo)


In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created:
Of course you can use "go get" to fetch the Golang package:

$ export GOPATH=$HOME/go
$ mkdir -p $GOPATH
$ export PATH=$PATH:$GOPATH/bin
$ go get

And also, the packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

$ sudo apt-add-repository ppa:golang-petname/ppa
$ sudo apt-get update

$ sudo apt-get install golang-petname
$ golang-petname
$ golang-petname -words=1
$ golang-petname -separator="|" -words=10

Using it in your own Golang code looks as simple as this:

package main

import (
    "fmt"

    // Import path as published on GitHub for the golang-petname package.
    petname "github.com/dustinkirkland/golang-petname"
)

func main() {
    fmt.Println(petname.Generate(2, ""))
}

Gratuitous picture of my pets, 7 years later.

Read more

Logging levels

When I first started with the idea of logging, having levels seemed like overkill. With time and experience I realised they are essential :)

The Python standard library has a logging module that comes with several predefined levels. Here they are, with a short note on how I use each one, plus a real-life example (taken from my Encuentro program or from fades).

- CRITICAL: I don't think I've ever used it :)

- ERROR: problems of all kinds; things that should not happen, and that are a real inconvenience when they do; often the program does not continue, or continues only partially or in a limited way, after a line of this kind is logged. In this example I log that the list of backends could not be downloaded during an update (in this case the user is also told via a small window, and the program carries on, even though the update did not complete):

    try:
        _, backends_file = yield
    except Exception, e:
        logger.error("Problem when downloading backends: %s", e)
        tell_user("Hubo un PROBLEMA al bajar la lista de backends:", e)

- WARNING: to indicate that something happened that in general should not happen; these are usually not bad things, just anomalous ones, and they don't represent a problematic situation. In the following example I'm recording that I ignore the 'quiet' option the user passed (because they also passed the 'verbose' option, which takes precedence):

    if verbose and quiet:
        l.warning("Overriding 'quiet' option ('verbose' also requested)")

- INFO: general information about the program's operation; things that are essential to know and that we always want recorded. They generally don't involve a large number of lines, but they let you follow the program's execution flow from a high level. Normally the programs that are shipped to users or that run on servers are configured to actually write to disk from this level (see the small configuration sketch after this list). The next two lines show the first thing Encuentro logs when it starts: which Python version it is running under, and which version of itself it is:

    logger.info("Running Python %s on %r", sys.version_info, sys.platform)
    logger.info("Encuentro version: %r", version)

- DEBUG: all the information needed to analyse the program's execution in detail. It can involve large amounts of data, and may even become a problem in terms of disk usage or performance, but programs are generally not run at this level, only during development or when trying to analyse a specific problem. It's not unusual, for example, to ask the user to run the program with a special parameter that configures the logs at this level and to try to reproduce the problem they had, in order to do a forensic analysis of the situation afterwards. In the following example I'm recording that fades had to install pip manually in the virtualenv:

    logger.debug("Installing PIP manually in the virtualenv")
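As an aside, here is a minimal sketch of the kind of configuration described above, where INFO and above always go to disk and DEBUG is only enabled on request (the names here are just examples, not taken from Encuentro or fades):

    # Sketch only: send INFO and above to a file; include DEBUG when asked to.
    import logging

    def setup_logging(verbose=False):
        logger = logging.getLogger("myapp")          # example logger name
        handler = logging.FileHandler("myapp.log")   # example destination
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)-8s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.DEBUG if verbose else logging.INFO)
        return logger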

In very complex systems I have found myself needing a level below DEBUG, to log all that information that could be useful for analysing the program's behaviour but that would normally be too much data (which complicates everything from reading the logs to simply managing the files). So we used a TRACE level, which was almost never turned on, for that purpose.

The snag is that the logging module doesn't have a TRACE level, but we created it by hand:

    TRACE = 5
    logging.addLevelName(TRACE, 'TRACE')

Note that 5: DEBUG is 10, so TRACE ends up "lower". Of course, for everything to work, we had to use a custom Logger:

    class Logger(logging.Logger):
        """Logger that support our custom levels."""

        def trace(self, msg, *args, **kwargs):
            """log at TRACE level"""
            if self.isEnabledFor(TRACE):
                self._log(TRACE, msg, args, **kwargs)
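And a minimal sketch of wiring that custom class in, assuming the TRACE constant and the Logger class above (the logger name is just an example):

    logging.setLoggerClass(Logger)        # must happen before getLogger() is called
    logger = logging.getLogger("myapp")   # example name
    logger.setLevel(TRACE)
    logger.trace("raw payload: %r", {"x": 1})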

For more information about Python's logging infrastructure, and general advice on what, how and when to log what happens, you can check out my talk on the subject (these are the slides, and at some point the video of this same talk, which I gave at the PyCon in Rafaela, will be published here).

Read more

PyCon 2014 in Rafaela

The sixth PyCon Argentina has just finished. As the title says, it was held in Rafaela, in the province of Santa Fe.

Nico Demarchi and I went by car: we left on Wednesday afternoon and arrived a bit past one in the morning, and we came back on Sunday during the day, starting mid-morning. I think that's about the limit of what I'd do by car... any further and I'd go by bus or plane.

I had to arrive on Wednesday night because on Thursday I opened the workshop day with Introducción a Python (in extended-talk mode, since I had two hours). On Thursday I gave two more talks: Cómo debuguear en Python, and Cómo los logs me salvaron el alma. And to close (right before the raffles and the group photo), I told people a bit about how we were doing with the project of setting up the PyAr Civil Association.

My talks went well, although I wasn't entirely happy with how I delivered the debugging one (though I got good feedback afterwards). For the Intro to Python workshop I used Pysenteishon for the first time, a really nice piece of software for flipping through slides from your phone (thanks Emiliano for making it!). And for Thursday's talks I presented barefoot for the first time (something I'd wanted to try for a while, and I took advantage of the auditorium stage having a wooden floor).

Giving the talk barefoot

I also went to a lot of talks; there was plenty of cool stuff to see, and I think I skipped only one or two timeslots in the whole conference. The keynotes were fine, but didn't particularly excite me. Everything venue- and organisation-wise was great; they really outdid themselves. The same goes for the people I met (and met again): it's a pleasure to be part of a community like this.

I brought the camera... but truth be told I slacked off taking photos. Luckily Yami, who is great, took a ton; they're all here. And one of the last ones she took is precisely the group photo, the one I'm showing here:

Group photo

And as always when you don't travel asleep or alone, there's the "extended PyCon" effect. You end up chatting about a thousand things, the most varied stuff, but also about projects, ideas, etc. Nico and I had had an idea bouncing around for making it easier to handle dependencies in Python programs; we talked with people at the conference, got feedback, the idea kept mutating... and on the trip back we finally came up with something neat that shouldn't be too crazy to implement; I'll bring you news about it soon.

But I didn't bring back just one project! (as if I had few projects and/or lots of free time, right?). I want to build a little "timelapse machine" with a Raspberry Pi (a small box you can hang anywhere and leave there for a few hours or a couple of days, and it puts together one of those videos where everything moves fast, like this one). The other project is to put together a suitcase or sturdy box with everything needed at a PyCamp (router, a computer to cache repositories, power, and various etceteras), so everything is ready and easy to set up: you arrive and just plug it in. We'll see how both projects develop...

Read more

Argentine satellites

These past few days the third Argentine nanosatellite, "Tita" (named in honour of Tita Merello), was successfully launched.

They are called "nanosatellites" precisely because they are much smaller (and cheaper) than "traditional" satellites. In particular, Tita weighs about 25 kilos, is equipped with three antennas and carries a camera to take high-definition photos and videos.

The Tita satellite being installed in the launcher

It was developed by the Argentine company Satellogic, but we didn't launch it into space ourselves (we don't have that capability yet); it was launched from the Russian town of Yasny. Its goal is to take images for three years, in collaboration with other nanosatellites: the already launched Capitán Beto (named, obviously, after the Spinetta song) and Manolito (after the Mafalda character), plus another 15 satellites that Satellogic plans to launch over the next year.

But Tita is different from the previous two, which weighed around two kilos. It is also a prototype, and uses the same design and manufacturing strategies with off-the-shelf commercial components (hardware-store springs, electronics from mobile phones and personal computers), but this one can take images and videos at two-metre resolution. Essentially, the Satellogic folks are doing the same thing a conventional satellite does, but at a price between a hundred and a thousand times lower.

In this quite interesting video we can see Lino Barañao (Minister of Science and Technology) and Emiliano Kargieman (CEO of Satellogic) telling us a bit about all of this (and along the way you can see stages of the construction, and the offices, where quite a few PyAr people can be seen working!).

As a final note, I'll leave you with this audio clip of Adrián Paenza talking about the satellites (in general) on the programme La Mañana de Victor Hugo Morales.

Read more

I’ve been writing Juju charms to automate the deployment of a few different services at work, which all happen to be wsgi applications… some Django apps, others with other frameworks. I’ve been using the ansible support for writing charms which makes charm authoring simpler, but even then, essentially each wsgi service charm needs to do the same thing:

  1. Setup specific users
  2. Install the built code into a specific location
  3. Install any package dependencies
  4. Relate to a backend (could be postgresql, could be elasticsearch)
  5. Render the settings
  6. Setup a wsgi (gunicorn) service (via a subordinate charm)
  7. Setup log rotation
  8. Support updating to a new codebase without upgrading the charm
  9. Support rolling updates to a new codebase

Only three of the above really vary from service to service: which package dependencies are required, the rendering of the settings and the backend relation (which usually just causes the settings to be rerendered anyway).

After trying (and failing) to create a nice reusable wsgi-app charm, I’ve switched to utilise ansible’s built-in support for reusable roles and created a charm-bootstrap-wsgi repo on github, which demonstrates all of the above out of the box (the README has an example rolling upgrade). The charm’s playbook is very simple, just reusing the wsgi-app role:


roles:
    - role: wsgi-app
      listen_port: 8080
      wsgi_application: example_wsgi:application
      code_archive: "{{ build_label }}/example-wsgi-app.tar.bzip2"
      when: build_label != ''


and only needs to do two things itself:

    - name: Install any required packages for your app.
      apt: pkg={{ item }} state=latest update_cache=yes
      with_items:
        - python-django
        - python-django-celery
      tags:
        # Hook-name tags, as used by the charm's ansible support.
        - install
        - upgrade-charm

    - name: Write any custom configuration files
      debug: msg="You'd write any custom config files here, then notify the 'Restart wsgi' handler."
      tags:
        - config-changed
        # Also any backend relation-changed hooks for databases etc.
      notify:
        - Restart wsgi


Everything else is provided by the reusable wsgi-app role. For the moment I’ve got the source of the reusable roles in a separate github repo, but I’d like to get these into the charm-helpers project itself eventually. Of course there will be cases where the service charm may need to do quite a bit more custom functionality, but if we can encapsulate and reuse as much as possible, it’s a win for all of us.

If you’re interested in chatting about easier charms with ansible (or any issues you can see), we’ll be getting together for a hangout tomorrow (Jun 11, 2014 at 1400UTC).

Filed under: ansible, juju, python

Read more


Between the trips and the holidays, these last few months I ended up watching a ton of movies. On top of that, not many cool movies showed up to note down for the future.

On the other hand, I haven't been watching many series. Moni and I are watching Battlestar Galactica, and I have several about to start (Black Mirror, Almost Human, Through The Wormhole S03).

But on the movie front I did catch up quite a bit :)

  • Chronicle: -0. The subject of how to handle, explore, and at some point cope with an acquired superpower is very well developed. The rest of the movie isn't worth it.
  • Contagion: -0. It shows the social process in the face of an epidemic in an interesting way (and true to reality, it seems to me), and the acting is fine, but it falls short as a movie, as a told story, as a narrative.
  • Dream house: +0. Predictable, predictable, predicta..WHAT? A crazy twist, the story is good, so is the acting; it tries to be a bit of a horror movie but meh.
  • El hombre de al lado: -0. It has interesting parts, and Daniel Araoz is great, but the story doesn't manage to avoid the shipwreck.
  • Elefante blanco: +0. A reality that one (I, at least) doesn't know; raw, as Trapero usually is. Darín is good as always. The story could be better.
  • Ender's game: +1. A good adaptation of the book, and the movie is really good. Yes, the book is better: it has a whole part that doesn't even appear in the film, and it's much deeper... but none of that stops the movie itself from being good.
  • Habitación en Roma: +1. A beautiful, raw, marvellous movie about "falling in love".
  • Haywire: -0. An action movie with some good actors somewhat wasted; it has good parts, but meh, it's just another one with nothing that makes it specifically worthwhile.
  • Killer Elite: -0. In the end it's nothing more than a story (which is actually good) where a bunch of tough guys spend the whole time measuring who has the longest gun.
  • Margin call: +1. The human inner workings of a financial derailment, impeccably told. I really like the point of view of the company's internal worker; it felt very truthful to me. Very good performances.
  • Men in black III: +0. Fun. More of the same, but with the interesting touch of the time jumps and showing what MIB was like in the past :)
  • Mission: Impossible - Ghost protocol: +0. Still the same far-fetched movie as always, but this time I had quite a good time watching it.
  • Monsters University: +1. As good as the first one, although completely different.
  • No strings attached: -0. Natalie Portman doesn't manage to rescue it; the subject is worn out, they don't give it an interesting twist, and Kutcher, as always, subtracts.
  • Paul: +0. Light comedy, nothing spectacular, good for a few laughs and for enjoying all the extraterrestrial references.
  • The Avengers: +0. A bit too violent, but within limits (it reminded me of Transformers). It was fun. I liked the (scarce) philosophical questions it raises, although in the end it's always the message of "thank goodness we have superheroes to save us when everything goes wrong", which I'm completely opposed to.
  • The King's speech: +1. Fantastic movie, with superb performances and a very interesting story about personal growth.
  • The Ledge: +1. Interesting story, good acting. Very good counterpoints about "religion". Moving. Patrick Wilson better than I expected, and Terrence Howard, as always, very very good.
  • The divide: +1. Very well made. It shows human misery so well that, even though I'm not easily shocked and can stomach (almost) anything, I'm not going to watch it again.
  • The hobbit: The desolation of Smaug: +1. Second part of the trilogy, still very good. Surprising voice of Smaug (the dragon): it's Sherlock!
  • The thing: +1. It's old, but the effects aren't that bad. And it seems to have a ton of clichés... until you realise that back then they weren't clichés yet! ;)

The newly noted ones:

Finally, the count of pending movies by date:

(Sep-2008)    6
(Jan-2009)   18  12   1   1
(May-2009)   11  10   5
(Oct-2009)   16  15  14
(Mar-2010)   18  18  18  16   4
(Sep-2010)   18  18  18  18  18   9   2   1
(Dec-2010)   13  13  13  12  12  12   5   1
(Apr-2011)   23  23  23  23  23  23  22  17   4
(Aug-2011)       12  12  11  11  11  11  11  11   4
(Jan-2012)           21  21  18  17  17  17  17  11
(Jul-2012)               15  15  15  15  15  15  14
(Nov-2012)                   12  12  11  11  11  11
(Feb-2013)                       19  19  16  15  14
(Jun-2013)                           19  18  16  15
(Sep-2013)                               18  18  18
(Dec-2013)                                   14  14
(Apr-2014)                                        9
Total:      123 121 125 117 113 118 121 125 121 110

Read more

PyCamp 2014

Another PyCamp has come and gone. As always, great. I was chatting about it with Moni: it's remarkable how the format of the event doesn't decline year after year; they keep being wonderful!

That said, I'm going to try to innovate in how I describe it, get away from doing a chronology, and orient the account more around situations.

Arriving and leaving

The trips went fine. Like last year, I stayed until "closing the event", and also like last year, after emptying the place, handing over the key and all that, a few of us stayed having some beers at the venue's bar, each leaving depending on our bus schedule.

The difference was in the arrival, since this year we didn't have the Young Man in Charge of Accessibility and Connectivity (JOAC), so we had to set up the whole network infrastructure without knowing too much about it. I travelled with Nico Demarchi, so on arrival we got down to it... and although it's not rocket science, it isn't trivial either, and it took us about three hours to get everything nice!

A Lightsaber Antenna


The project of mine I worked on the most was Encuentro, partly on this library for parsing SWFs that I've been needing, but also because several people signed up for this project... and they put in a ton of work! Three branches from Mica Bressan, two from Nico and one from Emiliano Dalla Verde Marcozzi, and I think there's another one floating around out there.

I also worked on a new project, which started at this PyCamp. It's WeFree, a project to collaboratively store network keys, so that your computer or phone connects automatically in as many places as possible. I took part the whole first day, in the general design, and then I put together the graphical interface for the computer (not all of it, but the base, leaving something usable).

Something else I worked on from scratch, though I'm not sure it qualifies as a project, was something like the "search for the perfect test runner", which I described in this post. With Martín Gaitán's help we took nose as a base and kept adding plugins and trying them out. The experiment was a success, we achieved everything we wanted; I'll put up a post here explaining the details properly.

There was one project I brought but didn't work on myself, Linkode, but Seba Alvarez was doing cool things with the interface; he has to send me the code.

Finally, I started out helping a couple of guys migrate code to Python 3, but we didn't do much of that (although we learned some interesting details).

Working on Encuentro with Nico, Mica and Emi (who took the photo)

The nights

Only three, because on the last day you travel, but we made the most of them :)... people go to bed surprisingly late considering how demanding the days are. Well, what's more surprising is that many of us get up early the next day :p

The first night I played a game I didn't know, Munchkin, and I won! The game is good, but it's one of those where you have to read a thousand little cards, so the first ten times you play it's a bit slow.

Saturday was the PyAr meeting, and afterwards I chatted with people and did some programming; I didn't play anything.

The third night was a double feature... Munchkin first (Matías won), and then we played Carrera de Mente. I hadn't played Carrera de Mente in about 15 years; I didn't remember it being so much fun! We laughed a lot.

Carrera de Mente

Side notes

This year Alecu couldn't come... so Diego Sarmentero came up with the idea of naming him Inspirational Leader, and had two framed pictures printed, one to have around during the day and another to have after the dinners.

As far as "outdoor activities" go, this year we repeated last year's walk down to the river (a small group of about 10 of us went), and I also played paddle tennis with Hugo Ruscitti, Emilio Ramirez and Hernán Lozano. We played a lot! Well, less than two hours, but we managed to fit in two matches (five quick sets).

We also held a key signing party, and Juanjo Ciarlante talked to us a bit about security and good practices.



Quite simply, I'll say it once more: PyCamp is the best event of the year! (all the photos are here).

Read more
David Murphy (schwuk)

Today I was adding tox and Travis-CI support to a Django project, and I ran into a problem: our project doesn’t have a setup.py. Of course I could have added one, but since by convention we don’t package our Django projects (Django applications are a different story) – instead we use virtualenv and pip requirements files – I wanted to see if I could make tox work without changing our project.

Turns out it is quite easy: just add the following three directives to your tox.ini.

In your [tox] section, tell tox not to build and install an sdist of your project:

skipsdist = True

In your [testenv] section make tox install your requirements (see here for more details):

deps = -r{toxinidir}/dev-requirements.txt

Finally, also in your [testenv] section, tell tox how to run your tests:

commands = python manage.py test

Now you can run tox, and your tests should run!

For reference, here is the complete (albeit minimal) tox.ini file I used:

[tox]
envlist = py27
skipsdist = True

[testenv]
deps = -r{toxinidir}/dev-requirements.txt
setenv =
    PYTHONPATH = {toxinidir}:{toxinidir}
commands = python manage.py test

Read more

Running tests

In a programmer's life there is a task that takes up quite a bit of time: running tests, whether "unit tests" or "integration tests" (tests where subsystems are made to interact with each other).

True, not all projects have tests, but they should. And they're addictive! Once you've tried them, you want tests in every project. But of course, tests have to be run, and there are many ways to do that.

The truth is that the structure of the tests is always the same (or almost always), obviously speaking of Python projects, but the way they are run, and especially the way the results are presented, varies a lot from one test runner to another.

Over the years I have tried several of them, and I have to say that none meets 100% of what I would like to have in the ideal test runner. On the other hand, surely some (like nosetests, for example) meet a large percentage of what I want; it's a matter of achieving what's missing.

Here is the little list of things my dream test runner would fulfil. I proposed a project at this month's PyCamp to work on this (obviously not to write something from scratch, but to reach the goal with the least possible effort).

I gave each item a number to make it easier to reference in any discussion (there is also a small sketch after the list showing how the first items already map onto the standard library):

01. It should let me pass it a directory (defaulting to '.') and discover everything there and below:

        $ <testrunner> project/tests/
        $ <testrunner>

02. It should let me pass it a file, and run only the tests in that file:

        $ <testrunner> project/tests/

03. It should let me pass it "Python import paths", and run only the tests of that package, module, class, or whatever applies:

        $ <testrunner> project.tests
        $ <testrunner> project.tests.test_stuff
        $ <testrunner> project.tests.test_stuff.StuffTestCase
        $ <testrunner> project.tests.test_stuff.StuffTestCase.test_feature

04. It should let me pass it a regex so it runs only what it finds in the full path of the method:

        $ <testrunner> project/tests/ --search feature
            would not run:

        $ <testrunner> project/tests/ --search feature$
            would not run:

05. It should let me tell it to stop running the tests on finding the first error or failure.

06. It should let me tell it to measure the time taken by each test (and at the end present a report with the N slowest tests).

07. It should show the results using the package/module/class/method names, in a tree hierarchy or on a single line:

        $ <testrunner> project/tests/
            test_feature_1                        OK
            test_feature_2                      FAIL
            test_feature_A                        OK

        $ <testrunner> project/tests/
        OK    project.tests.test_stuff.StuffTestCase.test_feature_1
        FAIL  project.tests.test_stuff.StuffTestCase.test_feature_2
        OK    project.tests.test_stuff.OtherStuffTestCase.test_feature_A

    In any case, this does not affect the execution order of the tests (sequential, random, etc.); it is only about how the results are shown.

08. OKs should be green; ERRORs and FAILs should be red.

09. The OKs/FAILs/ERRORs for each test, in the listing, should be vertically aligned.

10. It should not capture stdout/stderr.

11. In the final report (after the listing it shows while running everything), it should show the full path of the failing test (or tests), together with the error(s), so that if you copy and paste that path, it works for running that single test.
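As a point of reference, here's a minimal sketch of how items 01 and 03 already map onto the standard library's unittest loader (the "project..." paths are just the hypothetical examples from the list); the rest of the wishlist is exactly where dedicated runners and their plugins come in:

    # Sketch: items 01 and 03 with the standard library only.
    import unittest

    loader = unittest.TestLoader()

    # Item 01: discover every test under a directory (default pattern "test*.py").
    suite = loader.discover("project/tests")

    # Item 03: load by dotted "Python import path" (package/module/class/method).
    single = loader.loadTestsFromName(
        "project.tests.test_stuff.StuffTestCase.test_feature")

    unittest.TextTestRunner(verbosity=2).run(suite)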

Read more
Michael Hall

Ubuntu API Website

For much of the past year I’ve been working on the Ubuntu API Website, a Django project for hosting all of the API documentation for the Ubuntu SDK, covering a variety of languages, toolkits and libraries.  It’s been a lot of work for just one person; to make it really awesome I’m going to need help from you guys and gals in the community.

To help smooth the onramp to getting started, here is a breakdown of the different components in the site and how they all fit together.  You should grab a copy of the branch from Launchpad so you can follow along by running: bzr branch lp:ubuntu-api-website


First off, let’s talk about the framework.  The API website uses Django, a very popular Python webapp framework that’s also used by other community-run Ubuntu websites, such as Summit and the LoCo Team Portal, which makes it a good fit. A Django project consists of one or more Django “apps”, which I will cover below.  Each app consists of “models”, which use the Django ORM (Object-Relational Mapping) to handle all of the database interactions for us, so we can stick to just Python and not worry about SQL.  Apps also have “views”, which are classes or functions that are called when a URL is requested.  Finally, Django provides a default templating engine that views can use to produce HTML.
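For a flavour of what those pieces look like, here is a purely illustrative sketch (not actual code from the site) of a model handled by the ORM and a view that renders a template:

    # Illustrative sketch only -- not actual code from the ubuntu-api-website branch.
    from django.db import models
    from django.shortcuts import render

    class Topic(models.Model):
        """A top-level documentation topic, e.g. 'qml' or 'html5'."""
        name = models.CharField(max_length=64)

        def __unicode__(self):
            return self.name

    def topic_list(request):
        """View: fetch all topics via the ORM and hand them to a template."""
        topics = Topic.objects.all()
        return render(request, "topic_list.html", {"topics": topics})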

If you’re not familiar with Django already, you should take the online Tutorial.  It only takes about an hour to go through it all, and by the end you’ll have learned all of the fundamental things about building a Django site.

Branch Root

When you first get the branch you’ll see one folder and a handful of files.  The folder, developer_network, is the Django project root; inside it is all of the source code for the website.  Most of your time is going to be spent in there.

Also in the branch root you’ll find some files that are used for managing the project itself. Most important of these is the README file, which gives step by step instructions for getting it running on your machine. You will want to follow these instructions before you start changing code. Among the instructions is using the requirements.txt file, also in the branch root, to setup a virtualenv environment.  Virtualenv lets you create a Python runtime specifically for this project, without it conflicting with your system-wide Python installation.

The other files you can ignore for now, they’re used for packaging and deploying the site, you won’t need them during development.


As I mentioned above, this folder is the Django project root.  It has sub-folders for each of the Django apps used by this project. I will go into more detail on each of these apps below.

This folder also contains three important files for Django: manage.py, settings.py and urls.py. manage.py is used for a number of commands you can give to Django.  In the README you’ll have seen it used to call syncdb, migrate and initdb.  These create the database tables, apply any table schema changes, and load them with initial data. These commands only need to be run once.  It also has you run collectstatic and runserver. The first collects static files (images, css, javascript, etc) from all of the apps and puts them all into a single ./static/ folder in the project root; you’ll need to run that whenever you change one of those files in an app.  The second, runserver, runs a local HTTP server for your app, which is very handy during development when you don’t want to be bothered with a full Apache server. You can run this anytime you want to see your site “live”. settings.py contains all of the Django configuration for the project.  There’s too much to go into detail on here, and you’ll rarely need to touch it anyway. urls.py is the file that maps URLs to an application’s views: it’s basically a list of regular-expressions that try to match the requested URL, and a python function or class to call for that match. If you took the Django project tutorial I recommended above, you should have a pretty good understanding of what it does. If you ever add a new view, you’ll need to add a corresponding line to this file in order for Django to know about it. If you want to know what view handles a given URL, you can just look it up here.


If you followed the README in the branch root, the first thing it has you do is grab another bzr branch and put it in ./developer_network/ubuntu_website.  This is a Django app that does nothing more than provide a base template for all of your project’s pages. It’s generic enough to be used by other Django-powered websites, so it’s kept in a separate branch that each one can pull from.  It’s rare that you’ll need to make changes in here, but if you do, just remember that you need to push your changes to the ubuntu-community-webthemes project on Launchpad.


This is a 3rd party Django app that provides the RESTful JSON API for the site. You should not make changes to this app, since that would put us out of sync with the upstream code, and would make it difficult to pull in updates from them in the future.  All of the code specific to the Ubuntu API Website’s services is in the developer_network/service/ app.


This app isn’t being used yet, but it is intended for giving better search functionality to the site. There are some models here already, but nothing that is being used.  So if searching is your thing, this is the app you’ll want to work in.


This is another app that isn’t being used yet, but is intended to allow users to link additional content to the API documentation. This is one of the major goals of the site, and a relatively easy area to get started contributing. There are already models defined for code snippets, Images and links. Snippets and Links should be relatively straightforward to implement. Images will be a little harder, because the site runs on multiple instances in the cloud, and each instance will need access to the image, so we can’t just use the Django default of saving them to local files. This is the best place for you to make an impact on the site.


The common app provides views for logging in and out of the app, as well as views for handling 404 and 500 errors when they arise.  It also provides some base models for the site’s page hierarchy. This starts with a Topic at the top, which would be qml or html5 in our site, followed by a Version which lets us host different sets of docs for the different supported releases of Ubuntu. Finally each set of docs is placed within a Section, such as Graphical Interface or Platform Service, to help the user browse them based on use.


This app provides models that correspond directly to pieces of documentation that are being imported.  Documentation can be imported either as an Element that represents a specific part of the API, such as a class or function, or as a Page that represents long-form text on how to use the Elements themselves.  Each one of these may also have a given Namespace attached to it, if the imported language supports it, to further categorize them.


Finally we get to the app that actually generates the pages.  This app has no models, but uses the ones defined in the common and apidocs apps.  This app defines all of the views and templates used by the website’s pages, so no matter what you are working on there’s a good chance you’ll need to make changes in here too. The templates defined here use the ones in ubuntu_website as a base, and then add site- and page-specific markup for each.

Getting Started

If you’re still reading this far down, congratulations! You have all the information you need to dive in and start turning a boring but functional website into a dynamic, collaborative information hub for Ubuntu app developers. But you don’t need to go it alone, I’m on IRC all the time, so come find me (mhall119) in #ubuntu-website or #ubuntu-app-devel on Freenode and let me know where you want to start. If you don’t do IRC, leave a comment below and I’ll respond to it. And of course you can find the project, file bugs (or pick bugs to fix) and get the code all from the Launchpad project.

Read more