Canonical Voices

Dustin Kirkland

I hope you'll enjoy a shiny new 6-part blog series I recently published:
  1. The first article is a bit of backstory: a behind-the-scenes look at the motivations, timelines, and some of the work between Microsoft and Canonical to bring Ubuntu to Windows.
  2. The second article is an updated getting-started guide, with screenshots, showing a Windows 10 user exactly how to enable and run Ubuntu on Windows.
  3. The third article walks through a dozen or so examples of the most essential command line utilities a Windows user, new to Ubuntu (and Bash), should absolutely learn.
  4. The fourth article shows how to write and execute your first script, "Howdy, Windows!", in 6 different dynamic scripting languages (Bash, Python, Perl, Ruby, PHP, and NodeJS).
  5. The fifth article demonstrates how to write, compile, and execute your first program in 7 different compiled programming languages, including C, C++, Fortran, and Golang.
  6. The sixth and final article conducts some performance benchmarks of the CPU, Memory, Disk, and Network, in both native Ubuntu on a physical machine, and Ubuntu on Windows running on the same system.
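In the spirit of the fourth article, the Bash version of that first script is a one-liner (an illustrative sketch, not the article's exact code):

```shell
#!/bin/bash
# "Howdy, Windows!" in Bash, the first of the six scripting languages covered
echo "Howdy, Windows!"
```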
I really enjoyed writing these. Hopefully you'll try some of the examples, and share your experiences using Ubuntu native utilities on a Windows desktop. You can find the source code of the programming examples on GitHub and Launchpad:

Read more
Steph Wilson

Back in June we hosted a competition that asked developers to use the AdaptivePageLayout component from the UI toolkit to create an app that converges across devices. We had some very impressive entries that used the component in different ways; choosing a winner was hard. However, after testing all the apps the design team chose a winner: the Timer App.

How does the AdaptivePageLayout work?

The AdaptivePageLayout component eliminates guesswork for developers when adapting from one form factor to another. It works by tracking an infinite number of virtual columns that may be displayed on a screen at once. For example, an app will automatically switch between a 1-panel and 2-panel layout when the user changes the size of the window or surface, by dragging the app from the main stage to the side stage.

You can read more about convergence and how the adaptive page layout works in the App design guides.

Timer app

The Timer app impressed the design team the most with its slick transitions, well thought-out design and ease of use. It used the AdaptivePageLayout well when translating to different screen sizes.

Design feedback

  • Design: well-considered touches in design, animation and various cool themes.
  • Usability: a favourite is the ability to drag seconds / minutes / hours directly on the clock.
  • Convergence: adjusts beautifully to different screen sizes.

[Screenshots: Timer app]

Other entries

Thank you to everyone who participated and made their apps look even slicker. Here are the other entries:

2nd: AIDA64 App

  • Design: clean, readable with clear content
  • Usability: pretty flawless
  • Convergence: the Adaptive Page Layout suits this type of application and is put to good use

3rd: Movie Time

  • Design: functional with good management of all the content
  • Usability: live search results work smoothly, as do the trailer links
  • Convergence: the gridview of poster art lends itself well to various screen sizes

4th: Ubuntu Hangups

  • Design: clean and follows guidelines well
  • Usability: easy to message / chat, with user-friendly functionality
  • Convergence: easy to use particularly on Phone and Tablet

5th: uBeginner

  • Design: basic and clean
  • Usability: information is well-presented
  • Convergence: uses the Adaptive Page Layout well

Try it yourself!

To get involved in building apps, click here.

Read more
Anthony Dillon

Web team hack day

Last week the developers in the web team swapped the office for the lobby of the hotel across the street. The day was set up to let us leave our daily tasks behind and think about ideas we would like to work on.

The morning started with coffee and brainstorming on sofas. We collected a list of ideas that each person would like to work on. The ideas ranged from IRC bots to a performance audit of a few of our sites.

Choosing ideas

We wrote all the ideas on post-it notes and laid them out on the table. Then each of us chose the idea we were most interested in by putting our hand on it. It worked out to an almost perfect split of two people per idea, so we broke up into our teams and got to work.

Here are the things we worked on during this “hack day”.

IRC bots

These are bots that can listen to an action and report it to our IRC channel. For example, the creator of a pull request wouldn’t have to paste a link to their PR into our channel to be picked up for review.

This task was picked up by Karl and Will, who started by setting up a Hubot on Heroku. They attached a webhook to all projects under the ubuntudesign organisation to listen for pull requests and report them in the web team channel.

This bot can be used for many other things like reporting deployments, CI failures, etc. We also discussed a method of subscribing to the notifications you are interested in, instead of the whole team being notified about everything.

Asset manager search improvements

Our asset manager and server act as internal asset storage. Storing an asset there gives us a link to it which can be used by any website. As there are many assets in the asset manager, we usually need to search existing assets to see if one already exists before making a new one.

Graham and I picked this task and started by working out how to set up both the manager and server locally.

Previously the search would return results that contained either word of a two-word query; now a result has to contain all of the search terms to be returned.
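The service itself isn't shown here, but the change in semantics can be sketched in shell (file names and terms are made up for illustration): chaining `grep -l` narrows the candidate list one term at a time, so only files containing all terms survive.

```shell
# Set up two fake assets (illustrative only)
mkdir -p /tmp/assets
echo "ubuntu logo orange" > /tmp/assets/a.txt
echo "ubuntu wallpaper"   > /tmp/assets/b.txt

# Old behaviour: match EITHER term -- both files are returned
grep -rl -e "ubuntu" -e "logo" /tmp/assets

# New behaviour: ALL terms must match -- only a.txt is returned
grep -rl "ubuntu" /tmp/assets | xargs grep -l "logo"
```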

We added our Vanilla framework to the front end, as it makes sense to use our framework for all internal and external projects.

We have also implemented filtering results by file type, which makes it easy to go through what can sometimes be dozens of search results.

GitHub CMS

GitHub CMS is a nicer, more restricted interface that the marketing team can use to edit the GitHub repositories containing page content.

Rich and Robin picked this task and began work on it straight away, by discussing the best approach and list of possible features.

Even though Robin was also helping out with setting up the asset manager and server locally for Graham and me, he still managed to investigate the best Python framework to use and selected one. Rich, on the other hand, went ahead with the front end and developed a bunch of page templates using the new MAAS GUI Vanilla theme.

Commit linting

Commit linting is a service that gives a project committer a nice step-by-step wizard for building a high-quality commit message.

Barry picked up this task and got the service up and running, but hit a blocker at the point of choosing between different methods of committing. For instance, Tower would bypass this step, and we do not necessarily want to dictate to contributors which way they should commit code. This is something we will leave as an investigation for the time being.
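Barry's implementation isn't reproduced here, but a minimal sketch of the kind of rules such a linter enforces might look like this (the function name and the specific rules are assumptions, not the actual service):

```shell
# Hypothetical commit-message checks, usable e.g. from a git commit-msg hook.
# Takes the path to a file containing the proposed commit message.
lint_commit_msg() {
    subject=$(head -n 1 "$1")

    # Keep the subject line short (a common convention is 50 characters)
    if [ "${#subject}" -gt 50 ]; then
        echo "error: subject line longer than 50 characters" >&2
        return 1
    fi

    # The second line, if present, should be blank
    if [ -n "$(sed -n '2p' "$1")" ]; then
        echo "error: second line should be blank" >&2
        return 1
    fi

    echo "commit message looks good"
}
```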


Our hack day went well, in fact better than I imagined it would. We all had fun and got to work on things we find important but struggle to get prioritised in our day-to-day work. It gave the developers a feeling of achievement and the buzz of landing and releasing something at the end of the day.

We will be attempting to do a hack day once every month or so, so watch this space!

Read more
Andrea Bernabei

Refreshed scrollbars!

You may have noticed that the scrollbars available on Ubuntu Touch and the Unity8 environment have recently received a huge overhaul in both visual appearance and user experience. More specifically, we redesigned the Scrollbar component (which is already provided in the Ubuntu UI Toolkit) and added a new ScrollView component that builds on top of it and caters for convergence.

Technical note to app developers: the Scrollbar component is still available for compatibility purposes, but we recommend transitioning to the new and convergent ScrollView.

How did we do it?

The process was as follows:

  • Specified the interaction design
  • Applied visual styling to the component
  • Prototyped the component
  • Iterated over the design and performed user testing
  • Sent it for review to the SDK team and integrated it into the UI Toolkit

We started by researching the field, exploring the possibilities and thoroughly analyzing the history of the scrollbar component. The output of this step was an interaction specification. The visual design team picked that up and applied our visual style to the component, ensuring a consistent visual language across all the elements provided by the UI Toolkit.

Once we had a first draft of the interaction and visual specs, we created a prototype of the component.
We then iterated over the design choices and refined the prototype. While iterating we also took into account the results of the user testing research that we conducted in the meantime. The testers found the new scrollbars easy to use and visually appealing. The only pain point highlighted by the research was the stepper buttons, which were deemed a bit small. We refined them by creating a new, crisper graphic asset and by tweaking their size as well as the visual feedback they provide, especially when hovered with a mouse.

Once we were happy with the result, we submitted our work for review by the SDK team. The SDK team are the final gatekeepers and decide whether the implementation of a component is ready to be merged into the UI Toolkit or not. The review process can be lengthy, but it is of great help in ensuring higher code quality, fewer bugs and clearer documentation. Once the SDK team gave the green light, the component was merged for the next release of the UI Toolkit.

What did we change?

A critical requirement of the new solution was to be “convergence-ready”. Convergence means implementing a UI that scales not just across form factors, but also across different input devices. This is particularly important in the case of scrolling, as it must be responsive to all input devices.

The new ScrollView can be interacted with using the touchscreen of your phone or tablet, but thanks to convergence, now your tablet can turn into a full-fledged computing device with a hardware keyboard and a pointer device, such as a mouse. The component must be up to the task and adapt to the capabilities of the input device currently in use.

At any time, the new scrollbar is in one of three modes:

  • Indicator mode, where the scrollbar is a non-interactive visual aid;
  • Thumb mode, that allows quick scrolling using a touch screen;
  • Stepper mode, optimized for pointer-device interactions.

Let’s go through the modes in more detail.

Indicator mode

Whenever the user scrolls content without directly interacting with the scrollbar, i.e. they perform a flick or use the mouse wheel or keyboard keys, the scrollbar gently fades in as an overlay on top of the content. In this mode the scrollbar is not interactive and just acts as a visual aid, providing information about the position of the content. The indicator gently fades out after a short timeout once the surface stops scrolling.




Thumb mode

Imagine you want to send a picture to a friend of yours, but the file is somewhere down the very lengthy grid of pictures. Let’s also suppose you’re using a smartphone or tablet and you have no mouse or keyboard connected to it. Wouldn’t it be handy to have a way to quickly scroll a long distance without having to repeatedly flick the list? We designed Thumb mode to address that use case.

When the content on screen reaches a length of 10 or more pages, the thin indicator provided by the indicator mode grows thicker into an interactive thumb. That marks the transition to the Thumb mode. While the scrollbar is in Thumb mode you can drag the thumb using touchscreen to quickly scroll the content.

The component still fades out when the user stops interacting with the surface and the surface stops scrolling, in order to leave as much real estate to the application content as possible.

Stepper mode


When the user is interacting with the UI using a pointer device, they expect a different experience than with a touchscreen. Pointer devices allow for much more precise interactions and also provide different ways of interacting with UI components, such as hovering. A good convergent component exploits those additional capabilities to provide the best user experience for the input device currently in use.

When the user hovers over the area occupied by the (initially hidden) scrollbar, the bar reveals itself, this time in what we call Stepper mode.

This mode is optimized for pointer device interactions, although it can be interacted with using touchscreen as well. More generally, for a component to be defined convergent the user must be able to start interacting with it using one input device (in this case, a mouse) and switch to another (e.g. touch screen) whenever they wish to. That transition must be effortless and non disruptive.

When in Stepper mode, the scrollbar has a thick and interactive thumb, similar to the Thumb mode presented in the previous section. However, Stepper mode also provides a semi-transparent background and the two clickable stepper buttons desktop users are already accustomed to. The stepper buttons can be clicked to scroll a short distance. Holding a stepper button pressed will scroll multiple times.

The areas above and below the thumb are also interactive. You can click/tap or press-and-hold to scroll by one or more pages.

Once the user moves the pointer away from the scrollbar area and the surface stops scrolling, the component elegantly fades out, just like in the other modes.

Visual convergence

We put a lot of effort into making the transitions between the different modes as smooth and visually pleasing as possible. The alignment of the sub-components (the thumb, its background, the stepper buttons), their sizes, and their colours have been carefully chosen to achieve that goal.

When the bar grows from Indicator to Thumb mode and vice versa, it does so by anchoring one side and expanding only the opposite one. This minimizes unexpected movement and produces a simple yet crisp animated transition. The same principles apply to the transitions from Thumb to Stepper mode, and from Indicator to Stepper mode and vice versa. We wanted to create transitions that would look elegant but not distracting.

The new scrollbar also provides visual aids to indicate when a pointer device is hovering over any of the sub components. Both the stepper buttons and the thumb react to hovering by adjusting their colours.

Scroller variations

Interaction handling convergence

A lot of effort has gone into tweaking the interactions to provide an effortless interaction model. Here’s a summary of how we handle touch screen and pointer devices:

  • Thumb mode features a thicker interactive thumb to allow quick scrolling using touch screen;
  • Press-and-holding the stepper buttons provides an effortless way to perform multiple short scrolls;
  • Press-and-holding the areas above and below the thumb provides easy multiple-page scrolling;
  • Mouse hovering is exploited to reveal or hide the scrollbar;
  • Visual feedback on press/tap;
  • Visual feedback on pointer device hover.

It is a lot of small (and sometimes trivial!) details that make for a great user experience.

Some of you might be wondering: “what about keyboard input?”

I’m glad you asked! That is an important feature for realizing full convergence. The ScrollView component handles it transparently for you. Scrolling content using the keyboard is just as easy as scrolling using the touchscreen or any pointer device:

  • Arrow keys trigger a short scroll;
  • PageUp/PageDown trigger a page scroll;
  • Home/End keys trigger scrolling to the top/bottom of the content, respectively;
  • Holding a key down triggers multiple scrolls.

What did we achieve?

The new scrollbars fully implement our vision of convergence. The user can interact with any of the input devices they have available and switch from one to the other at any time. The interactions feel snappy, and we think the component looks great too!

We can’t wait for you to try it and let us know your opinion!

What does the future hold?

The focus so far has been on getting the right visual appearance and user experience.

However, in order to have a complete solution, we also need to make sure that adding a feature such as scrollbars to applications does not come with a big performance drawback. Ideally, all scrollable surfaces (images, text fields, etc.) should include a scrollbar, which means it’s very important to provide a component that is not just easy to use and visually appealing but also extremely performant.

There are two main aspects where the performance of this component comes into play. The first is the performance of interactions, so that they happen immediately and without unexpected delays; I believe we’re in very good shape there. The second is the time it takes to create a scrollbar when an application needs one; this affects application startup time and the time it takes to load a new view that holds scrollable content.

A few changes have already been implemented, which has resulted in a speed-up of about 25%. These changes should be released with OTA13.

If you have ideas or want to provide any feedback, here are the contact details of the people who worked on this project.

IRC: #ubuntu-touch channel on FreeNode server

Alternatively, start a thread on the ubuntu-phone mailing list.

Read more
Justin McPherson

Introducing React Native Ubuntu

In the Webapps team at Canonical, we are always looking to make sure that web and near-web technologies are available to developers. We want to make everyone's life easier, enable the use of tools that are familiar to web developers and provide an easy path to using them on the Ubuntu platform.

We have support for web applications and for creating and packaging Cordova applications; both of these enable any web framework to be used in creating great application experiences on the Ubuntu platform.

One popular web framework that can be used in these environments is React.js: a UI framework with a declarative programming model and a strong component system. It focuses primarily on the composition of the UI, so you can use what you like elsewhere.

While these environments are great, sometimes you need just that bit more performance, or to work with native UI components directly, but working in a less familiar environment might not be a good use of time. If you are familiar with React.js, it's easy to move into full native development with all your existing knowledge and tools by developing with React Native. React Native is the sister to React.js: you can use the same style and code to create an application that works directly with native components at native levels of performance, but with the ease and rapid development you would expect.

We are happy to announce that along with our HTML5 application support, it is now possible to develop React Native applications on the Ubuntu platform. You can port existing iOS or Android React Native applications, or you can start a new application leveraging your web-dev skills.

You can find the source code for React Native Ubuntu here.

To get started, follow the instructions and create your first application.

The Ubuntu support includes the ability to generate packages. Managed by the React Native CLI, building a snap is as easy as 'react-native package-ubuntu --snap'. It's also possible to build a click package for Ubuntu devices, meaning React Native Ubuntu apps are store-ready from the start.

Over the next little while there will be blog posts on everything you need to know about developing a React Native application for the Ubuntu platform: creating the app, the development process, packaging and releasing to the store. There will also be some information on how to develop new reusable modules that can add extra functionality to the runtime and be distributed as Node Package Manager (npm) modules.

Go and experiment, and see what you can create.

Read more

This is the third in a series of blog posts detailing my experience acclimating to a fully remote work experience. You may also enjoy my original posts detailing my first week and my first month.

Has it really been 4 months since I started riding the raging river of remote work? After my first month, I felt pretty good about my daily schedule and work/life balance. Since the last post, I’ve met many of my co-workers IRL, learned to find my own work, and figured out how to shake things up when I get in a rut.

But First, Did I Handle My Action Items?

During my first month, I struggled to figure out what to work on after finishing a task. At this point, it’s extremely rare that I don’t have a dozen things in the queue, like a pile of papers in an inbox about to topple over. I am happily busy all the time, either pulling things off the top of the stack, plucking things from the middle, or coming up with some new feature that I need to add.

I feel a lot more comfortable on chat. Our IRC channels are a bit daunting at first, especially when there’s a lot of action going on. I’ve learned some good ways to interject or reach out to the people that I need to talk to.

Oh, and I still haven’t gotten a new floormat. Yes, this one’s still broken. For some reason, it just doesn’t annoy me as much as it used to. It’s almost endearing: like a three-legged puppy that I roll my chair across and stand on top of for 8 hours a day.

Meeting IRL

Why would you guys actually meet IRL? - Elliot, Mr Robot

I know a bunch of my coworkers by IRC nicks, and I see a few of their faces in a Google Hangout for our daily standup. This has been sufficient for me, but we did something truly magical in June. We met IRL.

The team I’m on (~10 people) and our sibling team (~10 people) met in Montréal, Québec, Canada, for about a week, and it was unlike any meeting-of-the-minds I’ve ever been to. The engineers were largely from the US and Europe. We all stayed in the hotel downtown and used a small conference room to hang out and work all day, every day. We woke up and ate breakfast together, met in the conference room at 9, had snacks and coffee, ate lunch together, stopped working precisely at 6, and met back in the lobby a few minutes later to go out on the town until 11 or midnight. It’s a truly intense social experience, especially for a group of people who spend most of their days only interacting with other humans through IRC, especially for a group of people who only meet twice a year or so.

This coming together allowed us to hash out a lot of our plans for the coming months, but I believe the true victory in this type of event is the camaraderie it creates. It’s nice to be able to put faces to nicks, think about the inflection in a person’s voice coming out in the way they type, and know exactly who I need to ping on IRC to accomplish certain tasks. It’s fun to hang out with people from all over the world, and it’s fun to go drinking with your coworkers, a thing I had temporarily forgotten.

I’ll note that we also happened to be in Montréal during the 23rd annual Mondial de la Biere, a huge international beer festival that lasted several days. I’ll also note that it was fun to try to speak little bits of French in Montréal, and I’m really looking forward to wherever the next sprint abroad may take us (most likely Europe in October).

Finding Work

With a decentralized company, the issue of finding things to work on for new employees can be tough. Do you give them something small and easy and possibly belittling? Do you give them something massive that will leave them scratching their heads for weeks and completely out of touch with the rest of the company? How can you find a good middle ground here?

I’d say I was “eased in” with smaller tasks into my current project. After the first few tasks were completed, it was often unclear to me where I should go next. There was always a list of bugs - was I the right person to work on them? Was there any other impending feature or unlisted bug that I should start looking into instead? These are hard questions for someone who hasn’t been around for very long.

Over time, I gained more and more responsibilities. I needed a bug fixed pronto in another project, so I did it myself and submitted an MP. Oh, you understand this codebase? You can be a maintainer now! Oh, I think I remember from your resume that you have some golang experience. We need this fixed ASAP, using SD cards is completely broken without it!

It’s all a slippery slope to having a bottomless bucket of super-interesting things to choose from to do work, perform code reviews, and answer community questions. Yet somehow it’s all happened in a way that is not overwhelming, but freeing. When I get stuck on a problem or need a short break from my current project, there’s plenty of other work to go around. Which leads me to…

Shaking Things Up

Routine can be helpful, but it can also be terrible. For the parts of May and June that I wasn’t traveling, I was largely waking up at the same time, making the same lunch, listening to the same music every day, petting the same cats, and picking up the same kinds of tasks from the backlog. None of this was bad per se, but I found myself getting a tad sluggish. It would be harder to work on menial tasks, and easier to come up with elaborate solutions to simple problems. So what did I do?

I started varying my sleep schedule. Some days I get out of bed at 7, other days I get out of bed at 8:45. Because I work from home, I don’t have to worry about fighting traffic or skipping breakfast.

I started varying my lunches. I was making some fun risotto recipes, but since it’s summer I’ve been mixing up a bunch of different vegetables into a menagerie of salads: sauteed green tomatoes, onions, and zucchini; cukes and onions; pineapple, succotash, and eggs.

As I’ve gained more responsibilities for various projects, I’ve been able to branch into different kinds of tasks. After finishing up a big rework of one codebase, I can start jumping into bugs somewhere else. I can dig deep somewhere, and pivot when I’m ready to go back. There’s always something interesting to work on, and I can see the way different tasks help us towards our end goal. Not to mention I can always do bug hunts.

Remote Life 4-eva

It’s not just working remotely that makes this possible - it’s the people, the culture, and the fun and interesting products we’re creating. Maybe I’ll start blogging more about those things in the future. I don’t have much in this area that I’m looking to improve on, so this could be the last post in the series. Maybe I’ll do one at the 1-year mark to note any new tricks I’ve learned. Until then, keep looking out for other non-diary-entry blog posts.

Looking for advice on working remotely? Not sure if you’d like it? Do you have a strong disagreement with me as a person or my lifestyle choices? Hit me up and we can chat!

Read more
David Callé

The latest version of snapd, the service powering snaps, has just landed in Ubuntu 16.04. Here are some of the highlights of this release.

New commands: buy, find private, disable, revert

A lot of new commands are available, allowing you, for example, to downgrade, disable and buy snaps:

  • When logged into a store, snap find --private lets you see snaps that have been shared with you privately.
  • The new buy command presents you a choice of payment backends for non-free snaps.
  • snap disable allows you to disable specific snaps. A disabled snap won't be updated or launched anymore. It can be enabled with the snap enable command.
  • snap revert allows you to revert a snap to its previous installed version.
  • The refresh command now works with snaps installed in devmode.

Snap try and broken states handling

When using the snap try command to mount a folder containing a snap tree as an installed snap, you can end up with a broken snap if you happen to delete the folder without removing the snap first.

This "broken" state is now acknowledged as a potential snap state and handled gracefully by the system. The broken tag now appears next to the snap in the snap list output and you can remove it with snap remove.

Interfaces changes

  • getsockopt has been allowed for connected x11 plugs.
  • /usr/bin/locale access is now part of the default confinement policy.
  • A new hardware-observe interface that gives snaps read access to hardware information from the system. See the implementation for details.

Snapcraft 2.13

Snapcraft has also seen a new release (2.13) that brings:

  • Enhanced Ubuntu Store integration with the introduction of snapcraft push (which deprecates upload) and snapcraft release. These are very important pieces of the Continuous Integration aspect of snapcraft; you will have more to read on this front very soon!
  • A new plainbox plugin, which allows building parts that contain a Plainbox test collection.
  • Many improvements on sanitizing cloud parts declarations.

Java plugins

There has also been a strong focus on improving Java plugins with, for example:

  • Improvements to the ant and maven plugins (support for targets).
  • Introduction of a gradle plugin

To learn how to use these plugins, the easiest way is to run snapcraft help ant, snapcraft help maven and snapcraft help gradle.
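For instance, a part built with the new gradle plugin needs little more than a plugin declaration in snapcraft.yaml (the part name and source below are made-up placeholders, not a complete recipe):

```yaml
parts:
  my-java-app:        # hypothetical part name
    plugin: gradle    # builds the part with the project's Gradle build
    source: .         # directory containing build.gradle
```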

Usage examples can be found in the Playpen repository and guidance in the snapcraft documentation.

Read more
Steph Wilson

Last week we released phase 1 of the new App Design Guides, which included Get started and Building blocks. Now we have just released phase 2: Patterns. This includes handy guidance on gestures, navigation and layout possibilities to provide a great user experience in your app.

Navigation: user journeys

Find guidance for utilizing components for effective and natural user journeys within your UI.


Layouts: using Grid Units

Use the Grid Unit System to help visualise how much space you have in order to create a consistent and proportionate UI.


More to come…

More sections will be added to patterns in the future, such as search, accessibility and communication.

Up next is phase 3: System integration, which covers the touchpoints your app can plug into inside the Ubuntu operating system shell, such as the launcher, notifications and indicators.

If you want to help us improve these guides, join our mailing list. We’d love to hear from you!

Read more

Social networks

For as long as societies have existed, for as long as communities have existed, for as long as collective action has existed, social networks have existed.

Today there are platforms that mediate those social relationships, and there what you find is companies making a profit and doing business with social networks.

So we prefer to separate those platforms that are for-profit companies, Facebook for example, from the social networks that allow us to build collectively, to build in community.

We must reclaim social networks, the very concept of the social network, for the citizenry. A social network is not Facebook; a social network is us talking, us organizing ourselves, whatever media we use.

By Bea Busaniche, member of the Fundación Via Libre, in the "Ciberactivismo" chapter of the programme "En el medio digital" for Canal Encuentro.

Read more

Sensors are an important part of IoT. Phones, robots and drones all have a slew of sensors. Sensor chips are everywhere, doing all kinds of jobs to help and entertain us. Modern games and game consoles can thank sensors for some wonderfully active games.

Since I became involved with sensors and wrote QtSensorGestures as part of the QtSensors team at Nokia, sensors have only gotten cheaper and more prolific.

I used Ubuntu Server, snappy, a Raspberry Pi 3, and the senseHAT sensor board to create a senseHAT sensors snap. Of course, this currently only runs in devmode on the Raspberry Pi 3 (and Pi 2 as well).

To future-proof this, I wanted to get sensor data all the way up to QtSensors, for future QML access.

I now work at Canonical. Snappy is new and still in heavy development, so I did run into a few issues. First up: QFactoryLoader, which finds and loads plugins, was not looking in the correct spot. For some reason, it uses $SNAP/usr/bin as its QT_PLUGIN_PATH. I got around this for now by using a wrapper script and setting QT_PLUGIN_PATH to $SNAP/usr/lib/arm-linux-gnueabihf/qt5/plugins

The second issue was that QSensorManager could not see its configuration file in /etc/xdg/QtProject, which is not accessible to a snap. So I used the wrapper script to set XDG_CONFIG_DIRS to $SNAP/etc/xdg

[NOTE] I just discovered there is a part named "qt5conf" that can be used to set up Qt's environment variables, by using the included qt5-launch command to run your snap's commands.

Since there is no libhybris in Ubuntu Core, I had to decide which QtSensors backend to use. I could have used sensorfw, or maybe iio-sensor-proxy, but RTIMULib already worked for the senseHAT. It was easier to write a QtSensors plugin that used RTIMULib than to add it to sensorfw. iio-sensor-proxy is more for laptop-like machines and lacks many sensors.
RTIMULib uses a configuration file that needs to be in a writable area, to hold additional device-specific calibration data. Luckily, one of its functions takes a directory path to look in. Since I was creating the plugin, I made it use a new variable, SENSEHAT_CONFIG_DIR, which I could then set up in the wrapper script.

This also runs in confinement without devmode, but involves a simple sensors snapd interface.
One of the issues I can already see is that there are a myriad of ways to access sensors: different kernel interfaces (iio, sysfs, evdev) and different middleware (Android SensorManager/hybris, libhardware/hybris, sensorfw, and others I either cannot speak of or do not know about).

Once the snap goes through a review, it will live here, but for now the working code is at my sensehat repo.

Next up to snapify, the Matrix Creator sensor array! Perhaps I can use my sensorfw snap or iio-sensor-proxy snap for that.

Read more

So there I was. I did have to use a proprietary library, for which I had no sources and no real hope of support from the creators. I built my program against it, I ran it, and I got a segmentation fault. An exception that seemed to happen inside that insidious library, which was of course stripped of all debugging information. I scratched my head, changed my code, checked traces, tried valgrind, strace, and other debugging tools, but found no obvious error. Finally, I assumed that I had to dig deeper and do some serious debugging of the library’s assembly code with gdb. The rest of the post is dedicated to the steps I followed to find out what was happening inside the wily proprietary library that we will call libProprietary. Prerequisites for this article are some knowledge of gdb and ARM architecture.

Some background on the task I was doing: I am a Canonical employee who works as a developer on Ubuntu for Phones. In most, if not all, phones, the BSP code is not 100% open and we have to use proprietary libraries built for Android. These libraries therefore use bionic, Android’s libc implementation. As we want to call them inside binaries compiled with glibc, we resort to libhybris, an ingenious library that is able to load and call libraries compiled against bionic while the rest of the process uses glibc. This will turn out to be critical in this debugging. Note also that we are debugging ARM 32-bit binaries here.

The Debugging Session

To start, I made sure I had the symbols for glibc and other libraries installed, and started to debug by using gdb in the usual way:

$ gdb myprogram
GNU gdb (Ubuntu 7.9-1ubuntu1) 7.9
Starting program: myprogram
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/".
[New Thread 0xf49de460 (LWP 7101)]
[New Thread 0xf31de460 (LWP 7104)]
[New Thread 0xf39de460 (LWP 7103)]
[New Thread 0xf41de460 (LWP 7102)]
[New Thread 0xf51de460 (LWP 7100)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xf49de460 (LWP 7101)]
0x00000000 in ?? ()
(gdb) bt
#0  0x00000000 in ?? ()
#1  0xf520bd06 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb) info proc mappings
process 7097
Mapped address spaces:

	Start Addr   End Addr       Size     Offset objfile
	   0x10000    0x17000     0x7000        0x0 /usr/bin/myprogram
	0xf41e0000 0xf49df000   0x7ff000        0x0 [stack:7101]
	0xf51f6000 0xf5221000    0x2b000        0x0 /android/system/lib/
	0xf5221000 0xf5222000     0x1000        0x0 
	0xf5222000 0xf5224000     0x2000    0x2b000 /android/system/lib/
	0xf5224000 0xf5225000     0x1000    0x2d000 /android/system/lib/

We can see here that we get the promised crash. I executed a couple of gdb commands after that to see the backtrace and the part of the process address space that will be of interest in the following discussion. The backtrace shows that a segment violation happened when the CPU tried to execute instructions at address zero, and we can see by checking the process mappings that the previous frame lives inside the text segment of libProprietary. There is no backtrace beyond that point, but that should come as no surprise, as there is no DWARF information in libProprietary, and the frame pointer is quite commonly optimized away these days.

After this I tried to get a bit more information on the CPU state when the crash happened:

(gdb) info reg
r0             0x0	0
r1             0x0	0
r2             0x0	0
r3             0x9	9
r4             0x0	0
r5             0x0	0
r6             0x0	0
r7             0x0	0
r8             0x0	0
r9             0x0	0
r10            0x0	0
r11            0x0	0
r12            0xffffffff	4294967295
sp             0xf49dde70	0xf49dde70
lr             0xf520bd07	-182403833
pc             0x0	0x0
cpsr           0x60000010	1610612752
(gdb) disassemble 0xf520bd02,+10
Dump of assembler code from 0xf520bd02 to 0xf520bd0c:
   0xf520bd02:	b	0xf49c9cd6
   0xf520bd06:	movwpl	pc, #18628	; 0x48c4	<UNPREDICTABLE>
   0xf520bd0a:	andlt	r4, r11, r8, lsr #12
End of assembler dump.

Hmm, we are starting to see weird things here. First, at 0xf520bd02 (which was probably executed shortly before the crash) we have an unconditional branch to some point in the thread stack (see the mappings in the previous figure). Second, the instruction at 0xf520bd06 (which should be executed after returning from the procedure that provokes the crash) would load into the pc (program counter) an address that is not mapped: we saw that the first mapped address is 0x10000 in the previous figure. The movw instruction also has a “pl” suffix that makes it execute only when the condition is positive or zero… which is obviously unnecessary, as 0x48c4 is encoded in the instruction.

I resorted to doing objdump -d to disassemble the library and compare with gdb output. objdump shows, in that part of the file (subtracting the library load address gives us the offset inside the file: 0xf520bd02-0xf51f6000=0x15d02):

   15d02:	f7f3 eade 	blx	92c0 <__android_log_print@plt>
   15d06:	f8c4 5304 	str.w	r5, [r4, #772]	; 0x304
   15d0a:	4628      	mov	r0, r5
   15d0c:	b00b      	add	sp, #44	; 0x2c
   15d0e:	e8bd 8ff0 	ldmia.w	sp!, {r4, r5, r6, r7, r8, r9, sl, fp, pc}

which is completely different from what gdb shows! What is happening here? Taking a look at the addresses for both code chunks, we see that instructions are always 4 bytes in gdb’s output, while they are 2 or 4 bytes in objdump’s. Well, you have guessed it, haven’t you? We are seeing “normal” ARM instructions in gdb, while objdump is decoding THUMB-2 instructions. objdump certainly seems to be right here, as its output makes more sense: we have a call to an executable part of the process space at 0x15d02 (it resolves to a known function, __android_log_print), and the following instructions look like a normal ARM function epilogue: a return value is stored in r0, the sp (stack pointer) is incremented (we are freeing space on the stack), and we restore registers.

If we get back to the register values, we see that cpsr (current program status register [1]) does not have the T bit set, so gdb thinks we are using ARM instructions. We can change this by doing

(gdb) set $cpsr=0x60000030
(gdb) disass 0xf520bd02,+15
Dump of assembler code from 0xf520bd02 to 0xf520bd11:
   0xf520bd02:	blx	0xf51ff2c0
   0xf520bd06:	str.w	r5, [r4, #772]	; 0x304
   0xf520bd0a:	mov	r0, r5
   0xf520bd0c:	add	sp, #44	; 0x2c
   0xf520bd0e:	ldmia.w	sp!, {r4, r5, r6, r7, r8, r9, r10, r11, pc}
End of assembler dump.

Ok, much better now [2]. The Thumb bit in the cpsr is determined by the last bx/blx call: if the target address is odd, the procedure we are calling contains THUMB instructions; otherwise they are ARM (a good reference for these instructions is [3]). In this case, after an exception the CPU moves to ARM mode, and gdb is unable to know which mode is right when disassembling. We can search for hints on which parts of the code are ARM/Thumb by looking at the values in registers used by bx/blx, or by looking at the lr (link register): we can see above that its value after the crash was 0xf520bd07, which is odd and indicates that 0xf520bd06 contains a Thumb instruction. However, for some reason gdb is not able to take advantage of this information.

Of course this problem does not happen if we have debugging information: in that case we have special symbols that let gdb know whether the section containing the code holds Thumb instructions or not [4]. As those are not found here, gdb falls back to the cpsr value. objdump seems to have better heuristics, though.
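The evenness rule described above is mechanical, so it is easy to sketch. The following snippet (Python, just to illustrate the arithmetic; the constants are the register values from the session above) shows the check one could apply to the saved lr by hand:

```python
def mode_from_branch_target(addr: int) -> str:
    """ARM bx/blx semantics: an odd target address selects Thumb state,
    an even one ARM state; the low bit is not part of the address itself."""
    return "thumb" if addr & 1 else "arm"

# The lr observed after the crash in the gdb session above.
lr = 0xF520BD07
print(mode_from_branch_target(lr))  # the return site is Thumb code
print(hex(lr & ~1))                 # actual return address: 0xf520bd06
```

Clearing the low bit recovers the real return address, 0xf520bd06, which is exactly where the Thumb epilogue was disassembled.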

After solving this issue with instruction decoding, I started to debug __android_log_print to check what was happening there, as it looked like the crash was happening in that call. I spent quite a lot of time there, but found nothing. All looked fine, and I started to despair. Until I inserted a breakpoint at address 0xf520bd06, right after the call to __android_log_print, ran the program… and it stopped at that address; no crash happened. I started to execute the program instruction by instruction after that:

(gdb) b *0xf520bd06
(gdb) run
Breakpoint 1, 0xf520bd06 in ?? ()
(gdb) si
0xf520bd0a in ?? ()
(gdb) si
0xf520bd0c in ?? ()
(gdb) si
0xf520bd0e in ?? ()
Cannot insert breakpoint 0.
Cannot access memory at address 0x0

Something was apparently wrong with the ldmia instruction, which restores registers, including the pc, from the stack. I took a look at the stack at that moment (taking into account that ldmia had already modified the sp after restoring 9 registers == 36 bytes):

(gdb) x/16xw $sp-36
0xf49dde4c:	0x00000000	0x00000000	0x00000000	0x00000000
0xf49dde5c:	0x00000000	0x00000000	0x00000000	0x00000000
0xf49dde6c:	0x00000000	0x00000000	0x00000000	0x00000000
0xf49dde7c:	0x00000000	0x00000000	0x00000000	0x00000000

All zeros! At this point it is clear that this is the real point where the crash is happening, as we are loading 0 into the pc. This looked clearly like a stack corruption issue.

But, before moving forward, why are we getting a wrong backtrace from gdb? Well, gdb is seeing a corrupted stack, so it is not able to unwind it. It would not be able to unwind it even with full debug information. The only hint it has is the lr. This register contains the return address after execution of a bl/blx instruction [3]. If the called procedure is non-leaf, the lr is saved in the prologue and restored in the epilogue, because it gets overwritten when branching to other procedures. In this case it is restored directly into the pc, and sometimes it is also written back to the lr, depending on whether arm-thumb interworking is built into the procedure or not [5]. It is not overwritten in a leaf procedure (as there are no procedure calls inside one).

As gdb has no additional information, it uses the lr to build the backtrace, assuming we are in a leaf procedure. However, this is not true here, and the backtrace turns out to be wrong. Nonetheless, this information was not completely useless: the lr was pointing to the instruction right after the last bl/blx instruction that was executed, which was not that far from the real point where the program was crashing. This happened because, fortunately, __android_log_print has interworking code and restores the lr; otherwise the value of the lr could have been from a point much farther away from the real crash. Believe it or not, it could have been even worse!

Now having a clear idea of where and why the crash was happening, things accelerated. The procedure where the crash happened, as disassembled by objdump, was (I include here only the more relevant parts of the code):

00015b1c <ProprietaryProcedure@@Base>:
   15b1c:	e92d 4ff0 	stmdb	sp!, {r4, r5, r6, r7, r8, r9, sl, fp, lr}
   15b20:	b08b      	sub	sp, #44	; 0x2c
   15b22:	497c      	ldr	r1, [pc, #496]	; (15d14 <ProprietaryProcedure@@Base+0x1f8>)
   15b24:	2500      	movs	r5, #0
   15b26:	9500      	str	r5, [sp, #0]
   15b28:	4604      	mov	r4, r0
   15b2a:	4479      	add	r1, pc
   15b2c:	462b      	mov	r3, r5
   15b2e:	f8df 81e8 	ldr.w	r8, [pc, #488]	; 15d18 <ProprietaryProcedure@@Base+0x1fc>
   15b32:	462a      	mov	r2, r5
   15b34:	f8df 91e4 	ldr.w	r9, [pc, #484]	; 15d1c <ProprietaryProcedure@@Base+0x200>
   15b38:	ae06      	add	r6, sp, #24
   15b3a:	f8df a1e4 	ldr.w	sl, [pc, #484]	; 15d20 <ProprietaryProcedure@@Base+0x204>
   15b3e:	200f      	movs	r0, #15
   15b40:	f8df b1e0 	ldr.w	fp, [pc, #480]	; 15d24 <ProprietaryProcedure@@Base+0x208>
   15b44:	f7f3 ef76 	blx	9a34 <prctl@plt>
   15b48:	44f8      	add	r8, pc
   15b4a:	4629      	mov	r1, r5
   15b4c:	44f9      	add	r9, pc
   15b4e:	2210      	movs	r2, #16
   15b50:	44fa      	add	sl, pc
   15b52:	4630      	mov	r0, r6
   15b54:	44fb      	add	fp, pc
   15b56:	f7f3 ea40 	blx	8fd8 <memset@plt>
   15b5a:	a807      	add	r0, sp, #28
   15b5c:	f7f3 ef70 	blx	9a40 <sigemptyset@plt>
   15b60:	4b71      	ldr	r3, [pc, #452]	; (15d28 <ProprietaryProcedure@@Base+0x20c>)
   15b62:	462a      	mov	r2, r5
   15b64:	9508      	str	r5, [sp, #32]
   15b66:	4631      	mov	r1, r6
   15b68:	447b      	add	r3, pc
   15b6a:	681b      	ldr	r3, [r3, #0]
   15b6c:	200a      	movs	r0, #10
   15b6e:	9306      	str	r3, [sp, #24]
   15b70:	f7f3 ef6c 	blx	9a4c <sigaction@plt>
   15d02:	f7f3 eade 	blx	92c0 <__android_log_print@plt>
   15d06:	f8c4 5304 	str.w	r5, [r4, #772]	; 0x304
   15d0a:	4628      	mov	r0, r5
   15d0c:	b00b      	add	sp, #44	; 0x2c
   15d0e:	e8bd 8ff0 	ldmia.w	sp!, {r4, r5, r6, r7, r8, r9, sl, fp, pc}

The addresses where this code is loaded can be easily computed by adding 0xf51f6000 to the file offsets shown in the first column. We see that a few calls to different external functions [6] are performed by ProprietaryProcedure, which is itself an exported symbol.
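That translation is just an addition; a tiny sketch (using the load address from the mappings shown earlier) confirms it reproduces the addresses used in the gdb session:

```python
# Translate objdump file offsets into runtime virtual addresses, using
# the library load address 0xf51f6000 taken from `info proc mappings`.
LOAD_BASE = 0xF51F6000

def vaddr(file_offset: int) -> int:
    return LOAD_BASE + file_offset

print(hex(vaddr(0x15B1C)))  # start of ProprietaryProcedure: 0xf520bb1c
print(hex(vaddr(0x15B20)))  # instruction after stmdb: 0xf520bb20
print(hex(vaddr(0x15D02)))  # the blx to __android_log_print: 0xf520bd02
```

The second value, 0xf520bb20, is exactly where the breakpoint is set in the next gdb session.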

I restarted the debug session, added a breakpoint at the start of ProprietaryProcedure, right after stmdb saves the state, and checked the stack values:

(gdb) b *0xf520bb20
Breakpoint 1 at 0xf520bb20
(gdb) cont
Breakpoint 1, 0xf520bb20 in ?? ()
(gdb) p $sp
$1 = (void *) 0xf49dde4c
(gdb) x/16xw $sp
0xf49dde4c:	0xf49de460	0x0007df00	0x00000000	0xf49dde70
0xf49dde5c:	0xf49de694	0x00000000	0xf77e9000	0x00000000
0xf49dde6c:	0xf75b4491	0x00000000	0xf49de460	0x00000000
0xf49dde7c:	0x00000000	0xfd5b4eba	0xfe9dd4a3	0xf49de460

We can see that the stack contains something, including a return address that looks valid (0xf75b4491). Note also that the procedure must never touch this part of the stack, as it belongs to the caller of ProprietaryProcedure.

Now it is simply a matter of bisecting the code between the beginning and the end of ProprietaryProcedure to find out where we are clobbering the stack. I will spare you that tedious process. Instead, I will just show that, in the end, the call to sigemptyset() turned out to be the culprit [7]:

(gdb) b *0xf520bb5c
Breakpoint 1 at 0xf520bb5c
(gdb) b *0xf520bb60
Breakpoint 2 at 0xf520bb60
(gdb) run
Breakpoint 1, 0xf520bb5c in ?? ()
(gdb) x/16xw 0xf49dde4c
0xf49dde4c:	0xf49de460	0x0007df00	0x00000000	0xf49dde70
0xf49dde5c:	0xf49de694	0x00000000	0xf77e9000	0x00000000
0xf49dde6c:	0xf75b4491	0x00000000	0xf49de460	0x00000000
0xf49dde7c:	0x00000000	0xfd5b4eba	0xfe9dd4a3	0xf49de460
(gdb) cont
Breakpoint 2, 0xf520bb60 in ?? ()
(gdb) x/16xw 0xf49dde4c
0xf49dde4c:	0x00000000	0x00000000	0x00000000	0x00000000
0xf49dde5c:	0x00000000	0x00000000	0x00000000	0x00000000
0xf49dde6c:	0x00000000	0x00000000	0x00000000	0x00000000
0xf49dde7c:	0x00000000	0x00000000	0x00000000	0x00000000

Note here that I am printing the part of the stack not reserved by the function (0xf49dde4c is the value of the sp before execution of the line at offset 0x15b20, see the code).

What is going wrong here? Remember that at the beginning of the article I mentioned that we were using libhybris. libProprietary assumes a bionic environment, and the libc functions it calls are from bionic’s libc. However, libhybris has hooks for some bionic functions: for these, bionic is not called; the hook is invoked instead. libhybris does this to avoid conflicts between bionic and glibc: for instance, having two allocators fighting for the process address space is a recipe for disaster, so malloc() and related functions are hooked, and the hooks end up calling the glibc implementation. Signal-related functions were hooked too, including sigemptyset(), and in this case the hook simply called the glibc implementation.

I looked at the glibc and bionic implementations; in both cases sigemptyset() is a very simple utility function that clears a sigset_t variable with memset(). Everything pointed to different definitions of sigset_t depending on the library. The definition turned out to be a bit messy to follow in the code, as it depended on build-time definitions, so I resorted to gdb to print the type. For an executable compiled against glibc, I saw

(gdb) ptype sigset_t
type = struct {
    unsigned long __val[32];
}

and for one using bionic

(gdb) ptype sigset_t
type = unsigned long

This finally confirms where the bug is, and explains it: we are overwriting the stack because libProprietary reserves stack memory for bionic’s sigset_t, while we are using glibc’s sigemptyset(), which uses a different definition of the type. As that definition is much bigger, the stack gets overwritten after the call to memset(). We then get the crash later, when trying to restore registers as the function returns.
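The mismatch arithmetic explains the wall of zeros in the stack dump. A quick back-of-the-envelope sketch, using the two type definitions printed above and 4-byte longs (we are on 32-bit ARM):

```python
# Model the ABI mismatch on 32-bit ARM, where unsigned long is 4 bytes.
ULONG = 4
bionic_sigset = ULONG        # bionic: typedef unsigned long sigset_t
glibc_sigset = 32 * ULONG    # glibc: struct { unsigned long __val[32]; }

# libProprietary reserved space for bionic's 4-byte type, but the hooked
# sigemptyset() memset()s glibc's size, zeroing the difference on the
# caller's stack -- including the saved registers and return address.
overflow = glibc_sigset - bionic_sigset
print(f"memset writes {glibc_sigset} bytes into a {bionic_sigset}-byte slot; "
      f"{overflow} bytes of caller stack are zeroed")
```

124 clobbered bytes is far more than the 36 bytes of saved registers, so the restored pc was guaranteed to come out as zero.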

After knowing this, the solution was simple: I removed the libhybris hooks for signal functions, recompiled it, and… all worked just fine, no crashes anymore!

However, this is not the final solution: as signals are shared resources, it makes sense to hook them in libhybris. But to do it properly, the hooks have to translate types between bionic and glibc, something we were not doing (we were simply calling the glibc implementation). That, however, is “just work”.

Of course I wondered why the heck a library that is kind of generic needs to mess around with signals, but hey, that is not my fault ;-).


I can say I learned several things while debugging this:

  1. Not having the sources is terrible for debugging (well, I already knew this). Unfortunately, not open-sourcing code is still standard practice in parts of the industry.
  2. The most interesting technical bit here is IMHO that we need to be very cautious with the backtrace that debuggers show after a crash. If you start to see things that do not make sense, it is possible that registers or the stack have been messed up and the real crash happened elsewhere. Bear in mind that the very first thing to do when a program crashes is to make sure we know the exact point where that happens.
  3. We have to be careful in ARM when disassembling, because if there is no debug information we could be seeing the wrong instruction set. We can check evenness of addresses used by bx/blx and of the lr to make sure we are in the right mode.
  4. Sometimes taking a look at the assembly code can help us when debugging, even when we have the sources. Note that if I had had the C sources, I would have seen the crash happening right when returning from a function, and it might not have been that obvious that the stack was messed up. The assembly clearly pointed to an overwritten stack.
  5. Finally, I personally learned some bits of ARM architecture that I did not know, which was great.

Well, this is it. I hope you enjoyed the (lengthy, I know) article. Thanks for your reading!

[2] We can get the same result by executing in gdb set arm fallback-mode thumb, but changing the register seemed more pedagogical here.
[6] In fact the calls are to the PLT section, which is inside the library. The PLT in turn calls, by using addresses in the GOT data section, either the function directly or the dynamic loader, as we are doing lazy loading.
[7] I had to use two breakpoints between consecutive instructions because the “ni” gdb command was not working well here.

Read more
Victor Palau

I recently blogged about deploying kubernetes in Azure.  After doing so, I wanted to keep an eye on usage of the instances and pods.

Kubernetes recommends Heapster as a cluster aggregator to monitor usage of nodes and pods. Very handy if you are deploying in Google Compute Engine (GCE), as it has a pre-built dashboard to hook it to.

Heapster runs on each node, collects statistics about the system and pods, and pipes them to a storage backend of your choice. A very handy part of Heapster is that it exports user labels as part of the metadata, which I believe can be used to create custom reports on services across nodes.


If you are not using GCE or just don’t want to use their dashboard, you can deploy a combo of InfluxDB and Grafana as a DIY solution. While this seems promising, the documentation is, as usual, pretty short on details.

Start by using the “detailed” guide to deploy the add on, which basically consists of:

**Wait! Don’t run this yet; finish reading the article first.**

git clone
cd heapster
kubectl create -f deploy/kube-config/influxdb/

These steps expose Grafana and InfluxDB via the API proxy; you can see them in your deployment by doing:

kubectl cluster-info

This didn’t quite work for me, and while rummaging in the yamls I found out that this is not really the recommended configuration for live deployments anyway…

So here is what I did:

  1. Remove the env variables from influxdb-grafana-controller.yaml.
  2. Expose the service as NodePort or LoadBalancer (depending on your preference) in grafana-service.yaml. E.g., under the spec section add: type: NodePort
  3. Now run: kubectl create -f deploy/kube-config/influxdb/

You can see the exposed port for Grafana by running:
kubectl --namespace=kube-system describe service grafana-service

In this deployment, all the services, rcs and pods are added under the kube-system namespace, so remember to add the --namespace flag to your kubectl commands.

Now you should be able to access Grafana on any external ip or dns on the port listed under NodePort. But I was not able to see any data.

Login to Grafana as admin (admin:admin by default), select DataSources>influxdb-datasource and test the connection. The connection is set up as http://monitoring-influxdb:8086, this failed for me.

Since InfluxDB and Grafana are both in the same pod, you can use localhost to access the service. So change the url to http://localhost:8086, save and test the connection again. This worked for me and a minute later I was getting realtime data from nodes and pods.

Proxying Grafana

I run an nginx proxy that terminates https requests for my domain, and I created a https://mydomain/monitoring/ endpoint as part of it.

For some reason, Grafana needs to know the root-url format it is being accessed from in order to work properly. This is defined in a config file. While you could change it and rebuild the image, I preferred to override it via an environment variable in the influxdb-grafana-controller.yaml kubernetes file. Just add this to the Grafana container section:

value: "%(protocol)s://%(domain)s:%(http_port)s/monitoring"

You can do this with any of the Grafana config values, which allows you to reuse the official Grafana docker image straight from the main registry.
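Grafana’s config values use %(name)s-style interpolation, which behaves like Python’s dict-based % formatting, so the override above resolves along these lines (the concrete protocol, domain and port values here are illustrative assumptions, not taken from the post):

```python
# Sketch of how Grafana's %(name)s placeholders in root_url expand.
template = "%(protocol)s://%(domain)s:%(http_port)s/monitoring"
settings = {"protocol": "https", "domain": "mydomain", "http_port": "3000"}
print(template % settings)  # https://mydomain:3000/monitoring
```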

Read more

mgo r2016.08.01

A major new release of the mgo MongoDB driver for Go is out, including the following enhancements and fixes:

Decimal128 support

Introduces the new bson.Decimal128 type, upcoming in MongoDB 3.4 and already available in the 3.3 development releases.

Extended JSON support

Introduces support for marshaling and unmarshaling MongoDB’s extended JSON specification, which extends the syntax of JSON to include support for other BSON types and also allows for extensions such as unquoted map keys and trailing commas.

The new functionality is available via the bson.MarshalJSON and bson.UnmarshalJSON functions.

New Iter.Done method

The new Iter.Done method allows querying whether an iterator is completely done or there is some likelihood of more items being returned on the next call to Iter.Next.

Feature implemented by Evan Broder.

Retry on Upsert key-dup errors

Curiously, as documented, the server can actually report a key-conflict error on upserts. The driver will now retry a number of times in these situations.

Fix submitted by Christian Muirhead.

Switched test suite to daemontools

Support for supervisord has been removed and replaced by daemontools, as the latter is easier to support across environments.

Travis CI support

All pull requests and master branch changes are now being tested on several server releases.

Initial collation support in indexes

Support for collation is being widely introduced in the 3.4 release, with experimental support already visible in 3.3.

This release introduces the Index.Collation field, which may be set to a mgo.Collation value.

Removed unnecessary unmarshal when running commands

Code that marshaled the command result for debugging purposes was being run even when not in debug mode. This has been fixed.

Reported by John Morales.

Fixed Secondary mode over mongos

Secondary mode wasn’t behaving properly when connecting to the cluster over a mongos. This has been fixed.

Reported by Gabriel Russell.

Fixed BuildInfo.VersionAtLeast

The VersionAtLeast comparison was broken when comparing certain strings. Logic was fixed and properly tested.

Reported by John Morales.
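The release notes don’t say exactly which strings broke, but dotted version numbers are a classic trap for lexicographic comparison; a hedged sketch of the numeric approach (not mgo’s actual code):

```python
def version_at_least(version: str, minimum: str) -> bool:
    """Compare dotted version strings component-by-component as integers."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)

print("3.10.0" >= "3.4.0")                  # lexicographic: False (wrong)
print(version_at_least("3.10.0", "3.4.0"))  # numeric: True
```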

Fixed unmarshaling of ,inline structs on 1.6

Go 1.6 changed the behavior on unexported anonymous structs.

Livio Soares submitted a fix addressing that.

Fixed Apply on results containing errmsg

The Apply method was mistaking a result object containing an errmsg field for an actual error.

Reported by Moshe Revah.

Improved documentation

Several contributions on documentation improvements and fixes.

Read more

In our previous article, "How to create a Context Menu in a QML application", we used a popup method to display the context menu we needed. That method works, but the menu's color is very close to the background's, so it is hard to make out. We could of course improve that design in several ways, but in today's example we build the same context menu by a different method. The advantage of this method is that we are free to design whatever effect we want; we implement the same thing from another angle. The final effect of our design is:





import QtQuick 2.4
import Ubuntu.Components 1.3

AbstractButton {
    id: optionValueButton

    property alias label: label.text
    property alias iconName: icon.name
    property bool selected
    property bool isLast
    property int columnWidth
    property int marginSize: units.gu(1)  // value elided in the original; assumed

    width: marginSize + iconLabelGroup.width + marginSize

    Item {
        id: iconLabelGroup
        width: childrenRect.width
        height: icon.height

        anchors {
            left: (iconName) ? undefined : parent.left
            leftMargin: (iconName) ? undefined : marginSize
            horizontalCenter: (iconName) ? parent.horizontalCenter : undefined
            verticalCenter: parent.verticalCenter
            topMargin: marginSize
            bottomMargin: marginSize
        }

        Icon {
            id: icon
            anchors {
                verticalCenter: parent.verticalCenter
                left: parent.left
            }
            width: optionValueButton.height - optionValueButton.marginSize * 2
            color: "white"
            opacity: optionValueButton.selected ? 1.0 : 0.5
            visible: name !== ""
        }

        Label {
            id: label
            anchors {
                left: icon.name !== "" ? icon.right : parent.left
                verticalCenter: parent.verticalCenter
            }
            color: "white"
            opacity: optionValueButton.selected ? 1.0 : 0.5
            width: paintedWidth
        }
    }

    Rectangle {
        anchors {
            left: parent.left
            bottom: parent.bottom
        }
        width: parent.columnWidth
        height: units.dp(1)
        color: "red"
        opacity: 0.5
        visible: true
    }
}

import QtQuick 2.4
import Ubuntu.Components 1.3

/*!
    \brief MainView with a Label and Button elements.
*/

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "mapfromitem.liu-xiao-guo"

    height: units.gu(60)  // value elided in the original; assumed
    theme.name: "Ubuntu.Components.Themes.SuruDark"
    property bool optionValueSelectorVisible: false

    Page {
        header: PageHeader {
            id: pageHeader
        }

        ListModel {
            id: model
            property int selectedIndex: 0

            ListElement {
                icon: "account"
                label: "On"
                value: "flash-on"
            }
            ListElement {
                icon: "active-call"
                label: "Auto"
                value: "Auto"
            }
            ListElement {
                icon: "call-end"
                label: "Off"
                value: "flash-off"
            }
        }

        Column {
            id: optionValueSelector
            objectName: "optionValueSelector"
            width: childrenRect.width

            property Item caller

            function toggle(model, callerButton) {
                if (optionValueSelectorVisible && optionsRepeater.model === model) {
                    hide();
                } else {
                    show(model, callerButton);
                }
            }

            function show(model, callerButton) {
                optionValueSelector.caller = callerButton;
                optionsRepeater.model = model;
                optionValueSelectorVisible = true;
            }

            function hide() {
                optionValueSelectorVisible = false;
                optionValueSelector.caller = null;
            }

            function alignWith(item) {
                // horizontally center optionValueSelector with the center of item
                // if there is enough space to do so, that is as long as optionValueSelector
                // does not get cropped by the edge of the screen
                var itemX = parent.mapFromItem(item, 0, 0).x;
                var centeredX = itemX + item.width / 2.0 - width / 2.0;
                var margin = units.gu(1);  // value elided in the original; assumed

                if (centeredX < margin) {
                    x = itemX;
                } else if (centeredX + width > item.parent.width - margin) {
                    x = itemX + item.width - width;
                } else {
                    x = centeredX;
                }

                // vertically position the options above the caller button
                y = Qt.binding(function() { return item.y - height - margin; });

                console.log("x: " + x + " y: " + y)
            }

            visible: opacity !== 0.0
            onVisibleChanged: if (!visible) optionsRepeater.model = null;
            opacity: optionValueSelectorVisible ? 1.0 : 0.0
            Behavior on opacity {UbuntuNumberAnimation {duration: UbuntuAnimation.FastDuration}}

            Repeater {
                id: optionsRepeater

                delegate: OptionValueButton {
                    anchors.left: optionValueSelector.left
                    columnWidth: optionValueSelector.childrenRect.width
                    label: model.label
                    iconName: model.icon
                    selected: optionsRepeater.model.selectedIndex == index
                    isLast: index === optionsRepeater.count - 1
                    onClicked: {
                        optionsRepeater.model.selectedIndex = index
                    }
                }
            }
        }

        Icon {
            id: optionButton
            width: units.gu(5)  // value elided in the original; assumed
            height: width
            anchors.centerIn: parent
            name: model.get(model.selectedIndex).icon

            MouseArea {
                anchors.fill: parent
                onClicked: {
                    console.log("optionValueSelectorVisible: " + optionValueSelectorVisible)
                    optionValueSelector.toggle(model, optionButton)
                }
            }
        }
    }

    Component.onCompleted: {
        console.log("width: " + width + " height: " + height)
    }
}

Author: UbuntuTouch, published 2016/5/25 10:07:36

Read more




const static string CAT_RENDERER102 = R"(
{
    "schema_version" : 1,
    "template" : {
        "category-layout" : "grid",
        "card-layout": "vertical",
        "card-size" : "large",
        "card-background": "#00FF00",
        "overlay": true
    },
    "components" : {
        "title" : "title",
        "art" : "art",
        "subtitle": "subtitle",
        "mascot": "mascot",
        "emblem": "emblem",
        "summary": "summary",
        "overlay-color": "overlay-color",
        "attributes": {
            "field": "attributes",
            "max-count": 2
        }
    }
}
)";




Author: UbuntuTouch, published 2016/5/26 17:18:47

Read more

In the latest QtMultimedia 5.6, the Audio API has a playlist property. We can take full advantage of this property to build a simple music player. Our official documentation says QtMultimedia 5.4; the correct version is 5.6. The Audio API lets us play the music we want very conveniently. In our earlier article "How to play music in an Ubuntu QML application" we also discussed how to play music with MediaPlayer and SoundEffect.

How do we use the playlist property of Audio? Let's first look at a simple example:

        Audio {
            id: player;
            autoPlay: true
            autoLoad: true
            playlist: Playlist {
                id: playlist
            }

            Component.onCompleted: {
                console.log("playlist count: " + playlist.itemCount)
                console.log("metaData type: " + typeof(meta))
            }
        }


    Audio {
        id: player;
        playlist: Playlist {
            id: playlist
            PlaylistItem { source: "song1.ogg"; }
            PlaylistItem { source: "song2.ogg"; }
            PlaylistItem { source: "song3.ogg"; }
        }
    }

    ListView {
        model: playlist;
        delegate: Text {
            font.pixelSize: 16;
            text: source;
        }
    }


import QtQuick 2.4
import Ubuntu.Components 1.3
import QtMultimedia 5.6

MainView {
    // objectName for functional testing purposes (autopilot-qt5)
    objectName: "mainView"

    // Note! applicationName needs to match the "name" field of the click manifest
    applicationName: "playlist.liu-xiao-guo"

    property var meta: player.metaData

    Page {
        id: page
        header: PageHeader {
            id: pageHeader
        }

        Audio {
            id: player;
            autoPlay: true
            autoLoad: true
            playlist: Playlist {
                id: playlist
            }

            Component.onCompleted: {
                console.log("playlist count: " + playlist.itemCount)
                console.log("metaData type: " + typeof(meta))

                console.log("The properties of metaData is:")
                var keys = Object.keys(meta);
                for( var i = 0; i < keys.length; i++ ) {
                    var key = keys[ i ];
                    var data = key + ' : ' + meta[ key ];
                    console.log( key + ": " + data)
                }
            }
        }

        Flickable {
            anchors {
                left: parent.left
                right: parent.right
                top: pageHeader.bottom
                bottom: parent.bottom
            }
            contentHeight: layout.childrenRect.height +
                           layout1.height + layout1.spacing

            Column {
                id: layout
                anchors.fill: parent

                ListView {
                    anchors.left: parent.left
                    anchors.right: parent.right
                    height: page.height/2

                    model: playlist;
                    delegate: Label {
                        fontSize: "x-large"
                        text: {
                            var filename = String(source);
                            var name = filename.split("/").pop();
                            return name;
                        }

                        MouseArea {
                            anchors.fill: parent

                            onClicked: {
                                if (player.playbackState != Audio.PlayingState) {
                                    player.playlist.currentIndex = index;
                                } else {
                                    // else branch elided in the original listing
                                }
                            }
                        }
                    }
                }

                Rectangle {
                    width: parent.width
                    color: "red"
                }

                Slider {
                    id: defaultSlider
                    anchors.horizontalCenter: parent.horizontalCenter
                    width: parent.width * 0.8
                    maximumValue: 100
                    value: player.position/player.duration * maximumValue
                }

                CustomListItem {
                    title.text: {
                        switch (player.availability) {
                        case Audio.Available:
                            return "availability: available";
                        case Audio.Busy:
                            return "availability: Busy";
                        case Audio.Unavailable:
                            return "availability: Unavailable";
                        case Audio.ResourceMissing:
                            return "availability: ResourceMissing";
                        default:
                            return "";
                        }
                    }
                }

                CustomListItem {
                    title.text: "bufferProgress: " + player.bufferProgress;
                }

                CustomListItem {
                    title.text: "duration: " + player.duration/1000 + " sec"
                }

                CustomListItem {
                    title.text: "hasAudio: " + player.hasAudio
                }

                CustomListItem {
                    title.text: "hasVideo: " + player.hasVideo
                }

                CustomListItem {
                    title.text: "loops: " + player.loops
                }

                CustomListItem {
                    title.text: "muted: " + player.muted
                }

                CustomListItem {
                    title.text: "playbackRate: " + player.playbackRate
                }

                CustomListItem {
                    title.text: {
                        switch (player.playbackState) {
                        case Audio.PlayingState:
                            return "playbackState : PlayingState"
                        case Audio.PausedState:
                            return "playbackState : PausedState"
                        case Audio.StoppedState:
                            return "playbackState : StoppedState"
                        default:
                            return ""
                        }
                    }
                }

                CustomListItem {
                    title.text: "seekable: " + player.seekable
                }

                CustomListItem {
                    title.text: "url: " + String(player.source)
                }

                CustomListItem {
                    title.text: "volume: " + player.volume
                }

                CustomListItem {
                    title.text: {
                        switch (player.status) {
                        case Audio.NoMedia:
                            return "status: NoMedia"
                        case Audio.Loading:
                            return "status: Loading"
                        case Audio.Loaded:
                            return "status: Loaded"
                        case Audio.Buffering:
                            return "status: Buffering"
                        case Audio.Stalled:
                            return "status: Stalled"
                        case Audio.Buffered:
                            return "status: Buffered"
                        case Audio.EndOfMedia:
                            return "status: EndOfMedia"
                        case Audio.InvalidMedia:
                            return "status: InvalidMedia"
                        case Audio.UnknownStatus:
                            return "status: UnknownStatus"
                        default:
                            return ""
                        }
                    }
                }
            }
        }

        Row {
            id: layout1
            anchors.bottom: parent.bottom
            anchors.horizontalCenter: parent.horizontalCenter

            Button {
                text: "Previous"
                onClicked: {
                    console.log("Previous is clicked")
                    var previousIndex = player.playlist.previousIndex(1)
                    console.log("previousIndex: " + previousIndex)
                    if ( previousIndex == -1 ) {
                        player.playlist.currentIndex = player.playlist.itemCount - 1;
                    } else {
                        player.playlist.currentIndex = previousIndex;
                    }
                }
            }

            Button {
                text: "Next"
                onClicked: {
                    console.log("Next is clicked")
                    var nextIndex = player.playlist.nextIndex(1)
                    console.log("nextIndex: " + nextIndex )
                    if (nextIndex == -1) {
                        player.playlist.currentIndex = 0
                    } else {
                        player.playlist.currentIndex = nextIndex;
                    }
                }
            }
        }
    }
}


Author: UbuntuTouch, published 2016/5/27 12:18:52

Read more

The event was divided into three main parts: tutorials for beginners, the conference proper, and the sprints.

I didn't go to the one-day tutorials (Beginners Day and Django Girls). The conference ran Monday through Friday, and the sprints were on Saturday and Sunday. I already told you about the first day of the conference, and on Sunday I was travelling. The rest I'll tell you here :)

Interesting talks

A roundup of what I liked most at the conference... mind you, in some cases I include links to the videos or the slides themselves, and in others I don't because I was too lazy to look them up, but they must be out there :)

By far the best keynote was Jameson Rollins's "LIGO: The Dawn of Gravitational Wave Astronomy", although Naomi Ceder's "Come for the Language, Stay for the Community" was also good. In third place I'd put "Scientist meets web dev: how Python became the language of data" by Gaël Varoquaux. The rest bored me a little, or didn't interest me as much.

LIGO is a...

Other talks I liked were "High Performance Networking in Python" by Yury Selivanov, "Build your first OpenStack application with OpenStack PythonSDK" by Victoria Martinez de la Cruz, "Implementación de un Identificador de Sonido en Python" by Cameron Macleod, "FAT Python: a new static optimizer for Python 3.6" by Victor Stinner, "CFFI: calling C from Python" by Armin Rigo, "The Gilectomy" by Larry Hastings, "A Gentle Introduction to Neural Networks (with Python)" by Tariq Rashid, and "Music transcription with Python" by Anna Wszeborowska.

From that last talk I came away with a future project (I already wrote it down; it sits at position 1783461° among my other projects): showing in real time, using Bokeh, the information it captures and the transformations it applies.

A typical image of Bilbao

I also want to highlight two lightning talks: Armin Rigo showing "Reverse debugging for Python", and one by someone whose name escapes me showing "A better Python REPL".

My presentations

I already told you about the talk I gave on Monday, but I'll take the chance to leave you its video.

On Tuesday I gave "Entendiendo Unicode" (Understanding Unicode), in Spanish. It was the 12th time I've given it, and you could tell me "stop getting away with the same talk"... what can I say, the audience keeps renewing itself. I too sometimes wonder whether it's too much, but people like it and find it useful! A dozen people came up to greet me and tell me how good and useful the talk was. So I'll keep offering it at upcoming conferences :). The video, here.

Shared working space

Besides those two "long" presentations, I gave two lightning talks. The first was about fades; it wasn't the first time I'd given it, but I had refreshed it and translated it into English, and it went very well. The second was about Python Argentina. I put it together that same Friday, in a hurry, but people liked it a lot (I was surprised by how many times they laughed in those five minutes; five minutes I had to fight for, as you can see in the video, because they wanted to give me only two, plus there was some confusion about me supposedly talking about a PyCon).


On Saturday I was sprinting, working on fades, mostly offering help to people who wanted to use it or who wanted to learn more about the project. Someone even came over with an issue; we discussed it, fixed it, and I even made a pull request.


That Saturday was my last night in Bilbao. Juan Luis and I loosely made plans and went out for pinchos with some other people, then for a beer. And as the night was winding down, around half past eleven, they told me about an area of the city with a whole heavy-metal and punk scene.

I couldn't miss it.

So five of us went over there and hopped between three or four bars, having a drink in each, listening to very good music, ending up in a seedy dive, playing table football, putting on music we chose ourselves, and enjoying it a lot.

In a punk bar

At around half past two I called it a night, because the taxi was picking me up at four, so Oriol (one of the guys) and I took a taxi; I got to my room, finished packing everything, took a shower, left the keys on the kitchen table, and started the 23-hour journey that would reunite me with my family :)

All the photos of the conference and Bilbao, here.

Read more
Colin Ian King

Scanning the Linux kernel for error messages

The Linux kernel contains lots of error/warning/information messages; over 130,000 in the current 4.7 kernel.  One of the tests in the Firmware Test Suite (FWTS) is to find BIOS/ACPI/UEFI related kernel error messages in the kernel log and try to provide some helpful advice on each error message since some can be very cryptic to the untrained eye.

The FWTS kernel error log database is currently approaching 800 entries and I have been slowly working through another 800 or so more relevant and recently added messages.  Needless to say, this is taking a while to complete.  The hardest part was finding relevant error messages in the kernel as they appear in different forms (e.g. printk(), dev_err(), ACPI_ERROR() etc).

In order to scrape the Linux kernel source for relevant error messages I hacked up the kernelscan parser to find error messages and dump these to stdout.  kernelscan can scan 43,000 source files (17,900,000 lines of source) in under 10 seconds on my Lenovo X230 laptop, so it is relatively fast.
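The real kernelscan is a fast, purpose-built C parser, but the core idea of pulling message strings out of logging calls can be illustrated with a much cruder sketch. The Python snippet below (my own illustration, not part of FWTS) uses a regular expression to grab the first string literal passed to common kernel logging calls; the function name `scan_messages` and the regex approach are assumptions for illustration, and a regex will miss many real-world forms (split string literals, format macros, and so on) that a proper parser handles.

```python
import re

# Common kernel logging calls whose first string literal is the message.
LOG_CALLS = r"(?:printk|pr_err|pr_warn|dev_err|dev_warn|ACPI_ERROR)"

# Skip any non-string leading arguments (log level macro, device pointer, ...)
# and capture the first double-quoted string literal.
MSG_RE = re.compile(LOG_CALLS + r'\s*\(\s*[^"]*"([^"]+)"')

def scan_messages(source: str) -> list[str]:
    """Return the message string of each logging call found in C source text."""
    return MSG_RE.findall(source)

example = '''
    dev_err(&pdev->dev, "failed to map registers\\n");
    printk(KERN_WARNING "ACPI: Invalid table length\\n");
'''
print(scan_messages(example))
```

A real scanner also has to strip comments and join adjacent string literals before matching, which is where most of kernelscan's actual parsing work lies.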

I also have been using kernelscan to find spelling mistakes in kernel messages and I've been punting trivial fixes upstream to fix these.  These mistakes are small and petty, but I find it a little irksome when I see the kernel emit a message that contains a typo or spelling mistake - it just looks a bit unprofessional.

I've created a kernelscan snap (which was really easy and fast to do using snapcraft), so it is now available in Ubuntu.  The source code is also available from the kernel team git web at

The code is designed to only parse kernel source, and it is a very rough and ready parser designed for speed; fundamentally, it is a big quick hack.  When I get a few spare minutes I will try and see if there is any correlation between the number of error messages and the size of the kernel over the various releases.

Read more