Canonical Voices

Victor Palau

First of all, I wanted to recommend the following recipe from Digital Ocean on how to roll out your own Docker Registry on Ubuntu 14.04. As with most of their guides, it is super easy to follow.

I also wanted to share a small improvement on the recipe to include a UI front-end to the registry.

Once you have completed the recipe and have a registry secured and running, you can extend your docker-compose file to look like this:

nginx:
  image: "nginx:1.9"
  ports:
    - 443:443
    - 8080:8080
  links:
    - registry:registry
    - web:web
  volumes:
    - ./nginx/:/etc/nginx/conf.d:ro

web:
  image: hyper/docker-registry-web
  ports:
    - 8000:8080
  links:
    - registry
  environment:
    REGISTRY_HOST: registry

registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - ./data:/data

You will also need to include a configuration file for the web service in the nginx folder.

file: ~/docker-registry/nginx/web.conf

upstream docker-registry-web {
  server web:8080;
}

server {
  listen 8080;
  server_name [YOUR DOMAIN];

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/domain.crt;
  ssl_certificate_key /etc/nginx/conf.d/domain.key;

  location / {
    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;

    proxy_pass http://docker-registry-web;
    proxy_set_header Host $http_host;                       # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr;                # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
  }
}

Run docker-compose up and you should have an SSL-secured UI frontend on port 8080 (https://yourdomain:8080/).
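
As a quick smoke test of the registry behind it (a sketch: yourdomain stands for your registry's domain, test-image is an arbitrary name, and the credentials are the ones created with htpasswd in the recipe):

$ docker login yourdomain
$ docker pull ubuntu:14.04
$ docker tag ubuntu:14.04 yourdomain/test-image
$ docker push yourdomain/test-image
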
If you have any improvement tips I am all ears!


Read more
Colin Ian King

What's new in stress-ng 0.06.07?

Since my last blog post about stress-ng, I've pushed out several more small releases that incorporate new features and (as ever) a bunch more bug fixes. I've been eyeballing gcov kernel coverage stats to find more regions in the kernel where stress-ng needs to exercise. Also, testing on a range of hardware (arm64, s390x, etc.) and a range of kernels has eked out some bugs and helped me to improve stress-ng. So what's new?

New stressors:

  • ioprio - exercises ioprio_get(2) and ioprio_set(2) (I/O scheduling classes and priorities)
  • opcode - generates random object code and executes it, generating and catching illegal instructions, bus errors, segmentation faults, traps and floating point errors.
  • stackmmap - allocates a 2MB stack that is memory mapped onto a temporary file. A recursive function works down the stack and flushes dirty stack pages back to the memory mapped file using msync(2) until the end of the stack is reached (stack overflow). This exercises dirty page and stack exception handling.
  • madvise - applies random madvise(2) advise settings on pages of a 4MB file backed shared memory mapping.
  • pty - exercise pseudo terminal operations.
  • chown - trivial chown(2) file ownership exerciser.
  • seal - fcntl(2) file SEALing exerciser.
  • locka - POSIX advisory locking exerciser.
  • lockofd - fcntl(2) F_OFD_SETLK/GETLK open file description lock exerciser.
Improved stressors:
  • msg: add in IPC_INFO, MSG_INFO, MSG_STAT msgctl calls
  • vecmath: add more ops to make vecmath more demanding
  • socket: add --sock-type socket type option, e.g. stream or seqpacket
  • shm and shm-sysv: add msync'ing on the shm regions
  • memfd: add hole punching
  • mremap: add MAP_FIXED remappings
  • shm: sync, expand, shrink shm regions
  • dup: use dup2(2)
  • seek: add SEEK_CUR, SEEK_END seek options
  • utime: exercise UTIME_NOW and UTIME_OMIT settings
  • userfaultfd: add zero page handling
  • cache:  use cacheflush() on systems that provide this syscall
  • key:  add request_key system call
  • nice: add some randomness to the delay to unsync niceness changes
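
Each stressor is enabled with a matching --<name> N option (N being the number of worker processes), so a quick way to exercise a few of the new ones might be:

$ stress-ng --ioprio 2 --opcode 2 --pty 2 --chown 2 --timeout 60s --metrics-brief
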
If any new features land in Linux 4.8 I may add stressors for them, but for now I suspect that's about it for the big changes to stress-ng for the Ubuntu Yakkety 16.10 release.

Read more
David Callé

Snapcraft 2.12 is here and is making its way to your 16.04 machines today.

This release takes Snapcraft to a whole new level. For example, instead of defining your own project parts, you can now use and share them from a common, open repository. This feature was already available in previous versions, but it is now much more visible: the parts repository is searchable and locally cached.

Without further ado, here is a tour of what’s new in this release.

Commands

2.12 introduces ‘snapcraft update’, ‘search’ and ‘define’, which bring more visibility to the Snapcraft parts ecosystem. Parts are pieces of code for your app that can also help you bundle libraries, set up environment variables and handle other tedious tasks app developers are familiar with.

They are literally parts you aggregate and assemble to create a functional app. The benefit of using a common tool is that these parts can be shared amongst developers. Here is how you can access this repository.

  • snapcraft update : refresh the list of remote parts
  • snapcraft search : list and search remote parts
  • snapcraft define : display information and content about a remote part
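
For instance, a session exercising all three might look roughly like this (output abridged; the curl part is used as an example):

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
PART NAME  DESCRIPTION
curl       A tool and a library (usable from many languages) for client side URL tra...
$ snapcraft define curl
Maintainer: 'Sergio Schvezov <sergio.schvezov@ubuntu.com>'
Description: 'A tool and a library (usable from many languages) for client side URL transfers...'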


To get a sense of how these commands are used, have a look at the example above, then you can dive into the details of what we mean by “ecosystem of parts”.

Snap name registration

Another command you will find useful is the new ‘register’ one. Registering a snap name reserves the name on the store.

  • snapcraft register

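A minimal sketch of its use (‘my-cool-app’ is just the example name from the walkthrough below):

$ snapcraft register my-cool-app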

As a vendor or upstream, you can secure snap names when you are the publisher of what most users expect to see under that name.

Of course, this process can be reverted and disputed. Here is what the store workflow looks like when I try to register an already registered name:

[Screenshot: snap name registration page]

On the name registration page of the store, I’m going to try to register ‘my-cool-app’, which already exists.

[Screenshot: registration failed, name already registered]

I’m informed that the name has already been registered, but I can dispute this or use another name.

[Screenshot: snap name dispute form]

I can now start a dispute process to retrieve ownership of the snap name.

Plugins and sources

Two new plugins have been added for parts building: qmake and gulp.

qmake

The qmake plugin has been requested since the beginning of the project, and we have seen many custom versions written to fill this gap. Here is what the default qmake plugin allows you to do (see the sketch after the list):

  • Pass a list of options to qmake
  • Specify a Qt version
  • Declare a list of .pro files to pass to the qmake invocation
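
A hedged sketch of a part using it (options, qt-version and project-files are the plugin's spellings of the three capabilities above; the part and .pro file names are made up):

parts:
    my-qt-app:
        plugin: qmake
        source: .
        qt-version: qt5
        options:
            - CONFIG+=release
        project-files:
            - my-qt-app.pro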

gulp

The hugely popular nodejs builder is now a first-class citizen in Snapcraft. It inherits from the existing nodejs plugin and allows you to (a short sketch follows the list):

  • Declare a list of gulp tasks
  • Request a specific nodejs version
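
For example (gulp-tasks and node-engine are the plugin's options for the two capabilities above; the task name and engine version are illustrative):

parts:
    webapp:
        plugin: gulp
        source: .
        gulp-tasks:
            - build
        node-engine: 4.4.5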

Subversion

SVN is still a major version control system, and thanks to Simon Quigley from the Lubuntu project, you can now use svn: URIs in the source field of your parts.
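
For instance (the repository URL is a placeholder):

parts:
    my-part:
        plugin: autotools
        source: svn://example.com/project/trunk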

Highlights

Many other fixes made their way into the release, with two highlights:

  • You can now use hidden .snapcraft.yaml files
  • ‘snapcraft cleanbuild’ now creates ephemeral LXC containers and won’t clutter your drive anymore

The full changelog for this milestone is available here and the list of bugs in sight for 2.13 can be found here. Note that this list will probably change before the next release, but if you have a Snapcraft itch to scratch, it’s a good list to pick your first contribution from.

Install Snapcraft

On Ubuntu

Simply open up a terminal with Ctrl+Alt+T and run these commands to install Snapcraft from the Ubuntu archives on Ubuntu 16.04 LTS:

sudo apt update
sudo apt install snapcraft

On other platforms

Get the Snapcraft source code ›

Get snapping!

There is a thriving community of developers who can give you a hand getting started or unblock you when creating your snap. You can participate and get help in multiple ways.

Read more
David Barth

Cordova Ubuntu Update

A few weeks ago we participated in Phonegap Day EU 2016. It was a great opportunity to meet with the Cordova development team and the app developers gathered for the occasion.

We demoed the latest Ubuntu 16.04 LTS release, running on a brand new BQ M10 tablet in convergence mode. It was really interesting to talk with app developers. Creating responsive user interfaces is already a common topic for web developers, and for Cordova developers by extension.

On the second day, we hosted a workshop on developing Ubuntu applications with Cordova and popular frameworks like Ionic. Alexandre Abreu also showed his new cordova-plugin-ble-central for Ubuntu. This one lets you connect an IoT device, like one of those new RPi boards, directly to an Ubuntu app using the Bluetooth Low Energy stack. Snappy, Ubuntu and Cordova all working together!

Last but not least, we started the release process for cordova-ubuntu 4.3.4. This is the latest stable update to the Ubuntu platform support code for Cordova apps. It comes along with a set of documentation updates available here and on the upstream Cordova doc site.

We've made a quick video to summarize this and walk you through the first steps of creating your own Ubuntu app using Cordova. You can now watch it at: https://www.youtube.com/watch?v=ydnG7wVrsW4

Let us know about your ideas: we're eager to see what you can do with the new release and plugins.

Read more
Luca Paulina

Juju GUI 2.0

Juju is a cloud orchestration tool which enables users to build environments in which to run applications, from a simple WordPress blog to a complex big data platform. Juju is primarily a command line tool, but it also has a graphical user interface (GUI) where users can choose services from a store, assemble them visually, build relations and configure them with the service inspector.

Juju GUI allows users to:

  • Add charms and bundles from the charm store
  • Configure services
  • Deploy applications to a cloud of their choice
  • Manage charm settings
  • Monitor environment health

Over the last year we’ve been working on a redesign of the Juju GUI. This redesign project focused on improving four key areas, which also acted as our guiding design principles.

1. Improve the functionality of the core features of the GUI

  • Organised similar areas of the core navigation to create a better UI model.
  • Reduced the visual noise of the canvas and the inspector to help users navigate complex environments.
  • Introduced a better flow between the store and the canvas to aid adding services without losing context.
[Before/after screenshots: empty state of the canvas, integrated store, and Apache charm details]

2. Reduce cognitive load and pace the user

  • Reduced the amount of interaction patterns to minimise the amount of visual translation.
  • Added animation to core features to inform users of the navigation model in an effort to build a stronger concept of home.
  • Created a symbiotic relationship between the canvas and the inspector to help navigation of complex environments.
[Before/after screenshots: Mediawiki deployment]

3. Provide an at-a-glance understanding of environment health

  • Prioritised the hierarchy of status so users are always aware of the most pressing issues and can discern which part of the application is affected.
  • Made it easier to navigate to units with a negative status, to aid the user in triaging issues.
  • Used the same visual patterns throughout the web app so users can spot problematic issues.
[Before/after screenshots: Mediawiki deployment with errors]

4. Surface functions and facilitate task-driven navigation

  • Established a new hierarchy based on key tasks to create a more familiar navigation model.
  • Redesigned the inspector from the ground up to increase discoverability of inspector led functions.
  • Simplified the visual language and interaction patterns to help users navigate at-a-glance and with speed to triage errors, configure or scale out.
  • Surfaced relevant actions at the right time to avoid cluttering the UI.
[Before/after screenshots: inspector home, errors and config views]

The project has been amazing, we’re really happy to see that it’s launched and are already planning the next updates.




Read more
Luca Paulina

Design in the open

As the Juju design team grew, it was important to review our working process and see if we could improve it to create a more agile working environment. The majority of employees at Canonical work distributed around the globe; for instance, the Juju UI engineering team stretches from Tasmania to San Francisco. We also work on an extremely technical product, where feedback is crucial to our velocity.

We identified the following aspects of our process which we wanted to improve:

  • We used different digital locations for storing our design outcomes and assets (Google Drive, Google Sites and Dropbox).
  • The entire company used Google Drive so it was ideal for access, but its lacklustre performance, complex sharing options and poor image viewer meant it wasn’t good for designs.
  • We used Dropbox to store iterations and final designs but it was hard to maintain developer access for sharing and reference.
  • Conversations and feedback on designs in the design team and with developers happened in email or over IRC, which often didn’t include all interested parties.
  • We would often get feedback from teams after sign-off, which would cause delays.
  • Decisions weren’t documented so it was difficult to remember why a change had been made.

Finding the right tool

I’ve always been interested in the concept of designing in the open. Benefits of the practice include being more transparent, faster and more efficient; it also gives the design team more presence and visibility across the organisation. Kasia (Juju’s project manager) and I went back and forth on which products to use and eventually settled on GitHub (GH).

The Juju design team works in two week iterations and at the beginning of a new iteration we decided to set up a GH repo and trial the new process. We outlined the following rules to help us start:

  • Issues should be created for each project.
  • All designs/ideas/wireframes should be added inline to the issues.
  • All conversations should be held within GH, no more email or IRC conversations, and notes from any meetings should be added to relevant issues to create a paper trail.

Reaction

As the iteration went on, feedback started rolling in from the engineering team without us requesting it. A few developers mentioned how cool it was to see how the design process unfolded. We also saw a lot of improvement in the Juju design team: it allowed us to collaborate more easily and it was much easier to keep track of what was happening.

At the end of the trial iteration, during our clinic day, we closed completed issues and uploaded the final assets to the “code” section of the repo, creating a single place for our files.

After the first successful iteration we decided to carry this on as a permanent part of our process. The full range of benefits of moving to GH are:

  • Most employees of Canonical have a GH account and can see our work and provide feedback without needing to adopt a new tool.
  • Project management and key stakeholders are able to see what we’re doing, how we collaborate, why a decision has been made and the history of a project.
  • Provides us with a single source for all conversations which can happen around the latest iteration of a project.
  • One place where anyone can view and download the latest designs.
  • A single place for people to request work.

Conclusion

As a result of this change our designs are more accessible, which allows developers and stakeholders to comment and collaborate with the design team, aiding our agile process. Below is an example thread where you can see how GH is used in the process. It shows how we designed the new contextual service block actions.

[Screenshot: GH conversation thread]

Read more
Benjamin Zeller

New Ubuntu SDK Beta Version

A few days ago we released the first Beta of the Ubuntu SDK IDE using the LXD container solution to build and execute applications.

The first reports were positive, however one big problem was discovered pretty quickly:

Applications would not start on machines using the proprietary Nvidia drivers. The reason for this is that indirect GLX is not allowed by default when using those drivers. The applications need to have access to:

  1. The GLX libraries for the currently used driver
  2. The DRI and Nvidia device files

Luckily the Snappy team had already tackled a similar problem, so thanks to Michael Vogt (a.k.a. mvo) we had a first idea of how to solve it: reuse the Nvidia binaries and device files from the host by mounting them into the container.

However, it is a bit more complicated in our case, because once we have the devices and directories mounted into the containers, they stay there permanently. This is a problem because the Nvidia binary directory carries a version number, e.g. /usr/lib/nvidia-315, which changes with the currently loaded module. The container would either fail to boot after a driver change, once the old directory on the host is gone, or use the wrong Nvidia directory if it was not removed from the host.

The situation gets worse with Optimus graphics cards, where the user can switch between an integrated and a dedicated graphics chip, which means device files in /dev can come and go between reboots.

Our solution to the problem is to check the integrity of the containers on every start of the Ubuntu SDK IDE and if problems are detected, the user is informed and asked for the root password to run automatic fixes. Those checks and fixes are implemented in the “usdk-target” tool and can be used from the CLI as well.

As a bonus this work will enable direct rendering for other graphics chips as well; however, since we do not have access to all possible chips, there might still be special cases that we could not catch.

So please report all problems to us through the usual channels.

We have released the new tool into the Tools-development PPA where the first beta was released too. However, existing containers might not be completely fixed automatically; it is better to recreate them or fix them manually. To manually fix an existing container, use the maintain mode from the options menu and add the current user to the “video” group.

To get the new version of the IDE please update the installed Ubuntu SDK IDE package:

$ sudo apt-get update && sudo apt-get install ubuntu-sdk-ide ubuntu-sdk-tools

Read more
Sergio Schvezov

The Snapcraft Parts Ecosystem

Today I am going to be discussing parts. This is one of the pillars of snapcraft (together with plugins and the lifecycle).

For those not familiar, snapcraft’s general purpose landing page is http://snapcraft.io/, but if you are a developer and have already been introduced to this new world of snaps, you probably want to hop straight on to http://snapcraft.io/create/

If you go over this snapcraft tour you will notice the many uses of parts and start to wonder how to get started, or think that maybe you are duplicating work done by others, or even better, by an upstream. This is where the idea of sharing parts comes in, and that is exactly what we are going to go over in this post.

To be able to reproduce what follows, you’d need to have snapcraft 2.12 installed.

An overview to using remote parts

So imagine I want to use libcurl. Normally I would write the part definition from scratch and get on with my own business, but surely I might be missing out on the optimal switches used to configure the package or even build it. I would also need to research how to use the specific plugin required. So instead, I’ll see if someone has already done the work for me, hence I will,

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
PART NAME  DESCRIPTION
curl       A tool and a library (usable from many languages) for client side URL tra...

Great, there’s a match, but is this what I want?

$ snapcraft define curl
Maintainer: 'Sergio Schvezov <sergio.schvezov@ubuntu.com>'
Description: 'A tool and a library (usable from many languages) for client side URL transfers, supporting FTP, FTPS, HTTP, HTTPS, TELNET, DICT, FILE and LDAP.'

curl:
  configflags:
  - --enable-static
  - --enable-shared
  - --disable-manual
  plugin: autotools
  snap:
  - -bin
  - -lib/*.a
  - -lib/pkgconfig
  - -lib/*.la
  - -include
  - -share
  source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
  source-type: tar

Yup, it’s what I want.

An example

There are a few ways to use these parts in your snapcraft.yaml. Say this is your parts section:

parts:
    client:
       plugin: autotools
       source: .

My client part, which is using sources that sit alongside this snapcraft.yaml, will hypothetically fail to build as it depends on the curl library I don’t yet have. There are some options here to get this going: one using after in the part definition implicitly, another involving composing, and last but not least just copy/pasting what snapcraft define curl returned for the part.

Implicitly

The implicit path is really straightforward. It only involves making the part look like:

parts:
    client:
       plugin: autotools
       source: .
       after: [curl]

This will use the cached definition of the part, which may be updated over time by running snapcraft update.

Composing

What if we like the part, but want to try out a new configure flag or source release? Well we can override pieces of the part; so for the case of wanting to change the source:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        source: http://curl.haxx.se/download/curl-7.45.0.tar.bz2

And we will get to build curl, but using a newer version of its source. The trick is that the part definition here is missing the plugin entry, thereby instructing snapcraft to look up the full part definition from the cache.

Copy/Pasting

This is the path one would take to get full control over the part. It is as simple as copying the part definition we got from running snapcraft define curl into your own. For the sake of completeness, here’s how it would look:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        configflags:
            - --enable-static
            - --enable-shared
            - --disable-manual
        plugin: autotools
        snap:
            - -bin
            - -lib/*.a
            - -lib/pkgconfig
            - -lib/*.la
            - -include
            - -share
        source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
        source-type: tar

Sharing your part

Now what if you have a part and want to share it with the rest of the world? It is rather simple really, just head over to https://wiki.ubuntu.com/snapcraft/parts and add it.

In the case of curl, I would write a yaml document that looks like:

origin: https://github.com/sergiusens/curl.git
maintainer: Sergio Schvezov <sergio.schvezov@ubuntu.com>
description:
  A tool and a library (usable from many languages) for
  client side URL transfers, supporting FTP, FTPS, HTTP,
  HTTPS, TELNET, DICT, FILE and LDAP.
project-part: curl

What does this mean? Well, the part itself is not defined on the wiki, just a pointer to it with some metadata; the part is really defined inside a snapcraft.yaml living in the origin we just told it to use.

The full extent of the keywords is explained in the documentation (that is an upstream link to it).

The core idea is that a maintainer decides they want to share a part. Such a maintainer would add a description that provides an idea of what that part (or collection of parts) does. Then, last but not least, the maintainer declares which parts to expose to the world, as maybe not all of them should be. The main part is exposed through project-part and will carry a top-level name; the maintainer can expose more parts from snapcraft.yaml using the general parts keyword, and these will be namespaced under the project-part.
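
For example, extending the curl entry above to expose a second part could look like this (curl-doc is a made-up part name that would have to exist in the origin's snapcraft.yaml):

origin: https://github.com/sergiusens/curl.git
maintainer: Sergio Schvezov <sergio.schvezov@ubuntu.com>
description:
  A tool and a library (usable from many languages) for
  client side URL transfers.
project-part: curl
parts: [curl-doc]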

Read more
Steph Wilson

Meet the newest member of the Design Team, project manager Davide Casa. He will be working with the Platform Team to keep us all in check and working towards our goals. I sat down with him to discuss his background, what he thinks makes a good project manager and what his first week was like at Canonical (spoiler alert – he survived it).


You can read Davide’s blog here, and reach out to him on GitHub and Twitter at @davidedc.

Tell us a bit about your background?

My background is in Computer Science (I did a 5 year degree). I also studied for an MBA in London.

Computer science is a passion of mine. I like to keep up to date with latest trends and play with programming languages. However, I never got paid for it, so it’s more like a hobby now to scratch an artistic itch. I often get asked in interviews: “why aren’t you a coder then?” The simple answer is that it just didn’t happen. I got my first job as a business analyst, which then developed into project management.

What do you think makes a good project manager?

I think the soft skills are incredibly relevant and crucial to the role. For example: gathering what the team’s previous experience of project management was, what they expect from you, and how deeply and quickly you can change things.

Is project management perceived as a service or is there a practise of ‘thought leadership’?

In tech companies it varies. I’ve worked at Vodafone as a PM, and you felt there was a possibility to practice “thought leadership”, because it is such a huge company and things have to be dealt with in large cycles. Components and designs have to be agreed on in batches, because you can’t hand-wave your way through hundreds of changes across a dozen mission-critical modules; it would be too risky. In some other companies, less so. We’ll see how it works here.

Apart from calendars, Kanban boards and post-it notes  – what else can be used to help teams collaborate smoothly?

Indeed, one of the core values of Agile is “the team”. I think people underestimate the importance of cohesiveness in a team, e.g. how easy it is for people to step forward and make mistakes without fear. A cohesive team is something very precious, and I think that’s regularly underestimated. You can easily buy tools and licenses, which are “easy solutions” in a way. The PM should also help to improve the cohesiveness of a team, for example by creating processes that people can rely on in order to avoid attrition and resolve things, and by avoiding treating everything like a special case, so that things are dealt with “proportionally”.

What brings you to the Open Source world?

I like coding, and to be a good coder, one must read good code. With open source the first thing you do is look around to see what others are doing and then you start to tinker with it. It has almost never been relevant for me to release software without source.

Have you got any side projects you’re currently working on?

I dabble in livecoding, which is an exotic niche of people who do live visuals and sounds with code (see our post on QtDay 2016). I am also part of the Toplap collective, which works a lot along those lines too.

I also dabble in creating an exotic desktop system that runs on the web. It’s inspired by the Squeak environment, where everything is an object and is modifiable and inspectable directly within the live system. Everything is draggable, droppable and composable. For example, when a menu pops up you can change any button, both its label and the function it performs, or take any button apart and put it anywhere else on the desktop or in any open window. It all happens via “direct manipulation”. Imagine a paint application where at any time while working you can “open” any button from the toolbar and change what the actual painting operation does (John Maeda made such a paint app, actually).

The very first desktop systems all worked that way. There was no concept of a big app or “compile and run again”. Something like a text editor app would just be a text box providing functions. The functions are then embodied in buttons and stuck around the textbox, and voila, then you have your very own flavour of text editor brought to life. Also in these live systems most operations are orthogonal: you can assume you can rotate images, right? Hence by the same token you can rotate anything on the screen. A whole window for example, or text. Two rotating lines and a few labels become a clock. The user can combine simple widgets together to make their own apps on the fly!

What was the most interesting thing you’ve learned in your first week here?

I learned a lot, and I suspect that will never stop. The bread and butter here is strategy and design, which in other companies is only a small area of work. Here it is the core of everything! So it’ll be interesting to see how this ‘strategy’ works, and how the big thinking starts with the visuals or UX in mind and from there steers the whole platform. An exciting example of this can be seen in the Ubuntu convergence story.

That’s the essence of open source I guess…

Indeed. And the fact that anti-features such as DRM, banners, bloatware, compulsory registrations and basic compilers that need 4GB of installation never live long in it. It’s our desktop after all, is it not?

Read more
Steph Wilson

The Ubuntu App Design Clinic is back! This month, Design Team members James Mulholland (UX Designer), Jouni Helminen (Visual Designer) and Andrea Bernabei (UX Engineer) sat down with Dan Wood, a contributor to the OwnCloud app.

What is OwnCloud?

OwnCloud is an open source, self-hosted file sync and share platform. Access and sync your files, contacts, calendars and bookmarks across your devices.

You can contribute to it here.

We covered:

  • First use case – the first point of entry for the user, maybe a file manager or a possible tooltip introduction.
  • Convergent thinking – how the app looks across different surfaces.
  • Top-level navigation – using the header to display actions, such as settings.
  • Using Online Accounts to sync other accounts to the cloud.
  • Using sync frequency or instant syncing.

If you missed it, or want to watch it again, here it is:

The next App Design Clinic is yet to be confirmed. Stay tuned.

 

Read more
pitti

I don’t want to criticize the outcome of the UK’s EU referendum — first of all I’m not wiser than everyone else, and second, in a democracy you always have the right to decide either way. Freedom absolutely includes the freedom to hurt yourself and make bad decisions (note, I’m explicitly not saying — or even knowing! — which is which!).

What concerns me, though, is how the course of political debates at large, and this referendum in particular, has been going. Real political debates and consensus finding are the essence of democracy, but they essentially stopped many years ago in the US already, with the two major parties just talking/swearing about each other but no longer with each other, and every little proposal getting ridiculously blown up into a crusade. The EU is of course not exempt from this in general, although for most day-to-day political work it’s much more moderate, as most states have proportional instead of majority voting, which enforces coalitions and thus compromises at an institutional level. But the very same bad dispute style immediately came to the surface with the Brexit referendum — the arguments have been highly emotional, misleading, populistic, and often outright lies, like £50M a day (it’s just a third of that, and the ROI is enormous!), or the visa issue for Turkey. This causes voting to be based on stirred emotions, false information, whoever shouts the loudest, and which politician of the day you really want to slap in the face, instead of voting rationally on the actual matter at hand and the best long-term path.

But we have a saying in Germany: “Nichts wird so heiß gegessen wie es gekocht wird”, which translates as “Nothing gets eaten as hot as it gets cooked”. In the end, the EU treaties are all just paper, and as long as enough people agree, the rules have been, and will be, bent/ignored/adjusted. And dear UK, you of all people should know this ☺ (SCNR). So while today emotions are high, bank charts look crazy, and some colleagues are worrying about their employment in the UK, there’s nothing more reliable than human nature — all of this will eventually be watered down, procrastinated over, and re-negotiated during the next two (haha, maybe ten) years.

If this has taught us anything, though: this looks like yet another example of a bad application of direct democracy. In my opinion representative democracy is the better structure for such utterly complex and rather abstract topics, which we can’t in good faith expect the general populace to understand. This isn’t meant to sound derogatory — it’s just a consequence of a highly developed world with an extreme degree of division of labour. You don’t propose (I hope!) a referendum on how to build a bridge, airplane turbine, pacemaker, or OS kernel; we educate, train, and pay specialists for that. But for the exact same reason we have professional politicians who have the time to think about, negotiate, and understand complex issues like EU treaties, and what their benefits and costs are. That said, direct democracy certainly has its place for issues that you can expect the general populace to have a qualified opinion on: should we rather build a highway or ten kindergartens? Do you want both for 3% more taxes? Should smoking be prohibited in public places? So the tricky question is how to tell these apart, and who decides that.

Read more
David Callé

As of today, and as part of our weekly release cadence, a new snapd is making its way to your 16.04 systems. Here is what’s new!

Command line

  • snap interfaces can now give you a list of all snaps connected to a specific interface.
  • Introduction of snap run <app.command>, which provides a clean and simple way to run commands and hooks for any installed revision of a snap. As of writing this post, to try it you need to wait for a newer core snap to be promoted to the stable channel, or alternatively switch to the beta channel with snap refresh --channel=beta ubuntu-core (see the sketch below).
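
A hedged sketch of what that could look like once available (the snap and command names are made up; the refresh command is the one quoted above):

$ snap refresh --channel=beta ubuntu-core
$ snap run my-snap.my-command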

Ecosystem

  • Enable full confinement on Elementary 0.4 (Loki)
  • If a distribution doesn’t support full confinement through AppArmor and seccomp, snaps are installed in devmode by default.

Misc

  • Installing the core snap will now request a restart
  • REST API: added support to send apps per snap, to allow finer-grained control of snaps from the Software center.

Have a look at the full changelog for more details.

What’s next?

Here are some of the fixes already lined up for the next snapd release:

  • New interfaces to enable more system access for confined snaps, such as “camera”, “optical-drive” and “mpris”. This will give a lot more latitude for media players (control through the mpris dbus interface, playing DVDs, etc.) and communication apps. You can try them now by building snapd from source.
  • Better handling of snaps on NVIDIA graphics
  • And much more to come: watch the new Snapcraft social channels (Twitter, Google+, Facebook) for updates!

Read more
facundo


True Blood is a middling American series (I like it, but it's nothing special).

As the Wikipedia page about the show says, "its plot centers on a conservative Louisiana town called Bon Temps, and on how its people must adapt to and face the changes that have taken place in society since vampires came out into the open, and in particular on how the creatures of the night and their world affect the life of a telepathic waitress named Sookie Stackhouse".

But I don't really want to talk about the series itself, rather about its intro, which is fantastic. Before going on, take a minute and a half, click here, and watch it (and listen to it!).

(... I'll wait for you to come back ...)

As you saw, it is dense, thick, with many images in a short time, and I got curious about the images that flash by.

So I took the intro, split it into frames, and extracted one (and only one) image from each video sequence (some sequences last a couple of seconds, others half a second, and so on).

I liked the result so much that I saved the images, and you can see them here.

Read more
Daniel Holbach

It takes a special kind of person to enjoy being among the first in a new community. It’s a time when there’s a lot of empty canvas, wide landscapes to uncover, lots of dragons still on the map; I guess you already see what I mean. It takes some pioneer spirit to feel comfortable when the rules are not all figured out yet and stuff is still a bit harder than it should be.

The most recent occasion where I saw this live was the Snappy Playpen, a project where all the early snap contributors hang out, figure out problems, document best practices and have fun together.

We use GitHub and Gitter/IRC to coordinate things. We have been going for a bit more than two weeks now and I’m quite happy with where we’ve got: about 60 people in the Gitter channel, more than 30 snaps contributed, and about the same number or more in the works.


But it’s not just the number of snaps. It’s also the level of helping each other out and figuring out bigger problems together. Here are just a (very) few examples:

  • David Planella wrote a common launcher for GTK apps, and we could move snaps like leafpad, galculator and ristretto off their own custom launchers today. It’s available as a wiki part, so it’s quite easy to consume.
  • Simon Quigley and Didier Roche figured out better contribution guidelines and moved the existing snaps to use them instead.
  • With new interfaces landing in snapd, it was nice to see how they were picked up in existing snaps and formerly existing issues resolved. David Callé for example fixed the vlc and scummvm snaps this way.
  • Sometimes it takes perseverance to get your snap landed. It took Andy Keech quite a while to get imagemagick (both stable and from git) to build and work properly, but thanks to Andy’s hard work and collaboration with the Snapcraft developers they’re included now.
  • The docs are good, but they don’t cover all use-cases yet and we’re finding new ways to use the tools every day.

As I said earlier: it takes some pioneer spirit to be happy in such circumstances, and all the folks above (and many others) have been working as a team in the last days. For me, as somebody who’s supporting the project, this was very nice to see, particularly seeing people from all over the open source spectrum (users of cloud tools, GTK and Qt apps, Python scripts, upstream developers, Java tools and many more).

Tomorrow we are going to have our kickoff event for week 3 of the Snappy Playpen. As I said in the mail, one area of focus is going to be server apps and Electron-based apps, but feel free to bring whatever you enjoy working on.

I’d like to thank each and every one of you who is participating in this initiative (not just the people who committed something). The atmosphere is great, we’re solving problems together, and we’re excited to bring a more complete, easier to digest and better to use snap experience to new users.

Read more
Dustin Kirkland



I had the honor and privilege, a couple of weeks ago, of participating in a recording of The Changelog, a podcast dedicated to open source technology.

You can listen to it here.

These guys -- Jerod and Adam -- produce a fantastic show, and we covered a lot of ground!

Give it a listen, and follow the links at the bottom of their page (their site is hosted on Ubuntu, of course!) to learn more.

Cheers!
Dustin

Read more
Daniel Holbach

Week 3 of the Snappy Playpen

Next week we're going into the third week of the Snappy Playpen, an initiative to snap software together, learn from each other, document best practices and get together as a team.

Get started with Snappy

The Snappy Playpen is hosted on GitHub and we meet in both #snappy on Freenode and our Gitter channel. We are hanging out there most of the time, but next week on Tuesday, 21st June, we will get all the experts in one room and together we will make a push to get both snapped. Obviously you can bring whatever app of your own you are interested in. Particularly if you are an upstream of a project, we're keen to help you get started.

Snaps are a beautiful and simple way to get your app out to users, so let's make this happen together.

If you are curious and want to take a first look, go to https://snapcraft.io and we'll take care of the questions together.

  • WHAT: Snappy playpen sprint
  • WHEN: Tuesday, 21st June 2016 all day
  • WHERE: Join us on gitter or IRC

Read more
Andrea Bernabei

QtDay is the only Italian event dedicated to Qt. It is held yearly by Develer and brings together companies whose products are developed using Qt, as well as Qt developers and customers who want the latest developments and solutions in the Qt world. This year the conference was held in Florence, where I was lucky enough to attend and present.

I had previously attended the 2011, 2012 and 2014 QtDay events whilst I was studying Computer Science at the University of Pisa. This year it was different, because Develer invited me to give a talk about Ubuntu and Qt. The funny thing was that I was already planning on sending my presentation to the Call for Proposals anyway! So I was already prepared.

What I do at Ubuntu

My role at Canonical is UX Engineer: basically a developer acting as the bridge between designers and engineers. It is a pretty cool job, and I’m very lucky to be part of such an energetic team.

Over the last year there was a strong push in both the Design and Engineering teams working on Ubuntu Touch to finalize and deliver the convergent experience. This was a great opportunity to spread the word about how to develop convergent apps for the new Ubuntu platform, and get developers interested in where we are and where we are heading.

My talk – “Standing on the shoulders of giants: developing Ubuntu convergent apps with Qt”

When I first thought about giving the presentation, I decided it would only be about the current state of the UI components provided by Ubuntu SDK, with a strong focus on their “convergent” features, and how to use them to realize your convergent apps. However, as time went by I realized it would have been more interesting for developers to also get some context about the platform itself, and how to best integrate their apps with the platform.

By the time QtDay arrived, the presentation had almost doubled in size! You can find it here.

A slideshow or an app? How about both!

This is a detail the geeks in the audience might be interested in…I thought it would be neat to talk about the development of Qt/QML apps and use the same framework to implement the presentation as well!

That’s right, the presentation is actually a QML application that uses (a modified version of) the QML presentation system available as a Qt Labs add-on. Having the power of Qt underneath your presentation means you can do pretty much anything. You’re not tied to the boundaries set by the “standard” presentation systems (such as Beamer, LibreOffice Impress, Microsoft PowerPoint, etc.) anymore!

In my case, I exploited that to implement a live-coding view as a pull-down layer that you can open on-demand whenever you want by using keyboard shortcuts. Once the livecoding view is open, you can write code (or use the code automatically provided when you’re at one of the special “Livecoding!” slides) in the text editor on the left side and see the result in the right side in real time without leaving or interrupting your presentation. You can also increase or decrease the font size, and there’s also a sparkling particle effect that follows the cursor to make it easier for the audience to follow your changes to the text. That’s only one of the things you can do once you replace your “standard” presentation with a full featured application. If you’re a software developer then I highly recommend giving it a try for your next presentation!

The source code and the PDF version of the presentation are available here, and my fork of the QML presentation system is available here.

And here’s a screenshot of the livecoding view in action (sparkling particle effect included) :)

[Screenshot: the livecoding view in action]

The morning

The conference was held in the middle of Florence at the Hotel Londra Conference Centre. It was quite a nice location I have to say! Very easy to reach as it is very close to the main railway station, Santa Maria Novella.

My talk was in the first time slot after the main keynote, which was good because, you know, that meant I could relax and enjoy the rest of the day!

[Photo from the talk]

I started by giving an overview of the current state of Ubuntu and the fact that it’s doing great in the Cloud field. Ubuntu can now scale to run on IoT devices as well as phones, tablets, notebooks, servers and Clouds.

I then presented the concept of convergence and how the UI components provided by the Ubuntu SDK can be best utilised to create great convergent apps, including some livecoding. Livecoding is fun because it gives a pragmatic idea of how to go from theory to practice, and also keeps the attendees awake, because they know things can go wrong at any moment (demo effect) and they enjoy that, for some reason :)

After the UI components section, I went on to talk about platform integration topics such as app lifecycle management, app isolation features, and integration with the Content Hub, which is the secure way to share data between applications.

I then briefly talked about internationalization and how to publish your application on the Ubuntu Store (it’s very easy!)

For this occasion, I brought with me a BQ M10 tablet, the convergent Ubuntu tablet that we released just a few months ago! I connected it to a Bluetooth mouse and keyboard, and set it up on a table for people to try, and lots of people played with it. After the talk it was exciting to see the audience’s interest in the whole convergence story.

The other talks during the morning were very interesting as well; I particularly enjoyed Marco Piccolino’s “A design system and pattern libraries can significantly improve the quality of your UI” (find the slides here).

And then it came to lunchtime…

Food…Italian food

The food was great and, coming from the UK, I enjoyed it even more. Big kudos to Develer (the company behind the event) for finding such a good catering company!

Here’s a pic of the goodies available during coffee breaks. Mmmm…

[Photo: goodies at the coffee break]

Afternoon talks

The afternoon talks were as interesting as the morning ones. Marco Arena, from the Italian C++ Community, gave a talk about QCustomPlot, which is a library to draw graphs and plots using Qt (slides here).

If you’re interested in Virtual Reality, BCI (Brain Computer Interface) and machine learning, make sure you check out the slides of Sebastiano Galazzo’s talk (once they’re available, at that page). His project involves manipulating what the user sees in a Google Cardboard by reading his/her brain waves to interpret emotions. Pretty neat.

Stefano Cordibella’s presentation was about his experience optimizing the startup time of an application running on embedded hardware (using Qt4). They exploited the power of the QtQuick Loader component and other QML tricks to decrease the loading time of the application. Check his slides out if you’re interested in QML optimization; I’m sure you’ll find them useful.

The final talk I attended was more of a roundtable about how to contribute to the development of Qt itself, led by Giuseppe D’Angelo, who has the role of “Approver” in the Qt Project Open Governance Model.

As a result of attending that roundtable, not only did I start contributing to Qt (see the changes I contributed here), but I also improved the Qt Wiki Contribution Guidelines so that it will be easier for other people to start contributing. The power of open source and open governance! :)

The closing talk also included a raffle, where a couple of developers won an embedded devboard sponsored by Atmel. I’ve been quite lucky with Qt-related raffles in the past, but this wasn’t one of those days, oh well :)

[Photo from the closing talk]

Closing remarks

What a great day it was. I want to thank Develer for organizing the conference, and the guys from the Community team (Alan Pope, David Planella, Daniel Holbach) and Zsombor Egri from the SDK team at Canonical for providing feedback and ideas for the presentation.

It was also great to see so many people interested in the convergence story and in the M10 tablet. The technology has great potential and it’s our job to make the best of it :)

See you all at the next QtDay!

Note: the pictures are courtesy of Develer’s QtDay Facebook page.

Read more
liam zheng

After a long development process, we are happy to announce that the next version of the Ubuntu SDK IDE enters the Beta phase today. The new version comes with a completely new builder and runtime backend that finally removes the biggest problems the SDK IDE currently has.

The next iteration of the Ubuntu SDK IDE

In short: LXD is here

There have been rumors that a new LXD-based builder would replace the schroot-based one. Those rumors are true. After a period of internal testing of the proof-of-concept version by a few trusted testers, we think the time has come to show the new version of the IDE to a broader audience.

Before jumping straight to the new packages, let's review some of the reasons why we had to drop the schroot-based builders:

The biggest problem was certainly creating a new chroot right after installing the SDK. Bootstrapping a complete Ubuntu root filesystem from the live archive is very slow and error prone. Whenever there was a packaging problem in the archive or the overlay PPA, it was impossible to create new build targets, which essentially made the SDK unusable until the packaging problem was fixed. LXD solves this: new containers are downloaded as ready-made compressed image files, much faster than before, and the resulting container is guaranteed to work, because we test the images before releasing them instead of them changing constantly like the overlay PPA. Once downloaded, an image is cached, and starting a new container from the cache takes just seconds!

The second problem worth highlighting is that we need to execute applications locally on the desktop while still supporting all currently supported Ubuntu versions. That means dealing with different Qt and UITK versions. We once tried to solve this by providing separate Qt+UITK packages, but it turned out that this approach would require patching and rebuilding far too many packages, so it was not feasible. Moreover, this is not only a build-time problem but a runtime problem as well. So how can we run apps on the desktop with the latest and greatest components while keeping LTS compatibility?

The answer is actually quite simple: use a container as the runtime target and show the UI on the host's X server.

On top of that there were problems like overall slowness, leaking mountpoints (everyone who ever ended up with hundreds of mounts because of schroot knows what I mean), and issues with ecryptfs.

Enough about the past; let's talk about the future and what has changed in the new version. Before we start, note that we have dropped support for the default desktop kit. Building and running apps directly on the host is no longer supported by default. The SDK IDE will not create desktop run configurations other than those automatically created by the qmake and cmake plugins. Of course, it is still possible to build and run apps on the host, but the run configurations have to be created manually. From now on, applications are executed in a container that matches the host architecture, which means almost no extra packages need to be installed on the host system as dependencies.

The IDE will no longer use any of the existing schroot-based builders. The click chroots will remain on the host, but they are decoupled from the Ubuntu SDK IDE.

Getting started

It is quite simple; we just add the SDK release and the Tools Development PPA for the Ubuntu SDK tools:

sudo add-apt-repository ppa:ubuntu-sdk-team/tools-development

sudo apt update && sudo apt install ubuntu-sdk-ide

After that, the IDE is fully usable. It discovers containers the same way it used to discover click chroots, so in most respects the developer experience does not change much. Note that we are still in Beta, so it is quite possible that the container images or the IDE itself still have some bugs. Please report bugs to us directly on IRC or via the mailing list, or even better through the official ubuntu-sdk-ide project on Launchpad: https://bugs.launchpad.net/ubuntu-sdk-ide

Known issues and troubleshooting

lxd group membership

Normally the LXD installation process configures the necessary group membership. If it did not, we need to make sure the current user belongs to the lxd group. Issue the following command:

sudo usermod -a -G lxd `whoami`

Afterwards, log out and log back in so the session picks up the new group.

Resetting QtCreator settings

Sometimes, when switching back and forth between versions, the settings of QtCreator (the Qt application underlying the Ubuntu SDK IDE) get corrupted. When you come across broken or unusable kits, devices that seem misconfigured, or anything else unusual, pressing the reset button on QtCreator may help. Note that this is a fairly drastic fix. It is as simple as executing this command:

$ rm ~/.config/QtProject/qtcreator ~/.config/QtProject/QtC*

Cleaning up the old click chroots

As mentioned before, the old schroots are decoupled from the SDK IDE but remain on the filesystem. The click chroots can be cleaned up with the following commands:

$ sudo click chroot -a armhf -f ubuntu-sdk-15.04 destroy

$ sudo click chroot -a i386 -f ubuntu-sdk-15.04 destroy

These two commands will free roughly 1.4GB of disk space. The click chroots live under /var/lib/schroot/chroots; it is a good idea to check that this folder is empty and nothing is mounted there anymore:

$ mount|grep schroot

NVIDIA graphics drivers

On hosts using the proprietary NVIDIA graphics drivers, apps cannot currently be deployed locally from the LXD container. If the host has dual GPUs, one workaround is to use the other GPU.

Check whether the system has an alternative graphics card:

$ sudo lshw -class display

If the list shows entries other than NVIDIA, activate the other graphics card. The prime-select tool is a simple, easy-to-use way to do that:

$ sudo prime-select intel

Note that this tool may not be installed on your system, and it does not work together with bumblebee. If the host has bumblebee installed and the prime-select tool is missing:

$ sudo apt-get remove bumblebee

$ sudo apt-get install nvidia-prime

If the host has no graphics card other than the NVIDIA one, you can try the Nouveau driver, which might work. Either way, this is a serious known issue that we are actively working on.

Starting the new IDE

First, back up some settings, just in case we need to revert to the current IDE:

$ tar zcvf ~/Qtproject.tar.gz ~/.config/QtProject

Then find the Ubuntu SDK IDE in the Dash and start it.

The Ubuntu SDK IDE first checks that the environment is set up correctly. Unless you are an LXC/LXD power user, the safe choice is to answer “Yes” in this dialog.

If the Ubuntu SDK is started for the first time, a welcome wizard opens to help you set up kits and devices.

The best advice is to read every page of the wizard and follow the instructions there; the whole process is fairly straightforward.
On the next page, the wizard helps you create kits.

Press the “Create new Kit” button to see the target creation dialog.

In this step, you can choose between three types of targets:

  • Build to run on the desktop – filters all images compatible with the desktop
  • Build to run on device or emulator – filters all images usable on devices
  • Show all available images – shows all available images

We choose “Show all available images” to get an overview of all existing images.

Next, select your preferred target architecture. Ubuntu phones and tablets are armhf, while host PCs are i386 or amd64. So, to create click packages for the phone you need an armhf target, and to test applications on the desktop you need a native amd64 or i386 target.

We can keep the default naming for the kits.

Creating LXD containers requires system administrator rights, so next we need to authenticate ourselves.

After entering the correct password, the LXD image download begins.

The download takes a while, depending on your network bandwidth; each image is around 400MB. While the wizard downloads and configures the LXD image, there is just enough time to read a short blog post about what a kit actually is: everything you ever wanted to know about kits but were afraid to ask. Without exaggeration, taking the time to read that post and understand what a kit is, is the best choice you can make.

When the container has been created, a simple dialog pops up showing some basic details.

The next page of the wizard helps you set up target devices. In our case, we already have a BQ (krillin) phone and an emulator from the rc-proposed channel.

However, it is perfectly safe to finish the wizard even without a phone, tablet or emulator device available.
At this point, the IDE discovers the LXD container automatically and offers to update it.

This step is not mandatory; it is completely fine to cancel the dialog.

After finishing the wizard, the IDE opens.

Read more
Daniel Holbach

We are in the second week of the Snappy Playpen and it’s simply beautiful to see how new folks are coming in and collaborating on getting snaps done, improving existing ones, answering questions and working together. The team spirit is strong and we’re all learning loads.

Keep up the good work everyone!

Read more

niemeyer

Over the last several months there has been noticeable and growing pain associated with the evolving integration tests around snapd, and given the project goal of being a cross-distribution platform, we are very keen on solving this problem appropriately so that stability is guaranteed everywhere.

With that mindset a more focused effort was made over the last few weeks to produce a tool that can get the project out of those problems, and onto a runway of more pleasant stability. Despite the short amount of time, I’m very happy about the Spread project which resulted from this effort.

Spread is not Jenkins or Travis, and is not a language or library either. Spread is a tool that will very conveniently ship your code to one or more systems, in parallel, and then offer the right set of options so you can run whatever you need to run to make sure the logic is working, and drive it all from the local system. That implies you can run Spread inside Travis, Jenkins, or your terminal, in a similar way to how your unit tests work.

Here is a short list of interesting facts about Spread:

  • Full-system tests with on demand machine allocation.
  • Multi-backend with Linode and LXD (for local runs) out of the box for now.
  • Multi-language since it can run arbitrary remote code.
  • Agent-less and driven via embedded ssh (kudos to Go team).
  • Convenient harness with project+backend+suite+test prepare and restore scripts.
  • Variants feature for test duplication without copy & paste.
  • Great debugging support – add -debug and stop with a shell inside every failure.
  • Reuse of servers – server allocation is fast, but not allocating is faster.
  • Reasonable test outputs with the shell’s +x mode on failures.
  • … and so forth.

This is all well documented, so I’ll just provide one example here to offer a real taste of how the system feels.

This is spread.yaml, put in the project root to define the basics:

project: spread

backends:
    lxd:
        systems:
            - ubuntu-16.04
            - ubuntu-14.04

path: /home/test

prepare: |
    echo Entering project...
restore: |
    echo Leaving project...

suites:
    tests/: 
        summary: Integration tests
        prepare: |
            echo Entering suite...
        restore: |
            echo Leaving suite...

The suite name is also the path under which the tests are found.

Then, this is tests/hello/task.yaml:

summary: Greet the world
prepare: |
    echo "Entering task..."
restore: |
    echo "Leaving task..."
environment:
    FOO/a: one
    FOO/b: two
execute: |
    echo "Hello world!"
    [ $FOO = one ] || exit 1

The outcome should be almost obvious (intended feature :-). The one curious detail here is the FOO/a and FOO/b environment variables. This is how to introduce variants, which means this one test will in fact become two: first with FOO=one, and then with FOO=two. Now consider that such environment variables can be defined at any level – project, backend, suite, and task – and imagine how easy it is to test small variations without any copy & paste. After cascading takes place (project→backend→suite→task) all environment variables using a given variant key will be present at once on the same execution.
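
Since variants cascade, the same FOO variants could instead be declared once at the project level in spread.yaml, turning every task in every suite into two jobs per system (a sketch of just the relevant fragment):

project: spread

environment:
    FOO/a: one
    FOO/b: two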

Now let’s try to run this configuration, including the -debug flag so we get a shell on the failures. Note how with a single test we get four different jobs, two variants over two systems, with the variant b failing as instructed:

$ spread -debug

2016/06/11 19:09:27 Allocating lxd:ubuntu-14.04...
2016/06/11 19:09:27 Allocating lxd:ubuntu-16.04...
2016/06/11 19:09:41 Waiting for LXD container to have an address...
2016/06/11 19:09:43 Waiting for LXD container to have an address...
2016/06/11 19:09:44 Allocated lxd:ubuntu-14.04.
2016/06/11 19:09:44 Connecting to lxd:ubuntu-14.04...
2016/06/11 19:09:48 Allocated lxd:ubuntu-16.04.
2016/06/11 19:09:48 Connecting to lxd:ubuntu-16.04...
2016/06/11 19:09:52 Connected to lxd:ubuntu-14.04.
2016/06/11 19:09:52 Sending project data to lxd:ubuntu-14.04...
2016/06/11 19:09:53 Connected to lxd:ubuntu-16.04.
2016/06/11 19:09:53 Sending project data to lxd:ubuntu-16.04...

2016/06/11 19:09:54 Error executing lxd:ubuntu-14.04:tests/hello:b :
-----
+ echo Hello world!
Hello world!
+ [ two = one ]
+ exit 1
-----

2016/06/11 19:09:54 Starting shell to debug...

lxd:ubuntu-14.04 ~/tests/hello# echo $FOO
two
lxd:ubuntu-14.04 ~/tests/hello# cat /etc/os-release | grep ^PRETTY
PRETTY_NAME="Ubuntu 14.04.4 LTS"
lxd:ubuntu-14.04 ~/tests/hello# exit
exit

2016/06/11 19:09:55 Error executing lxd:ubuntu-16.04:tests/hello:b :
-----
+ echo Hello world!
Hello world!
+ [ two = one ]
+ exit 1
-----

2016/06/11 19:09:55 Starting shell to debug...

lxd:ubuntu-16.04 ~/tests/hello# echo $FOO
two
lxd:ubuntu-16.04 ~/tests/hello# cat /etc/os-release | grep ^PRETTY
PRETTY_NAME="Ubuntu 16.04 LTS"
lxd:ubuntu-16.04 ~/tests/hello# exit
exit


2016/06/11 19:10:33 Discarding lxd:ubuntu-14.04 (spread-129)...
2016/06/11 19:11:04 Discarding lxd:ubuntu-16.04 (spread-130)...
2016/06/11 19:11:05 Successful tasks
2016/06/11 19:11:05 Aborted tasks: 0
2016/06/11 19:11:05 Failed tasks: 2
    - lxd:ubuntu-14.04:tests/hello:b
    - lxd:ubuntu-16.04:tests/hello:b
error: unsuccessful run

This demonstrates many of the stated goals (parallelism, clarity, convenience, debugging, …) while running on a local system. Running on a remote system is just as easy by using an appropriate backend. The snapd project on GitHub, for example, is hooked up on Travis to run Spread and then ship its tests over to Linode. Here is a real run output with the initial tests being ported, and a basic smoke test.

If you like what you see, by all means please go ahead and make good use of it.

We’re all for more stability and sanity everywhere.

@gniemeyer

Read more