Canonical Voices

Posts tagged with 'development'

Robin Winslow

Canonical’s web team manages over 18 websites as well as many supporting projects and frameworks. These projects are built with various combinations of Python, Ruby, NodeJS, Go, PostgreSQL, MongoDB and OpenStack Swift.

We have 9 full-time developers – half the number of websites we have. Naturally some of our projects get a lot of time spent on them (like www.ubuntu.com), while others only get worked on once every few months. Most devs will touch most projects at some point, and some may work on a few of them on any given day.

Before any developer can start a new piece of work, they need to get the project running on their computer. These computers may be running any flavour of Linux or macOS (thankfully we don’t yet need to support Windows).

A focus on tooling

If you’ve ever tried to get up and running on a new software project, you’ll certainly appreciate how difficult that can be. Sometimes developers can spend days simply working out how to install a dependency.

XKCD 1742: Will it work?

Given the number and diversity of our projects, and how often we switch between them, this is a delay we simply cannot afford.

This is why we’ve invested a lot of time into refining and standardising our local development tooling, making it as easy as possible for any of our devs, or any outside contributors, to get up and running quickly.

The standard interface

We needed a simple, standardised set of commands that could be run across all projects, to achieve predictable results. We didn’t want our developers to have to dig into the README or other documentation every time they wanted to get a new project running.

This is the standard interface we chose to implement in all our projects, to cover the basic functions common to almost all our projects:

./run        # An alias for "./run serve"
./run serve  # Prepare the project and run the local server
./run build  # Build the project, ready for distribution or release
./run watch  # Watch local files for changes and rebuild as necessary
./run test   # Check code syntax and run unit tests
./run clean  # Remove any temporary or built files or local databases

We settled on a single run executable as the entry-point into all our projects only after trying, and eventually rejecting, a number of alternatives:

  • A Makefile: The syntax can be confusing. Makefiles are really made for compiling system binaries, which doesn’t usually apply to our projects
  • gulp, or NPM scripts: Not all our projects need NodeJS, and NodeJS isn’t always available on a developer’s system
  • docker-compose: Although we do ultimately run everything through Docker (see below), the docker-compose entrypoint alone wasn’t powerful enough to achieve everything we needed

In contrast to all these options, the run script allows us to perform whatever actions we choose, using any interpreter that’s available on the local system. The script is currently written in Bash because it’s available on all Linux and macOS systems. As an additional bonus, ./run is quicker to type than the other options, saving our devs crucial nanoseconds.
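
To make the shape of this concrete, here is a minimal sketch of what such an entry-point can look like. This illustrates the dispatch pattern only, not our actual script, which does considerably more:

#!/usr/bin/env bash
# Minimal sketch of a ./run dispatcher (illustrative, not our real script)
set -euo pipefail

command="${1:-serve}"  # plain ./run defaults to "serve"

case "${command}" in
    serve) echo "Preparing the project and starting the local server..." ;;
    build) echo "Building the project for distribution or release..." ;;
    watch) echo "Watching local files and rebuilding on changes..." ;;
    test)  echo "Checking code syntax and running unit tests..." ;;
    clean) echo "Removing temporary and built files..." ;;
    *)     echo "Usage: ./run [serve|build|watch|test|clean]" >&2; exit 1 ;;
esac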

The single dependency that developers need to install to run the script is Docker, for reasons outlined below.

Knowing we can run or build our projects through this standard interface is not only useful for humans, but also for supporting services – like our build jobs and automated tests. We can write general solutions, and know they’ll be able to work with any of our projects.

Using ./run is optional

All our website projects are openly available on GitHub. While we believe the ./run script offers a nice easy way of running our projects, we are mindful that people from outside our team may want to run a project without installing Docker, may want more fine-grained control over how it is run, or may simply not trust our script.

For this reason, we have tried to keep the addition of the ./run script from affecting the wider shape of our projects. It remains possible to run each of our projects using standard methods, without ever knowing or caring about the ./run script or Docker.

  • Django projects can still be run with pip install -r requirements.txt; ./manage.py runserver
  • Jekyll projects can still be run with bundle install; bundle exec jekyll serve
  • NodeJS projects can still be run with npm install; npm run serve

While the documentation in our READMEs recommends the ./run script, we also try to mention the alternatives, e.g. www.ubuntu.com’s HACKING.md.

Using Docker for encapsulation

Although we strive to keep our projects as simple as possible, every software project relies on dependent libraries and programs. These dependencies pose 2 problems for us:

  • We need to install and run these dependencies in a predictable way – which may be difficult in some operating systems
  • We must keep these dependencies from affecting the developer’s wider system – there’s nothing worse than having a project break your computer

For a while now, developers have been solving this problem by running applications within virtual machines running Linux (e.g. with VirtualBox and Vagrant), which is a great way of encapsulating software within a predictable environment.

Linux containers offer light-weight encapsulation

More recently, containers have entered the scene.

A container is a part of the existing system with carefully controlled permissions and an encapsulated filesystem, to make it appear and behave like a separate operating system. Containers are much lighter and quicker to run than a full virtual machine, and yet provide similar benefits.

The easiest and most direct way to run containers is probably LXD, but unfortunately there’s no easy way to run LXD on macOS. By contrast, Docker CE is trivial to install and use on macOS, and so this became our container manager of choice. When it becomes easier to run LXD on macOS, we’ll revisit this decision.

Each project uses a number of Docker images

Running containers through Docker helps us to carefully manage our projects’ dependencies, by:

  • Keeping all our software, from Python modules to databases, from affecting the wider system
  • Logically grouping our dependencies into separate light-weight containers: one for the database, and a separate one for each technology stack (Python, Ruby, Node etc.)
  • Easily cleaning up a project by simply deleting its associated containers

So the ./run script in each project starts the project by running the relevant commands inside the relevant Docker containers. In partners.ubuntu.com, for example, everything from installing dependencies to starting the development server happens inside containers.
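
As a rough sketch of the idea – the image names, ports and commands here are illustrative assumptions, not the project’s actual configuration:

# Hypothetical "./run serve" internals for a Django project with a database.
# Start a throwaway PostgreSQL container for the project's database
# (database configuration omitted for brevity)
docker run --rm --detach --name myproject-db postgres

# Install Python dependencies and run the development server inside a
# Python container, with the project directory mounted from the host
docker run --rm -it \
    --volume "$(pwd)":/app --workdir /app \
    --link myproject-db:db \
    --publish 8001:8001 \
    python:3 \
    bash -c "pip install -r requirements.txt && ./manage.py runserver 0.0.0.0:8001"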

Docker is the only dependency

By using Docker images in this way, the developer doesn’t need to install any of the project dependencies on their local system (NodeJS, Python, PostgreSQL etc.). Docker – which should be trivial to install on both Linux and macOS – is the single dependency they need to run any of our projects.

Keeping the ./run script up-to-date across projects

A key feature of our solution is that it provides a consistent interface across all of our projects. However, the script itself will vary between projects, as different projects have different requirements. So we needed a way of sharing relevant parts of the script while keeping the ability to customise it locally.

It is also important that we don’t add significant bloat to the project’s dependencies. This script is just meant to be a useful shorthand way of running the project, but we don’t want it to affect the shape of the project at large, or add too much extra complexity.

However, we still need a way of making improvements to the script in a centralised way and easily updating the script in existing projects.

A yeoman generator

To achieve these goals, we maintain a yeoman generator called canonical-webteam. This generator contains a few ways of adding the ./run architecture, for some common types of projects we use:

$ yo canonical-webteam:run            # Add ./run for a basic node-only project
$ yo canonical-webteam:run-django     # Add ./run for a databaseless Django project
$ yo canonical-webteam:run-django-db  # Add ./run for a Django project with a database
$ yo canonical-webteam:run-jekyll     # Add ./run for a Jekyll project

These generator scripts can be used either to add the ./run script to a project that doesn’t have it, or to replace an existing ./run script with the latest version. They will also optionally update .gitignore and package.json with some of our standard settings for our projects.
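
If you want to try the generator yourself, yeoman and its generators are distributed through NPM, so something like the following should work. Note the generator package name here is an assumption based on yeoman’s generator-<name> convention; check the repository README:

# Install yeoman and the generator (package name assumed from yeoman's
# "generator-<name>" convention – check the repository README)
npm install -g yo generator-canonical-webteam

# Then, from the root of an existing project:
yo canonical-webteam:run-django-db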

Try it out!

To see this ./run tooling in action, first install Docker by following the official instructions.

Run the www.ubuntu.com website

You should now be able to run a version of the www.ubuntu.com website on your computer:

  • Download the www.ubuntu.com codebase, e.g.:

    curl -L https://github.com/canonical-websites/www.ubuntu.com/archive/master.zip > www.ubuntu.com-master.zip
    unzip www.ubuntu.com-master.zip
    cd www.ubuntu.com-master
    
  • Run the site!

    $ ./run
    # Wait a while (the first time) for it to download and install dependencies. Until:
    Starting development server at http://0.0.0.0:8001/
    Quit the server with CONTROL-C.
    
  • Visit http://127.0.0.1:8001 in your browser, and you should see the latest version of the https://www.ubuntu.com website.

Forking or improving our work

We have documented this standard interface in our team practices repository, and we keep the central code in our canonical-webteam Yeoman generator.

Feel free to fork our code, or if you’d like to suggest improvements please submit an issue or pull-request against either repository.


Also published on Medium.

Robin Winslow

Nowadays free software is everywhere – from browsers to encryption software to operating systems.

Even so, it is still relatively rare for the code behind websites and services to be opened up.

Stepping into the open

Three years ago we started to move our website projects to GitHub, and we took this opportunity to start making them public. We started with the www.ubuntu.com codebase, and over the next couple of years almost all our team’s other sites followed suit.

At this point practically all the web team’s sites are open source, and you can find the code for each site in our canonical-websites organisation.

  • www.ubuntu.com
  • developer.ubuntu.com
  • www.canonical.com
  • partners.ubuntu.com
  • design.ubuntu.com
  • maas.io
  • tour.ubuntu.com
  • snapcraft.io
  • build.snapcraft.io
  • cn.ubuntu.com
  • jp.ubuntu.com
  • conjure-up.io
  • docs.ubuntu.com
  • tutorials.ubuntu.com
  • cloud-init.io
  • assets.ubuntu.com
  • manager.assets.ubuntu.com
  • vanillaframework.io

We’ve tried to make it as easy as possible to get them up and running, with accurate and simple README files. Each of our projects can be run in much the same way, and should work the same across Linux and macOS systems. I’ll elaborate more on how we manage this in a future post.

README example

We also have many supporting projects – Django modules, snap packages, Docker images etc. – which are all openly available in our canonical-webteam organisation.

Reaping the benefits

Opening up our sites in this way means that anyone can help out by making suggestions in issues or directly submitting fixes as pull requests. Both are hugely valuable to our team.

Another significant benefit of opening up our code is that it’s actually much easier to manage:

  • It’s trivial to connect third party services, like Travis, Waffle or Percy;
  • Similarly, our own systems – such as our Jenkins server – don’t need special permissions to access the code;
  • And we don’t need to worry about carefully managing user permissions for read access inside the organisation.

All of these tasks were previously surprisingly time-consuming.

Designing in the open

Shortly after we opened up the www.ubuntu.com codebase, the design team also started designing in the open, as Anthony Dillon recently explained.

Anthony Dillon

Over the past year, a change has emerged in the design team here at Canonical: we’ve started designing our websites and apps in public GitHub repositories, and therefore sharing the entire design process with the world.

One of the main things we wanted to improve was the design sign-off process. We also wanted to give developers better visibility of which design was the final one, among numerous iterations and inconsistently labelled files and folders.

Here is the process we developed and have been using on multiple projects.

The process

Design work items are initiated by creating a GitHub issue on the design repository relating to the project. Each project consists of two repositories: one for the code base and another for designs. The work item issue contains a short descriptive title followed by a detailed description of the problem or feature.

Once the designer has created one or more designs to present, they upload them to the issue with a description. Each image is titled with a version number to make it easy to reference in subsequent comments.

Whenever the designer updates the GitHub issue everyone who is watching the project receives an email update. It is important for anyone interested or with a stake in the project to watch the design repositories that are relevant to them.

The designer can continue to iterate on the task safe in the knowledge that everyone can see the designs in their own time and provide feedback if needed. The feedback that comes in at this stage is welcomed, as early feedback is usually better than late.

As iterations of the design are created, the designer simply adds them to the existing issue with a comment of the changes they made and any feedback from any review meetings.

Table with actions design from MAAS project

When the design is finalised a pull request is created and linked to the GitHub issue, by adding “Fixes #111” (where #111 is the number of the original issue) to the pull request description. The pull request contains the final design in a folder structure that makes sense for the project.

Just like with code, the pull request is then approved by another designer or the person with the final say. This may seem like an extra step, but it allows another person to look through the issue and make sure the design completes the design goal. On smaller teams, this pull request can be approved by a stakeholder or developer.

Once the pull request is approved it can be merged. This will close and archive the issue and add the final design to the code section of the design repository.

That’s it!

Benefits

If all designers and developers of a project subscribe to the design repository, they will be included in the iterative design process with plenty of email reminders. This increases the visibility of designs in progress to stakeholders, developers and other designers, allowing for wider feedback at all stages of the design process.

Another benefit of this process is having a full history of decisions made and the evolution of a design all contained within a single page.

If your project is open source, this process automatically makes your designs available to your community or anyone that is interested in the product. This means that anyone who wants to contribute to the project has access to the same information and assets as the team members.

The code section of the design repository becomes the home for all signed off designs. If a developer is ever unsure as to what something should look like, they can reference the relevant folder in the design repository and be confident that it is the latest design.

Canonical is largely a company of remote workers, and conversations are sometimes not documented, which means only some people are aware of the decisions made. This design process has helped with that issue, as designs and discussions are all in a single place, with nicely laid-out emails for every change for anyone who may be interested.

Conclusion

This process has helped our team improve velocity and transparency. Is this something you’ve considered or have done in your own projects? Let us know in the comments – we’d love to hear of any ways we can improve the process.

Barry McGee

One of the most complex aspects of managing continuous development on a large codebase is ensuring that it remains stable.

This problem is particularly acute when building out front end architecture using HTML & CSS due to the inherently global nature of CSS.

How many times have you shipped a CSS change to one small part of a website only to find you’ve inadvertently broken a page element on a different page entirely?

This problem usually arises because all your CSS loads via one external file that is added to each page of your website. If you don’t namespace or isolate your styles correctly, changes to your CSS may have unintended consequences.

Structuring your CSS using the BEM convention or similar can help prevent such clashes. However, in a fast moving team where multiple developers are working on a large codebase daily, relying on code convention alone is often not enough to stop visual regression bugs from creeping in.

Ideally, you or a team member should check each page of your site, in turn, to make sure nothing has broken, right? While that’s a solid QA approach, it doesn’t scale very well. As your site grows, it can become all too time-consuming to check each page, especially if you consider you may also need to check each page over multiple breakpoints.

That’s where automated Visual Regression Testing (VRT) tools can seriously lighten your workload. A VRT tool will typically run through your site and capture a baseline snapshot of all your pages to use as a benchmark.

After you then make some changes, you run the process again and the VRT tool will compare the latest capture of your pages with the baseline capture and highlight the differences. It’s at this stage where you’ll be alerted to any unintended consequences.

The concept of VRT has been around for a few years but up until now, most solutions have involved setting up your process locally, typically involving quite a few moving parts. When trying to get a project team to integrate VRT as part of their workflow using one of these solutions, we always ran into trouble, as it was so difficult to keep individual developer setups consistent – inevitably, I’d spend longer debugging the VRT setup than reviewing visual diffs.

I then stumbled upon Percy.io, which offers VRT software as a service. I was immediately interested in how we might utilise it for Vanilla Framework, our constantly evolving CSS framework.

I immediately signed up for a trial and was quickly impressed with their GitHub integration and ease of use. Percy is unobtrusive; it’s only when a feature progresses to the pull request stage that Percy comes into play. It will run as part of the Travis CI build and then report back if it has found any visual diffs for review. You can also configure Percy to test across defined breakpoints.
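
The wiring needed for this is pleasingly small. As a rough illustration – the exact client and commands vary per project, so treat this as an assumption rather than our exact setup:

# Percy clients authenticate with a project token from the percy.io
# dashboard; in Travis CI this is stored as a secure environment variable
export PERCY_TOKEN="<project-token>"  # placeholder, not a real token

# Running the test suite then lets the Percy integration capture page
# snapshots and upload them for comparison against the baseline
npm test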


Percy’s GitHub integration is a big win

The person reviewing the PR can then click through to the project dashboard on percy.io and review the highlighted diffs. If the changes are expected, based on what has been outlined in the PR, then the changes can be approved.


Comparing different pages for visual differences

When the feature merges, these changes then become the baseline. If unexpected changes are highlighted, the reviewer can then highlight this to the developer for amendment.

As we make multiple changes a day to our Vanilla codebase while aiming for a weekly release, having VRT as part of our continuous integration has afforded us extra confidence that our releases do not contain missed bugs and regressions.

Anthony Dillon

The Vanilla team needed to solve two issues which have been paining the development of Vanilla Framework for some time.

Firstly, we needed to improve our workflow for testing and QAing components on our local machines. Up until now, we have been using npm link to pair our local branches of Vanilla with our local website branch, then reviewing the examples in the components page of the documentation. This added a lot of overhead to reviewing Vanilla.

Secondly, since we actually build the docs.vanillaframework.io site using the Documentation theme (vanilla-docs-theme), the Vanilla pattern examples we ended up reviewing were no longer styled purely by Vanilla Framework, but were extended by the theme.

The documentation of the matrix pattern in Vanilla

The solution

To solve both these issues, we decided to decouple the examples from the documentation. This change allowed us to move the coded examples of the patterns into a separate “examples” directory of the codebase and remove the hard-coded examples from the documentation.

As the examples were a part of the Vanilla Framework code, we simply linked each example page with the Vanilla build from the same branch. This means all examples are styled by Vanilla and nothing else.

Another benefit that came from this change is that we now have an easy way to find an example of a pattern when reviewing or QAing a pull request. Previously we had to do the npm link dance; now we simply check out the branch and run the internal Jekyll site, which builds Vanilla and gives us a directory of pattern pages.
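
For contrast, the old “npm link dance” went roughly like this (paths are illustrative):

# Old workflow: register a local Vanilla checkout globally, then point
# the website's node_modules at it (paths illustrative)
cd ~/projects/vanilla-framework
npm link

cd ~/projects/docs.vanillaframework.io
npm link vanilla-framework  # symlinks node_modules/vanilla-framework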

Examples in the docs

So we were happy with these changes: we had solved the issues at hand and were ready to head off and have a celebratory coffee. But we couldn’t leave the documentation without examples and code snippets.

To solve this issue, we used an embedding approach like Codepen’s.

Example of a Codepen embed

We set about creating a small JavaScript project that would find links on the page with a specific class, grab the href attribute from each, and replace the link with an iframe of that URL. This gave us a nice progressively enhanced experience:

An example of progressive enhancement – on the left is the example with JavaScript enabled, on the right with JavaScript disabled.

We were still lacking the code snippets, so we made the script also pull the HTML source of the linked page into the example, then display the contents of the body in a code block appended after the iframe.

The wrap-up

And that was it. The solution gives us:

  • A single place for example code
  • Examples only displayed using Vanilla
  • A local testing environment
  • Documentation examples that are automatically up to date

We named this mini project example-js. Please feel free to fork it, use it or file any issues you may find.

Will Moggridge

Introducing tutorials.ubuntu.com

The web team has been hard at work on our new Ubuntu Tutorials website and we are proud to share our work with the community. Our first set of tutorials is based around snap usage and building snaps with snapcraft. We will continue to work on our catalogue to broaden it to a variety of subjects.

Ubuntu Tutorials is part of a bigger project to improve our documentation across our other projects. Our goals are to improve the discoverability and the ease of use for our documentation. Having followed Ubuntu and been part of the community for many years, I am excited to be involved with this project. I hope we can keep moving forward with this work and give back to the community.

Polymer and our source code

The website is built using Google’s Polymer framework with their Codelabs web components. Polymer has been a great and enjoyable experience and has really made web components so much more exciting. I am already looking to see where I can use these technologies in the rest of our projects. We recently had a hack day and had the opportunity to explore putting Vanilla Framework in web components. I am happy with our initial work on Vanilla web components, and we are looking forward to continuing to explore and develop them.

The Ubuntu Tutorials website source code is available for you to dive into, at the Ubuntu Tutorials GitHub repository.
A big thank you to Didier Roche, whose work was the foundation for this.

Our next steps

Looking to the future, we are already thinking about and preparing improvements for the site. We have been really happy with the feedback we are getting on the GitHub issues page. A number of the issues have been requests for tutorials on certain topics. This is really useful and interesting to us, as it shows us which areas to focus on.

I am interested in simplifying our process for creating and contributing to Ubuntu Tutorials – not only for us, but also to empower you. One strong area for this is adding the ability to write tutorials in Markdown. This will increase visibility for all and remove some overhead for us, while also making it simpler for people to contribute to our catalogue. We are currently looking into this and hope to have a solution soon.

Anthony Dillon

Hack day 2

This week, the web team managed to get away for our second hack day. These hack days give us an opportunity to scratch our own itches and work on things we find interesting.

We wrote about our first hack day in August last year.

Getting started

We began by outlining the day and reviewing ideas that had been suggested in a Google Doc throughout the previous week by everyone on the team. We each voted by marking the ideas we would be interested in working on. Then we chose the most-voted ideas and assigned a group of 2 or 3 people to each.

The groups broke up and turned their idea into a formal project with a list of the tasks required to produce an MVP. Below is a list of the ideas and outcomes from each team.

Performance audit of the current websites

Team: Rich, Andrea, Robin

The team discovered a tool called Lighthouse, by the Google Chrome team, which analyses a web page and returns a full audit of its dependent assets and accessibility issues.

The team spent some time trying to create a service using Lighthouse to produce an API, then realised that the Google Chrome team had done this work already, with a service called Moonlight: a SaaS offering to test the performance of a page.

As Moonlight takes a single webpage endpoint to test, we needed a way to test pages recursively. The team created a profiling script to gather the referenced endpoints of a site.
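
Lighthouse also ships as a command-line tool via NPM, which is the quickest way to try it on a single page (flags as we used them; the CLI may have evolved since):

# Install the Lighthouse CLI and audit a single page, saving the raw
# results (--output also accepts html for a human-readable report)
npm install -g lighthouse
lighthouse https://www.ubuntu.com --output json --output-path ./ubuntu-audit.json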

Canonical web team dashboard

Team: Luke, Ant, Yaili

The goal of this project was to motivate the team to improve key areas by making them visible at a glance. The metrics we wanted to capture were:

  • Whether the site is up or down
  • Live visitor counts
  • Monthly unique visitors
  • Open issues on the project
  • Open PRs on the project
  • Information about the last commit to the site’s code base
  • PageSpeed Insights test results

We gathered a set of sites we would like to collect these metrics on:

  • www.ubuntu.com
  • www.canonical.com
  • maas.io
  • jujucharms.com
  • landscape.canonical.com
  • design.ubuntu.com
  • design.canonical.com
  • insights.ubuntu.com
  • developer.ubuntu.com
  • community.ubuntu.com
  • summit.ubuntu.com

The team used the MERN stack (MongoDB, Express, React and NodeJS) and modified its sample project to create an interface which changes depending on the state of the data. For example, the up-or-down card would display all sites as up, but once one went down the card would change to an error state and only display information about the site that is down. By designing for emotion in this way, we can intelligently utilise the limited space available in a dashboard.

The team also used a few plugins to gather some data:

  • ping-monitor to ping our sites to check if they’re up, down or broken
  • node-http-ping to get response times for the same set of Canonical sites

By storing the data in MongoDB to keep a history, and using the /api endpoint to return the response time and status for each site, the team managed to produce a simplified dashboard showing the availability of our list of sites.

Ubuntu.com dev tools

Team: Graham, Karl


As a team, we have been using gulp scripts to lint and test our code locally and in our CI environments for some time. But we had never got around to applying these checks to our flagship website, www.ubuntu.com.

The plan here was to implement gulp scripts to lint Sass and JavaScript, and to also look into further options like spell-checking, auto-prefixing and HTML validation.

The team added Sass linting and borrowed the linting tasks from our styling framework vanilla-framework. This produced a long list of lint issues. The team tracked the lint errors and quickly fixed them to get a passing CI run.

Adding JavaScript linting (jsHint)

The team also implemented JavaScript linting using jsHint on the current JavaScript within the site’s code base. This produced a number of JavaScript lint errors, which were fixed, ignoring the plugin code.

Finally, the team added the new linting steps to the Travis configuration, so the linting is tested on each pull request.
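
The end result is that the same checks run locally and in CI with a single command each. The task names below are hypothetical stand-ins, not necessarily those in the repository:

# Hypothetical task names for illustration – the real gulpfile may differ
npm install     # installs gulp plus the sass-lint and jsHint plugins
gulp lint-sass  # lint the Sass sources
gulp lint-js    # run jsHint over the site's JavaScript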

Vanilla web components prototype

Team: Barry, Will, Robin

The goal was to enable Vanilla on a variety of platforms, which would allow people to use Vanilla in modern web apps.

The team created a base repository using Polymer’s tools and started creating web components for Vanilla.

They discovered that the styling needs tweaking to be compatible with web components, possibly just by building a shared styles import which is included in each web component.

The team started by importing vanilla-framework from NPM, then built modular scss files containing only the relevant parts of Vanilla, and finally imported the modular style file in each web component.

Inside the repository there is a vanilla.html which imports all of the components. Components can individually be included as needed.

This work includes a demo system, with API documentation. The demo system displays each component and the markup used to create it, and is accessed by running `polymer serve` and visiting the local site.

This work can be used to build solid web components for use in Polymer and we can also use this work to jumpstart React components.

HTTP/2 on vanillaframework.io

In the midst of all this work, Robin found time to tackle the task of serving our styling framework’s website over HTTP/2. It’s currently a proof of concept, but it can now be considered the start of a work item to roll out.

Demo site

Conclusion

Again, this was a successful hack day, with everyone busy working on things that interest them. Although there were fewer completed outcomes this time, we did set up a number of good projects which are ready to be continued.




Robin Winslow

We’ve been making an effort to secure all our websites with HTTPS. While some Canonical sites have enforced HTTPS for a while (e.g.: landscape.canonical.com, jujucharms.com, launchpad.net), it’s been missing from our other sites until now.

Why HTTPS?

The HTTPS movement has been building for years to help secure internet users against black-hat hackers and spies. The movement became more urgent after Edward Snowden revealed significant efforts by government agencies to spy on the world population.

The EFF have helped create two projects: LetsEncrypt – which massively simplifies the free installation of HTTPS certificates; and HTTPS Everywhere – a browser plugin to help you use HTTPS whenever it’s available. The advent of HTTP/2 has helped negate performance concerns when moving to HTTPS.
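
To give a sense of how far Let’s Encrypt lowers the barrier: with its certbot client, obtaining and installing a certificate can be a couple of commands. The domain here is a placeholder, and certbot also supports Apache, standalone and webroot modes:

# Obtain and install a Let's Encrypt certificate for an nginx site
sudo certbot --nginx -d www.example.com

# Certificates are deliberately short-lived, so renewal is automated;
# this verifies the renewal process without changing anything
sudo certbot renew --dry-run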

Google have also made efforts to encourage websites to enable HTTPS: First announcing in 2014 that they would consider HTTPS support in their search ranking algorithm; and last year, that Google Chrome would start visually warning users of “insecure” (non-HTTPS) websites.

Our sites

We made https://www.ubuntu.com HTTPS-only in October of last year, and have since done the same on 10 more sites.

We hope to enable HTTPS on our other sites in the coming months.

Although enabling HTTPS can be relatively simple, there were a number of specific challenges we had to overcome for some of our websites. I hope to write more about these in a follow-up post.

Grazina Borosko

Yakkety Yak 16.10 has been released, and you can now download the new wallpaper by clicking here. It’s the latest in the set for the Ubuntu 2016 releases, following Xenial Xerus. You can read about our wallpaper visual design process here.

Ubuntu 16.10 Yakkety Yak

Ubuntu 16.10 Yakkety Yak (light version)

Jouni Helminen

We have been looking at ways of making the Terminal app more pleasing, in terms of the user experience, as well as the visuals.

I would like to share the work so far, invite users of the app to comment on the new designs, and share ideas on what other new features would be desirable.

On the visual side, we have brought the app in line with our Suru visual language. We have also adopted the very nice Solarized palette as the default – though this will of course be completely customisable by the user.

On the functionality side we are proposing a number of improvements:

  • Keyboard shortcuts
  • Ability to completely customise touch/keyboard shortcuts
  • Ability to split the screen horizontally/vertically (similar to Terminator)
  • Ability to easily customise the palette colours, and window transparency (on desktop)
  • Unlimited history/scrollback
  • Adding a “find” action for searching the history

Tabs and split screen

On larger screens tabs will be visually persistent. In addition, it’s desirable to be able to split a panel horizontally and vertically, and to move between panels using keyboard shortcuts or by focusing with mouse/touch.

On mobile, the tabs will be accessed through the bottom edge, as on the browser app.

Quick mobile access to shortcuts and commands

We are discussing the option of having modifier keys (Ctrl, Alt etc.) work together with the on-screen keyboard on touch – which would be a very welcome addition. While this is possible to do in theory with our on-screen keyboard, it’s something that won’t land in the immediate future. In the interim, modifier key combinations will still be accessible on touch via the shortcuts at the bottom of the screen. We also want to order these shortcuts by recency, and add the ability to define your own custom key shortcuts and commands.

We are also discussing with the on-screen keyboard devs the addition of an app-specific auto-correct dictionary – in this case terminal commands – which, together with a swipe keyboard, should make for a much nicer mobile terminal user experience.

More themability

We would like the user to be able to define their own custom themes more easily, either via in-app settings with colour picker and theme import, or by editing a JSON configuration file. We would also like to be able to choose the window transparency (in windowed mode), as some users want a see-through terminal.

We need your help!

These visuals are work in progress – we would love to hear what kind of features you would like to see in your favourite terminal app!

Also, as the Terminal app is a fully community-developed project, we are looking for one or two experienced Qt/QML developers with time to contribute, to lead the implementation of these designs. Please reach out to alan.pope@canonical.com or jouni.helminen@canonical.com to discuss details!

EDIT: To clarify – these proposed visuals are improvements for the community developed terminal app currently available for the phone and tablet. We hope to improve it, but it is still not as mature as older terminal apps. You should still be able to run your current favourite terminal (like gnome-terminal, Terminator etc) in Unity8.

Anthony Dillon

Web team hack day

Last week the developers in the web team swapped the office for the lobby of the hotel across the street. The day was geared up to allow us to leave our daily tasks in the office and think of ideas that we would like to work on.

The morning started with coffee and sitting on sofas brainstorming ideas. We collected a list of ideas each person would like to work on. The ideas ranged from IRC bots to a performance audit of a few of our sites.

Choosing ideas

We wrote all the ideas on post-it notes and laid them out on the table. Then each of us chose the idea we were most interested in working on by putting our hand on it. This worked out at an almost perfect split of two people per idea, so we broke up into our teams and got to work.

Here are the things we worked on during this “hack day”.

IRC bots

These are bots that can listen for an action and report it to our IRC channel. For example, the creator of a pull request wouldn’t have to paste a link to their PR into our channel for it to be picked up for review.

This task was picked up by Karl and Will, who started by setting up a Hubot on Heroku. They attached a webhook to all projects under the ubuntudesign organisation to listen for pull requests and report them in the web team channel.

This bot can be used for many other things like reporting deployments, CI failures, etc. We also discussed a method of subscribing to the notifications you are interested in, instead of the whole team being notified about everything.
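
Hubot itself is easy to stand up. Scaffolding a bot locally looks roughly like this – the bot name and adapter choice are illustrative:

# Scaffold a new Hubot (commands from the Hubot docs; bot name illustrative)
npm install -g yo generator-hubot
mkdir webteam-bot && cd webteam-bot
yo hubot

# Add an IRC adapter so the bot can join the team channel
npm install --save hubot-irc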

Asset manager search improvements

Our asset manager and server act as internal asset storage. By storing an asset here we get a link to it which can be used by any website. As there are many assets stored in the asset manager, we usually need to search to see if an asset already exists before making a new one.

Graham and myself picked this task and started by working out how to set up both the manager and the server locally.

Previously the search would return results that contained either word of a two-word query; now a result has to contain all search terms to be returned.

We added our Vanilla framework to the front end, as it is obviously good to use our framework for all internal and external projects.

We have also implemented filtering results by file type, which makes it easy to go through what can sometimes be dozens of search results.

GitHub CMS

GitHub CMS is a nicer and more restricted interface that the marketing team can use to edit the GitHub repositories containing page content, for example for www.ubuntu.com.

Rich and Robin picked this task and began work on it straight away, by discussing the best approach and list of possible features.

Even though Robin was also helping out with setting up the asset manager and server locally for Graham and me, he still managed to investigate the best Python framework to use and selected one. Rich, on the other hand, went ahead with the front end and developed a bunch of page templates using the new MAAS GUI Vanilla theme.

Commit linting

Commit linting is a service that gives a project committer a nice step-by-step wizard for building a high-quality commit message.

Barry picked up this task and got the service up and running, but hit a blocker at the point of choosing between different methods of committing. For instance, Tower would bypass this step, and we do not necessarily want to dictate to contributors which way they should commit code. This is something we will leave as an investigation for the time being.

Conclusion

Our hack day went well – in fact better than I imagined it would. We all had fun and got to work on things we find important but that struggle to get prioritised in our day-to-day work. It gave the developers a feeling of achievement and the buzz of landing and releasing something at the end of the day.

We will be attempting to do a hack day once every month or so, so watch this space!

Barry McGee

Developing for Vanilla v1

As Inayaili recently blogged, we are now working towards a goal of releasing Version one (v1) of Vanilla for early September.

Maturity

Vanilla was created just over a year ago and in that time has been used to build a wide range of sites across Canonical and beyond. It currently averages around 1,500 downloads a month on NPM. We’ve been delighted to see it grow in popularity and see the myriad of different experiences people have been building using Vanilla.

A big advantage of this wide adoption is the feedback we’ve received from developers on the front line, including within our own teams at Canonical. This feedback has enabled us to identify growing pains and mark out clear areas for improvement.

The overarching themes for v1 are maturity and stability — ensuring the framework is a cohesive set of building blocks and also making sure those building blocks are stress tested and robust.

Practical steps

The first step we will be taking is to audit the codebase and ensure it adheres to our coding standards. This will include encapsulating all components with the BEM methodology which we have introduced to our coding standards within the last year.

We will also be working to improve accessibility and responsiveness of each component while making some aspects of the codebase less opinionated to help increase its applicability to a broad range of use cases.

Another big area earmarked for love is the documentation provided for Vanilla. Given that the framework is now used by a wide and diverse set of people, we can make no assumptions about what they may know. So we need to provide comprehensive documentation that not only details how to implement each component but that also explains where each component should or should not be used.

It’s also important that everything in Vanilla is visible. Over time, code has slipped into Vanilla that is not documented on the demo.  This can cause page elements to display in ways a developer might not expect. We will be addressing this by building a comprehensive documentation site at a dedicated URL. This will be the one-stop-shop for all things Vanilla and will replace the current Vanilla demo page and Sass docs.

We will also be restructuring Vanilla so it is in a better place for scalability and extensibility going forward. Vanilla currently employs a flat structure for simplicity but we’ve come to realise that it can be confusing to mix components with utilities and presentation with configuration.

We recently had a team discussion on possible ways to structure the code within Vanilla and settled on an approach minted by Harry Roberts — Inverted Triangle CSS or ITCSS. Structuring Vanilla in this way will not only improve the quality of the resulting CSS but make it much easier to initiate new developers to building with Vanilla.
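
For the curious, ITCSS organises a codebase into layers ordered from widest-reaching to most specific. Here is a sketch of a directory layout following those layers; how Vanilla ultimately maps onto them is a separate design decision:

# The seven ITCSS layers as directories, broadest reach first
mkdir -p scss/{settings,tools,generic,elements,objects,components,utilities}
#   settings/    global Sass variables: colours, spacing, font stacks
#   tools/       mixins and functions that output no CSS by themselves
#   generic/     resets, normalize and other far-reaching rules
#   elements/    styles for bare HTML elements (h1, a, table...)
#   objects/     unstyled layout patterns such as grids
#   components/  fully styled UI components: buttons, cards, navigation
#   utilities/   single-purpose overrides (Roberts calls these "trumps")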

Layers of ITCSS – courtesy of Harry Roberts

Exciting times

I’m very excited about this project and think it has huge potential to help shape how we in Canonical approach building experiences on the web, not to mention how the wider community will benefit from these changes.

If you have any feedback or ideas on the future direction of Vanilla from a development point of view, please do comment below – we’d love to hear from you.

Andrea Bernabei

QtDay is the only Italian event dedicated to Qt. It is held yearly by Develer and brings together company products that are developed using Qt, as well as Qt developers and customers who want the latest developments and solutions in the Qt world. This year the conference was held in Florence, where I was lucky enough to attend and present.

I had previously attended the 2011, 2012 and 2014 QtDay events whilst I was studying Computer Science at the University of Pisa. This year was different, because Develer invited me to give a talk about Ubuntu and Qt. The funny thing was I was already planning on sending my presentation to the Call for Proposals anyway! So I was already prepared.

What I do at Ubuntu

My role at Canonical is UX Engineer: basically a developer acting as the bridge between designers and engineers. It is a pretty cool job, and I’m very lucky to be part of such an energetic team.

Over the last year there was a strong push in both the Design and Engineering teams working on Ubuntu Touch to finalize and deliver the convergent experience. This was a great opportunity to spread the word about how to develop convergent apps for the new Ubuntu platform, and get developers interested in where we are and where we are heading.

My talk – “Standing on the shoulders of giants: developing Ubuntu convergent apps with Qt”

When I first thought about giving the presentation, I decided it would only be about the current state of the UI components provided by Ubuntu SDK, with a strong focus on their “convergent” features, and how to use them to realize your convergent apps. However, as time went by I realized it would have been more interesting for developers to also get some context about the platform itself, and how to best integrate their apps with the platform.

By the time QtDay arrived, the presentation had almost doubled in size! You can find it here.

A slideshow or an app? How about both!

This is a detail the geeks in the audience might be interested in…I thought it would be neat to talk about the development of Qt/QML apps and use the same framework to implement the presentation as well!

That’s right, the presentation is actually a QML application that uses (a modified version of) the QML presentation system available as a Qt Labs addon. Having the power of Qt underneath your presentation means you can do pretty much everything. You’re not tied to the boundaries set by the “standard” presentation systems (such as Beamer, LibreOffice Impress, Microsoft PowerPoint, etc.) anymore!

In my case, I exploited that to implement a live-coding view as a pull-down layer that you can open on demand whenever you want by using keyboard shortcuts. Once the livecoding view is open, you can write code (or use the code automatically provided when you’re at one of the special “Livecoding!” slides) in the text editor on the left side and see the result on the right side in real time, without leaving or interrupting your presentation. You can also increase or decrease the font size, and there’s a sparkling particle effect that follows the cursor to make it easier for the audience to follow your changes to the text. That’s only one of the things you can do once you replace your “standard” presentation with a full-featured application. If you’re a software developer then I highly recommend giving it a try for your next presentation!

The sourcecode and the PDF version of the presentation are available here, and my fork of the QML presentation system is available here.

And here’s a screenshot of the livecoding view in action (sparkling particle effect included) :)

The morning

The conference was held in the middle of Florence at the Hotel Londra Conference Centre. It was quite a nice location I have to say! Very easy to reach as it is very close to the main railway station, Santa Maria Novella.

My talk was in the first time slot after the main keynote, which was good because, you know, that meant I could relax and enjoy the rest of the day!

I started by giving an overview of the current state of Ubuntu and the fact that it’s doing great in the Cloud field. Ubuntu can now scale to run on IoT devices as well as phones, tablets, notebooks, servers and Clouds.

I then presented the concept of convergence and how the UI components provided by the Ubuntu SDK can be best utilised to create great convergent apps, including some livecoding. Livecoding is fun because it gives a pragmatic idea of how to go from theory to practice, and also keeps the attendees awake, because they know things can go wrong at any moment (demo effect) and they enjoy that, for some reason :)

After the UI components section, I went on to talk about platform integration topics such as app lifecycle management, app isolation features, and integration with the Content Hub, which is the secure way to share data between applications.

I then briefly talked about internationalization and how to publish your application on the Ubuntu Store (it’s very easy!).

For this occasion, I brought with me a BQ M10 tablet, the convergent Ubuntu tablet that we released just a few months ago! I connected it to a Bluetooth mouse and keyboard, and set it up on a table for people to try – and lots of people played with it. After the talk it was exciting to see the audience’s interest in the whole convergence story.

The other talks during the morning were very interesting as well, I particularly enjoyed Marco Piccolino’s “A design system and pattern libraries can significantly improve the quality of your UI” (Find the slides here).

And then it came to lunchtime…

Food…Italian food

The food was great and, coming from the UK, I enjoyed it even more. Big kudos to Develer (the company behind the event) for finding such a good catering company!

Here’s a pic of the goodies available during coffee breaks. Mmmm…

Afternoon talks

The afternoon talks were as interesting as the morning ones. Marco Arena, from the Italian C++ Community, gave a talk about QCustomPlot, which is a library to draw graphs and plots using Qt (slides here).

If you’re interested in Virtual Reality, particularly BCI (Brain Computer Interface) and machine learning, make sure you check out the slides of Sebastiano Galazzo’s talk (once they’re available, at that page). His project involves manipulating what the user sees in a Google Cardboard by reading his/her brain waves to interpret emotions. Pretty neat.

Stefano Cordibella’s presentation was about his experience optimizing the startup time of an application running on embedded hardware (using Qt4). They exploited the power of the QtQuick Loader component and other QML tricks to decrease the loading time of the application.
Check his slides out if you’re interested in QML optimization. I’m sure you’ll find them useful.

The final talk I attended was more of a roundtable about how to contribute to the development of Qt itself, led by Giuseppe D’Angelo, who has the role of “Approver” in the Qt Project Open Governance Model.

As a result of attending that roundtable, not only did I start contributing to Qt (see the changes I contributed here), but I also improved the Qt Wiki Contribution Guidelines so that it will be easier for other people to start contributing. The power of open source and open governance! :)

The closing talk also included a raffle, where a couple of developers won an embedded devboard sponsored by Atmel. I’ve been quite lucky with Qt-related raffles in the past, but this wasn’t one of those days, oh well :)

Closing remarks

What a great day it was. I want to thank Develer for organizing the conference, as well as the guys from the Community team (Alan Pope, David Planella, Daniel Holbach) and Zsombor Egri from the SDK team at Canonical for providing feedback and ideas for the presentation.

It was also great to see so many people interested in the convergence story and in the M10 tablet. The technology has great potential and it’s our job to make the best of it :)

See you all at the next QtDay!

Note: the pictures are courtesy of Develer’s QtDay Facebook page.

Grazina Borosko

April marks the release of Xenial Xerus 16.04, and with it we bring a new design of our iconic wallpaper. This post will take you through our design process and how we have integrated our Suru visual language.

Evolution

The foundation of our recent designs is our Suru visual language, which encompasses our core brand values and brings consistency across the Ubuntu brand.

Our Suru language is influenced by the minimalist nature of Japanese culture. We have taken elements of its Zen culture that give us a precise yet simplistic rhythm, and used them in our designs. Working with paper metaphors, we have drawn inspiration from the art of origami, which provides us with a solid and tangible foundation to work from. Paper is also transferable, meaning it can be used in all areas of our brand in two- and three-dimensional forms.

Design process

We started by looking at previously released wallpapers across Ubuntu to see how each evolved from the last. After reviewing the previous designs we started to apply our new Suru patterns, which inspired us to move in a new direction.

Ubuntu 14.10 ‘Utopic Unicorn’

Ubuntu 15.04 ‘Vivid Vervet’

Ubuntu 15.10 ‘Wily Werewolf’

Step-by-step process

Step 1. Origami animal

Since every new Ubuntu release is named after an animal, the Design Team wanted to bring this idea closer to the wallpaper and the Suru language. The folds are part of the origami animal and become the base from which we start our design process.

To make your own origami Xerus squirrel, you can find the instructions here.

Step 2. Searching for the shape

We started to look at different patterns by using various techniques with origami paper. We zoomed into particular folds of the paper, experimented with different light sources, photography, and used various effects to enhance the design.

The idea was to bring actual origami to the wallpaper as much as possible. We had to think about composition that would work across all screen sizes, especially desktop. As the wallpaper is a prominent feature in a desktop environment, we wanted to make sure that it was user friendly, allowing users to find documents and folders located on the computer screen easily. The main priority was to not let the design get in the way of everyday usage, but enhance it aesthetically and provide a great user experience.

After all the experiments with fold patterns and light sources, we started to look at colour. We wanted to integrate both the Ubuntu orange and Canonical aubergine to balance the brightness and played with gradient levels.

We balanced the contrast of the wallpaper color palette by using a long and subtle gradient that kept the bright look and feel. This made the wallpaper brighter and more colorful.

Step 3. Final product

The result was successful. The new concept and usage of the Suru language helped to create a brighter wallpaper that fitted into our overall visual aesthetic. We created a three-dimensional look and feel that gives the appearance of actual origami. The wallpaper is still recognizable as Ubuntu, but at the same time looks fresh and different.

Ubuntu 16.04 Xenial Xerus

Ubuntu 16.04 Xenial Xerus (light version)

What is next?

The Design Team is now looking at ways to bring the Suru language into animation and fold usage. The idea is to bring an overall seamless and consistent experience to the user, whilst reflecting our tone of voice and visual identity.

Anthony Dillon

The Juju web resources are made up of two entities: a website jujucharms.com and an app called Juju GUI, which can be demoed at demo.jujucharms.com.

Applying Vanilla to jujucharms.com

Luckily the website was already using our old style guidelines, which we had refactored and improved to become Vanilla, so I removed the guidelines link from the head and the site fell to pieces. Once I NPM-installed vanilla-framework and included it in the main Sass file, things started to look up.
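
The mechanics of the swap were small – roughly the following, though the Sass import path is an assumption that varies between Vanilla versions, so check the framework’s README:

# Install Vanilla from NPM and pull it into the main Sass file
npm install --save vanilla-framework

# then, at the top of the project's main .scss file:
#   @import 'node_modules/vanilla-framework/scss/vanilla';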

A few areas of the site needed to be updated, like moving the search markup outside of the nav element. This is due to header improvements in the transition from guidelines to Vanilla. Also we renamed and BEMed our inline-list component, so I updated its markup in the process. The mobile navigation was also replaced with the new non-JavaScript version from Vanilla.

To my relief with these minor changes the site looked almost exactly as it did before. There were some padding differences, which resulted in some larger spacing between rows, but this was a purposeful update.

All in all the process of replacing guidelines with Vanilla on the website was quick and easy.

Now into the unknown…

Applying Vanilla to Juju GUI

I expected this step to be trickier as the GUI had not started life using guidelines and was using entirely bespoke CSS. So I thought: let’s install it, link the Vanilla framework and see how it looks.

To my surprise the app stayed together, apart from some element movement and overriding of input styling. We didn’t need the entire framework to be included so I selectively included only the core modules like typography, grid, etc.

The only major difference is that Vanilla applies bottom margin to lists, which did not exist on the app before, so I applied “margin-bottom: 0” to each list component as a local override.

Once I completed these changes it looked exactly as before.

What’s the benefit?

You might be thinking, as I did at the beginning of the project, “that is a lot of work to have both projects look exactly the same”, when in fact it brings a number of benefits.

Now we have consistent styling across the Juju estate, tied together with one single base CSS framework. This means we have exactly the same grid, buttons, typography, padding and much more. The tech debt of keeping these in sync has been cut, and designers can work from a single component list.

Future

We’re not finished there. As Vanilla framework is a bare-bones CSS framework, it also has a concept of theming. The next step will be to refactor the SCSS on both projects and identify the common components, which will form a shared theme. The theme itself depends on Vanilla, so we have logical layering.

In the end

It is exciting to see how versatile Vanilla is. Whether it’s a web app or web site, Vanilla helps us keep our styles consistent. The layered inheritance gives us the flexibility to selectively include modules and extend them when required.

Read more
Rae Shambrook

We previously posted about the clock app’s new look, and today we are getting to know one of the developers behind the clock (as well as other community apps). Bartosz Kosiorek gives us a glimpse into developing for Ubuntu and how he got started.

1) First, can you give us a bit of background about yourself and tell us how you started developing for Ubuntu?

My name is Bartosz and I’m from Poland. Currently I’m a developer on Ubuntu Clock and Ubuntu Calculator. I started contributing to Ubuntu in 2008, by submitting bug reports to Launchpad and fixing translations. Later I participated in the One Hundred Papercuts project, did SRU verifications and eventually started developing.

My adventure with Ubuntu started with Ubuntu 8.10 (Intrepid Ibex). Previously I had tried many different distributions (Debian, Fedora, SuSE etc.). I chose Ubuntu because it is easy to use and, after installation, I had a fully functional system. I like that after Ubuntu is installed there are no duplicate applications, and those that are installed work perfectly with the system.

2) How long have you been working on the Clock and Calculator? How did you get involved in these projects?

I started to develop for Ubuntu about two years ago, when I first heard about Ubuntu Touch and convergence. I started by contributing to Ubuntu Core Apps through testing, bug reports and patches. Most of my commits were bug fixes for Ubuntu Calculator and Ubuntu Clock, approved by Riccardo Padovani, Mihir Soni and Nekhelesh Ramananthan. After some time, I became a member of Ubuntu Core Apps. It’s very fun to work with these guys and the Ubuntu community. I’ve learned a lot about Qt/QML and user experience design.

3) How do you approach implementing a design in your apps?

Generally I follow the design document during implementation and sometimes find parts that need to be improved. After speaking with the Ubuntu UX team, we discuss the various issues and agree on a final design solution. Sometimes the UX team gives us a free hand, so we can design some parts ourselves (e.g. Stopwatch, Welcome Wizard, Landscape View). It’s really fun to work with such awesome guys.

4) What feature would you like to see in the future?

I think that, from a user’s perspective, longer battery life is a very important topic. Power usage is higher with a white background (https://www.quora.com/Does-a-white-background-use-more-energy-on-a-LCD-than-if-it-was-set-to-black), especially on OLED screens. I wish that Ubuntu Touch came with a darker theme, to save battery on OLED screens.

Read more
Robin Winslow

It’s becoming more and more important for websites to carefully consider how their resources are cached in users’ browsers. Get the caching wrong, and you either end up with a woefully slow experience for the user, or a very strange looking website as users are left with stale CSS files and images.

Or often both.

For our China site, we’ve decided that the HTML pages should be cached for 5 minutes, and the CSS and JavaScript can be cached for a year – as every time we update them we change the URL.

Caching headers in Django

Telling the browser how long to cache a resource is done with one of two headers:

  • Cache-Control: In HTTP/1.1, this can set the maximum age before a resource should be re-downloaded.
  • Expires: In the older HTTP/1.0 standard, this sets the date and time that a resource becomes outdated and should be refreshed.

Controlling these headers in Django is less simple than you might think. If you’re happy to use the cache framework then it will take care of these headers for you, but as we have a separate Squid cache in front of our application, this was a more heavyweight solution than we needed.
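
Had we gone down that route, it would have looked something like this minimal sketch (the view function here is illustrative):

from django.http import HttpResponse
from django.views.decorators.cache import cache_page

@cache_page(60 * 5)  # Cache this view for 5 minutes and set the caching headers
def homepage(request):
    return HttpResponse('Hello')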

Modifying HTML responses using View classes

In our case, all of our HTML pages are served with an extended version of the TemplateView class.

To add headers, we need to modify the HttpResponse, which we can intercept by extending the render_to_response method.

Django also provides patch_response_headers, a handy helper function to generate our caching headers for us and attach them to the response.
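
A minimal sketch of the extended view (the class name is illustrative):

from django.utils.cache import patch_response_headers
from django.views.generic.base import TemplateView

class PageView(TemplateView):
    def render_to_response(self, context, **response_kwargs):
        # Build the response as normal
        response = super(PageView, self).render_to_response(
            context, **response_kwargs
        )
        # Attach Cache-Control and Expires headers for a 5 minute cache
        patch_response_headers(response, cache_timeout=300)
        return response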

And now we can see our extra caching headers in the HTTP response:
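
Cache-Control: max-age=300
Expires: Wed, 20 Jan 2016 12:05:00 GMT

(The Expires timestamp shown here is illustrative.)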

Browsers and proxies will now cache the HTML pages for 5 minutes.

Controlling caching for static files

Django recommends serving static files separately from the rest of your application.

However, for simplicity and dev-prod parity we’ve been using DJ-Static to serve static files with the Django WSGI app, as introduced by Kenneth Reitz. This was also, at the time we implemented it, the method recommended by Heroku for managing static files in Django.

As it turns out, however, DJ-Static doesn’t offer any control over caching headers. And Heroku now recommend using WhiteNoise instead.

Serving static files with WhiteNoise is pretty simple (as it was with DJ-Static).
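
A minimal sketch of the WSGI wiring (the settings module and static directory are illustrative, and the exact WhiteNoise API differs between versions):

import os

from django.core.wsgi import get_wsgi_application
from whitenoise import WhiteNoise

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'webapp.settings')

# Wrap the Django WSGI application so WhiteNoise serves the static files
application = WhiteNoise(get_wsgi_application(), root='static')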

WhiteNoise will add a Cache-Control header, although it doesn’t support setting the older Expires header. By default, the Cache-Control header is set to disallow caching:
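
Cache-Control: max-age=0, public

(The exact default depends on the WhiteNoise version and settings.)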

We wanted our static files to be cached for a year, so we set the WHITENOISE_MAX_AGE setting in settings.py:
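
# settings.py
WHITENOISE_MAX_AGE = 60 * 60 * 24 * 365  # one year, in seconds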

This will set the max-age in the Cache-Control header to achieve the browser caching we’re looking for:
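
Cache-Control: max-age=31536000, public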

Now we have control

Leveraging browser caching is an invaluable performance tool, so understanding how we can control the user’s cache with Django is very helpful.

Hopefully I’ve demonstrated some ways that this can be achieved, which we’ve just implemented on cn.ubuntu.com.

Also published on my blog.

Read more
Jouni Helminen

Visual design of convergent apps

It is an exciting time as we’re starting to see more and more of the new, convergence-enabled UI toolkit and features for Unity 8 come to life. Some classic X11 apps (GIMP, LibreOffice and a few others) are already running on Unity 8 using new hardware from our partners, including the award-winning M10 tablet from BQ – very cool.

At the same time, we want to help people write or port more applications to our platform, using our modern UI toolkit designed to smoothly flow the user experience through touch and pointer inputs, a range of screen and keyboard types and all of the permutations in between! It has been an interesting design challenge to imagine, design, and begin to build a world where all interfaces, regardless of input type or form factor, all emerge from the same core user experience and design language.

Where we are now

Our UX and SDK teams have been working on version 1.3 of the Qt-based UI toolkit, which allows developers to write applications that can be used comfortably with both touch and pointer interfaces. The work is still very much in progress, but some of it can be used today. You can check out the developer docs here.

Suru, our visual design language, has evolved into a new, much lighter, flatter and more modern approach. It not only looks great (in our humble opinion), but helps app developers design good-looking and well-functioning apps with less effort. Continuous visual and user experience refinements will be rolling out across the whole OS (scopes, shell and apps) this coming year.

The new design guidelines for UX and UI patterns, as well as Suru, will be out soon. In the meantime, hopefully these example apps will inspire you to have a look at the developer docs, get active on IRC and have a go yourself. We will also be releasing design source files and templates for the refreshed UI toolkit so that you can start applying them in your own app designs.

Dekko – Email

email-phone-tablet

The first example app is Dekko – the default email client for mobile and tablet devices from BQ and Meizu. We have been very lucky to have the incredible talents of community member Dan Chapman working on the development of Dekko, and the app is progressing at a fantastic rate. James Mulholland helped Dan with the UX and I have been working on the UI.

Like many apps, Dekko uses a list view to represent the primary level, and a detail view to show the secondary level. Where there’s room, these views can be displayed side by side, but on small screens or heavily shrunk windows, a PageStack showing only the list becomes the primary screen. On larger screens or expanded windows, the page stack automatically progresses into the familiar two-panel configuration. This adaptive layout is common on responsive websites, and our SDK team have built a component in the UI toolkit that does most of the hard work for you – AdaptivePageLayout.

email-desktop

The list item, which lives in the list component, is another example of a ready-made component that helps developers write convergent apps with less effort. The new ListItem in our toolkit has useful, well-designed default layouts baked in when using ListItemLayout. It is also optimised for both touch and pointer interaction – via ListItemActions. A common pattern of interacting with list items on touch devices is to drag them left or right to reveal key actions such as delete. When using a pointer, however, you would typically right click and use the contextual menu to access the same actions. Our UI toolkit supports both types of input at all times, so you could drag the item left or right on a mobile device or touch-enabled monitor, or right click using a mouse. We believe users should be free to mix how they interact with our components using whatever means is at their disposal and to their liking.

This behaviour is already baked into our ListItem component, so users will have a consistent experience when using apps, and developers will save time not having to roll their own solutions.

Music

convergence-music

The music app is another example of the super-talented Ubuntu community getting involved in building some of our core apps together with our internal teams. You might remember Andrew Hayzen and Victor Thompson from a previous interview on this blog. They have since been adding features and functionality to the app; a convergent music app using multiple panels is currently working in a branch and will be landing in the master release soon. We are also looking at adding support for music streaming, so keep an eye out for this in the near future :)

music-closeup

The multi-panel music app reacts to window size changes intelligently – the album cards resize and shuffle themselves on window size changes. On smaller screen devices we have a persistent “Now playing” control bar at the bottom of the screen, but on larger screen sizes we have enough real estate to reimagine the play bar as an extra panel on the right with “Now playing” information, along with cover art, controls and a scrollable queue.

Calendar

convergent_calendar

The calendar app has been on the phone for a while, but until now it hadn’t really had any UI design love, or designs for larger screens. We wanted to apply our visual language in the context of an app that is by default very minimal, allowing the few design elements to stand on their own.

Suru, our visual language, is light and flat, minimizing distractions, with carefully selected tones of gray, consistent spacing and margins to help the content breathe. We’ve added considered splashes of highlight colours that enhance the visual hierarchy without overwhelming it.

On the calendar app we are again making use of multiple panels, surfacing several layers when we have the real estate available. The same feature set of the app is of course available on all sizes, and the navigation feels intuitive with whatever input method or screen size you are using.

calendar-closeup

This design hasn’t been implemented yet, and in fact we are looking for new developers to join our Community Team. If you are a developer who would like to get involved in writing some of the core apps people use on Ubuntu – get in touch with alan.pope@canonical.com – we would love to hear from you!

Hopefully these examples have given inspiration and pointers to anyone who would like to have a go at designing apps for convergent Ubuntu. If you have any questions, don’t hesitate to reach out – jouni.helminen@canonical.com

Read more
Barry McGee

Maybe, like me, you’ve seen more of the inside of your gym in January than you had for the six months previous. New year, new diet, new me… or something like that.

A big creeping problem in recent years is that websites have been on an all-out binge, and not just over the winter holidays — big videos, big images, fancy fonts, third-party libraries — they just can’t get enough of ’em.

Average page weights increased by 15% in 2014, and although I haven’t yet seen any similar research for 2015, I’m willing to bet that trend did not reverse.

Last week I was tasked with making some performance optimisations to the Ubuntu online tour.

This legacy codebase stretches all the way back to 2012, and as such was not benefitting from some of the modern tools we now have at our disposal as web developers.

We have been maintaining our largest codebases, such as ubuntu.com and canonical.com, to ensure they are as performant as they can be, but this Ubuntu tour repository slipped through the cracks somewhat.

We have users all over the world and many of them don’t enjoy the luxury of fat internet pipes that we enjoy in our London office. Time to trim the fat…

At first look, I noted that loading the site required 235 HTTP requests to download 2.7MB of data. Chunky Charlie!

Network waterfall screenshot

Delving into the codebase, I immediately spotted some big areas ripe for improvement:

  • The CSS files were not being concatenated, nor were they minified.
  • The JavaScript was also being loaded in separate files, also un-minified.
  • The image assets were uncompressed.
  • The HTML was un-minified.

Beyond that, I ran the site URL through Google’s PageSpeed Insights and also discovered:

  • Browser caching was not being leveraged, as static assets did not have any Expires headers specified.
  • There were quite a few CSS and JavaScript dependencies blocking rendering of the page.

As you can see, the site was only scoring a lowly 46/100 – not great.

Google Page Speed Insights screenshot

For jobs such as this, my first weapon of choice is the task runner Gulp. It’s quick and easy to drop Gulp on top of any existing site and use some of its wide array of plugins to optimise source assets for performance.

For this job I used gulp-concat, gulp-htmlmin, gulp-imagemin, gulp-minify-css, gulp-rename, gulp-uglify, critical (with Gulp) and gulp-rev.

Explaining how to use each of them is beyond the scope of this article but you can view my Gulpfile.js and accompanying package.json file to see what I did.

When retro-optimising a site, you might find you have to make certain compromises, such as placing “src” folders inside the folders you are optimising to store the original documents, then outputting the optimised versions into the original folder, to ensure everything is backwards compatible and you haven’t broken any relative links. You should also be careful when globbing JavaScript files, as they may need to be loaded in a certain order to prevent race conditions. The same is true when concatenating and including JavaScript libraries such as jQuery.

In an ideal world, you would not deploy any files you have compiled locally from the repository. They should be ignored by version control and compiled on the fly on the server by your task runner, driven by a continuous integration engine such as Jenkins or Travis CI. This is much cleaner and will prevent merge conflicts when multiple developers are working on the same codebase.

So, when we have all of the above configured and run it over our legacy codebase, how much weight did it shave?

Network Waterfall - After

Good news! Now to load the site, we only need 166 HTTP requests (-29%) to download 2.2MB (-18%) of data. Slim(mer) Jim for the win!

This should mean our users with slower connections will have a much improved experience.

When we run the leaner site, now deployed, through Google PageSpeed Insights, we get a much healthier score too.

Google Pagespeed - After

This was a valuable exercise for our team, and it reminded us that we not only have a responsibility to keep all our new and upcoming work performant, but should also address any legacy sites still in use wherever possible.

A leaner web is a faster web and I’m sure that’s something we can all get behind.

Read more
Femma

We arrived in Helsinki on Sunday evening, ready to start our week-long SDK sprint on Monday. Our hotel was in a nice location, by the sea.

The work stuff

The SDK is a core part of Ubuntu, providing the array of components and the flexibility needed to create applications across staged and windowed form factors, with good design and user experience in mind.

The purpose of the sprint was to have the designers and engineers come together to work on tools and components such as palette themes, bottom edge, header, scrollbars, focus handling, dialogs, buttons, menus and text selections, as well as developer tasks such as the IDE, packaging and application startup.

Monday morning started with walking into a venue that looked somewhat like a classroom.

Classroom

The first task of the day required some physical activity: moving all the tables around so that the environment was much more conducive to a collaborative sprint.

Jouni presenting

Each day we broke off into working groups for our respective sessions and ironed out any existing issues, as well as working through new and exciting features that would enhance different SDK components.

Theme palette session: Jamie, Pierre and Zsombor working hard on the colour palette.

Jamie the professor

Old school pointing devices: Jamie gives it a go, looking very much like a professor!

What we achieved

During the course of the week we achieved what we’d set out to do:

  • Amended the theme palette to include any missing colours, then applied these to various components
  • Completed the implementation of the bottom edge component and released it into the staging environment
  • Completed the section scrolling prototype and had it reviewed by visual design and UX
  • Completed the portrait and landscape edit mode header prototype
  • Worked out the behaviour of complex SDK components for focus handling and added some best practice examples to the specification
  • Communicated and gained consensus on the context menu design; the team is now gearing up for some prerequisite work and then implementation of the context menus
  • Prepared the visual rules for buttons and made the Ubuntu shape ready to use for buttons
  • Completed the design for sliders
  • Discussed a tree view component for navigation
  • Created a first draft of the tabs wireframes and agreed their functionality
  • Created and reviewed a first draft of the text selection visuals; UX and functionality were discussed, ready to include in the specification
  • Created the Libertine packaging project and containers
  • Tidied up the IDE
  • Created some snap packages and got them working
  • Ramped up some new investigative work that arose from our collaboration

The planets aligned… literally

In the early hours of Wednesday morning (before breakfast), a few of us managed to witness a planetary conjunction (Venus, Mars and Jupiter), which was truly amazing… a surprise benefit of sprinting in the Arctic Circle.
Even though there were only a few hours of daylight, we managed to embrace the cold and stand outside to enjoy the beautiful views during lunch and coffee breaks.

The bay

All in all, it was a very productive and fun sprint. We left with a sense of accomplishment and camaraderie.

Read more