Canonical Voices

Posts tagged with 'development'

Igor Ljubuncic

Last week, we published Introduction to snapcraft, a tutorial that provided a detailed overview of the snap build process. We touched on concepts like snap ecosystem components, the snapcraft command line, snapcraft.yaml syntax, and more. We’d like to expand on that first lesson, so today we are going to talk about parts and plugins, which are used in the build process of snaps.

What are we going to do?

Our tasks for today include:

  • Overview of parts.
  • Overview of plugins.
  • Basic examples.

Parts

As their name implies, parts are the raw building blocks of a snap, used to collect and build binaries and their dependencies. Each part defines a source needed to assemble your application into a snap. Parts can be anything – programs, libraries, or other needed assets.

Parts support a large number of keys and values to describe the build process. However, before we demonstrate, let’s discuss plugins too, as the two are tightly coupled.

Plugins

If you work with complex projects, having to manually define every part can be tedious. Snapcraft uses plugins, which help simplify the build process. They are declared within parts to better integrate projects using existing programming languages and frameworks.

In the background, plugins perform various language-specific commands. For example, the nodejs plugin creates parts that use Node.js and/or the JavaScript package managers npm or yarn. Snapcraft supports many languages. To see the full list, type the following in a command-line window:

snapcraft list-plugins

You can also check specific details for each plugin, like:

snapcraft help ruby

Each plugin comes with its own capabilities and language-specific keywords. To illustrate the power and flexibility of plugins, let’s examine a real snap use case.

Python example

This is a section of code from a simple snapcraft.yaml file:

parts:
  tqdm:
    plugin: python
    python-version: python2
    source: https://github.com/tqdm/tqdm.git

What do we have here?

  • plugin: python declares the use of the Python plugin. It can be used for Python projects where you want to import Python modules listed in a requirements.txt file, build a Python project that has a setup.py, or install packages straight from pip.
  • The python-version keyword has two valid options: python2 and python3. The selected version of Python will also be included in your snap, so it will run on systems that don’t have Python installed. You can examine this by expanding the built snap with the unsquashfs command, as shown just after this list. As we’ve seen in the first guide, relative to the snap directory, you will find directories like bin, lib, usr. In the tqdm example, python2.7 will be included under usr/bin inside the snap.
  • The source keyword points to the root of your Python project and can be a local directory or a remote Git repository.
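
To confirm this, you can unpack the built snap and check for the bundled interpreter. A minimal check might look like this (the file name follows the placeholder convention used elsewhere in this guide; unsquashfs extracts into a squashfs-root directory):

unsquashfs tqdm_<version>_<arch>.snap
ls squashfs-root/usr/bin/     # python2.7 should appear in this listing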

Go example

Now, let’s examine a Go plugin example:

parts:
  caire:
    plugin: go
    source: https://github.com/esimov/caire.git
    go-importpath: github.com/esimov/caire
    build-packages:
      - build-essential

What do we have here?

  • Much like what we’ve seen with the Python plugin, we declare the use of the plugin and the root of the project containing the Go code.
  • Furthermore, the Go plugin will build using the version of Go available in the system’s traditional package archive. However, if you require a different version, you can use additional Go-specific keywords available in the plugin to override the defaults. Specifically, you can do this with the go-channel keyword (e.g. “1.8/stable”), as sketched after this list.
  • Also, we can see another difference from the Python example. We’re also using the go-importpath keyword. When using local sources, snapcraft needs to construct a suitable GOPATH. For this it uses go-importpath to know where the sources should live within GOPATH/src.
  • There’s another declaration (build-packages), which we will examine in greater detail below.
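
As noted above, pinning the Go toolchain is simply a matter of adding the go-channel keyword. Here is a sketch based on the same part (the channel value is only an example):

parts:
  caire:
    plugin: go
    go-channel: "1.8/stable"      # build with Go from this snap channel instead of the default
    source: https://github.com/esimov/caire.git
    go-importpath: github.com/esimov/caire
    build-packages:
      - build-essential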

Additional parts keys

In the Go example, you can also see the use of another keyword: build-packages. This is one of the important keys available in the parts section of the snapcraft.yaml file. Specifically, build-packages defines the list of packages required to build a snap.

If you have compiled code on Ubuntu-based systems, you may be familiar with the build-essential package. This is a meta package that defines a large number of compilation tools and dependencies, like gcc, make, headers, and more.

You can also list specific packages that your project needs. For instance: libnetfilter-queue-dev or portaudio19-dev.

These packages are installed using the package manager inside the snapcraft build instance. Some other examples of parts keywords include the following (a couple of them are combined in a short sketch after the list):

  • stage-packages – A list of packages required at runtime by a snap, e.g. python-bcrypt.
  • prepare – This key runs a script before the plugin’s build step.
  • organize – A map of files to rename.
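
To illustrate, here is a hypothetical part that combines a couple of these keywords with the ones we saw earlier (the part name and file paths are purely illustrative):

parts:
  example-part:
    plugin: python
    python-version: python2
    source: .
    build-packages:
      - build-essential           # needed only while building
    stage-packages:
      - python-bcrypt             # needed at runtime, bundled into the snap
    organize:
      README.md: usr/share/doc/example/README.md   # move/rename files inside the snap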

Many other keys are available, and they are all covered in the snapcraft documentation.

Conclusion

This brings us to the end of this tutorial. Today, we learned about parts and plugins, used in the build process of snaps. We also saw several language-specific examples and examined the differences between them. In the next episode, we will talk more about snap confinement.

If you have any questions or feedback, please join our forum for a discussion.

Photo by Tory Bishop on Unsplash.


The post Snapcraft parts & plugins appeared first on Ubuntu Blog.

Igor Ljubuncic

First steps are always hard, especially in technology. Most of the time, you need a primer, just the right dose of knowledge, to get started with a platform. This tutorial and upcoming sequels are designed to provide developers with simple, clear and effective tips and tricks on how to build and publish their applications as snaps. In today’s guide, we will learn more about snapcraft, the innovative, flexible command-line tool for developing snap packages.

What are we going to learn today?

Our agenda for today will include:

  • Before you get started…notes & prerequisites.
  • Overview of the snap system – basic concepts and components.
  • Build environment setup – snapd and snapcraft installation.
  • Overview of snapcraft.yaml – what it is and what it does.
  • Overview of the snap application build.
  • Overview of the snap file format.
  • Overview of the snap publication process.
  • Some simple examples.

Notes & prerequisites

Before we begin, there are a few small things you should know:

  • This tutorial is a part of a gradually progressive series. Some terms may look vague at this point. They will be explained in detail in later guides.
  • You should treat each part in the series as a building block for the next installment. It is advisable to follow the tutorials in sequential order to get the best understanding of the tools and concepts presented here.

Basic concepts

To make this tutorial easier to follow and understand, let us first define several concepts.

  • Snaps are confined, standalone Linux applications bundled with all the necessary dependencies to run independently. They differ from traditional Linux applications in that they do not rely on system libraries.
  • Snaps are isolated from the system.
  • Snaps are updated automatically and safely. The updates are transactional.
  • Snaps come with multiple release channels that allow flexibility in software testing.
  • Snaps are available in the Snap Store, allowing out of the box discovery by millions of users.
  • Snaps are built using the snapcraft command line tool.

Major components

The Snap system consists of several major components:

  • Snapd is a background service that allows users to install and run snaps.
  • Snap is the userspace component of the snapd service. For instance, snap install <foo> will install the application named <foo> from an online store called the Snap Store.
  • The Snap Store is a central repository of snap applications. It offers security and cryptographic signatures. The store contents are available to any system running snapd. The store features release channels, which allow multiple versions of the application to be available at the same time. Developers who want to publish their applications as snaps need to register for a free account in the store.
  • Snapcraft is the command line tool that allows developers to build and publish their applications as snaps.

Build environment setup

You may want to practice while reading this tutorial. To that end, please make sure you have the following.

  • A Linux system with snapd service installed. While snaps are available on many Linux distributions, it is easiest to practice and run snaps on the Ubuntu family. A standard Ubuntu 18.04 instance is the most convenient way to do so. Once you have snapd installed, you can install the snapcraft tool.
snap install snapcraft
  • We also need to install multipass, which is used by snapcraft in the background to create isolated build environment instances inside (Ubuntu) virtual machines. The build process is transparent to the end user.
snap install multipass
  • Some knowledge of the Linux command line.
  • Some basic familiarity with the YAML language syntax.

Snapcraft capabilities

Let’s start with an overview of snapcraft and its capabilities.

Snapcraft is the command line tool that allows developers to build and publish their applications as snaps, in the snap file format. Snaps are created as the final artifact of the build process, with packages bearing a .snap extension. The snap file format is a single compressed SquashFS filesystem. It includes the application code and declarative metadata, which is interpreted by the snapd service to set up a secure sandbox for that application.

Snapcraft commands

Snapcraft supports a wide range of commands, designed to help developers start their project, build their applications, upload these applications to the online snap store, and publish them to the end user. Just to give you more context, some of these commands include:

build       Builds artifacts based on the snapcraft.yaml file.
clean       Remove content - cleans downloads, builds or…
init        Initializes a snapcraft project.
push        Pushes a snap to the online snap store.
register    Registers a snap with the online snap store.
snap        Create a snap.

And many more.

Where does it all start?

Snapcraft builds applications based on declarations written in a file called snapcraft.yaml.

Snapcraft.yaml is a configuration file written in the YAML language, with stanzas defining the application structure and behavior. When snapcraft runs and parses this file, it will use the declared information to build the snap.

Developers coming from the traditional Linux world will find it easier to relate to snapcraft.yaml as being somewhat similar to a Makefile or an RPM spec file. Some familiarity with the YAML language syntax is helpful in understanding the declarations and logic in the snapcraft.yaml files.

Snapcraft.yaml

You can create the snapcraft.yaml file manually in your project directory, or you can run the snapcraft init command, which will create a template file you can then populate with the required information.

snapcraft init

A minimal valid snapcraft.yaml file that can be built into a working snap requires three stanzas: metadata, confinement level, and build definition.

Let’s illustrate this with an example. Below, you can see a complete snapcraft.yaml file contents for an application called wethr, a command line weather tool. We will now look at each section of the code separately and learn what it does.

name: wethr
version: "1.4.0"
summary: Command line weather tool.
description: |
  Get current weather.

confinement: strict
base: core18

apps:
  wethr:
    command: wethr
    plugs:
      - network

parts:
  wethr:
    plugin: nodejs
    source-tag: "v1.4.0"
    source: https://github.com/twobucks/wethr.git

Metadata

Several mandatory fields define and describe the application – name, version, summary and description. These fields allow end users to find snaps and install them.

name: wethr
version: "1.4.0"
summary: Command line weather tool.
description: |
  Get current weather.

Let’s examine these fields in more detail:

  • Name – A string that defines the name of the snap. It must start with an ASCII character and can only use ASCII lowercase letters, numbers and hyphens. It must be between 1 and 40 characters in length. The name must be unique in the Snap Store.
  • Version – A string that defines the version of the application to the user. Max. length is 32 characters. You can also use ‘git’ for the version qualifier. By specifying ‘git’, the current git tag or commit will be used as the version string. Please note that versions carry no semantic meaning in snaps.
  • Summary – A string that briefly describes the application. The summary can not exceed 79 characters. You can use a chevron ‘>’ character or a pipe ‘|’ character in this key to declare a multi-line summary.
  • Description – A string that describes the application. You can use multiple lines. There is no practical limitation on the length of this key.
Field         Definition     Type      Length    Note
Name          Name           String    1-40      Unique
Version       App version    String    1-32      No semantic meaning
Summary       Brief          String    1-79
Description   Full           String    High (no practical limit)
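
For instance, a variant of the wethr metadata could derive its version from git and use a folded multi-line summary. This is just a sketch; the actual wethr snap pins its version explicitly:

name: wethr
version: git              # use the current git tag or commit as the version string
summary: >
  A small command line
  weather tool.
description: |
  Get current weather conditions
  straight from your terminal.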

When the application is built with snapcraft, this metadata will be made available to users. Information about additional directives and their syntax is available in the snapcraft YAML reference documentation.

Confinement level

By design, snaps are confined and limited in what they can do. This is an important feature that distinguishes snaps from software distributed using the traditional repository methods. The confinement allows for a high level of isolation and security, and prevents snaps from being affected by underlying system changes, affecting one another or affecting the system.

Different confinement levels describe what type of access the application will have once installed on the user’s system. Confinement levels can be treated as filters that define what type of system resources the application can access outside the snap.

Confinement is defined by general levels and fine-tuned using interfaces. We will discuss interfaces later in the series.

There are three levels of confinement:

  • Strict – This confinement level uses Linux kernel security features to lock down the applications inside the snap. By default, a strictly confined application cannot access the network, the user’s home directory, any audio subsystems or webcams, and it cannot display any graphical output via X or Wayland.
  • Devmode – This is a debug mode level used by developers as they iterate on the creation of their snap. This allows developers to troubleshoot applications, because they may behave differently when confined.
  • Classic – This is a permissive level equivalent to the full system access that traditionally packaged applications have. Classic confinement is often used as a stop-gap measure to enable developers to publish applications that need more access than the current set of permissions allow. The classic level should be used only when required for functionality, as it lowers the security of the application.
Access type          Strict                Devmode            Classic
Access to network    N                     Y                  System
Access to home dir   N                     Y                  System
Access to audio      N                     Y                  System
Access to webcam     N                     Y                  System
Access to display    N                     Y                  System
Used for             Preferred             Troubleshooting    Stopgap measure
Other                Interfaces override                      Requires review

Classically confined snaps are reviewed by the Snap Store reviewers team before they can be published. Snaps that use classic confinement may be rejected if they don’t meet the necessary requirements.

If we look at the wethr example, we define the confinement level as strict:

confinement: strict

Bases

In general, snaps cannot see the root filesystem on end user systems. This prevents conflict with other applications and increases security. However, applications still need some location to act as the root filesystem. They would also benefit from common libraries (e.g. libc) being in this root filesystem rather than bundled into each application.

The base keyword specifies a special kind of snap that provides a minimal set of libraries common to most applications. It will be mounted as the root filesystem for your application.

The core18 base is recommended.

base: core18

Build definition

The build definition stanza consists of several declarations on how the application is going to be built. If we look at the wethr application example, the build definition consists of the following:

apps:
  wethr:
    command: wethr
    plugs:
      - network

parts:
  wethr:
    plugin: nodejs
    source-tag: "v1.4.0"
    source: https://github.com/twobucks/wethr.git

Build definition – apps

Let’s look at the first part of the YAML code:

apps:
  wethr:
    command: wethr
    plugs:
      - network

apps: defines the application(s) that are going to be part of the snap. The wethr example has a single application – wethr. Other snaps may have multiple sub-applications or executables.

wethr: defines a block for the wethr application.

command: defines the path to the executable (relative to snap) and arguments to use when this application runs.

plugs: is a new concept that we have not yet discussed. It will be explained in more detail later in the series. For the time being, we will go with a simplified description. In this example, the declaration specifies that the wethr application will be allowed access to the network interface, which is not available by default under strict confinement. This gives the wethr snap network access while it remains strictly confined.
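
Later, once the snap is built and installed, you can check which interfaces it uses and whether they are connected. A quick sanity check, assuming the snap is named wethr:

snap connections wethr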

Build definition – parts

Let’s take a closer look at the second part of the YAML code, aptly named parts.

parts:
  wethr:
    plugin: nodejs
    source-tag: "v1.4.0"
    source: https://github.com/twobucks/wethr.git

parts: defines what sources are needed to assemble your app. Parts can be anything – programs, libraries, or other needed assets. We then further define how the wethr application should be built.

plugin: defines the use of an existing tool that will perform various language-specific commands in the background. For example, the nodejs plugin creates parts that use Node.js and/or the JavaScript package manager npm.

source-tag: defines a specific tag for source repositories under version control. This plugin will then look for the source with the specific tag and try to obtain the necessary data for the application build.

source: defines the URL or a path of the application code that needs to be downloaded for the build. It can be a local or remote path, and can refer to a directory tree, a compressed archive or a revision control repository.

Application build process

Once you have a complete snapcraft.yaml file, you can build it.

The best way to start a new build is with a clean build environment. This ensures there are no library or dependency conflicts in your build setup, and that all snaps are built against the same common baseline.

This will guarantee that your snaps can run on all available snap platforms without any errors. To build your snap in a clean build environment, run:

snapcraft

This command will start an Ubuntu 18.04 minimal install virtual machine instance via multipass, download the necessary packages, build the snap, save the final artifact in your work directory, and then stop the virtual machine. The virtual machine will not be deleted, and it will be reused for re-builds of the snap, in case you require it.
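
If you want to discard the build environment and any intermediate artifacts, you can clean the project and then rebuild from scratch:

snapcraft clean
snapcraft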

Snap created

You may encounter errors during the build process, such as invalid syntax, missing libraries, or your snap may not work correctly. We will learn more about the steps to understand and troubleshoot these later in the series. For the time being, let’s assume that our snap builds correctly and without any errors.

If the build is successful, you will have a file with the .snap extension in your work directory.
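
You can install this locally built snap to test it before publication. Because the file has not been signed by the store, snap requires an explicit flag (the file name follows the placeholder convention used in this guide):

sudo snap install --dangerous <file>.snap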

Snap file format

To better understand snaps, let’s take a closer look at the snap file format. Each snap is a single SquashFS filesystem containing the application code and metadata. You can unpack the snaps using the unsquashfs command or by mounting a snap as a loopback device.

unsquashfs <file>.snap
mount <file>.snap <mount point> -t squashfs -o loop

The contents of the extracted or mounted snap will resemble a typical Linux filesystem. There will be directories like bin, lib, usr and others. There will also be a command-<snap name>.wrapper file, which defines the snap environment under which the snap will run.

drwxr-xr-x 8 igor igor 4096 Dec  5 12:48 ./
drwxrwxr-x 4 igor igor 4096 Dec  5 12:48 ../
drwxr-xr-x 2 igor igor 4096 Dec  5 12:48 bin/
-rwxr-xr-x 1 igor igor   61 Dec 5 12:48 command-foo.wrapper*
drwxr-xr-x 3 igor igor 4096 Dec  5 12:48 etc/
drwxr-xr-x 4 igor igor 4096 Dec  5 12:48 lib/
drwxr-xr-x 3 igor igor 4096 Dec  5 12:48 meta/
drwxr-xr-x 3 igor igor 4096 Dec  5 12:48 snap/
drwxr-xr-x 6 igor igor 4096 Apr 16  2018 usr/

Again, to make this easier to understand for developers coming from the traditional world, this is somewhat similar to extracting a package (like an RPM), and running an application from a relative path by loading the necessary libraries using the $LD_LIBRARY_PATH environment variable. Snaps use the $SNAP* variables to define and determine their “virtual” environment:

  • $SNAP (install path, RO)
  • $SNAP_DATA (path in /var, RW)
  • $SNAP_USER_DATA (path in /home, RW)

And others.
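
One quick way to see these variables in practice is to open a shell inside the environment of an installed snap and print them. For example, assuming the wethr snap from earlier:

snap run --shell wethr
echo $SNAP $SNAP_DATA $SNAP_USER_DATA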

A snap that has been published in the snap store will have several additional files, like the assert file, desktop launcher and others. We will discuss these in more detail later in the series, once we learn how to use plugs and interfaces.

Publishing a snap

Now we are ready to publish our snap. The process consists of the following:

  • Create your developer account at snapcraft.io.
  • Register your app’s name (on the website or via the command line).
  • Release your app.
snapcraft login
snapcraft register

Once you have completed the first two steps, you can upload the snap to the store.

snapcraft push --release=<channel> <file>.snap

This command will upload the snap file to the store. However, before you run it, we need to talk a little bit about the different release channels.

Snap Store channels

The Snap Store comes with a very high level of release management flexibility by using multiple channels, which allow developers to publish their applications in a staged, controlled manner. The channels can be treated as a multi-dimensional version control. Each channel consists of three components:

<track>/<risk>/<branch>
  • Track – enables snap developers to publish multiple supported releases of their application under the same snap name.
  • Risk – represents a progressive potential trade-off between stability and new features.
  • Branch – optional; holds temporary releases intended to help with bug-fixing.

A typical channel will look something like:

--channel=latest/edge

Track

All snaps must have a default track called latest. This is the implied track unless specified otherwise. A track contains releases based on the developer’s versioning convention. A track could be used to track minor updates (2.0.1, 2.0.2), major updates (2.1, 2.2), or releases held for long-term support (3.2, 4.1).

Risk

This is the most important aspect of the channels. It defines the readiness of your application, much in the same way you would version any application – alpha, beta, RC, GA, etc. The risk levels used in the snap store are: stable, candidate, beta, and edge.

Snaps are installed using the stable risk-level by default. Installing from a less stable risk-level will typically mean more frequent updates.

You can also use the --stable, --candidate, --beta and --edge options instead of writing the full channel declaration. For instance, --stable is equivalent to --channel=stable or even --channel=latest/stable.

End users of snap have the option to switch between different channels for their own convenience.
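
For example, a user could install the snap from a less stable channel, or move an existing installation to a different one:

sudo snap install wethr --channel=latest/candidate
sudo snap refresh wethr --channel=latest/stable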

Branch

Branches are optional. They allow the creation of short-lived sequences of snaps that can be published on demand by snap developers to help with fixes or temporary experimentation.

Branch names convey their purpose, such as fix-for-bug123, but the name isn’t exposed in the normal way, such as with snap info. Instead, they can be tracked by anyone simply knowing the name. After 30 days with no further updates, a branch will be closed automatically.

Upload your snap

Now that we understand the concept of channels, we can upload our snap.

Please note you should NOT upload your snap to the stable channel right away. You should do the publication in a staged manner, and allow for sufficient testing, to make sure the snap application is stable and works correctly before it is promoted into the stable channel.

snapcraft push --release=beta <file>.snap

The Snap Store will run several automated checks against your snap. There may also be a manual review, depending on how you have built your snap (like classic confinement). If the checks pass without errors, your snap will be available in the store.

You are encouraged to create a compelling page for your application, to attract users. The use of an accurate description, high-quality logo and screenshots can all help make your application more visible and appealing.

And that’s it. Congratulations, you’ve just successfully published your first snap!

Conclusion

Today, we learned about snaps as a concept and technology. We learned about the basic architecture of the snap system and its components. We delved a little deeper into the snapcraft internals, focusing on the commands, the snapcraft.yaml syntax and how to build and publish your snaps.

We touched on the important concepts of confinement levels, clean build environment, and channels. We used a simple example of a command-line weather application in the build process. The next step is to expand on the build definitions, and make more complex snaps.

Indeed, snapcraft has many useful components, and through the series, we will discuss them, including interfaces, plugins, hooks and health checks, and how to develop snaps with different languages, like Python, Electron, Go, and others.

Meanwhile, if you’re interested, please take a look at the following blog posts, as they provide additional details and insight into the snap build process:

Bootstrap Your Snaps

Zero to Snap – Rev up your packaging

Zero to Hero – Snap me up before you GO

Publish Your Unity Games in the Snap Store

And of course, we welcome feedback and ideas, so if you like, please join the discussion forum.

Photo by Barn Images on Unsplash.

The post Introduction to snapcraft appeared first on Ubuntu Blog.

Robin Winslow

I’ve been thinking about the usability of command-line terminals a lot recently.

Command-line interfaces remain mystifying to many people. Usability hobbyists seem as inclined to ask why the terminal exists, as how to optimise it. I’ve also had it suggested to me that the discipline of User Experience (UX) has little to offer the Command-Line Interface (CLI), because the habits of terminal users are too inherent or instinctive to be defined and optimised by usability experts.

As an experienced terminal user with a keen interest in usability, I disagree that usability has little to offer the CLI experience. I believe that the experience can be improved through the application of usability principles just as much as for more graphical domains.

Steps to learn a new CLI tool

To help demystify the command-line experience, I’m going to lay out some of the patterns of thinking and behaviour that define my use of the CLI.

New CLI tools I’ve learned recently include snap, kubectl and nghttp2, and I’ve also dabbled in writing command-line tools myself.

Below I’ll map out an example of the steps I might go through when discovering a new command-line tool, as a basis for exploring how these tools could be optimised for CLI users.

  1. Install the tool
    • First, I might try apt install {tool} (or brew install {tool} on a Mac)
    • If that fails, I’ll probably search the internet for “Install {tool}” and hope to find the official documentation
  2. Check it is installed, and if tab-complete works
    • Type the first few characters of the command name (sna for snap) followed by <tab> <tab>, to see if the command name auto-completes, signifying that the system is aware of its existence
    • Hit space, and then <tab> <tab> again, to see if it shows me a list of available sub-commands, indicating that tab completion is set up correctly for the tool
  3. Try my first command
    • I’m probably following some documentation at this point, which will be telling me the first command to run (e.g. snap install {something}), so I’ll try that out and expect prompt, succinct feedback to show me that it’s working
    • For basic tools, this may complete my initial interaction with the tool. For more complex tools like kubectl or git I may continue playing with it
  4. Try to do something more complex
    • Now I’m likely no longer following a tutorial, instead I’m experimenting on my own, trying to discover more about the tool
    • If what I want to do seems complex, I’ll straight away search the internet for how to do it
    • If it seems simpler, I’ll start looking for a list of subcommands to achieve my goal
    • I start with {tool} <tab> <tab> to see if it gives me a list of subcommands, in case it will be obvious what to do next from that list
    • If that fails I’ll try, in order, {tool} <enter>, {tool} -h, {tool} --help, {tool} help or {tool} /?
    • If none of those work then I’ll try man {tool}, looking for a Unix manual entry
    • If that fails then I’ll fall back to searching the internet again

UX recommendations

Considering my own experience of CLI tools, I am reasonably confident the following recommendations make good general practice guidelines:

  • Always implement a --help option on the main command and all subcommands, and if appropriate print out some help when no options are provided ({tool} <enter>)
  • Provide both short- (e.g. -h) and long- (e.g. --help) form options, and make them guessable
  • Carefully consider the naming of all subcommands and options, use familiar words where possible (e.g. help, clean, create)
  • Be consistent with your naming – have a clear philosophy behind your use of subcommands vs options, verbs vs nouns etc.
  • Provide helpful, readable output at all times – especially when there’s an error (npm I’m looking at you)
  • Use long-form options in documentation, to make commands more self-explanatory
  • Make the tool easy to install with common software management systems (snap, apt, Homebrew, or sometimes NPM or pip)
  • Provide tab-completion. If it can’t be installed with the tool, make it easy to install and document how to set it up in your installation guide
  • Command outputs should use the appropriate output streams (STDOUT and STDERR) and should be as user-friendly and succinct as possible, and ideally make use of terminal colours

Some of these recommendations are easier to implement than others. Ideally every command should consider their subcommands and options carefully, and implement --help. But writing auto-complete scripts is a significant undertaking.

Similarly, packaging your tool as a snap is significantly easier than, for example, adding software to the official Ubuntu software sources.

Although I believe all of the above to be good general advice, I would very much welcome research to highlight the relative importance of addressing each concern.

Outstanding questions

There are a number of further questions for which the answers don’t seem obvious to me, but I’d love to somehow find out the answers:

  • Once users have learned the short-form options (e.g. -h) do they ever use the long-form (e.g. --help)?
  • Do users prefer subcommands (mytool create {something}) or options (mytool --create {something})?
  • For multi-level commands, do users prefer {tool} {object} {verb} (e.g. git remote add {remote_name}), or {tool} {verb} {object} (e.g. kubectl get pod {pod_name}), or perhaps {tool} {verb}-{object} (e.g. juju remove-application {app_name})?
  • What patterns exist for formatting command output? What’s the optimal length for users to read, and what types of formatting do users find easiest to understand?

If you know of either authoritative recommendations or existing research on these topics, please let me know in the comments below.

I’ll try to write a more in-depth follow-up to this post when I’ve explored a bit further on some of these topics.

Anthony Dillon

Webteam development summary

Iteration 6

covering the 14th to the 25th of August

This iteration saw a lot of work on tutorials.ubuntu.com and on the migration of design.ubuntu.com from WordPress to a fresh new Jekyll site project. We also continued research and planning for the new snapcraft.io site, and made a start on its development framework.

On Vanilla Framework, we put a lot of emphasis on polishing the existing components and porting the old theme concept patterns into the codebase.

Websites issues: 66 closed, 33 opened (551 in total)

Some highlights include:
– Fixing content of card touching card edge in tutorials – https://github.com/canonical-websites/tutorials.ubuntu.com/issues/312
– Migrate canonical.com to Vanilla: Polish and custom patterns – https://github.com/canonical-websites/www.canonical.com/issues/172
– Prepare for deploy of design.ubuntu.com – https://github.com/canonical-websites/design.ubuntu.com/issues/54
– Redirects from https://www.ubuntu.com/usn/ to https://usn.ubuntu.com/usn were broken – https://github.com/canonical-websites/www.ubuntu.com/issues/2128
– design.ubuntu.com/web-style-guide: build page and then hide pages – https://github.com/canonical-websites/design.ubuntu.com/issues/66
– Snapcraft prototype: Snap page – https://github.com/canonical-websites/snapcraft.io/issues/346
– Create Flask skeleton application – https://github.com/canonical-websites/snapcraft-flask/issues/2

Vanilla Framework issues: 24 closed, 16 opened (43 in total)

Some highlights include:
– Combine the entire suite of brochure theme patterns to Vanilla’s code base – https://github.com/vanilla-framework/vanilla-framework/issues/1177
– Many improvements to the documentation theme – https://github.com/vanilla-framework/vanilla-docs-theme/issues/45
– External link icon seems stretched – https://github.com/vanilla-framework/vanilla-framework/issues/1058
– .p-heading--icon pattern remove text color – https://github.com/vanilla-framework/vanilla-framework/issues/1272
– Remove margin rules on card content – https://github.com/vanilla-framework/vanilla-framework/issues/1277

All of these projects are open source. So please file issues if you find any bugs or even better propose a pull request. See you in two weeks for the next update from the web team here at Canonical.

Robin Winslow

Canonical’s webteam manage over 18 websites as well as many supporting projects and frameworks. These projects are built with any combination of Python, Ruby, NodeJS, Go, PostgreSQL, MongoDB or OpenStack Swift.

We have 9 full-time developers – half the number of websites we have. And naturally some of our projects get a lot of time spent on them (like www.ubuntu.com), and others only get worked on once every few months. Most devs will touch most projects at some point, and some may work on a few of them in any given day.

Before any developer can start a new piece of work, they need to get the project running on their computer. These computers may be running any flavour of Linux or macOS (thankfully we don’t yet need to support Windows).

A focus on tooling

If you’ve ever tried to get up and running on a new software project, you’ll certainly appreciate how difficult that can be. Sometimes developers can spend days simply working out how to install a dependency.

XKCD 1742: Will it work?

Given the number and diversity of our projects, and how often we switch between them, this is a delay we simply cannot afford.

This is why we’ve invested a lot of time into refining and standardising the local development tooling, making it as easy as possible for any of our devs, or any contributors, to get up and running as simply as possible.

The standard interface

We needed a simple, standardised set of commands that could be run across all projects, to achieve predictable results. We didn’t want our developers to have to dig into the README or other documentation every time they wanted to get a new project running.

This is the standard interface we chose to implement in all our projects, to cover the basic functions common to almost all our projects:

./run        # An alias for "./run serve"
./run serve  # Prepare the project and run the local server
./run build  # Build the project, ready for distribution or release
./run watch  # Watch local files for changes and rebuild as necessary
./run test   # Check code syntax and run unit tests
./run clean  # Remove any temporary or built files or local databases

We decided on using a single run executable as our single entry-point into all our projects only after previously trying and eventually rejecting a number of alternatives:

  • A Makefile: The syntax can be confusing. Makefiles are really made for compiling system binaries, which doesn’t usually apply to our projects
  • gulp, or NPM scripts: Not all our projects need NodeJS, and NodeJS isn’t always available on a developer’s system
  • docker-compose: Although we do ultimately run everything through Docker (see below), the docker-compose entrypoint alone wasn’t powerful enough to achieve everything we needed

In contrast to all these options, the run script allows us to perform whatever actions we choose, using any interpreter that’s available on the local system. The script is currently written in Bash because it’s available on all Linux and macOS systems. As an additional bonus, ./run is quicker to type than the other options, saving our devs crucial nanoseconds.

The single dependency that developers need to install to run the script is Docker, for reasons outlined below.
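
To give a flavour of what this looks like, here is a minimal, hypothetical sketch of such an entry-point. It is not the actual Canonical script, and the image name and commands run inside the container are illustrative assumptions:

#!/bin/bash
# Hypothetical ./run sketch: each action is executed inside a Docker container.
set -euo pipefail

IMAGE="example-project-dev"     # illustrative image name
COMMAND="${1:-serve}"           # default to "serve", matching the standard interface

case "$COMMAND" in
  serve) docker run --rm -it -v "$PWD":/app -w /app -p 8001:8001 "$IMAGE" ./run-dev-server.sh ;;
  test)  docker run --rm -v "$PWD":/app -w /app "$IMAGE" ./run-tests.sh ;;
  clean) docker rmi "$IMAGE" ;;
  *)     echo "Usage: ./run [serve|test|clean]" >&2; exit 1 ;;
esac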

Knowing we can run or build our projects through this standard interface is not only useful for humans, but also for supporting services – like our build jobs and automated tests. We can write general solutions, and know they’ll be able to work with any of our projects.

Using ./run is optional

All our website projects are openly available on GitHub. While we believe the ./run script offers a nice easy way of running our projects, we are mindful that people from outside our team may want to run the project without installing Docker, want to have more fine-grained control over how the project is run, or just not trust our script.

For this reason, we have tried to keep the addition of the ./run script from affecting the wider shape of our projects. It remains possible to run each of our projects using standard methods, without ever knowing or caring about the ./run script or Docker.

  • Django projects can still be run with pip install -r requirements.txt; ./manage.py runserver
  • Jekyll projects can still be run with bundle install; bundle exec jekyll serve
  • NodeJS projects can still be run with npm install; npm run serve

While the documentation in our READMEs recommends the ./run script, we also try to mention the alternatives, e.g. www.ubuntu.com’s HACKING.md.

Using Docker for encapsulation

Although we strive to keep our projects as simple as possible, every software project relies on dependent libraries and programs. These dependencies pose 2 problems for us:

  • We need to install and run these dependencies in a predictable way – which may be difficult in some operating systems
  • We must keep these dependencies from affecting the developer’s wider system – there’s nothing worse than having a project break your computer

For a while now, developers have been solving this problem by running applications within virtual machines running Linux (e.g. with VirtualBox and Vagrant), which is a great way of encapsulating software within a predictable environment.

Linux containers offer light-weight encapsulation

More recently, containers have entered the scene.


A container is a part of the existing system with carefully controlled permissions and an encapsulated filesystem, to make it appear and behave like a separate operating system. Containers are much lighter and quicker to run than a full virtual machine, and yet provide similar benefits.

The easiest and most direct way to run containers is probably LXD, but unfortunately there’s no easy way to run LXD on macOS. By contrast, Docker CE is trivial to install and use on macOS, and so this became our container manager of choice. When it becomes easier to run LXD on macOS, we’ll revisit this decision.

Each project uses a number of Docker images


Running containers through Docker helps us to carefully manage our projects’ dependencies, by:

  • Keeping all our software, from Python modules to databases, from affecting the wider system
  • Logically grouping our dependencies into separate light-weight containers: one for the database, and a separate one for each technology stack (Python, Ruby, Node etc.)
  • Easily cleaning up a project by simply deleting its associated containers

So the ./run script in each project will run the necessary commands to start the project, executing each of them inside the relevant Docker images. In partners.ubuntu.com, for example, everything from installing dependencies to serving the site happens inside containers.

Docker is the only dependency

By using Docker images in this way, the developer doesn’t need to install any of the project dependencies on their local system (NodeJS, Python, PostgreSQL etc.). Docker – which should be trivial to install on both Linux and macOS – is the single dependency they need to run any of our projects.

Keeping the ./run script up-to-date across projects

A key feature of our solution is that it provides a consistent interface across all of our projects. However, the script itself will vary between projects, as different projects have different requirements. So we needed a way of sharing relevant parts of the script while keeping the ability to customise it locally.

It is also important that we don’t add significant bloat to the project’s dependencies. This script is just meant to be a useful shorthand way of running the project, but we don’t want it to affect the shape of the project at large, or add too much extra complexity.

However, we still need a way of making improvements to the script in a centralised way and easily updating the script in existing projects.

A yeoman generator

To achieve these goals, we maintain a yeoman generator called canonical-webteam. This generator contains a few ways of adding the ./run architecture, for some common types of projects we use:

$ yo canonical-webteam:run            # Add ./run for a basic node-only project
$ yo canonical-webteam:run-django     # Add ./run for a databaseless Django project
$ yo canonical-webteam:run-django-db  # Add ./run for a Django project with a database
$ yo canonical-webteam:run-jekyll     # Add ./run for a Jekyll project

These generator scripts can be used either to add the ./run script to a project that doesn’t have it, or to replace an existing ./run script with the latest version. It will also optionally update .gitignore and package.json with some of our standard settings for our projects.

Try it out!

To see this ./run tooling in action, first install Docker by following the official instructions.

Run the www.ubuntu.com website

You should now be able to run a version of the www.ubuntu.com website on your computer:

  • Download the www.ubuntu.com codebase, e.g.:

    curl -L https://github.com/canonical-websites/www.ubuntu.com/archive/master.zip > www.ubuntu.com-master.zip
    unzip www.ubuntu.com-master.zip
    cd www.ubuntu.com-master
    
  • Run the site!

    $ ./run
    # Wait a while (the first time) for it to download and install dependencies. Until:
    Starting development server at http://0.0.0.0:8001/
    Quit the server with CONTROL-C.
    
  • Visit http://127.0.0.1:8001 in your browser, and you should see the latest version of the https://www.ubuntu.com website.

Forking or improving our work

We have documented this standard interface in our team practices repository, and we keep the central code in our canonical-webteam Yeoman generator.

Feel free to fork our code, or if you’d like to suggest improvements please submit an issue or pull-request against either repository.


Also published on Medium.

Robin Winslow

Nowadays free software is everywhere – from browsers to encryption software to operating systems.

Even so, it is still relatively rare for the code behind websites and services to be opened up.

Stepping into the open

Three years ago we started to move our website projects to Github, and we also took this opportunity to start making them public. We started with the www.ubuntu.com codebase, and over the next couple of years almost all our team’s other sites have followed suit.

canonical-websites org

At this point practically all the web team’s sites are open source, and you can find the code for each site in our canonical-websites organisation.

www.ubuntu.com developer.ubuntu.com www.canonical.com
partners.ubuntu.com design.ubuntu.com maas.io
tour.ubuntu.com snapcraft.io build.snapcraft.io
cn.ubuntu.com jp.ubuntu.com conjure-up.io
docs.ubuntu.com tutorials.ubuntu.com cloud-init.io
assets.ubuntu.com manager.assets.ubuntu.com vanillaframework.io

We’ve tried to make it as easy as possible to get them up and running, with accurate and simple README files. Each of our projects can be run in much the same way, and should work the same across Linux and macOS systems. I’ll elaborate more on how we manage this in a future post.

README example

We also have many supporting projects – Django modules, snap packages, Docker images etc. – which are all openly available in our canonical-webteam organisation.

Reaping the benefits

Opening up our sites in this way means that anyone can help out by making suggestions in issues or directly submitting fixes as pull requests. Both are hugely valuable to our team.

Another significant benefit of opening up our code is that it’s actually much easier to manage:

  • It’s trivial to connect third party services, like Travis, Waffle or Percy;
  • Similarly, our own systems – such as our Jenkins server – don’t need special permissions to access the code;
  • And we don’t need to worry about carefully managing user permissions for read access inside the organisation.

All of these tasks were previously surprisingly time-consuming.

Designing in the open

Shortly after we opened up the www.ubuntu.com codebase, the design team also started designing in the open, as Anthony Dillon recently explained.

Anthony Dillon

Over the past year, a change has emerged in the design team here at Canonical: we’ve started designing our websites and apps in public GitHub repositories, and therefore sharing the entire design process with the world.

One of the main things we wanted to improve was the design sign-off process, while also increasing visibility for developers into which design was the final one among numerous iterations and inconsistently labelled files and folders.

Here is the process we developed and have been using on multiple projects.

The process

Design work items are initiated by creating a GitHub issue on the design repository relating to the project. Each project consists of two repositories: one for the code base and another for designs. The work item issue contains a short descriptive title followed by a detailed description of the problem or feature.

Once the designer has created one or more designs to present, they upload them to the issue with a description. Each image is titled with a version number to help reference in subsequent comments.

Whenever the designer updates the GitHub issue everyone who is watching the project receives an email update. It is important for anyone interested or with a stake in the project to watch the design repositories that are relevant to them.

The designer can continue to iterate on the task safe in the knowledge that everyone can see the designs in their own time and provide feedback if needed. The feedback that comes in at this stage is welcomed, as early feedback is usually better than late.

As iterations of the design are created, the designer simply adds them to the existing issue with a comment of the changes they made and any feedback from any review meetings.

Table with actions design from MAAS project

When the design is finalised a pull request is created and linked to the GitHub issue, by adding “Fixes #111” (where #111 is the number of the original issue) to the pull request description. The pull request contains the final design in a folder structure that makes sense for the project.

Just like with code, the pull request is then approved by another designer or the person with the final say. This may seem like an extra step, but it allows another person to look through the issue and make sure the design completes the design goal. On smaller teams, this pull request can be approved by a stakeholder or developer.

Once the pull request is approved it can be merged. This will close and archive the issue and add the final design to the code section of the design repository.

That’s it!

Benefits

If all designers and developers of a project subscribe to the design repository, they will be included in the iterative design process with plenty of email reminders. This increases the visibility of designs in progress to stakeholders, developers and other designers, allowing for wider feedback at all stages of the design process.

Another benefit of this process is having a full history of decisions made and the evolution of a design all contained within a single page.

If your project is open source, this process automatically makes your designs available to your community or anyone that is interested in the product. This means that anyone who wants to contribute to the project has access to the same information and assets as the team members.

The code section of the design repository becomes the home for all signed off designs. If a developer is ever unsure as to what something should look like, they can reference the relevant folder in the design repository and be confident that it is the latest design.

Canonical is largely a company of remote workers, and conversations are not always documented, which means only some people are aware of the decisions made and the discussions behind them. This design process has helped with that issue, as designs and discussions are all in a single place, with clearly laid out emails for any changes that people may be interested in.

Conclusion

This process has helped our team improve velocity and transparency. Is this something you’ve considered or have done in your own projects? Let us know in the comments, we’d love to hear of any way we can improve the process.

Barry McGee

One of the most complex aspects of managing continuous development on a large codebase is ensuring that it remains stable.

This problem is particularly acute when building out front end architecture using HTML & CSS due to the inherently global nature of CSS.

How many times have you shipped a CSS change to one small part of a website only to find you’ve inadvertently broken a page element on a different page entirely?

This problem usually arises because all of your CSS is loaded via one external file that is added to each page of your website. If you don’t namespace or isolate your styles correctly, changes to your CSS may have unintended consequences.

Structuring your CSS using the BEM convention or similar can help prevent such clashes. However, in a fast moving team where multiple developers are working on a large codebase daily, relying on code convention alone is often not enough to stop visual regression bugs from creeping in.

Ideally, you or a team member should check each page of your site, in turn, to make sure nothing has broken, right? While that’s a solid QA approach, it doesn’t scale very well. As your site grows, it can become far too time-consuming to check each page, especially when you consider that you may also need to check each page at multiple breakpoints.

That’s where automated Visual Regression Testing (VRT) tools can seriously lighten your workload. A VRT tool will typically run through your site and capture a baseline snapshot of all your pages to use as a benchmark.

After you then make some changes, you run the process again and the VRT tool will compare the latest capture of your pages with the baseline capture and highlight the differences. It’s at this stage where you’ll be alerted to any unintended consequences.

The concept of VRT has been around for a few years but up until now, most solutions have involved setting up your process locally, typically involving quite a few moving parts. When trying to get a project team to integrate VRT as part of their workflow using one of these solutions, we always ran into trouble as it was so difficult to keep individual developer setups consistent – inevitably, I’d spend longer debugging the VRT setup than reviewing visual diffs.

I then stumbled upon Percy.io, which offers VRT software as a service. I was immediately interested in how we might utilise it for Vanilla Framework, our constantly evolving CSS framework.

I immediately signed up for a trial and was quickly impressed with their GitHub integration and ease of use. Percy is unobtrusive, and it’s only when a feature progresses to the Pull Request stage that Percy comes into play. It will run as part of the Travis CI build and then report back if it has found any visual diffs for review. You can also configure Percy to test across defined breakpoints.


Percy’s Github integration is a big win

The person reviewing the PR can then click through to the project dashboard on percy.io and review the highlighted diffs. If the changes are expected, based on what has been outlined in the PR, then the changes can be approved.


Comparing different pages for visual differences

When the feature merges, these changes then become the baseline. If unexpected changes are highlighted, the reviewer can then highlight this to the developer for amendment.

As we make multiple changes a day to our Vanilla codebase while aiming for a weekly release, having VRT as part of our continuous integration has afforded us extra confidence that our releases do not contain missed bugs and regressions.


Anthony Dillon

The Vanilla team needed to solve two issues which have been paining the development of Vanilla Framework for some time.

Firstly we needed to improve our workflow for testing and QAing components on our local machines. Up until now, we have been using npm link to link our local branches of Vanilla with our local website branch, then reviewing the examples in the components page of the documentation. This added a lot of extra overhead to reviewing Vanilla.

Secondly, since we actually build the docs.vanillaframework.io site using the Documentation theme (vanilla-docs-theme), the Vanilla pattern examples we ended up reviewing were no longer purely styled by Vanilla Framework, but were also extended by the theme.

The documentation of the matrix pattern in Vanilla

The solution

To solve both these issues, we decided to decouple the examples from the documentation. This change allowed us to move the coded examples of the patterns into a separate “examples” directory of the codebase and remove the hard-coded examples from the documentation.

As the examples were a part of the Vanilla Framework code we simply linked each example page with the Vanilla built from the same branch. This means all examples are only styled by Vanilla and nothing else.

Another benefit that came from this change is that we now have an easy way to find an example of a pattern when reviewing or QAing a pull request. Previously we had to do the npm link dance; now we simply check out the branch and run the internal Jekyll site, which builds Vanilla and gives us a directory of pattern pages.

Examples in the docs

So we were happy with these changes: we had solved the issues at hand and were ready to head off and have a celebratory coffee. But we couldn’t leave the documentation without examples and code snippets.

To solve this issue, we used an embedding approach similar to Codepen’s.

Example of a Codepen embed

We set about creating a small JavaScript project that would find a link on the page with a specific class, grab its href attribute and replace the link with an iframe pointing at that URL. This gave us a nice progressively enhanced experience:

An example of progressive enhancement – on the left is an example with JavaScript enabled, on the right with JavaScript disabled.

We were still lacking the code snippets, so we made the script also pull the HTML source of the linked page into the example, then display the contents of the body in a code block appended after the iframe.
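For illustration, here is a minimal sketch of that approach, assuming a trigger class of js-example and the use of fetch and DOMParser (illustrative choices rather than the exact example-js implementation):

document.querySelectorAll('a.js-example').forEach(function (link) {
  var url = link.getAttribute('href');

  // Replace the plain link with an iframe pointing at the example page.
  var iframe = document.createElement('iframe');
  iframe.src = url;
  link.parentNode.replaceChild(iframe, link);

  // Fetch the example page and append its body markup as a code snippet.
  fetch(url)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      var doc = new DOMParser().parseFromString(html, 'text/html');
      var pre = document.createElement('pre');
      var code = document.createElement('code');
      code.textContent = doc.body.innerHTML.trim();
      pre.appendChild(code);
      iframe.parentNode.insertBefore(pre, iframe.nextSibling);
    });
});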

The wrap-up

And that was it. The solution gives us:

  • A single place for example code
  • Examples only displayed using Vanilla
  • A local testing environment
  • Documentation examples that are automatically up to date

We named this mini project example-js. Please feel free to fork it, use it or file any issues you may find.

Read more
Will Moggridge

Introducing tutorials.ubuntu.com

The web team has been hard at work on our new Ubuntu Tutorials website and we are proud to share our work with the community. Our first set of tutorials are based around snap usage and building snaps with snapcraft. We will continue to work on our catalogue to broaden it to a variety of subjects.

Ubuntu Tutorials is part of a bigger project to improve our documentation across our other projects. Our goals are to improve the discoverability and the ease of use for our documentation. Having followed Ubuntu and been part of the community for many years, I am excited to be involved with this project. I hope we can keep moving forward with this work and give back to the community.

Polymer and our source code

The website is built using Google’s Polymer framework with their Codelabs web components. Polymer has been a great and enjoyable experience and has made working with web components so much more exciting. I am already looking to see where I can use these technologies in the rest of our projects. We recently had a hack day and had the opportunity to explore putting Vanilla Framework into web components. I am happy with our initial work on Vanilla web components, and we are looking forward to continuing to explore and develop them.

The Ubuntu Tutorials website source code is available for you to dive into, at the Ubuntu Tutorials GitHub repository.
A big thank you to Didier Roche, whose work was the foundation for this.

Our next steps

Looking to the future, we are already thinking about and preparing improvements for the site. We have been really happy with the feedback we are getting on the GitHub issues page. A number of the issues have been requests for tutorials on certain topics. This is really useful and interesting to us, as it shows us which areas to focus on.

I am interested in simplifying our process for creating and contributing to Ubuntu Tutorials, not only for us but also to empower the community. One strong area for this is adding the ability to write tutorials in Markdown. This will increase visibility for all and remove some overhead for us, while also making it simpler for people to contribute to our catalogue. We are currently looking into this and hope to have a solution soon.

Read more
Anthony Dillon

Hack day 2

This week, the web team managed to get away for our second hack day. These hack days give us an opportunity to scratch our own itches and work on things we find interesting.

We wrote about our first hack day in August last year.

Getting started

We began by outlining the day and reviewing ideas that everyone on the team had suggested in a Google Doc throughout the previous week. We each voted by marking the ideas we would be interested in working on, then chose the most popular ones and assigned groups of 2 or 3 people to each.

The groups broke up and turned their idea into a formal project with a list of the tasks required to produce an MVP. Below is a list of the ideas and outcomes from each team.

Performance audit of the current websites

Team: Rich, Andrea, Robin

The team discovered a tool called Lighthouse, from the Google Chrome team, which analyses a web page and returns a full audit of its dependent assets and accessibility issues.

The team spent some time trying to create a service using Lighthouse to produce an API, then realised that the Google Chrome team had done this work already. The service is called Moonlight, a SaaS for testing the performance of a page.

As Moonlight takes a single webpage endpoint to test, we needed a way to test pages recursively. The team created a profiling script to gather the referenced endpoints of a site.
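As a rough sketch of what such a profiling script could look like (assuming Node.js, a hard-coded start URL and same-origin links only; this is not the team’s actual script):

const https = require('https');

// Fetch a page and return its HTML as a string.
function fetchPage(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => resolve(body));
    }).on('error', reject);
  });
}

// Breadth-first crawl that collects same-origin endpoints referenced by the site.
async function crawl(startUrl, limit = 50) {
  const origin = new URL(startUrl).origin;
  const seen = new Set([startUrl]);
  const queue = [startUrl];
  while (queue.length && seen.size < limit) {
    const html = await fetchPage(queue.shift());
    for (const match of html.matchAll(/href="([^"]+)"/g)) {
      const next = new URL(match[1], startUrl).href;
      if (next.startsWith(origin) && !seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  return [...seen];
}

crawl('https://www.ubuntu.com/').then((urls) => console.log(urls.join('\n')));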

Canonical web team dashboard

Team: Luke, Ant, Yaili

The goal of this project was to give the team an at-a-glance view of key areas and motivate us to improve them. The metrics we wanted to capture were:

  • Whether the site is up or down
  • Live visitor counts
  • Monthly unique visitors
  • Open issues on the project
  • Open PRs on the project
  • Information about the last commit to the site’s code base
  • PageSpeed Insights test results

We gathered a set of sites we would like to collect these metrics on:

  • www.ubuntu.com
  • www.canonical.com
  • maas.io
  • jujucharms.com
  • landscape.canonical.com
  • design.ubuntu.com
  • design.canonical.com
  • insights.ubuntu.com
  • developer.ubuntu.com
  • community.ubuntu.com
  • summit.ubuntu.com

The team used the MERN stack (MongoDB, Express, React and Node.js) and modified its sample project to create an interface that changes depending on the state of the data. For example, the up-or-down card would normally display all sites as up, but once one went down the card would change to an error state and only display information about the site that is down. By designing for emotion in this way, we can intelligently use the limited space available in a dashboard.
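As a sketch of that idea, a status card might switch states something like this (the component and data shape are illustrative, not the actual dashboard code):

import React from 'react';

// Shows a calm summary while everything is up, and switches to an error
// state that lists only the failing sites when something goes down.
function UptimeCard({ sites }) {
  const down = sites.filter((site) => !site.up);

  if (down.length === 0) {
    return <div className="card card--ok">All {sites.length} sites are up</div>;
  }

  return (
    <div className="card card--error">
      {down.map((site) => (
        <p key={site.url}>{site.url} is down</p>
      ))}
    </div>
  );
}

export default UptimeCard;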

The team also used a few plugins to gather some data:

  • ping-monitor to ping our sites to check if they’re up, down or broken
  • node-http-ping to get response times for the same set of Canonical sites

Storing the data in MongoDB to keep a history, and using the /api endpoint to return the response time and status for each site, the team managed to produce a simplified dashboard showing the availability of our list of sites.
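A minimal sketch of that /api idea might look like the following (the real dashboard stored results in MongoDB; the in-memory object, hard-coded site list and 60-second interval here are assumptions to keep the sketch self-contained):

const express = require('express');
const https = require('https');

const sites = ['https://www.ubuntu.com', 'https://www.canonical.com'];
const latest = {}; // url -> { up, responseTime, checkedAt }

// Record the status and response time of a single site.
function checkSite(url) {
  const started = Date.now();
  https.get(url, (res) => {
    latest[url] = {
      up: res.statusCode < 400,
      responseTime: Date.now() - started,
      checkedAt: new Date().toISOString(),
    };
    res.resume(); // discard the body; we only need the status code
  }).on('error', () => {
    latest[url] = { up: false, responseTime: null, checkedAt: new Date().toISOString() };
  });
}

sites.forEach(checkSite);
setInterval(() => sites.forEach(checkSite), 60 * 1000);

// The dashboard front end polls this endpoint for the latest results.
const app = express();
app.get('/api', (req, res) => res.json(latest));
app.listen(3000);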

Ubuntu.com dev tools

Team: Graham, Karl


As a team, we have been using gulp scripts to lint and test our code locally and in our CI environments for some time, but we had never got around to applying these checks to our flagship website, www.ubuntu.com.

The plan here was to implement gulp scripts to lint Sass and JavaScript, and also to look into further options like spell-checking, auto-prefixing and HTML validation.

The team added Sass linting, borrowing the linting tasks from our styling framework vanilla-framework. This produced a long list of lint issues, which the team tracked and quickly fixed to get a passing CI run.

Adding JavaScript linting (jsHint)

The team also implemented JavaScript linting with jsHint on the current JavaScript within the site’s code base. This produced a number of lint errors, which were fixed, ignoring the plugin code.
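A sketch of those two lint tasks might look like this (gulp 3 style; the plugin choices and file paths are assumptions, not the exact ubuntu.com configuration):

var gulp = require('gulp');
var sassLint = require('gulp-sass-lint');
var jshint = require('gulp-jshint');

// Lint the Sass, borrowing the rules used by vanilla-framework.
gulp.task('lint:sass', function () {
  return gulp.src('static/sass/**/*.scss')
    .pipe(sassLint())
    .pipe(sassLint.format())
    .pipe(sassLint.failOnError()); // exit non-zero so CI fails on lint errors
});

// Lint the site's JavaScript, ignoring third-party plugin code.
gulp.task('lint:js', function () {
  return gulp.src(['static/js/**/*.js', '!static/js/plugins/**'])
    .pipe(jshint())
    .pipe(jshint.reporter('default'))
    .pipe(jshint.reporter('fail'));
});

gulp.task('lint', ['lint:sass', 'lint:js']);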

Finally, the new linting steps were added to the Travis configuration, so linting now runs on each pull request.

Vanilla web components prototype

Team: Barry, Will, Robin

The goal was to enable Vanilla on a variety of platforms, allowing people to use Vanilla in modern web apps.

The team created a base repository using Polymer’s tools and started creating web components for Vanilla.

They discovered that the styling needs tweaking to be compatible with web components, possibly just by building a shared styles import that is included in each web component.

The team started by importing vanilla-framework from NPM, then built modular Sass files containing only the relevant parts of Vanilla, and finally imported the modular style file in each web component.
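As an illustration of the shared-styles idea, here is a plain Custom Elements sketch rather than the team’s actual Polymer components (the element name, class name and stylesheet path are assumptions):

class VanillaButton extends HTMLElement {
  connectedCallback() {
    const shadow = this.attachShadow({ mode: 'open' });

    // Pull in the modular Vanilla styles built for this component.
    const styles = document.createElement('link');
    styles.rel = 'stylesheet';
    styles.href = '/styles/vanilla-button.css';

    // Render a button styled purely by Vanilla classes.
    const button = document.createElement('button');
    button.className = 'p-button--positive';
    button.textContent = this.getAttribute('label') || 'Button';

    shadow.appendChild(styles);
    shadow.appendChild(button);
  }
}

customElements.define('vanilla-button', VanillaButton);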

Inside the repository there is a vanilla.html which imports all of the components. Components can individually be included as needed.

This work includes a demo system with API documentation. The demo system displays each component and the markup used to create it, and is accessed by running `polymer serve` and opening the site.

This work can be used to build solid web components for use in Polymer and we can also use this work to jumpstart React components.

HTTP/2 on vanillaframework.io

In the midst of all this work, Robin found time to tackle the task of serving our styling framework’s website over HTTP/2. It’s currently a proof of concept, but it can be considered the start of a work item to roll out more widely.

Demo site

Conclusion

Again, this was a successful hack day with everyone busy working on things that interest them. Although there were fewer completed outcomes this time, we did set up a number of good projects which are ready to be continued.




Read more
Robin Winslow

We’ve been making an effort to secure all our websites with HTTPS. While some Canonical sites have enforced HTTPS for a while (e.g.: landscape.canonical.com, jujucharms.com, launchpad.net), it’s been missing from our other sites until now.

Why HTTPS?

The HTTPS movement has been building for years to help secure internet users against black-hat hackers and spies. The movement became more urgent after Edward Snowden revealed significant efforts by government agencies to spy on the world population.

The EFF have helped create two projects: Let’s Encrypt, which massively simplifies the free installation of HTTPS certificates; and HTTPS Everywhere, a browser plugin to help you use HTTPS whenever it’s available. The advent of HTTP/2 has helped negate performance concerns when moving to HTTPS.

Google have also made efforts to encourage websites to enable HTTPS: first announcing in 2014 that they would consider HTTPS support in their search ranking algorithm; and announcing last year that Google Chrome would start visually warning users of “insecure” (non-HTTPS) websites.

Our sites

We made https://www.ubuntu.com HTTPS-only in October of last year, and have since done so on 10 more sites:

We hope to enable HTTPS on our other sites in the coming months.

Although enabling HTTPS can be relatively simple, there were a number of specific challenges we had to overcome for some of our websites. I hope to write more about these in a follow-up post.

Read more
Grazina Borosko

Ubuntu 16.10 Yakkety Yak is released and you can now download the new wallpaper by clicking here. It’s the latest in the set for the 2016 Ubuntu releases, following Xenial Xerus. You can read about our wallpaper visual design process here.

Ubuntu 16.10 Yakkety Yak

yakkety_yak_wallpaper_4096x2304

Ubuntu 16.10 Yakkety Yak (light version)

yakkety_yak_wallpaper_4096x2304_grey_version

Read more
Jouni Helminen

We have been looking at ways of making the Terminal app more pleasing, in terms of the user experience, as well as the visuals.

I would like to share the work so far, invite users of the app to comment on the new designs, and share ideas on what other new features would be desirable.

On the visual side, we have brought the app in line with our Suru visual language. We have also adopted the very nice Solarized palette as the default palette – though this will of course be completely customisable by the user.

On the functionality side we are proposing a number of improvements:

  • Keyboard shortcuts
  • Ability to completely customise touch/keyboard shortcuts
  • Ability to split the screen horizontally/vertically (similar to Terminator)
  • Ability to easily customise the palette colours, and window transparency (on desktop)
  • Unlimited history/scrollback
  • Adding a “find” action for searching the history

Tabs and split screen

On larger screens, tabs will be visually persistent. In addition, it’s desirable to be able to split a panel horizontally and vertically, and to move between panels using keyboard shortcuts or by focusing with mouse/touch.

On mobile, the tabs will be accessed through the bottom edge, as on the browser app.

terminal-blog-phone

Quick mobile access to shortcuts and commands

We are discussing the option of having modifier (Ctrl, Alt etc) keys working together with the on-screen keyboard on touch – which would be a very welcome addition. While this is possible to do in theory with our on-screen keyboard, it’s something that won’t land in the immediate near future. In the interim modifier key combinations will still be accessible on touch via the shortcuts at the bottom of the screen. We also want to make these shortcuts ordered by recency, and have the ability to add your own custom key shortcuts and commands.

We are also discussing with the on-screen keyboard devs about adding an app specific auto-correct dictionary – in this case terminal commands – that together with a swipe keyboard should make a much nicer mobile terminal user experience.

More themability

We would like the user to be able to define their own custom themes more easily, either via in-app settings with colour picker and theme import, or by editing a JSON configuration file. We would also like to be able to choose the window transparency (in windowed mode), as some users want a see-through terminal.

We need your help!

These visuals are work in progress – we would love to hear what kind of features you would like to see in your favourite terminal app!

Also, as Terminal app is a fully community developed project, we are looking for one or two experienced Qt/QML developers with time to contribute to lead the implementation of these designs. Please reach out to alan.pope@canonical.com or jouni.helminen@canonical.com to discuss details!

EDIT: To clarify – these proposed visuals are improvements for the community developed terminal app currently available for the phone and tablet. We hope to improve it, but it is still not as mature as older terminal apps. You should still be able to run your current favourite terminal (like gnome-terminal, Terminator etc) in Unity8.

Read more
Anthony Dillon

Web team hack day

Last week the developers in the web team swapped the office for the lobby of the hotel across the street. The day was geared up to allow us to leave our daily tasks in the office and think of ideas that we would like to work on.

The morning started with coffee and sitting on sofas brainstorming ideas. We collected a list of ideas each person would like to work on. The ideas ranged from IRC bots to a performance audit of a few of our sites.

Choosing ideas

We wrote all the ideas on post-it notes and laid them out on the table. Then each of us chose the idea we were most interested in working on by putting our hand on it. It worked out to an almost perfect split of two people per idea, so we broke up into our teams and got to work.

Here are the things we worked on during this “hack day”.

IRC bots

These are bots that can listen to an action and report it to our IRC channel. For example, the creator of a pull request wouldn’t have to paste a link to their PR into our channel to be picked up for review.

This task was picked up by Karl and Will, who started by setting up a Hubot on Heroku. They attached a webhook to all projects under the ubuntudesign organisation to listen for pull requests and report them in the web team channel.
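A sketch of the kind of Hubot script involved might look like this (the webhook path, room name and payload fields are illustrative assumptions, not the team’s actual bot):

// Hubot exposes an Express router, so the bot can receive GitHub webhooks
// directly and announce new pull requests in the team channel.
module.exports = function (robot) {
  robot.router.post('/hubot/github-pr', function (req, res) {
    var payload = req.body;

    // Only announce newly opened pull requests.
    if (payload.action === 'opened' && payload.pull_request) {
      robot.messageRoom(
        '#web-team',
        'New PR on ' + payload.repository.name + ': ' + payload.pull_request.html_url
      );
    }
    res.send('OK');
  });
};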

This bot can be used for many other things like reporting deployments, CI failures, etc. We also discussed a method of subscribing to the notifications you are interested in, instead of the whole team being notified about everything.

Asset manager search improvements

Our asset manager and server act as internal asset storage. By storing an asset here we get a link to it which can be used by any website. As there are many assets stored in the asset manager, we usually need to search existing assets to see if one already exists before creating a new one.

Graham and I picked this task and started by working out how to set up both the manager and the server locally.

Previously the search would return results that matched either word of a two-word query; now a result has to contain all of the search terms to be returned.
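In essence the change swaps an “any term matches” test for an “every term matches” test; here is a minimal sketch (the asset field names are illustrative):

// A result now has to contain every search term, not just one of them.
function searchAssets(assets, query) {
  var terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return assets.filter(function (asset) {
    var haystack = (asset.fileName + ' ' + (asset.tags || []).join(' ')).toLowerCase();
    return terms.every(function (term) {
      return haystack.indexOf(term) !== -1;
    });
  });
}

// Example: only assets matching both "juju" and "logo" are returned.
searchAssets(
  [{ fileName: 'juju-logo.svg', tags: [] }, { fileName: 'maas-logo.png', tags: [] }],
  'juju logo'
);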

We added our Vanilla framework to the front end, as it is obviously good to use our framework for all internal and external projects.

We have also implemented filtering results by file type, which makes it easy to go through what can sometimes be dozens of search results.

GitHub CMS

GitHub CMS is a nicer, more restricted interface that the marketing team can use to edit the GitHub repositories containing page content for sites such as www.ubuntu.com.

Rich and Robin picked this task and began work on it straight away, by discussing the best approach and list of possible features.

Even though Robin was also helping Graham and me with setting up the asset manager and server locally, he still managed to investigate the best Python framework to use and selected one. Rich, on the other hand, went ahead with the front end and developed a bunch of page templates using the new MAAS GUI Vanilla theme.

Commit linting

Commit linting is a service that gives committers to a project a nice step-by-step wizard for building a high-quality commit message.

Barry picked up this task and got the service up and running, but hit a blocker at the point of choosing between different methods of committing. For instance, Tower would bypass this step, and we do not necessarily want to dictate to contributors which way they should commit code. This is something we will leave as an investigation for the time being.

Conclusion

Our hack day went well, in fact better than I imagined it would. We all had fun and got to work on things we find important but struggle to get prioritised in our day-to-day work. It gave the developers a feeling of achievement and the buzz of landing and releasing something at the end of the day.

We will be attempting to do a hack day once every month or so, so watch this space!

Read more
Barry McGee

Developing for Vanilla v1

As Inayaili recently blogged, we are now working towards a goal of releasing Version one (v1) of Vanilla for early September.

Maturity

Vanilla was created just over a year ago and in that time has been used to build a wide range of sites across Canonical and beyond. It currently averages around 1,500 downloads a month on NPM. We’ve been delighted to see it grow in popularity and see the myriad of different experiences people have been building using Vanilla.

A big advantage of this wide adoption is the feedback we’ve received from developers on the front line, including within our own teams at Canonical. This feedback has enabled us to identify growing pains and mark out clear areas for improvement.

The overarching themes for v1 are maturity and stability — ensuring the framework is a cohesive set of building blocks and also making sure those building blocks are stress tested and robust.

Practical steps

The first step we will be taking is to audit the codebase and ensure it adheres to our coding standards. This will include encapsulating all components with the BEM methodology which we have introduced to our coding standards within the last year.

We will also be working to improve accessibility and responsiveness of each component while making some aspects of the codebase less opinionated to help increase its applicability to a broad range of use cases.

Another big area earmarked for love is the documentation provided for Vanilla. Given that the framework is now used by a wide and diverse set of people, we can make no assumptions about what they may know. So we need to provide comprehensive documentation that not only details how to implement each component but that also explains where each component should or should not be used.

It’s also important that everything in Vanilla is visible. Over time, code has slipped into Vanilla that is not documented on the demo.  This can cause page elements to display in ways a developer might not expect. We will be addressing this by building a comprehensive documentation site at a dedicated URL. This will be the one-stop-shop for all things Vanilla and will replace the current Vanilla demo page and Sass docs.

We will also be restructuring Vanilla so it is in a better place for scalability and extensibility going forward. Vanilla currently employs a flat structure for simplicity but we’ve come to realise that it can be confusing to mix components with utilities and presentation with configuration.

We recently had a team discussion on possible ways to structure the code within Vanilla and settled on an approach minted by Harry Roberts — Inverted Triangle CSS or ITCSS. Structuring Vanilla in this way will not only improve the quality of the resulting CSS but make it much easier to initiate new developers to building with Vanilla.

itcss-triangle-foundation

Layers of ITCSS – courtesy of Harry Roberts

Exciting times

I’m very excited about this project and think it has huge potential to help shape how we in Canonical approach building experiences on the web, not to mention how the wider community will benefit from these changes.

If you have any feedback or ideas on the future direction of Vanilla from a development point of view, please do comment below – we’d love to hear from you.

Read more
Andrea Bernabei

QtDay is the only Italian event dedicated to Qt. It is held yearly by Develer and brings together companies whose products are developed using Qt, as well as Qt developers and customers who want to hear about the latest developments and solutions in the Qt world. This year the conference was held in Florence, where I was lucky enough to attend and present.

I had previously attended the 2011, 2012 and 2014 QtDay events whilst I was studying Computer Science at the University of Pisa. This year it was different because Develer invited me to give a talk about Ubuntu and Qt. The funny thing was that I was already planning on sending my presentation to the Call for Proposals anyway! So I was already prepared.

What I do at Ubuntu

My role at Canonical is UX Engineer: basically a developer acting as the bridge between designers and engineers. It is a pretty cool job, and I’m very lucky to be part of such an energetic team.

Over the last year there was a strong push in both the Design and Engineering teams working on Ubuntu Touch to finalize and deliver the convergent experience. This was a great opportunity to spread the word about how to develop convergent apps for the new Ubuntu platform, and get developers interested in where we are and where we are heading.

My talk – “Standing on the shoulders of giants: developing Ubuntu convergent apps with Qt”

When I first thought about giving the presentation, I decided it would only be about the current state of the UI components provided by Ubuntu SDK, with a strong focus on their “convergent” features, and how to use them to realize your convergent apps. However, as time went by I realized it would have been more interesting for developers to also get some context about the platform itself, and how to best integrate their apps with the platform.

By the time QtDay arrived, the presentation had almost doubled in size! You can find it here.

A slideshow or an app? How about both!

This is a detail the geeks in the audience might be interested in…I thought it would be neat to talk about the development of Qt/QML apps and use the same framework to implement the presentation as well!

That’s right, the presentation is actually a QML application that uses (a modified version of) the QML presentation system available as a Qt Labs addon. Having the power of Qt underneath your presentation means you can do pretty much anything. You’re not tied to the boundaries set by the “standard” presentation systems (such as Beamer, LibreOffice Impress, Microsoft Powerpoint, etc) anymore!

In my case, I exploited that to implement a live-coding view as a pull-down layer that you can open on-demand whenever you want by using keyboard shortcuts. Once the livecoding view is open, you can write code (or use the code automatically provided when you’re at one of the special “Livecoding!” slides) in the text editor on the left side and see the result in the right side in real time without leaving or interrupting your presentation. You can also increase or decrease the font size, and there’s also a sparkling particle effect that follows the cursor to make it easier for the audience to follow your changes to the text. That’s only one of the things you can do once you replace your “standard” presentation with a full featured application. If you’re a software developer then I highly recommend giving it a try for your next presentation!

The source code and the PDF version of the presentation are available here, and my fork of the QML presentation system is available here.

And here’s a screenshot of the livecoding view in action (sparkling particle effect included) :)

livecoding_Screenshot_2016-06-15_09-13-38

The morning

The conference was held in the middle of Florence at the Hotel Londra Conference Centre. It was quite a nice location I have to say! Very easy to reach as it is very close to the main railway station, Santa Maria Novella.

My talk was in the first time slot after the main keynote, which was good because, you know, that meant I could relax and enjoy the rest of the day!

I started by giving an overview of the current state of Ubuntu and the fact that it’s doing great in the Cloud field. Ubuntu can now scale to run on IoT devices as well as phones, tablets, notebooks, servers and Clouds.

I then presented the concept of convergence and how the UI components provided by the Ubuntu SDK can be best utilised to create great convergent apps, including some livecoding. Livecoding is fun because it gives a pragmatic idea of how to go from theory to practice, and also keeps the attendees awake, because they know things can go wrong at any moment (demo effect) and they enjoy that, for some reason :)

After the UI components section, I went on to talk about platform integration topics such as app lifecycle management, app isolation features, and integration with the Content Hub, which is the secure way to share data between applications.

I then briefly talked about internationalization and how to publish your application on the Ubuntu Store (it’s very easy!)

For this occasion, I brought with me a BQ M10 tablet, which is the convergent Ubuntu tablet that we released just a few months ago! I connected it to a bluetooth mouse and keyboard, and set it up on a table for people to try. Lots of people played with it. After the talk it was exciting to see the audience interest in the whole convergence story.

The other talks during the morning were very interesting as well. I particularly enjoyed Marco Piccolino’s “A design system and pattern libraries can significantly improve the quality of your UI” (find the slides here).

And then it came to lunchtime…

Food…Italian food

The food was great and, coming from the UK, I enjoyed it even more. Big kudos to Develer (the company behind the event) for finding such a good catering company!

Here’s a pic of the goodies available during coffee breaks. Mmmm…

13131744_1020935184655469_8301292868650133109_o

Afternoon talks

The afternoon talks were as interesting as the morning ones. Marco Arena, from the Italian C++ Community, gave a talk about QCustomPlot, which is a library to draw graphs and plots using Qt (slides here).

If you’re interested in Virtual Reality, particularly BCI (Brain Computer Interface) and machine learning, make sure you check out the slides of Sebastiano Galazzo’s talk (once they’re available, at that page). His project involves manipulating what the user sees in a Google Cardboard by reading his/her brain waves to interpret emotions. Pretty neat.

Stefano Cordibella’s presentation was about his experience optimizing the startup time of an application running on embedded hardware (using Qt4). They exploited the power of the QtQuick Loader component and other QML tricks to decrease the loading time of the application.
Check his slides out if you’re interested in QML optimization. I’m sure you’ll find them useful.

The final talk I attended was more of a roundtable about how to contribute to the development of Qt itself, led by Giuseppe D’Angelo, who has the role of “Approver” in the Qt Project Open Governance Model.

As a result of attending that roundtable, not only did I start contributing to Qt (see the changes I contributed here), but I also improved the Qt Wiki Contribution Guidelines so that it will be easier for other people to start contributing. The power of open source and open governance! :)

The closing talk also included a raffle, where a couple of developers won an embedded devboard sponsored by Atmel. I’ve been quite lucky with Qt-related raffles in the past, but this wasn’t one of those days, oh well :)

Closing remarks

What a great day it was. I want to thank Develer for organizing the conference, and the guys from the Community team (Alan Pope, David Planella, Daniel Holbach) and Zsombor Egri from the SDK team at Canonical for providing feedback and ideas for the presentation.

It was also great to see so many people interested in the convergence story and in the M10 tablet. The technology has great potential and it’s our job to make the best of it :)

See you all at the next QtDay!

Note: the pictures are courtesy of Develer’s QtDay Facebook page.

Read more
Grazina Borosko

April marks the release of Xenial Xerus 16.04 and with it we bring a new design of our iconic wallpaper. This post will take you through our design process and how we have integrated our Suru visual language.

Evolution

The foundations of our recent designs are based on our Suru visual language, which encompasses our core brand values and brings consistency across the Ubuntu brand.

Our Suru language is influenced by the minimalist nature of Japanese culture. We have taken elements of their Zen culture that give us a precise yet simplistic rhythm and used it in our designs. Working with paper metaphors we have drawn inspiration from the art of origami that provides us with a solid and tangible foundation to work from. Paper is also transferable, meaning it can be used in all areas of our brand in two and three dimensional forms.

Design process

We started by looking at previously released wallpapers across Ubuntu to see how each one evolved from the last. After seeing the previous designs, we started to apply our new Suru patterns, which inspired us to move in a new direction.

Ubuntu 14.10 ‘Utopic Unicorn’

wallpaper_unicorn

Ubuntu 15.04 ‘Vivid Vervet’

suru-desktop-wallpaper-ubuntu-vivid (1)

Ubuntu 15.10 ‘Wily Werewolf’

ubuntu-1510-wily-werewolf-wallpaper

Step-by-step process

Step 1. Origami animal

Since every new Ubuntu release is named after an animal, the Design Team wanted to bring this idea closer to the wallpaper and the Suru language. The folds are part of the origami animal and become the base from which we start our design process.

Origami

To make your own origami Xerus squirrel, you can find the instructions here.

Step 2. Searching for the shape

We started to look at different patterns by using various techniques with origami paper. We zoomed into particular folds of the paper, experimented with different light sources, photography, and used various effects to enhance the design.

The idea was to bring actual origami to the wallpaper as much as possible. We had to think about composition that would work across all screen sizes, especially desktop. As the wallpaper is a prominent feature in a desktop environment, we wanted to make sure that it was user friendly, allowing users to find documents and folders located on the computer screen easily. The main priority was to not let the design get in the way of everyday usage, but enhance it aesthetically and provide a great user experience.

After all the experiments with fold patterns and light sources, we started to look at colour. We wanted to integrate both the Ubuntu orange and Canonical aubergine to balance the brightness and played with gradient levels.

We balanced the contrast of the wallpaper’s colour palette by using a long and subtle gradient that kept the bright look and feel. This made the wallpaper brighter and more colourful.

Step 3. Final product

The result was successful. The new concept and use of the Suru language helped to create a brighter wallpaper that fitted into our overall visual aesthetic. We created a three-dimensional look and feel that gives the appearance of actual origami. The wallpaper is still recognizable as Ubuntu, but at the same time looks fresh and different.

Ubuntu 16.04 Xenial Xerus

Xerus - purple

Ubuntu 16.04 Xenial Xerus (light version)

Xerus - Grey

What is next?

The Design Team is now looking at ways to bring the Suru language into animation and fold usage. The idea is to bring an overall seamless and consistent experience to the user, whilst reflecting our tone of voice and visual identity.

Read more
Anthony Dillon

The Juju web resources are made up of two entities: a website jujucharms.com and an app called Juju GUI, which can be demoed at demo.jujucharms.com.

Applying Vanilla to jujucharms.com

Luckily, the website was already using our old style guidelines, which we had refactored and improved to become Vanilla, so I removed the guidelines link from the head and the site fell to pieces. Once I had npm installed vanilla-framework and included it in the main Sass file, things started to look up.

A few areas of the site needed to be updated, like moving the search markup outside of the nav element. This is due to header improvements in the transition from guidelines to Vanilla. Also we renamed and BEMed our inline-list component, so I updated its markup in the process. The mobile navigation was also replaced with the new non-JavaScript version from Vanilla.

To my relief, with these minor changes the site looked almost exactly as it did before. There were some padding differences, which resulted in larger spacing between rows, but this was a purposeful update.

All in all the process of replacing guidelines with Vanilla on the website was quick and easy.

Now into the unknown…

Applying Vanilla to Juju GUI

I expected this step to be trickier as the GUI had not started life using guidelines and was using entirely bespoke CSS. So I thought: let’s install it, link the Vanilla framework and see how it looks.

To my surprise the app stayed together, apart from some element movement and overriding of input styling. We didn’t need the entire framework to be included so I selectively included only the core modules like typography, grid, etc.

The only major difference is that Vanilla applies bottom margin to lists, which did not exist on the app before, so I applied “margin-bottom: 0” to each list component as a local override.

Once I completed these changes it looked exactly as before.

What’s the benefit

You might be thinking, as I did at the beginning of the project, “that is a lot of work to have both projects look exactly the same”, when in fact it brings a number of benefits.

Now we have consistent styling across the Juju real estates, which are tied together with one single base CSS framework. This means we have exactly the same grid, buttons, typography, padding and much more. The tech debt to keep these in sync has been cut and allows designers to work from a single component list.

Future

We’re not finished there: as Vanilla is a bare-bones CSS framework, it also has a concept of theming. The next step will be to refactor the SCSS on both projects and identify the common components. The theme itself depends on Vanilla, so we have logical layering.

In the end

It is exciting to see how versatile Vanilla is. Whether it’s a web app or web site, Vanilla helps us keep our styles consistent. The layered inheritance gives us the flexibility to selectively include modules and extend them when required.

Read more
Rae Shambrook

We previously posted about the clock app’s new look and today we are getting to know one of the developers behind the clock (as well as other community apps). Bartosz Kosiorek gives us a glimpse into developing for Ubuntu and how he got started.

1) First, can you give us a bit of background about yourself and tell us how you started developing for Ubuntu?

My name is Bartosz and I’m from Poland. Currently I’m a developer for the Ubuntu Clock and Ubuntu Calculator. I started contributing to Ubuntu in 2008 by submitting bug reports to Launchpad and fixing translations. Later I participated in the One Hundred Papercuts project, made SRU verifications and eventually started developing.

My adventure with Ubuntu started with Ubuntu 8.10 (Intrepid Ibex). Previously I tried many different distributions (Debian, Fedora, SuSE etc.). I chose Ubuntu because it is easy to use and, after installation, I have a fully functional system. I like that after Ubuntu is installed, there are no duplicate applications and those already installed work perfectly with the system.

2) How long have you been working on the Clock and Calculator? How did you get involved in these projects?

I started to develop for Ubuntu about two years ago when I first heard about Ubuntu Touch and convergence. I started by contributing to Ubuntu Core Apps through testing, submitting bug reports and patches. Most of my commits were bug fixes for Ubuntu Calculator and Ubuntu Clock, which were approved by Riccardo Padovani, Mihir Soni and Nekhelesh Ramananthan. After some time, I became a member of Ubuntu Core Apps. It’s very fun to work with these guys and the Ubuntu community. I’ve learned a lot about Qt/QML and user experience design.

3) How do you approach implementing a design in your apps?

Generally I follow the design document during implementation and sometimes find parts that need to be improved. After speaking with the Ubuntu UX team, we discuss various issues and agree on a final design solution. Sometimes the UX team gives us a free hand, so we can design some parts by ourselves (e.g. Stopwatch, Welcome Wizard, Landscape View). It’s really fun to work with such awesome guys.

4) What feature would you like to see in the future?

I think from a user perspective, longer battery life is a very important topic. Power usage is higher with a white background: https://www.quora.com/Does-a-white-background-use-more-energy-on-a-LCD-than-if-it-was-set-to-black especially with OLED screens. I wish that Ubuntu Touch came with a darker theme, to save battery on OLED screens.

Read more