Canonical Voices

Posts tagged with 'design'

Katie Taylor

Edges are special to us. We use them for finding apps, tools and system services, so using the edges will be second nature to Ubuntu phone users. As you use the launcher, opening your favourite app from the left edge will become ingrained in your muscle memory.

The design vision behind Ubuntu for phones includes the use of fast and natural interactions, so taking that to the welcome screen means that if your phone is locked, you can still access the launcher, system services and the right edge. If you have a pin set up, you only need to enter your pin when accessing private data, in the Gallery app or the Dash for example.

 

 

If you’ve flashed your phone recently, you will be able to activate the lock screen for the phone using a temporary hack (love it!). You’ll notice that the blur has not yet been implemented, but will be added later. Thanks to Michael Zanetti for originally posting instructions to the Ubuntu Phone mailing list. Here they are:

To enable the pin lock, log into the phone and create a file /home/phablet/.unity8-greeter-demo, with the content: password=pin

If you want to see the password unlock screen instead, put this into the file:  password=keyboard
For now, the pin is hardcoded to “1234” and the password is “password”. Note that this functionality can (and will) disappear at any time as we bring all the bits and pieces together. This is a temporary, simple way to enable the visual part of the lock screen for us all to have a play with.

Let us know what you think on the Ubuntu Phone mailing list and the IRC channel.

Read more
Lisette Slegers

In my previous blog post, we looked at the key screens for Shorts, the organic grid and the reading view. You can read about the list view behaviour in this document. In this post, I would like to look at journeys for adding and editing content, sharing an article and adjusting the reading view.

Sharing and adjusting the reading view

From the reading view, pull up the toolbar to reveal options:

Reveal reading options

Options for adjusting the reading view are:

  • Font size
  • Light / dark theme

Article view options

Any changes made to the reading view persist until the user changes them again.

Adding things to read 

There are different options for adding content to the Shorts app. All of the options described here are for when the user already has feed subscriptions in the app; the first use scenario is not yet covered. To get to the different options for adding content, pull up the toolbar from the topic view:

Adding content

1. Adding a topic

Adding a topic

Adding a new topic with feed suggestions makes finding things to read much easier for users who don’t understand RSS. However, suggesting feeds for subjects could easily become quite complex; for example, feeds related to ‘News’ are location-specific. Whether we can suggest feeds for users will depend on whether we can automate this process.

2. Adding feeds

Add feeds 1

Add feeds 2

When adding one or more feeds, the user needs to select a topic to organise them under. Selecting the topic is done with the expanded option selector.

3. Add online accounts

This might not be possible for version 1, but being able to read articles that were posted on your social networks would be a great feature to have in Shorts. Connecting to social networks will be done through Ubuntu Online Accounts.

4. Import subscriptions

Exact functionality for importing and exporting subscriptions will depend on how the file manager works.

5. Other

Depending on browser functionality, it might be possible to add feeds from the browser.

Edit topics

In Shorts, feeds are organised under topics. Occasionally, users might want to change the names of their topics and the organisation of their feeds. Under ‘edit topics’, users can:

1. Change topic names

Edit topic names

2. Change topic organisation by adding a new one

Edit topics: add a new one

3. Move feeds into a different topic

Move feeds into a different topic

The above proposal lets users drag feeds from one topic and drop them into another. The list of feeds under a topic could be very long, so there is an option to collapse the topic. Whether this is possible depends on the drag and drop pattern available in the SDK. Drag and drop is not the easiest thing to do on a touchscreen. A possible alternative would be to long-press on a feed, go into selection mode and have move topic as one of the options.

4. Delete feeds or entire topics

As in the messaging menu, we will use the swipe-to-delete pattern – this will soon be in the app design guides.

Next steps

We aim to make the app powerful but simple by having the more complex options easily accessible where they are needed, and to cater for both advanced and novice users. Do you think this app can work without pages and pages of settings? We’re looking forward to hearing your feedback and ideas. You can follow our progress on Google+, the Ubuntu Phone mailing list and the IRC channel. The next step is to look at the first-use and no-content scenarios.

Read more
niemeyer

Note: This is a candidate version of the specification. This note will be removed once v1 is closed, and any changes will be described at the end. Please get in touch if you’re implementing it.

Introduction

This specification defines strepr, a stable representation that enables computing hashes and cryptographic signatures out of a defined set of composite values that is commonly found across a number of languages and applications.

Although the defined representation is a serialization format, it isn’t meant to be used as a traditional one. It may not be seen entirely in memory at once, or written to disk, or sent across the network. Its role is specifically in aiding the generation of hashes and signatures for values that are serialized via other means (JSON, BSON, YAML, HTTP headers or query parameters, configuration files, etc).

The format is designed with the following principles in mind:

Understandable — The representation must be easy to understand to increase the chances of it being implemented correctly.

Portable — The defined logic works properly when the data is being transferred across different platforms and implementations, independently from the choice of protocol and serialization implementation.

Unambiguous — As a natural requirement for producing stable hashes, there is a single way to process any supported value being held in the native form of the host language.

Meaning-oriented — The stable representation holds the meaning of the data being transferred, not its type. For example, the number 7 must be represented in the same way whether it’s being held in a float64 or in a uint16.


Supported values

The following values are supported:

  • nil: the nil/null/none singleton
  • bool: the true and false singletons
  • string: raw sequence of bytes
  • integers: positive, zero, and negative integer numbers
  • floats: IEEE754 binary floating point numbers
  • list: sequence of values
  • map: associative value→value pairs


Representation

nil = 'z'

The nil/null/none singleton is represented by the single byte 'z' (0x7a).

bool = 't' / 'f'

The true and false singletons are represented by the bytes 't' (0x74) and 'f' (0x66), respectively.

unsigned integer = 'p' <value>

Positive and zero integers are represented by the byte 'p' (0x70) followed by the variable-length encoding of the number.

For example, the number 131 is always represented as {0x70, 0x81, 0x03}, independently from the type that holds it in the host language.

negative integer = 'n' <absolute value>

Negative integers are represented by the byte 'n' (0x6e) followed by the variable-length encoding of the absolute value of the number.

For example, the number -131 is always represented as {0x6e, 0x81, 0x03}, independently from the type that holds it in the host language.

string = 's' <num bytes> <bytes>

Strings are represented by the byte 's' (0x73) followed by the variable-length encoding of the number of bytes in the string, followed by the specified number of raw bytes. If the string holds a list of Unicode code points, the raw bytes must contain their UTF-8 encoding.

For example, the string “hi” is represented as {0x73, 0x02, 'h', 'i'}.

Due to the complexity involved, Unicode normalization is not required by this specification. Consequently, Unicode strings that would be equal if normalized may have different stable representations.

binary float = 'd' <binary64>

32-bit or 64-bit IEEE754 binary floating point numbers that are not holding integers are represented by the byte 'd' (0x64) followed by the big-endian 64-bit IEEE754 binary floating point encoding of the number.

There are two exceptions to that rule:

1. If the floating point value is holding a NaN, it must necessarily be encoded by the following sequence of bytes: {0x64, 0x7f, 0xf8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}. This ensures all NaN values have a single representation.

2. If the floating point value is holding an integer number it must instead be encoded as an unsigned or negative integer, as appropriate. Floating point values that hold integer numbers are defined as those where floor(v) == v && abs(v) != ∞.

For example, the value 1.1 is represented as {0x64, 0x3f, 0xf1, 0x99, 0x99, 0x99, 0x99, 0x99, 0x9a}, but the value 1.0 is represented as {0x70, 0x01}, and -0.0 is represented as {0x70, 0x00}.

This distinction means all supported numbers have a single representation, independently from the data type used by the host language and serialization format.
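
To make the float rule concrete, here is a minimal Go sketch (the names are mine, not the reference implementation’s). The appendUint helper stands for the variable-length integer encoding described in the Variable-length encoding section below, where a similar sketch is given:

import (
        "encoding/binary"
        "math"
)

// appendFloat appends the stable representation of a float64 value,
// following the rules above: integer-valued floats (including -0.0) are
// encoded as integers, NaN gets the single canonical encoding, and
// everything else becomes a big-endian binary64.
func appendFloat(buf []byte, v float64) []byte {
        if math.Floor(v) == v && !math.IsInf(v, 0) {
                // Integer-valued float. A full implementation would also
                // handle magnitudes beyond the uint64 range.
                if v >= 0 {
                        return appendUint(append(buf, 'p'), uint64(v))
                }
                return appendUint(append(buf, 'n'), uint64(-v))
        }
        bits := math.Float64bits(v)
        if math.IsNaN(v) {
                bits = 0x7ff8000000000000 // canonical NaN required by the spec
        }
        var b [8]byte
        binary.BigEndian.PutUint64(b[:], bits)
        return append(append(buf, 'd'), b[:]...)
}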

list = 'l' <num items> [<item> ...]

Lists of values are represented by the byte 'l' (0x6c), followed by the variable-length encoding of the number of items in the list, followed by the stable representation of each item in the list in the original order.

For example, the value [131, -131] is represented as {0x6c, 0x02, 0x70, 0x81, 0x03, 0x6e, 0x81, 0x03}.

map = 'm' <num pairs> [<item key> <item value>  ...]

Associative maps of values are represented by the byte 'm' (0x6d) followed by the variable-length encoding of the number of pairs in the map, followed by an ordered sequence of the stable representation of each key and value in the map. The pairs must be sorted so that the stable representation of the keys is in ascending lexicographical order. A map must not have multiple keys with the same representation.

For example, the map {"a": 4, 5: "b"} is always represented as {0x6d, 0x02, 0x70, 0x05, 0x73, 0x01, 'b', 0x73, 0x01, 'a', 0x70, 0x04}.
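
As a rough Go sketch of the key-ordering rule (again with made-up names): appendValue is a hypothetical helper producing the stable representation of any supported value, and appendUint is the variable-length encoding described below.

import (
        "bytes"
        "sort"
)

// appendMap appends 'm', the variable-length pair count, and then each
// key/value pair, sorted by the stable representation of the keys.
func appendMap(buf []byte, m map[interface{}]interface{}) []byte {
        type pair struct{ key, value []byte }
        pairs := make([]pair, 0, len(m))
        for k, v := range m {
                pairs = append(pairs, pair{appendValue(nil, k), appendValue(nil, v)})
        }
        sort.Slice(pairs, func(i, j int) bool {
                return bytes.Compare(pairs[i].key, pairs[j].key) < 0
        })
        // A complete encoder would also reject duplicate key representations.
        buf = appendUint(append(buf, 'm'), uint64(len(pairs)))
        for _, p := range pairs {
                buf = append(append(buf, p.key...), p.value...)
        }
        return buf
}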


Variable-length encoding

Integers are variable-length encoded so that they can be represented in short space and with unbounded size. In an encoded number, the last byte holds the 7 least significant bits of the unsigned value, and zero as the eighth bit. If there are remaining non-zero bits, the previous byte holds the next 7 bits, and the eighth bit is set on to flag the continuation to the next byte. The process continues until there are no non-zero bits remaining. The most significant bits end up in the first byte of the encoded value, which must necessarily not be 0x80.

For example, the number 128 is variable-length encoded as {0x81, 0x00}.
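
A minimal Go sketch of that algorithm (the function name is mine):

// appendUint appends the variable-length encoding described above:
// 7 bits per byte, most significant group first, with the eighth bit
// set on every byte except the last.
func appendUint(buf []byte, v uint64) []byte {
        // Build the bytes from the least significant group outwards...
        b := []byte{byte(v & 0x7f)} // last byte: eighth bit clear
        for v >>= 7; v > 0; v >>= 7 {
                b = append(b, byte(v&0x7f)|0x80) // continuation bit set
        }
        // ...then reverse so the most significant group comes first.
        for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
                b[i], b[j] = b[j], b[i]
        }
        return append(buf, b...)
}

With this, appendUint(nil, 128) yields {0x81, 0x00} and appendUint(nil, 131) yields {0x81, 0x03}, matching the examples above.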


Reference implementation

A reference implementation is available, including a test suite which should be considered when implementing the specification.


Changes

draft1 → draft2

  • Enforce the use of UTF-8 for Unicode strings and explain why normalization is being left out.
  • Enforce a single NaN representation for floats.
  • Explain that map key uniqueness refers to the representation.
  • Don’t claim the specification is easy to implement; floats require attention.
  • Mention reference implementation.

Read more
niemeyer

The very first time the concepts behind the juju project were presented, by then still under the prototype name of Ubuntu Pipes, was about four years ago, in July of 2009. It was a short meeting with Mark Shuttleworth, Simon Wardley, and myself, when Canonical still had an office in a tall building by the Thames. That was just the seed of a long road of meetings and presentations that eventually led to the codification of these ideas into what today is a major component of the Ubuntu strategy on servers.

Despite having covered the core concepts many times in those meetings and presentations, it recently occurred to me that they were never properly written down in any reasonable form. This is an omission that I’ll attempt to fix with this post, while I still hold the proper context in mind and before things change too much.

It’s worth noting that I stepped aside as the project’s technical lead in January, which makes it more likely that some of these ideas will take a turn, but they are still of historical value, and true for the time being.

Contents

This post is long enough to deserve an index, but these sections do build up concepts incrementally, so for a full understanding sequential reading is best:


Classical deployments

In a simplistic sense, deploying an application means configuring and running a set of processes in one or more machines to compose an integrated system. This procedure includes not only configuring the processes for particular needs, but also appropriately interconnecting the processes that compose the system.

The following figure depicts a simple example of such a scenario, with two frontend machines that had the Wordpress software configured on them to serve the same content out of a single backend machine running the MySQL database.

Deploying even that simple environment already requires the administrator to deal with a variety of tasks, such as setting up physical or virtual machines, provisioning the operating system, installing the applications and the necessary dependencies, configuring web servers, configuring the database, configuring the communication across the processes including addresses and credentials, firewall rules, and so on. Then, once the system is up, the deployed system must be managed throughout its whole lifecycle, with upgrades, configuration changes, new services integrated, and more.

The lack of a good mechanism to turn all of these tasks into high-level operations that are convenient, repeatable, and extensible, is what motivated the development of juju. The next sections provide an overview of how these problems are solved.


Preparing a blank slate

Before diving into the way in which juju environments are organized, a few words must be said about what a juju environment is in the first place.

All resources managed by juju are said to be within a juju environment, and such an environment may be prepared by juju itself as long as the administrator has access to one of the supported infrastructure providers (AWS, OpenStack, MAAS, etc).

In practice, creating an environment is done by running juju’s bootstrap command:

$ juju bootstrap

This will start a machine in the configured infrastructure provider and prepare the machine for running the juju state server to control the whole environment. Once the machine and the state server are up, they’ll wait for future instructions that are provided via follow up commands or alternative user interfaces.


Service topologies

The high-level perspective that juju takes about an environment and its lifecycle is similar to the perspective that a person has about them. For instance, although the classical deployment example provided above is simple, the mental model that describes it is even simpler, and consists of just a couple of communicating services:

That’s pretty much the model that an administrator using juju has to input into the system for that deployment to be realized. This may be achieved with the following commands:

$ juju deploy cs:precise/wordpress
$ juju deploy cs:precise/mysql
$ juju add-relation wordpress mysql

These commands will communicate with the previously bootstrapped environment, and will input into the system the desired model. The commands themselves don’t actually change the current state of the deployed software, but rather inform the juju infrastructure of the state that the environment should be in. After the commands take place, the juju state server will act to transform the current state of the deployment into the desired one.

In the example described, for instance, juju starts by deploying two new machines that are able to run the service units responsible for Wordpress and MySQL, and configures the machines to run agents that manipulate the system as needed to realize the requested model. An intermediate stage of that process might conceptually be represented as:

topology-step-1

The service units are then provided with the information necessary to configure and start the real software that is responsible for the requested workload (Wordpress and MySQL themselves, in this example), and are also provided with a mechanism that enables service units that were related together to easily exchange data such as addresses, credentials, and so on.

At this point, the service units are able to realize the requested model:

topology-step-2

This is close to the original scenario described, except that there’s a single frontend machine running Wordpress. The next section details how to add that second frontend machine.


Scaling services horizontally

The next step to match the original scenario described is to add a second service unit that can run Wordpress, and that can be achieved by the single command:

$ juju add-unit wordpress

No further commands or information are necessary, because the juju state server understands what the model of the deployment is. That model includes both the configuration of the involved services and the fact that units of the wordpress service should talk to units of the mysql service.

This final step makes the deployed system look equivalent to the original scenario depicted:

topology-step-3

Although that is equivalent to the classic deployment first described, as hinted by these examples an environment managed by juju isn’t static. Services may be added, removed, reconfigured, upgraded, expanded, contracted, and related together, and these actions may take place at any time during the lifetime of an environment.

The way that the service reacts to such changes isn’t enforced by the juju infrastructure. Instead, juju delegates service-specific decisions to the charm that implements the service behavior, as described in the following section.


Charms

A juju-managed environment wouldn't be nearly as interesting if all it could do was constrained by preconceived ideas that the juju developers had about what services should be supported and how they should interact among themselves and with the world.

Instead, the activities within a service deployed by juju are all orchestrated by a juju charm, which is generally named after the main software it exposes. A charm is defined by its metadata, one or more executable hooks that are called after certain events take place, and optionally some custom content.

The charm metadata contains basic declarative information, such as the name and description of the charm, relationships the charm may participate in, and configuration options that the charm is able to handle.

The charm hooks are executable files with well-defined names that may be written in any language. These hooks are run non-concurrently to inform the charm that something happened, and they give a chance for the charm to react to such events in arbitrary ways. There are hooks to inform that the service is supposed to be first installed, or started, or configured, or for when a relation was joined, departed, and so on.

This means that in the previous example the service units depicted are in fact reporting relevant events to the hooks that live within the wordpress charm, and those hooks are the ones responsible for bringing the Wordpress software and any other dependencies up.

wordpress-service-unit

The interface offered by juju to the charm implementation is the same, independently from which infrastructure provider is being used. As long as the charm author takes some care, one can create entire service stacks that can be moved around among different infrastructure providers.


Relations

In the examples above, the concept of service relationships was introduced naturally, because it’s indeed a common and critical aspect of any system that depends on more than a single process. Interestingly, despite it being such a foundational idea, most management systems in fact pay little attention to how the interconnections are modeled.

With juju, it’s fair to say that service relations were part of the system since inception, and have driven the whole mindset around it.

Relations in juju have three main properties: an interface, a kind, and a name.

The relation interface is simply a unique name that represents the protocol that is conventionally followed by the service units to exchange information via their respective hooks. As long as the name is the same, the charms are assumed to have been written in a compatible way, and thus the relation is allowed to be established via the user interface. Relations with different interfaces cannot be established.

The relation kind informs whether a service unit that deploys the given charm will act as a provider, a requirer, or a peer in the relation. Providers and requirers are complementary, in the sense that a service that provides an interface can only have that specific relation established with a service that requires the same interface, and vice-versa. Peer relations are automatically established internally across the units of the service that declares the relation, and make it easy to cluster these units together to set up masters and slaves, rings, or any other structural organization that the underlying software supports.

The relation name uniquely identifies the given relation within the charm, and allows a single charm (and service and service units that use it) to have multiple relations with the same interface but different purposes. That identifier is then used in hook names relative to the given relation, user interfaces, and so on.

For example, the two communicating services described in examples might hold relations defined as:

wordpress-mysql-relation-details

When that service model is realized, juju will eventually inform all service units of the wordpress service that a relation was established with the respective service units of the mysql service. That event is communicated via hooks being called on both units, in a way resembling the following representation:

wordpress-mysql-relation-workflow

As depicted above, such an exchange might take the following form:

  1. The administrator establishes a relation between the wordpress service and the mysql service, which causes the service units of these services (wordpress/1 and mysql/0 in the example) to relate.
  2. Both service units concurrently call the relation-joined hook for the respective relation. Note that the hook is named after the local relation name for each unit. Given the conventions established for the mysql interface, the requirer side of the relation does nothing, and the provider supplies the credentials and database name that should be used.
  3. The requirer side of the relation is informed that relation settings have changed via the relation-changed hook. This hook implementation may pick up the provided settings and configure the software to talk to the remote side.
  4. The Wordpress software itself is run, and establishes the required TCP connection to the configured database.

In that workflow, neither side knows for sure what service is being related to. It would be feasible (and probably welcome) to have the mysql service replaced by a mariadb service that provided a compatible mysql interface, and the wordpress charm wouldn’t have to be changed to communicate with it.

Also, although this example and many real world scenarios will have relations reflecting TCP connections, this may not always be the case. It’s reasonable to have relations conveying any kind of metadata across the related services.


Configuration

Service configuration follows the same model of metadata plus executable hooks that was described above for relations. A charm can declare what configuration settings it expects in its metadata, and how to react to setting changes in an executable hook named config-changed. Then, once a valid setting is changed for a service, all of the respective service units will have that hook called to reflect the new configuration.

Changing a service setting via the command line may be as simple as:

$ juju set wordpress title="My Blog"

This will communicate with the juju state server, record the new configuration, and consequently prompt the service units to realize the new configuration as described. For clarity, this process may be represented as:

config-changed


Taking from here

This conceptual overview hopefully provides some insight into the original thinking that went into designing the juju project. For more in-depth information on any of the topics covered here, the following resources are good starting points:

Read more
Lina Pio

One of the key challenges with designing calendar applications is the number of ways you can display your time, whether it’s by year, month, week or day. After a lot of good old-fashioned hard work, we refactored navigation by making the tab header the key to switching between views. Although the direction I’ll take you through in this article is strong and clean, it’s still a work in progress, and as such, can still change. The images in this article are small; to get a closer look at all of them collected together, download the PDF here.

The latest designs in this article show you how we’ve aimed to solve:

  • Navigation between different calendar views
  • Gestures to help quick navigation
  • Editing events
  • Creating events
  • How this will potentially look and feel

 

Different views

We are focusing on five different view templates inside the calendar app. They are:

  • Year
  • Month
  • Week
  • Day
  • Event

 


Navigating between different views using title header bar

You can move through the different views by tapping on the title header bar to toggle the view mode options. Just like our patterns, this title is scrollable – so that you can scroll through the view modes which don’t fit the width of the screen. Also like our patterns, swiping to the left or right moves along to the next or previous unit (year/month/week/day/event) in its category. For reference, take a look at Calum’s excellent post on this.

To get a better idea, click here to see a video of the prototype which formed the base of this navigation model and allowed us to test it out, comparing and contrasting it against other design directions. It was enough to give us a feel for the potential final build.

 

Navigating between views using spread and pinch gestures

To aid fast navigation for pro users and to also add an element of fun, we’ve decided to enable zooming in and out between views using finger spreading and pinch gestures similar to zooming in and out in a map app.

Spreading fingers gesture: This zooms in to the next view, which offers more detail, close up.

     E.g. A user spreads when on the year view. This opens up the month view. Spreading on the month view opens up the week view, and so on.

Pinching fingers gesture: This zooms out, to a less detailed view – the previous view in the view hierarchy.

     E.g. A user pinches on month view, the system responds by taking the user to the year view.

 

An event

An event has several detail fields. In order of appearance they are:

  • Event name
  • Time
  • Description
  • Location
  • Guests
  • This happens (how many times does this event happen in the series? Or is it a one time event?)
  • Remind me
  • Timezone

 

Editing an event

A bottom edge swipe on an event page brings up the toolbar with the edit button.

[NOTE: toolbar menu options within the calendar and across the whole system have not been finalised, this image of the toolbar is a placeholder to give an idea of how to edit]

Edit mode shows boxes around the fields, allowing the user to type and change the event details. The toolbar in edit mode is always present and shows cancel and save options.

 

Creating an event

A bottom edge swipe on year, month, week and day views brings up the toolbar with the option ‘New’ to create a new event. Pressing this brings up a similar template to the ‘Edit’ mode, the only difference being the blank forms.

 

Visuals

The visuals in the image below are an exploration of how this can potentially look and feel. This is still very much in progress, but gives a strong hint of what’s to come.

 

 

I hope you like our thoughts and directions on this, and that this article gives a stronger idea of what the final app will look and behave like.

Watch this space for my upcoming articles focusing on: an in-depth look at events (including guest contacts, location views, time and date pickers, etc.), calendar syncing with external accounts, calendar settings, and calendar mode inside the indicators.

Read more
niemeyer

Since relatively early in the public life of the Go language, I’ve been involved in pushing forward packages that might be used in Ubuntu, including making the compiler suite itself happier in such packaged environments. In due time, these packages were moved over to an automatic build system, so that people wouldn’t have to rely on my good will to have up-to-date packages, nor would I have to be regularly spending time maintaining those packages. Or so was the theory.

It’s well known that the real world is not so plain, though, and issues became much more regular than hoped. Some of the issues were caused by changes in the build conventions of Go, others were self-inflicted due to my limited knowledge of the extensive conventions around packaging or by bugs in indirect dependencies of the process, and more recently the sub-optimal scheduling algorithm used by the build farm has driven the builds to a halt.

So, the question is how to get out of this rabbit hole, but still give people a convenient way to use Go in Ubuntu.

Enter godeb, an experiment that dynamically translates the upstream builds of Go into deb packages. In practice, it’s a simple standalone Go program that can parse the build list, fetch the requested version, and translate the contents in memory into a correct binary deb package.

Since you cannot build a Go application without a Go compiler first, there’s an x86 32-bit binary and an x86 64-bit binary of godeb available for download. After the compiler is installed, godeb may be fetched and rebuilt locally by running go get launchpad.net/godeb.

Once the godeb binary is available, it’s easy to get up-to-date packages:

$ ./godeb install
processing https://go.googlecode.com/files/go1.1.1.linux-amd64.tar.gz
package go_1.1.1-godeb1_amd64.deb ready
Selecting previously unselected package go.
(Reading database ... 488515 files and (...) installed.)
Unpacking go (from go_1.1.1-godeb1_amd64.deb) ...
Setting up go (1.1.1-godeb1) ...

It figures out which build is the most recent available, then downloads, translates, and installs it, asking for a password via sudo if necessary. Running godeb install again will fetch the latest version (or the requested one) and replace the currently installed package. Package installs default to the same architecture as godeb itself, and this may be changed by setting the GOARCH environment variable to 386 or amd64, borrowing from a Go convention.

New releases of Go are immediately available, and so are the old ones:

$ ./godeb list
1.2
1.2rc5
1.2rc4
1.2rc3
1.2rc2
1.2rc1
1.1.2
1.1.1
1.1
(...)

$ ./godeb -h
Usage: godeb <command> [<options> ...]

Available commands:

    list
    install [<version>]
    download [<version>]
    remove

For the time being, I’m putting maintenance of the Go PPA in Launchpad on hold in favor of this system. Of course, you can still install the golang-* packages on Ubuntu 12.10 and 13.04 from the official repositories as usual.

Read more
Jussi Pakkanen

The decision by Go to not provide exceptions has given rise to a renaissance of sorts to eliminate exceptions and go back to error codes. There are various reasons given, such as efficiency, simplicity and the fact that exceptions “suck”.

Let’s examine what exceptions really are through a simple example. Say we need to write code to download some XML, parse and validate it and then extract some piece of information. There are several different ways in which this can fail: network may be down, the server won’t respond, the XML is malformed and so on. Suppose then that we encounter an error. The call stack probably looks like this:

Func1 is the function that drives this functionality and Func7 is where the problem happens. In this particular case we don’t care about partial results. If we can’t do all steps, we just give up. The error propagation starts by Func7 returning an error code to Func6. Func6 detects this and returns an error to Func5. This keeps happening until Func1 gets the error and reports failure to its caller.

Should Func7 throw an exception, functions 6-2 would not need to do anything. The compiler takes care of everything, Func1 catches the exception and reports the error.

This very simple example tells us what exceptions really are: a reliable way of moving up the call stack multiple frames at a time.

It also tells us what their main feature is: they provide a way to centralise error handling in one place.

It should be noted that exceptions do not force centralised error handling. Any function between 1 and 7 can catch any exception if that is deemed the best thing to do. The developer only needs to write code in those locations. In contrast, error codes require extra code at every single intermediate step. This might not seem like much in this particular case; after all, there are only six functions to change. Unfortunately, in reality things look like this:

That is, functions usually call several other functions to get their job done. This means that if the average call stack depth is N, the developer needs to write O(2^N) error handling stubs. They also need to be tested, which means writing tons of mock classes. If any single one of these checks is wrong or missing, the system has a latent bug.

Even worse, most error code handlers look roughly like this:

ec = do_something();
if(ec) {
  do_some_cleanup();
  return ec;
}

What this code actually does is replicate the behaviour of exceptions. The only difference is that the developer needs to write this anew every single time, which opens the door for bugs.
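
To make that concrete in Go, the language that prompted this discussion, here is a small sketch; fetch, parse and validate are hypothetical stand-ins for the download, parse and validate steps of the example:

package main

import (
        "errors"
        "fmt"
)

// Hypothetical steps from the example chain.
func fetch() ([]byte, error)         { return nil, errors.New("network down") }
func parse(b []byte) (string, error) { return string(b), nil }
func validate(doc string) error      { return nil }

// Every intermediate function must repeat the same propagation stub,
// re-implementing by hand what an exception would do automatically.
func extract() (string, error) {
        body, err := fetch()
        if err != nil {
                return "", err
        }
        doc, err := parse(body)
        if err != nil {
                return "", err
        }
        if err := validate(doc); err != nil {
                return "", err
        }
        return doc, nil
}

func main() {
        // Centralised handling lives only at the top of the call chain.
        if _, err := extract(); err != nil {
                fmt.Println("failed:", err)
        }
}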

Design lesson to be learned

Usually when you design an API, there are two choices: either it can be very simple or feature rich. The latter usually takes more time for the API developer to get right but saves effort for its users. In the case of exceptions, it requires work in the compiler, linker and runtime. Depending on circumstances, either one of these may be a valid choice.

When choosing between these two it is often beneficial to step back and look at it from a wider perspective. If the simpler choice was taken, what would happen? If it seems that in most cases (say >80%) people would only use the simple approach to mimic the behaviour of the feature rich one, it is a pretty strong hint that you should provide the feature rich one (or maybe even both).

This problem can go the other way, too: the framework may provide only a very feature-rich and complex API, which people then use to simulate the simpler approach. The price of good design is eternal vigilance.

Read more
niemeyer

This week I found some time to work on another small spin-off from the juju project at Canonical, and I’m happy to make it openly available today: the xmlpath package, which implements an efficient and strict subset of the XPath specification for the Go language.

This new package will be used in an upcoming (and long due) revision of the goamz package API, which is currently limited by the fact that once the XML result returned by Amazon is unmarshalled into a static structure, any other data that the package wasn’t prepared to deal with becomes hard to access by clients. This problem is being solved by parsing the tree into an intermediary form which can then have XPath expressions conveniently and efficiently applied to it.

Path expressions currently supported by the package are in the following format, with all components being optional:

/axis-name::node-test[predicate]/axis-name::node-test[predicate]

Compatibility with the XPath specification goes to the following extent:

  • All axes are supported (“child”, “following-sibling”, etc)
  • All abbreviated forms are supported (“.”, “//”, etc)
  • All node types except for namespace are supported
  • Predicates are restricted to [N], [path], and [path=literal] forms
  • Only a single predicate is supported per path step
  • Richer expressions and namespaces are not supported

For example, consider this simple document:

<library>
  <!-- Great book. -->
  <book id="b0836217462">
    <isbn>0836217462</isbn>
    <title>Being a Dog Is a Full-Time Job</title>
    <author id="CMS">
      <name>Charles M Schulz</name>
      <born>1922-11-26</born>
    </author>
    <character id="PP">
      <name>Peppermint Patty</name>
      <born>1966-08-22</born>
    </character>
    <character id="Snoopy">
      <name>Snoopy</name>
      <born>1950-10-04</born>
    </character>
  </book>
</library>

The following expressions can be applied to it, with the indicated result as first match:

/library/book/isbn “0836217462”
/library/*/isbn “0836217462”
/library/book/../book/./isbn “0836217462”
/library/book/character[2]/name “Snoopy”
/library/book/character[born='1950-10-04']/name “Snoopy”
/library/book//node()[@id='PP']/name “Peppermint Patty”
//*[author/@id='CMS']/name “Charles M Schulz”
/library/book/preceding::comment() “ Great book. ”

The API implemented allows compiled paths to be held and re-applied any number of times, concurrently or not. For example:

path := xmlpath.MustCompile("/library/book/isbn")
root, err := xmlpath.Parse(file)
if err != nil {
        log.Fatal(err)
}
if value, ok := path.String(root); ok {
        fmt.Println("Found:", value)
}

Result sets can also be optionally stepped over via an idiomatic iterator interface.
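
For example, listing every character name in the document above might look roughly like this (sketched from memory of the package’s Iter, Next and Node methods; check the package documentation for the exact API):

// Iterate over all matches instead of taking only the first one.
names := xmlpath.MustCompile("/library/book/character/name")
iter := names.Iter(root)
for iter.Next() {
        fmt.Println(iter.Node().String())
}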

The performance of these operations is close to using the static unmarshaling currently implemented by Go’s encoding/xml package:

BenchmarkParse                 5000        613862 ns/op
BenchmarkSimplePathCompile     1000000     1983 ns/op
BenchmarkSimplePathString      1000000     1565 ns/op

As a reference, this is a similar encoding/xml operation, using a struct with a single nested field on the same document:

BenchmarkSimpleUnmarshal       5000        622519 ns/op

I’m hoping this will make our unavoidable XML interactions slightly less painful.

Read more
Martin Keary

This is a presentation of our ‘Paper’ Motion theme for Ubuntu Mobile.

The theme is informed by the ‘paper’ graphic style of the mobile OS and we have sought to accentuate it wherever possible. Rather than using more overt effects like page curling and folding, we have hinted at the theme by using multiple layers, ‘stacking’ and suggestive effects. Multiple layers of sliding paper can be observed in the animation of the switch button; stacking can be seen on the icons in the launcher; and a suggestive page-turning effect can be seen during the ‘App Stacking’ example.

Read more
Lisette Slegers

Research around reading

As you can see in the image above, I have spent some time looking at different types and contexts of reading, trying to understand what the reading experience might be for Ubuntu. The contexts of reading vary from libraries to magazine stands to the sofa in your lounge, and each has an impact on how and what you read.

Something we all know (from a healthy bit of field research on public transport) is that reading on a phone means you are probably doing something else at the same time. You are waiting for your friends in a restaurant, or on a busy train on your way to work. You open the reader app to quickly check some news.

Meet “Shorts”: leaf through your news while you wait

Paul is in the station, waiting for a train. He has 5 minutes until it arrives.

Shorts wireframe 1

He launches Shorts. The app opens up with a view that shows short snippets of articles on the topics that interest him. The items are laid out on the page in the organic grid, similar to the grid that is used for the gallery app.

Shorts wireframe 2

It is going to rain tonight. Paul decides to stay in and cook a meal with his flatmates. All he needs is a recipe. He navigates to one of his topics, Food.

Shorts wireframe 3

Paul selects a recipe and reads through it. He decides it is too elaborate and returns to the topic.

Shorts wireframe 4

He looks further through the topic and taps on another article. This recipe is perfect for tonight! Paul saves it so he can easily find it later.

Check out this video too:

See how this concept also fits with our Design Vision and the other ritual apps?

Next steps

We will be connecting the dots and working on key journeys for Shorts. Follow development progress on Google+, the Ubuntu Phone mailing list and IRC channel.

One last thing

What do you think of the name?

Read more
Matthew Paul Thomas

In late 2008, I sketched initial designs for what became Gnome’s System Settings utility. This centralized most operating system settings in a single window, without the need to reopen menus or switch between multiple windows if you didn’t find the setting you were looking for the first time. It made Ubuntu, and other Gnome-based systems, much easier to configure.

Five years later, we’re building a phone operating system. So once again, we need a centralized system settings interface.

What other phone OSes do

The first step in designing this was a competitor evaluation of how other phone systems present system settings.

The main Settings screen of iOS 6.1.4.

iOS is highly consistent in using a hierarchy of list items for Settings. But their design is rather awkward in three ways. First, the top-level Settings screen is very long, usually containing 30 or more top-level categories. Second, Apple originally tried to include application-specific settings inside the system-wide Settings, which made them hard to find while using the app. Some apps (including nearly all the default ones) still do that, but nowadays most put settings in their own UI. And third, the top-level “General” settings category is a bit of a junk drawer — containing subcategories for everything from auto-lock to accessibility, software updates to Siri.

In the “Data usage” screen of Android 4.2: Tapping “Set mobile data limit” checks the checkbox. Tapping “Mobile data” flashes the switch label, but does nothing else. Tapping “?” opens a menu of more settings.

Android’s Settings similarly uses a hierarchy of lists, though some sections use dialogs instead. It has other consistency problems, too. Sometimes checkboxes are on the left, sometimes on the right. Tapping a checkbox label toggles the checkbox, but tapping a switch label doesn’t toggle the switch — sometimes it navigates to a different screen, other times it does nothing at all. Sometimes a screen’s heading contains a Back button, sometimes it doesn’t. Sometimes it contains a “?” dropdown menu of more settings, and sometimes it doesn’t. All this shows the importance of system settings having, if not a single designer, at least strong design guidelines.

An impressive aspect of Android’s Settings is that they can display in either portrait or landscape mode.

The “photos+camera” screen of Windows Phone 8.

The Windows Phone design emphasizes typography and visual simplicity. It’s a bit rough around the edges: for example, the “photos+camera” settings screen uses ten font variations, and the main heading doesn’t fit on the screen. Windows Phone also groups “system” and “applications” settings on separate screens, but the separation needs work: for example, the voicemail sound effect is set in one of the “system” screens, while the voicemail number is set in one of the “applications” screens.

A nice detail in Windows Phone’s Settings is the use of summary values. The row you would tap, to navigate to a settings screen, often contains a line of small text summarizing the current settings values. This can save you from having to visit the other screen at all.

Learning from others

This competitor evaluation revealed three main issues. First, the difficulty of organizing system settings versus application settings. Apple tried to group them all together in iOS, but that lacks in-app discoverability. Microsoft used “system” and “applications” categories in Windows Phone, but the split suffers from poor sorting. It seems more likely that we can solve the sorting problem than the discoverability problem. So, as with Ubuntu for PC, Ubuntu Phone will have “System Settings”, not just “Settings”. Applications will be responsible for presenting their own settings.

Second, there is a tension between categorizing settings, and promoting frequent or urgently used settings. Categorizing by itself is tricky enough: different people might look for the same setting in different places. (For example, iOS sometimes mirrors subcategories of settings inside multiple categories.) A search function may help, but is not a complete answer, because people still need to know what settings are available in the first place. Categorization becomes even trickier when trying to provide quick access to settings like flight mode or orientation lock. Indicators at the top of the screen may help with this, by providing quick access to frequently used functions, like they do on Ubuntu for PC.

Third, it can be useful to reveal current state of settings as part of the navigation to those settings. This is usually done in text, with summary values, but an icon could work too. For example, a Bluetooth settings icon might be dull when Bluetooth is off, bright when it is on, and have an emblem when it is paired to any device.

User journeys

Two user journeys influenced the design of the System Settings interface.

The primary journey is someone wanting to solve a problem. Maybe their Internet connection is not working. Maybe they’re wondering if they can save battery. Maybe a cabin attendant has asked them to put the phone into flight mode. Maybe a friend has been messing around with their phone and they want to stop it from happening again. This person usually will be in a hurry, and sometimes irritated. They’ll want to get in and out as quickly as possible.

The secondary journey is an adventurous new owner, starting out with their phone, wanting to explore what it is capable of. They have more time to read explanations, and to explore cross-references between categories.

Designing the overview

Next, I sketched out nine possible layouts for the overview screen — the first thing people would see when they entered System Settings.

There was a square grid of icons with headings, like on Ubuntu for PC. A variation where the headings doubled as controls. A triangular grid of the same icons, just for fun. Text lists of subcategories, interspersed with occasional controls as list items. And an amalgam of the grid and list models.

Another text-based list, this time using two lines of text for each subcategory. An arrangement of tiles of different sizes for varying prominence of categories. And finally a list using both icons and text.

Selecting the most promising elements from each of the nine layouts, I passed them on to one of our visual designers, Rosie Zhu. She produced mockups of three possibilities, and with help from Marcus Haslam we decided on one final layout.

The design promotes frequently- and urgently-needed settings at the top, categorizes other settings compactly, and places bureaucratic stuff (“About This Phone” and “Reset Phone”) right at the bottom.

This is far from a final mockup. We need to finalize the icon style, and fine-tune control sizes, use of color, use of lines, and so on. But the basic layout is in place for engineers to start work. (Contact Sebastien Bacher if you’d like to help out with the code.)

Designing individual screens

Meanwhile, I have been busy designing individual settings screens. This has helped reveal missing controls in the UI toolkit, so they can be implemented for app developers to use too.

Links to designs for the individual screens, as well as the design for the overview screen, are on the System Settings wiki page. Your feedback on any of the designs is welcome, either here, or on the ubuntu-phone@ mailing list.

Read more
Alejandra Obregon

Ubuntu.com update

I’d like to give an update on upcoming plans for Ubuntu.com and to respond to recent concerns about the positioning of the community within the website.

Earlier in the year we worked on an evolution of Ubuntu.com to reflect the expanded scope of the project in the main site structure. Our re-structuring conversations went beyond the existing website to cover the broader Ubuntu web ecosystem. We wanted to review how users gained access to key websites that were not linked to directly from Ubuntu.com. For example: developer.ubuntu.com, design.ubuntu.com, askubuntu.com…

Our target users for these journeys were mainly community members or those new to Ubuntu who might be interested in getting involved or finding out more about how the community and Canonical work together to create Ubuntu.

Our proposed solution consisted of a global navigation menu that was to go across all key sites so that – no matter which site users arrived at – they would be able to reach the main destinations in our ecosystem. This was to include a new community site that has been under discussion for some time by Canonical employees and members of the community. One key factor for this community site is the ability for community members to have direct input so that the site reflects current community topics and areas of focus. By adding it to the global navigation we hoped to increase traffic and make it more accessible across the Ubuntu web ecosystem.

We created some prototypes to test our proposals in terms of interaction, design, site positioning, labeling etc. Laura Czajkowski helped us reach UK-based community members who came into the office to meet with an independent researcher to test the prototypes. Based on the feedback, we have made amendments and planned to implement the work in two phases:

Phase one of the restructure consisted of updating Ubuntu.com to reflect the expanded scope of the project.

Phase two is in progress and consists of adding the global navigation to all the key sites and making sure they work together across domains etc.

I’m sure you can understand that there is a large amount of coordination that needs to go into a restructure of this scale, across a number of sites, on different domains, that are managed and maintained by different teams across Canonical and beyond.

The limited scope of phase one meant the community link was temporarily dropped from the primary navigation menu. We appreciate why this might cause concern in the community, especially in the absence of an understanding of the broader context of our global navigation project. The global navigation project will restore the balance and provide access to various key community sites that need to be surfaced and will benefit from the increased traffic this new positioning will drive.

All of this work is in progress and we are aiming to go live with the changes by the end of this month.

I hope this will address some of the concerns in the community about this topic and that our roadmap shows how we will improve Ubuntu.com for all our audiences.

Ubuntu global navigation menu

Read more
Michal Izydorczyk

Thank you for all your positive feedback after our first blog post. We are very excited and are continuing with the designs; here’s a quick update on how we’re getting on.

During the last few weeks we have been looking at the development of the weather and clock apps. We are also looking at a set of gradients that could specify a range of weather conditions.

Here’s the how

A linear colour gradient is specified by two points, and a colour at each point. The colours along the line through those points are calculated using linear interpolation, then extended perpendicular to that line.

* wikipedia.org

 
This is a great way to describe temperature and how it changes over 24 hours.
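
As a tiny generic sketch of that interpolation (not the app’s actual code), blending a single colour channel between two key points of the gradient looks like this:

// lerp returns the value a fraction t of the way from a to b, with t in
// [0, 1]; for example, one colour channel at a given hour of the day.
func lerp(a, b, t float64) float64 {
        return a + (b-a)*t
}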

The second part of developing these apps was to create a set of graphic assets that could support the weather icons as well as the clock face.

Using entirely white mono assets was an obvious choice to contrast with the colourful, changing backgrounds.

But we quickly realized that the graphic style of our icons used as indicators or toolbar actions did not fit those assets well. The weather icons, for example, looked a bit too heavy, while we wanted something more zen and simple to blend nicely with the minimalistic and elegant design of the apps.

We replaced the solid fills with thin outlines and added some roundness to the ends of the strokes. The weather icons have become playful but graceful, while keeping a plain but not too simplistic look and feel.

The clock faces are designed following the same principles. With great results?

You be the judge ;)

 
 
 

Read more
niemeyer

A few years ago, when I started pondering about the possibility of porting juju to the Go language, one of the first pieces of the puzzle that were put in place was goyaml: a Go package to parse and serialize a yaml document. This was just an experiment and, as a sane route to get started, a Go layer that does all the language-specific handling was written on top of the libyaml C scanner, parser, and serializer library.

This was a good initial plan, but for a number of reasons the end goal was always to have a pure Go implementation. Having a C layer in a Go program slows down builds significantly due to the time taken to build the C code, makes compiling in other platforms and cross-compiling harder, has certain runtime penalties, and also forces the application to drop the memory safety guarantees offered by Go.

For these reasons, over the last couple of weeks I took a few hours a day to port the C backend to Go. The total time, considering full-time work days, would be equivalent to about a week’s worth of work.

The work started on the scanner and parser side of the library. This took most of the time, not only because it encompassed more than half of the code base, but also because the shared logic had to be ported too, and there was a need to understand which patterns were used in the old code and how they would be converted across in a reasonable way.

The whole scanner and parser plus header files, or around 5000 lines of C code, were ported over in a single shot without intermediate runs. To steer the process in a sane direction, gofmt was called often to reformat the converted code, and then the project was compiled every once in a while to make sure that the pieces were hanging together properly enough.

It’s worth highlighting how useful gofmt was in that process. The C code was converted in the most convenient way to type it, and then gofmt would quickly put it all together in a familiar form for analysis. Not rarely, it would also point out trivial syntactic issues. A double win.

After the scanner and parser were finally converted completely, the pre-existing Go unmarshaling logic was shifted to the new pure implementation, and the reading side of the test suite could run as-is. Naturally, though, it didn’t work out of the box.

To quickly pick up the errors in the new implementation, the C logic and the Go port were put side-by-side to run the same tests, and tracing was introduced in strategic points of the scanner and parser. With that, it was easy to spot where they diverged and pinpoint the human errors.

It took about two hours to get the full suite to run successfully, with a handful of bugs uncovered. Out of curiosity, the issues were:

  • An improperly dropped parenthesis affected the precedence of an expression
  • A slice was being iterated with copying semantics where a reference was necessary (see the sketch after this list)
  • A pointer arithmetic conversion missed the base where there was base+offset addressing
  • An inner scoped variable improperly shadowed the outer scope

The same process of porting and test-fixing was then repeated on the serializing side of the project, in a much shorter time frame for the reasons cited.

The resulting code isn’t yet idiomatic Go. There are several signs in it that it was ported over from C: the name conventions, the use of custom solutions for buffering and reader/writer abstractions, the excessive copying of data due to the need of tracking data ownership so the simple deallocating destructors don’t double-free, and so on. It has also been deoptimized by changes such as the removal of macros (in many cases by inlining their bodies) and the direct expansion of large unions, which causes some core objects to grow significantly.

At this point, though, it’s easy to gradually move the code base towards the common idiom, in small increments and as time permits, cleaning up the artifacts that were left behind.

This code will be made public over the next few days via a new goyaml release. Meanwhile, some quick facts about the process and outcome follow.

Lines of code

According to cloc, there was a total of 7070 lines of C code in .c and .h files. Of those, 6727 were ported, and the remaining 342 made up 12 functions that were left unconverted as being unnecessary right now. Those 6727 lines of C became 5039 lines of Go code in a mostly one-to-one dumb translation.

That difference comes mainly from garbage collection, the lack of forward declarations, standard helpers such as append, range-based for loops, a first-class slice type with length and capacity, internal OOM handling, and so on.

Future work can easily increase the difference further by replacing some of the ported logic with more sensible options available in Go, such as the standard reader and writer abstractions, the buffered writing support available in the standard library, and so on.

Code clarity and safety

In the specific context of the work done, which is of a scanner, parser and serializer, the slice abstraction is responsible for noticeable clarity gains in the code, when compared to the equivalent logic based on pointer arithmetic. It also gives a much more comforting guarantee of correctness of the written code due to bound-checking.
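
As a contrived sketch of that difference (not taken from the actual code): where the C scanner advances a raw pointer and adjusts a length by hand, the Go version simply reslices, and any out-of-bounds access fails loudly instead of silently reading arbitrary memory.

package main

import "fmt"

// In C this bookkeeping would be pointer arithmetic along the lines of
// "chunk = buf; buf += n; size -= n;". In Go it is a bounds-checked reslice.
func nextChunk(buf []byte, n int) (chunk, rest []byte) {
    return buf[:n], buf[n:] // panics rather than reading out of bounds if n > len(buf)
}

func main() {
    buf := []byte("key: value")
    chunk, rest := nextChunk(buf, 3)
    fmt.Printf("chunk=%q rest=%q\n", chunk, rest)
}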

Performance

While curious, this shouldn’t be taken as a performance comparison between the two languages, as it is comparing a fine-tuned C implementation with something that is worse than a direct one-to-one port: not only has no time at all been spent on preventing waste, but the original logic was deoptimized by changes such as the removal of inlining macros and the expansion of large unions. There are many obvious changes to be made to improve performance.

With that out of the way, in a simple decoding benchmark the C-backed decoder runs in about 37% of the time taken by the out-of-the-box deoptimized Go port.
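
The benchmark itself is not reproduced here, but its shape is the usual testing.B loop over a fixed document, roughly as in the sketch below; the document is an arbitrary stand-in and the file layout is assumed:

// In a hypothetical bench_test.go file.
package yamlbench

import (
    "testing"

    "launchpad.net/goyaml"
)

var benchDoc = []byte("name: benchmark\nlabels: [a, b, c]\nnested:\n  value: 42\n")

// BenchmarkDecode measures the time taken to decode one small document.
func BenchmarkDecode(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var out interface{}
        if err := goyaml.Unmarshal(benchDoc, &out); err != nil {
            b.Fatal(err)
        }
    }
}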

Output size

The previous goyaml.a Go package file was 1463kb. The new one is 1016kb. This difference includes glue code generated for the integration.

Considering only the .c and .h files involved in the port, the C object code generated with the standard flags used by the go build tool (-g -O2) adds up to 789kb. The equivalent Go code with the standard settings compiles to 664kb. The 12 functions that were not ported account for part of that gap, so the difference is pretty much negligible.

Build time

Building the 8 .c files alone takes 3.6 seconds with the standard flags used by the go build tool (-g -O2). After the port, building the entire Go project with the standard settings takes 0.3 seconds.

Mechanical changes

Many of the mechanical changes were done using regular expressions. Excluding the trivial ones, about a dozen regular expressions were used to swap variable and type names, drop parentheses, place brackets in the right locations, convert function declarations, and so on.
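
To give a flavour of those mechanical edits, the hypothetical pair of substitutions below turns C member access and a parenthesised condition into something closer to Go; the real expressions were applied in an editor rather than in a program like this, and the result still needs manual fixing (and a gofmt pass) afterwards:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    src := "if (parser->error != YAML_NO_ERROR) return 0;"

    // C's "->" member access becomes Go's ".".
    src = regexp.MustCompile(`->`).ReplaceAllString(src, ".")

    // Drop the parentheses around a simple if condition; greedy and naive,
    // but good enough for a rough first pass over the code.
    src = regexp.MustCompile(`if \((.*)\)`).ReplaceAllString(src, "if $1")

    fmt.Println(src) // if parser.error != YAML_NO_ERROR return 0;
}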

Read more
Calum Pringle

We have just published a new chapter on our App Design Guides: how to handle orientation.

To cater for the different orientations of a range of touch devices, we need to design apps for Ubuntu in a responsive way.

Phone orientations

orientation_1

  1. The primary orientation for an app on the phone is portrait.
  2. Consider using landscape orientation when you want to offer a full-screen experience for a single piece of content, such as watching a video, looking at a photo or gaming.
  3. A phone app automatically fits in the tablet’s side stage, with a flexible height.

Tablet orientations

orientation_2
orientation_3
orientation_4

  1. The primary orientation for an app on the tablet is landscape.
  2. Consider portrait orientation when it will help the user engage with your app; for example reading a magazine or writing a long email.
  3. By supporting portrait, your app automatically supports split screen.

Responsive strategies

Use these strategies to make your app work on screens of different sizes and orientations.

Position graphic elements relatively

For ease of use, we space graphical elements relatively, both to one another and to the screen edges.

orientation_5

Decide how your app might show more or less content

  • An app on the tablet’s main stage might show more content than on the phone.
  • orientation_6

  • An app on the phone with a list of content, such as a feed, would show much more content in the side stage as it is taller.
  • orientation_7

  • If your app’s content is larger than what fits in view, for example a map, you might consider showing more or less content depending on shape and orientation.
  • orientation_8

  • If your app’s content is fixed in shape then it can simply scale up or down. For example the same amount of content on the phone would be scaled up on the tablet.
  • orientation_9

A few last things

1. Use extra space constructively
Consider what content your app could show in extra space, be it the history of a calculator, a list of missed calls or even high scores!
2. If your phone app does not scale, it will remain at a fixed height in the side stage.

Hope this helps – and as ever, please let us know what you think; these guidelines are a work in progress and will grow over time. Feel free to get in touch with us on the Ubuntu Phone mailing list and the IRC channel.

Read more
niemeyer

Last week I was part of a rant with a couple of coworkers around the fact Go handles errors for expected scenarios by returning an error value instead of using exceptions or a similar mechanism. This is a rather controversial topic because people have grown used to having errors out of their way via exceptions, and Go brings back an improved version of a well known pattern previously adopted by a number of languages — including C — where errors are communicated via return values. This means that errors are in the programmer’s face and have to be dealt with all the time. In addition, the controversy extends towards the fact that, in languages with exceptions, every unadorned error comes with a full traceback of what happened and where, which in some cases is convenient.

All this convenience has a cost, though, which is rather simple to summarize:

Exceptions teach developers to not care about errors.

A sad corollary is that this is relevant even if you are a brilliant developer, as you’ll be affected by the world around you being lenient towards error handling. The problem will show up in the libraries that you import, in the applications that are sitting in your desktop, and in the servers that back your data as well.

Raymond Chen described the issue back in 2004 as:

Writing correct code in the exception-throwing model is in a sense harder than in an error-code model, since anything can fail, and you have to be ready for it. In an error-code model, it’s obvious when you have to check for errors: When you get an error code. In an exception model, you just have to know that errors can occur anywhere.

In other words, in an error-code model, it is obvious when somebody failed to handle an error: They didn’t check the error code. But in an exception-throwing model, it is not obvious from looking at the code whether somebody handled the error, since the error is not explicit.
(…)
When you’re writing code, do you think about what the consequences of an exception would be if it were raised by each line of code? You have to do this if you intend to write correct code.

That’s exactly right. Every line that may raise an exception holds a hidden “else” branch for the error scenario that is very easy to forget about. Even if it sounds like a pointless repetitive task to be entering that error handling code, the exercise of writing it down forces developers to keep the alternative scenario in mind, and pretty often it doesn’t end up empty.
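
For contrast, here is a minimal sketch of what that explicit alternative branch looks like in the return-value style; the file name is made up:

package main

import (
    "fmt"
    "os"
)

func main() {
    f, err := os.Open("config.yaml") // made-up file name
    if err != nil {
        // The "hidden else branch" is written out: decide right here what a
        // missing file means for this program, instead of crashing later on.
        fmt.Fprintln(os.Stderr, "cannot open configuration:", err)
        os.Exit(1)
    }
    defer f.Close()
    fmt.Println("configuration opened:", f.Name())
}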

It isn’t the first time I’ve written about this, and given the controversy that surrounds these claims, I generally try to find one or two examples that bring the issue home. So here is the best example I could find today, within the pty module of Python 3.3’s standard library:

def spawn(argv, master_read=_read, stdin_read=_read):
    """Create a spawned process."""
    if type(argv) == type(''):
        argv = (argv,)
    pid, master_fd = fork()
    if pid == CHILD:
        os.execlp(argv[0], *argv)
    (...)

Every time someone calls this logic with an improper executable in argv there will be a new Python process lying around, uncollected and unknown to the application, because execlp will fail and the process that was just forked will be disregarded. It doesn’t matter whether a client of that module catches that exception or not. It’s too late. The local duty wasn’t done. Of course, the bug is trivial to fix by adding a try/except within the spawn function itself. The problem, though, is that this logic has looked fine to everybody that ever looked at that function since 1994, when Guido van Rossum first committed it!

Here is another interesting one:

$ make clean
Sorry, command-not-found has crashed! Please file a bug report at:

https://bugs.launchpad.net/command-not-found/+filebug

Please include the following information with the report:

command-not-found version: 0.3
Python version: 3.2.3 final 0
Distributor ID: Ubuntu
Description:    Ubuntu 13.04
Release:        13.04
Codename:       raring
Exception information:

unsupported locale setting
Traceback (most recent call last):
  File "/.../CommandNotFound/util.py", line 24, in crash_guard
    callback()
  File "/usr/lib/command-not-found", line 69, in main
    enable_i18n()
  File "/usr/lib/command-not-found", line 40, in enable_i18n
    locale.setlocale(locale.LC_ALL, '')
  File "/usr/lib/python3.2/locale.py", line 541, in setlocale
    return _setlocale(category, locale)
locale.Error: unsupported locale setting

That’s a pretty harsh crash for the lack of locale data in a system-level application that is, ironically, supposed to tell users what packages to install when commands are missing. Note that at the top of the stack there’s a reference to crash_guard. This function has the intent of catching all exceptions right at the edge of the call stack, and displaying a detailed system specification and traceback to aid in fixing the problem.

Such “parachute catching” is a fairly common pattern in exception-oriented programming and tends to give developers the false sense of having good error handling within the application. Rather than actually guarding the application, though, it’s just a useful way to crash. The proper thing to have done in the case above would be to print a warning, if at all, and then let the program run as usual. This would have been achieved by simply wrapping that one line as in:

try:
    locale.setlocale(locale.LC_ALL, '')
except Exception as e:
    print("Cannot change locale:", e)

Clearly, it was easy to handle that one. The problem, again, is that it was very natural to not do it in the first place. In fact, it’s more than natural: it actually feels good to not be looking at the error path. It’s less code, more linear, and what’s left is the most desired outcome.

The consequence, unfortunately, is that we’re immersing ourselves in a world of brittle software and pretty whales. Although more verbose, the error result style builds the correct mindset: does that function or method have a possible error outcome? How is it being handled? Is that system-interacting function not returning an error? What is being done with the problem that, of course, can happen?

A surprising number of crashes and plain misbehaviors are the result of such unconscious negligence.

Read more
Ivanka Majic

Designing in the open

On Tuesday 9th of April I gave a little talk at Mozilla’s “Designing in the Open” event. Along with the rest of the speakers, I was asked to prepare a 10–15 minute talk and to be prepared for a discussion to follow.

In preparation for the talk I spoke to a few people – designers and engineers – and one of them (@sil) said: “I hope you are going to publish this somewhere”. So here I am.

To help stimulate some discussion I decided to present some truths. You will have to wait for my follow up posts for something with more answers than questions!

Thank you to @zhenshuofang for the fantastic illustration.

 

Truth #1: It’s complicated

Design in the open is an ambiguous term behind which hides an endless stream of potential pitfalls, challenges and rewards.

It is complicated not just in the practical sense of not sitting in the same country, never mind the same office, as the people you are working with.

I’ll go a bit stronger: design in the open is, in my view, an ill-defined term, which is why it is often such a contentious and emotionally fraught activity. I don’t want to get too philosophical, but there are some questions that definitely need to be considered before you embark on a project to design something in the open.

Why design in the open? What is it that you hope to gain from doing this? What do you think your co-contributors are going to gain? Why do you want to make something in the open?

Do you really mean build something in the open? I would argue that design and engineering are both integral parts of making something and therefore to make that thing well you should never talk about the two activities as somehow divorced.

Something can be made in secret and still be completely open-source.

Do we mean everyone has a voice in the open? Because that is what it feels like to me. I don’t think people want to necessarily participate in a design process but rather they want to be heard. They want to feel included and like they have some influence and control.

When I first joined Canonical and Ubuntu I had this awful emotional pressure to do what felt like sitting in some sort of zoo that hosts design so that people could watch me work. “In this cage you will see a group of designers using what they call ‘post-its’. They can often be seen engaging in this activity where they stick them on walls and argue with each other about what should be written on them and how they should be grouped.”

Is that how design needs to be done to be truly open?

@mpt put it as: “the real issue is how much delay is there between someone having an idea and it appearing in the public and whether people who are not in the same room can get involved early enough to make a difference.”

That’s true, and sometimes it isn’t possible for everyone to get involved in everything, and that should be OK too. I have opinions on whether or not the Ubuntu Dash should be engineered in NUX or QML, and I like my opinions to be heard, but I don’t expect to actually have direct influence.

Are we talking about design in the open as in OpenIDEO or do we mean design in an existing open-source community?

I do have some very specific thoughts and suggestions around this but I will provide them in a second post!

 

Truth #2: Have no secrets

 If you want to design in the open you can’t have secrets. You can’t operate in a world of partial disclosure. It is an additional burden that quickly gets sidelined. It is too hard. There can be no trust and therefore there can be no real collaboration.

Since I joined Canonical in March 2009 we have been pursuing a design strategy of cross-device convergence. Most of our work – and even the fact we have been doing it – has been confidential. This year we have gone public with our phone and tablet story, and already it is approximately a million times easier to organise work with our community. I say a million but I might actually mean 2 million. You see, if your design strategy isn’t truly public, then how can you explain your design direction? And, in an open-source environment which claims to be a meritocracy, how can you properly justify your actions if you can’t tie them back to a clear design strategy?

The flipside is that, in a world where a ta-da moment is still a useful promotional tool, can you ever have full disclosure and, if not, how can you work around that?

 

Truth #3: Play nicely

Over the years (nearly 40 of them!) of school, then work, and life in general, I have come to the conclusion that most people don’t get up in the morning and say to themselves: “Today I would like to be mediocre.”

 One thing that is hard with doing anything in the open is that it seems quite a number of people appear to assume that people *do* get up in the morning wanting to be mediocre or worse, that everyone else *is* mediocre (or worse!). It is another truth, but one that doesn’t justify a number of its own, that it seems easier to make that negative assumption when you are sitting in front of a computer writing a comment, filing a bug or writing to a mailing list.

 This works all sorts of ways round. From people on my team coming back from a Google Hangout and saying: “the developers had some really good ideas!” or a classic following an internal sprint: “you know, I really enjoyed myself hanging out with the developers this week, they are really a lot of fun!” to the developers saying things like: “I didn’t realise you guys actually cared about Ubuntu!”

 One could argue that if you are going to take a walk down a busy street then you should be prepared for people to bump into you, but in reality social norms exist which help us walk along a busy street without incurring actual injury.

I know what meritocracy means and I also know that some merit takes time to show and proving it is not always a case of: “your code compiled and you passed the code review”. Wikipedia asks us to assume good faith, let’s do that and also remember that you don’t actually need to shoulder barge to make your way down a crowded street.

 

Truth #4 Educate and share

In order to work with people you need to communicate. Starting with the word design, moving on to the word open, and then through words like wireframe, mood board, sketch, distro and motu: if these words don’t have the same meaning to everyone who is trying to use them, then you can’t actually have a conversation!

It is very hard to collaborate if you can’t communicate. Take the time to explain what you are doing, how, and why.

Have evidence-based conversations. Not because you need to prove yourself – that can be a really negative thing to carry around with you all the time – but because it helps people learn. Engineers like to – need to – break problems down into tiny parts so that they can build them; help them break down and understand your ideas.

 

Truth #5 It’s brilliant

This is the best truth of all. I get to work with some outstanding – if not awesome – people. Vish, Sense, Thorwil, Jason de Rose, and so many other community volunteers who have helped me deliver.

Being part of the community itself is an experience that I don’t know how it would be possible to replicate elsewhere. Some of you may know that I travelled from Alaska to Argentina on a motorcycle, and the open-source (note, not just Ubuntu) community helped me more than once. The most amazing example was the help we got from the Central America Software Libre guys and girls. We were in El Salvador and one of the bikers we met travelling had some bad news from home and needed to leave urgently. The problem was what to do with his bike and gear, and how to organise flights and so on. I spent 45 minutes on email and IRC (thanks to the 3G dongle I borrowed from the owner of the hostel we were in), and in that time people had volunteered secure parking in San Salvador and Managua (thanks @tatotat, @leogg and @n0rman).

I agree that my story is more an opportunity to tell tales of my trip (can you really blame me?), but making the effort to be part of a community is more than just lines on your CV, and that bit actually is awesome (pronounced with a British English accent to give it gravitas).

 These engineers I work with, they like to make things work. They go “WOW, this is going to be the best clock app in the world ever!” And then they go off and make it. Just like that. In their spare time!

Now that, that’s brilliant.

Read more
Inayaili León

Spring cleaning ubuntu.com

Yesterday we launched an updated version of ubuntu.com, which we hope will improve navigation throughout the site and give visitors a better understanding of what Ubuntu is.

Research is an important part of what we do, and this update is no exception. The improvements we have made to the content and structure of the site are mostly a reflection of what users have told us during testing, such as clarifying the description of what Ubuntu is when you are browsing the Desktop section.

Refocused navigation

By focusing our site navigation on the products themselves, we aim to make it clear for someone who is new to the site that Ubuntu is about all of these things: PCs, phones, tablets — you name it.

Ubuntu.com - New navigation
The new product-based navigation on ubuntu.com

Updated footer

We have introduced a contextual footer at the bottom of most pages. This gives people an opportunity to explore more of the section they’re in and read related resources like news and articles. We have also made the main footer work harder by providing a bird’s eye view of the entire content of the site.

Ubuntu.com - New footer
An overview of ubuntu.com in the updated footer

Ubuntu.com phone page

But the changes are not just about the site’s structure. Visually, ubuntu.com has been evolving ever since it was first launched, and with the latest update, we keep moving towards a cleaner, fresher, but also more modular, approach.

This direction started with the design of the phone and then the tablet sections of the site, which took existing design components and “opened” them up — we’ve added larger margins, lighter text, more space all around.

We then revisited the other sections of the site and standardised the templates and remaining components as much as we could. This exercise took a great deal of close collaboration between design and front-end development: it was important that design decisions were agreed by everyone and that the code reflected those decisions.

The code cleanup itself went live a few weeks ago, but the more visible side of the work arrives now, with this update.

What we’ll do next

We will be working on, and improving, the way you navigate through ubuntu.com and other websites in the Ubuntu web universe in the coming months, so keep an eye on this blog! And, as always, we’d be delighted to hear your feedback.

Read more
niemeyer

This weekend the right environment finally came together for sorting out a pet peeve that shows up every once in a while when coding: writing logic that interacts with other applications in the system via their stdin and stdout streams is often more involved than it should be, which seems pretty ironic when sitting in front of a Unix-like system.

Rather than going through the trouble of setting up pipes and hooking them up in a custom way, applications often end up just delegating the job to /bin/sh, which is not ideal for a number of reasons: argument formatting isn’t straightforward, injecting custom application-defined logic is hard (which means even simple tasks that might be easily achieved in the language end up shelling out to further external applications), and so on.

In an attempt to address that, I’ve spent some time working on an experimental Go package that is being released today: pipe.
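
To give an idea of the direction, building a pipeline with the package reads roughly like the sketch below; the import path and the exact combinator names are taken from the package as later published, so treat them as assumptions rather than the precise API of this first release:

package main

import (
    "fmt"
    "log"

    "gopkg.in/pipe.v2" // later home of the package; the original path may differ
)

func main() {
    // Roughly the equivalent of `ps ax | grep ssh`, without delegating to /bin/sh.
    p := pipe.Line(
        pipe.Exec("ps", "ax"),
        pipe.Exec("grep", "ssh"),
    )
    out, err := pipe.Output(p)
    if err != nil {
        log.Fatal(err) // note that grep exits non-zero when nothing matches
    }
    fmt.Printf("%s", out)
}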

I hope you like it as well, and please drop me a note if you find any issues.

Read more
Michal Izydorczyk

Hi everyone, following our process post last week I’m excited to share the first round of visual exploration for three of our core utility apps: clock, weather and calendar (calculator and an updated notepad soon too, I promise!).

It is important to note that this is still an early stage in the design process.

Our aim was to extend the look and feel we have already established in the phone demo – the style we call Suru – to the family of apps.

What we’ve concentrated on (while the functionality is being prototyped from our wireframes) is how to develop the look and feel of the apps. The accuracy of the information on the screens is yet to be worked out. We’ll work on the layout of the information once we’ve started to focus on a single app.

These last few weeks we have looked at developing a mood board and colour scheme for the utility apps. For example, we’ve used a combination of gradients and photo-manipulation to showcase different temperatures from locations in the weather app. We have now started to think about how this gradient could influence the look of the other apps: maintaining individual styles while still feeling like part of a family of apps, defined by the “rituals” metaphor.

Following our design vision, we aim to focus on the essential information in each view, with a minimal, sophisticated feel. You’ll notice from these images that we’ve tried to be as clean as possible.

In the next couple of weeks we are going to concentrate on refining each concept. Even though we have a direction for layering, materials and textures, they all still need a bit of love. Enjoy ;)

 

Read more