Canonical Voices

(This article was originally posted on design.canonical.com)

On release day we can get up to 8,000 requests a second to ubuntu.com from people trying to download the new release. In fact, last October (13.10) was the first release day in a long time that the site didn't crash under the load at some point during the day (huge credit to the infrastructure team).

Ubuntu.com has been running on Drupal, but we've been gradually migrating it to a more bespoke Django based system. In March we started work on migrating the download section in time for the release of Trusty Tahr. This was a prime opportunity to look for ways to reduce some of the load on the servers.

Choosing geolocated download mirrors is hard work for an application

When someone downloads Ubuntu from ubuntu.com (on a thank-you page), they are actually sent to one of the 300 or so mirror sites near them.

To pick a mirror for the user, the application has to:

  1. Decide from the client's IP address what country they're in
  2. Get the list of mirrors and find the ones that are in their country
  3. Randomly pick them a mirror, weighted so that mirrors with higher bandwidth receive more users

This process is by far the most intensive operation on the whole site, not because these tasks are particularly complicated in themselves, but because this needs to be done for each and every user - potentially 8,000 a second - while every other page on the site can be aggressively cached to prevent most requests from hitting the application itself.
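The weighted random selection in step 3 can be sketched as follows. This is an illustrative sketch only: the mirror URLs and bandwidth figures are invented, not the real mirror data.

```python
import random

# Hypothetical mirror list for one country: (URL, bandwidth in Mbit/s).
MIRRORS = [
    ("http://mirror-a.example.com/ubuntu/", 1000),
    ("http://mirror-b.example.com/ubuntu/", 100),
    ("http://mirror-c.example.com/ubuntu/", 10),
]

def pick_mirror(mirrors):
    """Pick a mirror at random, weighted by bandwidth, so that
    higher-bandwidth mirrors receive proportionally more downloads."""
    total = sum(bandwidth for _, bandwidth in mirrors)
    r = random.uniform(0, total)
    for url, bandwidth in mirrors:
        r -= bandwidth
        if r <= 0:
            return url
    return mirrors[-1][0]  # guard against floating-point edge cases
```

With these example weights, mirror-a would serve roughly 90% of downloads (1000 out of a total weight of 1110).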

For the site to be able to handle this load, we'd need to load-balance requests across perhaps 40 VMs.

Can everything be done client-side?

Our first thought was to embed the entire mirror list in the thank-you page and use JavaScript in the users' browsers to select an appropriate mirror. This would drastically reduce the load on the application, because the download page would then be effectively static and cache-able like every other page.

The only way to reliably get the user's location client-side is with the geolocation API, which is supported by only 85% of users' browsers. Another slight issue is that the user has to give permission before they can be assigned a mirror, which would slightly hinder their experience.

This solution would inconvenience users just a bit too much. So we found a trade-off:

A mixed solution - Apache geolocation

mod_geoip2 for Apache can apply server rules based on a user's location and is much faster than doing geolocation at the application level. This means that we can use Apache to send users to a country-specific version of the download page (e.g. the German desktop thank-you page) by adding ?country=DE to the end of the URL.
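The Apache side of this can be sketched with the GeoIP module's directives. This is an illustrative sketch only: the rewrite rule, paths and URL layout are assumptions, not ubuntu.com's actual configuration; the rule keys off the country-code environment variable that the module sets.

```apache
# Illustrative sketch, not ubuntu.com's actual configuration.
<IfModule mod_geoip.c>
    GeoIPEnable On
    GeoIPDBFile /usr/share/GeoIP/GeoIP.dat
</IfModule>

RewriteEngine On
# The module exposes the visitor's country as an environment variable;
# append it to the thank-you URL so a cached, country-specific page is served.
RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^(..)$
RewriteRule ^/download/desktop/thank-you$ /download/desktop/thank-you?country=%1 [PT,QSA]
```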

These country-specific pages contain the list of mirrors for that country, and each one can now be cached, vastly reducing the load on the server. Client-side JavaScript randomly selects a mirror for the user, weighted by the bandwidth of each mirror, and kicks off their download, without the need for client-side geolocation support.

This solution was successfully implemented shortly before the release of Trusty Tahr.

Read more
Robin Winslow

(This article was also posted on robinwinslow.co.uk)

Read more

Docker is a fantastic tool for running virtual images and managing light Linux containers extremely quickly.

One thing this has been very useful for in my job at Canonical is quickly running older versions of Ubuntu - for example to test how to install specific packages on Precise when I'm running Trusty.

Installing Docker

The simplest way to install Docker on Ubuntu is using the automatic script:

curl -sSL https://get.docker.io/ubuntu/ | sudo sh

You may then want to authorise your user to run Docker directly (as opposed to using sudo) by adding yourself to the docker group:

sudo gpasswd -a [YOUR-USERNAME] docker

You need to log out and back in again before this will take effect.

Spinning up an old version of Ubuntu

With Docker installed, you should be able to run it as follows. The example below is for Ubuntu Precise, but you can replace "precise" with any available Ubuntu version:

mkdir share  # Shared folder with docker image - optional
docker run -v `pwd`/share:/share -i -t ubuntu:precise /bin/bash  # Run ubuntu, with a shared folder
root@cba49fae35ce:/#  # We're in!

The -v `pwd`/share:/share part mounts the local ./share/ folder at /share/ within the Docker instance, for easily sharing files with the host OS. Setting this up is optional, but might well be useful.

There are some important things to note:

  • This is a very stripped-down operating system. You are logged in as the root user, your home directory is the filesystem root (/), and very few packages are installed. Almost always, the first thing you'll want to run is apt-get update. You'll then almost certainly need to install a few packages before this instance will be of any use.
  • Every time you run the above command it will spin up a new instance of the Ubuntu image from scratch. If you log out, retrieving your current instance in that same state is complicated. So don't log out until you're done. Or learn about managing Docker containers.
  • In some cases, Docker will be unable to resolve DNS correctly, meaning that apt-get update will fail. In this case, follow the guide to fix DNS.

Read more

Fix Docker's DNS

Docker is really useful for a great many things - including, but not limited to, quickly testing older versions of Ubuntu. If you've not used it before, why not try out the online demo?

Networking issues

Sometimes Docker is unable to use the host OS's DNS resolver, resulting in a DNS resolution error within your Docker container:

$ sudo docker run -i -t ubuntu /bin/bash  # Start a docker container
root@0cca56c41dfe:/# apt-get update  # Try to update apt from within the container
Err http://archive.ubuntu.com precise Release.gpg
Temporary failure resolving 'archive.ubuntu.com'  # DNS resolve failure
..
W: Some index files failed to download. They have been ignored, or old ones used instead.

How to fix it

We can fix this by explicitly telling Docker to use Google's public DNS server (8.8.8.8).

However, within some networks (for example, Canonical's London office) all public DNS will be blocked, so we should find and explicitly add the network's DNS server as a backup as well:

Get the address of your current DNS server

From the host OS, check the address of the DNS server you're using locally with nm-tool, e.g.:

$ nm-tool
...
  IPv4 Settings:
    Address:         192.168.100.154
    Prefix:          21 (255.255.248.0)
    Gateway:         192.168.100.101

    DNS:             192.168.100.101  # This is my DNS server address
...

Add your DNS server as a 2nd DNS server for Docker

Now open up the Docker config file at /etc/default/docker, and update or replace the DOCKER_OPTS setting to add Google's DNS server first, but yours as a backup: --dns 8.8.8.8 --dns [YOUR-DNS-SERVER]. E.g.:

# /etc/default/docker
# ...
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 8.8.8.8 --dns 192.168.100.101"
# Google's DNS first ^, and ours ^ second

Restart Docker

sudo service docker restart

Success?

Hopefully, all should now be well:

$ sudo docker run -i -t ubuntu /bin/bash  # Start a docker container
root@0cca56c41dfe:/# apt-get update  # Try to Update apt from within the container
Get:1 http://archive.ubuntu.com precise Release.gpg [198 B]  # DNS resolves properly
...

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140826 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:
- http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel remains based on the v3.16.1 upstream stable kernel
and is available for testing in the archive, ie. linux-3.16.0-11.16.
Please test and let us know your results.
-----
Important upcoming dates:
Thurs Aug 28 – Utopic Beta 1 (~2 days away)
Mon Sep 22 – Utopic Final Beta Freeze (~4 weeks away)
Thurs Sep 25 – Utopic Final Beta (~4 weeks away)
Thurs Oct 9 – Utopic Kernel Freeze (~6 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~7 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~8 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Aug. 26):

  • Lucid – verification & testing
  • Precise – verification & testing
  • Trusty – verification & testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    cycle: 29-Aug through 20-Sep
    ====================================================================
    29-Aug Last day for kernel commits for this cycle
    31-Aug – 06-Sep Kernel prep week.
    07-Sep – 13-Sep Bug verification & Regression testing.
    14-Sep – 20-Sep Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more

If you glance up to the address bar, you will see that this post is being served securely. I've done this because I believe strongly in the importance of internet privacy, and I support the Reset The Net campaign to encrypt the web.

I've done this completely for free. Here's how:

Get a free certificate

StartSSL isn't the nicest website in the world to use. However, they will give you a free certificate without too much hassle. Click "Sign up" and follow the instructions.

Get an OpenShift Bronze account

Sign up to a RedHat OpenShift Bronze account. Although this account is free to use, as long as you only use 1-3 gears, it does require you to provide card details.

Once you have an account, create a new application. On the application screen, open the list of domain aliases by clicking on the aliases link (might say "change"):

Application page - click on aliases

Edit your selected domain name and upload the certificate, chain file and private key. NB: Make sure you upload the chain file. If the chain file isn't uploaded initially it may not register later on.

Pushing your site

Now you can push any website to the created application and it should be securely hosted.

Given that you only get 1-3 gears for free, a static site is more likely to handle high load. For instance, this site gets about 250 visitors a day and runs perfectly fine on the free resources from OpenShift.

Read more
Nicholas Skaggs

As we continue to iterate on new Ubuntu Touch images, it's important for everyone to be able to enjoy the Ubuntu phone experience in their native language. This is where you can help!

We need your input and help to make sure the phone images are well localized for your native language. If you've never contributed a translation before, this is a perfect opportunity for you to learn. There's a wiki guide to help you, along with translation teams who speak your language and can help.

Don't worry, you don't need an Ubuntu phone to do this work. The wiki guide details how to translate using a phone, an emulator, or even just your desktop PC running Ubuntu. If nothing else, you can help review other folks' translations by simply using Launchpad in your web browser.

If this sounds interesting to you and the links don't make sense or you would like some more personal help, feel free to contact me. English is preferred, but in the spirit of translation feel free to contact me in French, Spanish or perhaps even German :-).

Happy Translating everyone!

P.S. If you are curious about the status of your language translation, or looking for known missing strings, have a look at the stats page kept by David Planella.

Read more
facundo

Gardener


Yes, I know I'm laying it on thick with the little photos of Malena... but what do you want me to do???

<baba>
Malena the gardener
</baba>

Read more
Ben Howard

For years, the Ubuntu Cloud Images have been built on a timer (i.e. a cronjob or Jenkins). You can reasonably expect stable and LTS releases to be built twice a week, while our development build is built once a day. Each of these builds is given a serial in the form YYYYMMDD.

While time-based building has proven to be reliable, different build serials may be functionally the same, just put together at a different point in time. Many of the builds that we do for stable and LTS releases are pointless.

When the whole Heartbleed fiasco hit, it put the Cloud Image team into overdrive, since it required manually triggering builds of the LTS releases. When we manually trigger builds, it takes roughly 12-16 hours to build, QA, test and release new Cloud Images. Sure, most of this is automated, but the process had to be manually started by a human. This got me thinking: there has to be a better way.

What if we build the Cloud Images when the package set changes?

With that, I changed the Ubuntu 14.10 (Utopic Unicorn) build process from time-based to archive trigger-based. Now, instead of building every day at 00:30 UTC, a build starts when the archive has been updated and the packages in the prior cloud image build are older than the archive versions. In the last three days, there were eight builds for Utopic. For a development version of Ubuntu, this just means that developers don't have to wait 24 hours for the latest package changes to land in a Cloud Image.
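The trigger condition boils down to comparing the package set recorded for the last image against what's currently in the archive. A minimal sketch, where the manifest format and function name are hypothetical, not the actual Cloud Image build tooling:

```python
# Sketch of an archive-trigger check; the dict-shaped manifests are an
# assumption, not the real build system's data model.

def needs_rebuild(image_manifest, archive_versions):
    """Return True if any package baked into the last image differs from
    the version now published in the archive (simple inequality stands in
    for a proper Debian version comparison)."""
    return any(
        archive_versions.get(pkg, version) != version
        for pkg, version in image_manifest.items()
    )
```

A timer-based build runs unconditionally; a check like this lets the builder skip serials that would be functionally identical to the previous one.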

Over the next few weeks, I will be moving the 10.04 LTS, 12.04 LTS and 14.04 LTS build processes from time-based to archive trigger-based. While this might result in less frequent daily builds, the main advantage is that the daily builds will contain the latest package sets. And if you are trying to respond to the latest CVE, or waiting on a bug fix to land, it likely means that you'll have a fresh daily that you can use the following day.

Read more
Dustin Kirkland


Docker 1.0.1 is available for testing, in Ubuntu 14.04 LTS!

Docker 1.0.1 has landed in the trusty-proposed archive, which we hope to SRU to trusty-updates very soon. We would love to have your testing feedback, to ensure both upgrades from Docker 0.9.1, as well as new installs of Docker 1.0.1, behave well, and are of the highest quality you have come to expect from Ubuntu's LTS (Long Term Support) releases! Please file any bugs or issues here.

Moreover, this new version of the Docker package now installs the Docker binary to /usr/bin/docker, rather than /usr/bin/docker.io as in previous versions. This should help Ubuntu's Docker package more closely match the wealth of documentation and examples available from our friends upstream.

A big thanks to Paul Tagliamonte, James Page, Nick Stinemates, Tianon Gravi, and Ryan Harper for their help upstream in Debian and in Ubuntu to get this package updated in Trusty!  Also, it's probably worth mentioning that we're targeting Docker 1.1.2 (or perhaps 1.2.0) for Ubuntu 14.10 (Utopic), which will release on October 23, 2014.

Here are a few commands that might help your testing...

Check What Candidate Versions are Available

$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

If that shows 0.9.1~dfsg1-2 (as it should), then you need to enable the trusty-proposed pocket.

$ echo "deb http://archive.ubuntu.com/ubuntu/ trusty-proposed universe" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

And now you should see the new version, 1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1, available (probably in addition to 0.9.1~dfsg1-2).

Upgrades

Check if you already have Docker installed, using:

$ dpkg -l docker.io

If so, you can simply upgrade.

$ sudo apt-get upgrade

And now, you can check your Docker version:

$ sudo dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

New Installations

You can simply install the new package with:

$ sudo apt-get install docker.io

And ensure that you're on the latest version with:

$ dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

Running Docker

If you're already a Docker user, you probably don't need these instructions.  But in case you're reading this, and trying Docker for the first time, here's the briefest of quick start guides :-)

$ sudo docker pull ubuntu
$ sudo docker run -i -t ubuntu /bin/bash

And now you're running a bash shell inside of an Ubuntu Docker container.  And only bash!

root@1728ffd1d47b:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 13:42 ? 00:00:00 /bin/bash
root 8 1 0 13:43 ? 00:00:00 ps -ef

If you want to do something more interesting in Docker, well, that's whole other post ;-)

:-Dustin

Read more
Prakash Advani

2 billion containers at Google each week!

That tech is called Linux Containerization, and is the latest in a long line of innovations meant to make it easier to package up applications and sling them around data centers. It’s not a new approach – see Solaris Zones, BSD Jails, Parallels, and so on – but Google has managed to popularize it enough that a small cottage industry is forming around it.

Read More: http://www.stumbleupon.com/su/2S0zFm/YAbnLcdp:9a2Vn45F/www.theregister.co.uk/2014/05/23/google_containerization_two_billion/

Read more
Michael Hall

Recognition is like money: it only really has value when it's being passed between one person and another. Otherwise it's just potential value, sitting idle. Communication gives life to recognition, turning its potential value into real value.

As I covered in my previous post, Who do you contribute to?, recognition doesn’t have a constant value.  In that article I illustrated how the value of recognition differs depending on who it’s coming from, but that’s not the whole story.  The value of recognition also differs depending on the medium of communication.

Over at the Community Leadership Knowledge Base I started documenting different forms of communication that a community might choose, and how each medium has a balance of three basic properties: Speed, Thoughtfulness and Discoverability. Let's call this the communication triangle. Each of these also plays a part in the value of recognition.

Speed

Again, much like money, recognition is something that is circulated. Its usefulness is not simply created by the sender and consumed by the receiver, but rather passed from one person to another, and then another. The faster you can communicate recognition around your community, the more utility you can get out of even a small amount of it. Fast communications, like IRC, phone calls or in-person meetups, let you give and receive a higher volume of recognition than slower forms, like email or blog posts. But speed is only one part, and faster isn't necessarily better.

Thoughtfulness

Where speed emphasizes quantity, thoughtfulness is a measure of the quality of communication, and that directly affects the value of recognition given. Thoughtful communications require consideration upon both receiving and replying. Messages are typically longer, more detailed, and better presented than those that emphasize speed. As a result, they are also usually a good bit slower, both in the time it takes for a reply to be made, and in the speed at which a full conversation happens. An IRC meeting can be done in an hour, whereas an email exchange can last for weeks, even if both end up with the same word count at the end.

Discoverability

The third point on our communication triangle, discoverability, is a measure of how likely it is that somebody not immediately involved in a conversation can find out about it. Because recognition is a social good, most of its value comes from other people knowing who has given it to whom. Discoverability acts as a multiplier (or divisor, if done poorly) to the original value of recognition.

There are two factors to the discoverability of communication. The first, accessibility, is about how hard it is to find the conversation. Blog posts, or social media posts, are usually very easy to discover, while IRC chats and email exchanges are not. The second factor, longevity, is about how far into the future that conversation can still be discovered. A social media post disappears (or at least becomes far less accessible) after a while, but an IRC log or mailing list archive can stick around for years. Unlike the three properties of communication, however, these factors to discoverability do not require a trade off, you can have something that is both very accessible and has high longevity.

Finding Balance

Most communities will have more than one method of communication, and a healthy one will have a combination of them that complement each other. This is important because sometimes one will offer a more productive use of your recognition than another. Some contributors will respond better to lots of immediate recognition, rather than a single eloquent one. Others will respond better to formal recognition than informal. In both cases, be mindful of the multiplier effect that discoverability gives you, and take full advantage of opportunities where that plays a larger than usual role, such as during an official meeting or when writing an article that will have higher than normal readership.

Read more
Prakash Advani

Zen story on religion

This also explains why meditation (truth) got converted into religion. Why does conflict arise from religion? Was religion created for vested interests?

One day, Mara, the Evil One, was going through a village with his attendant when he saw a man doing walking meditation whose face was lit up in wonder. The man had just discovered something lying in front of him. Mara’s attendant asked what that was and Mara replied, “A piece of truth.” His attendant thought a while and asked, “Does it not bother you when someone finds a piece of truth, O Evil One?” Unfazed, Mara replied, “No, because right after that, they usually make a religion out of it.”

Read More: http://articles.economictimes.indiatimes.com/2014-08-19/news/52983337_1_mara-attendant-evil

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140819 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to the first v3.16.1 upstream stable
kernel and uploaded to the archive, ie. linux-3.16.0-9.14. Please test
and let us know your results.
-----
Important upcoming dates:
Thurs Aug 21 – Utopic Feature Freeze (~2 days away)
Mon Sep 22 – Utopic Final Beta Freeze (~5 weeks away)
Thurs Sep 25 – Utopic Final Beta (~5 weeks away)
Thurs Oct 9 – Utopic Kernel Freeze (~7 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~8 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~9 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Aug. 19):

  • Lucid – verification & testing
  • Precise – verification & testing
  • Trusty – verification & testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    cycle: 29-Aug through 20-Sep
    ====================================================================
    29-Aug Last day for kernel commits for this cycle
    31-Aug – 06-Sep Kernel prep week.
    07-Sep – 13-Sep Bug verification & Regression testing.
    14-Sep – 20-Sep Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more

The short version is that if you want to enable middle-click scrolling for Lenovo clickpads in Ubuntu, do this in a terminal:

sudo add-apt-repository ppa:bjornt/evdev
sudo apt-get update
sudo apt-get dist-upgrade

The commands above should upgrade the xserver-xorg-input-evdev package, as well as remove the xserver-xorg-input-synaptics and xserver-xorg-input-all packages.

Next you need to create a file at /usr/share/X11/xorg.conf.d/90-clickpad.conf with the following contents:

Section "InputClass"
    Identifier "Clickpad"
    MatchIsTouchpad "on"
    MatchDevicePath "/dev/input/event*"
    Driver "evdev"
    # Synaptics options come here.
    Option "Clickpad" "true"
    Option "EmulatedMidButtonTime" "0"
    Option "SoftButtonAreas" "65% 0 0 40% 42% 65% 0 40%"
    Option "AreaBottomEdge" "0%"
EndSection

Section "InputClass"
    Identifier   "TrackPoint"
    MatchProduct "TrackPoint"
    MatchDriver  "evdev"
    Option       "EmulateWheel"       "1"
    Option       "EmulateWheelButton" "2"
    Option       "XAxisMapping"       "6 7"
EndSection

The interesting options are SoftButtonAreas and AreaBottomEdge. SoftButtonAreas specifies where the buttons should be. If you want the buttons at the top, it should generally be in the form "R 0 0 H L R 0 H", where R is the border between the middle and right buttons, H is the height of the buttons, and L is the border between the middle and left buttons. With the values above (R = 65%, H = 40%, L = 42%) that gives "65% 0 0 40% 42% 65% 0 40%".

The AreaBottomEdge option turns off the touchpad, except for clicking. If you want to keep using the touchpad, you can instead specify AreaTopEdge, with the same value you use for H. That would enable the touchpad below the buttons.

Unfortunately, you can't specify where the left button should be; instead it occupies everything that isn't the middle or right button. This is a bit annoying, since I at least tend to touch the touchpad with my palm when reaching for the middle button, which results in a left click being registered instead of a middle click.

I created this package because Ubuntu doesn't quite support the clickpads that come in the newer Lenovo laptops. Ubuntu does support clickpads, and with the SoftButtonAreas config setting it's possible to have three soft buttons on the clickpad where the real buttons used to be. However, what's not supported out of the box is middle-click scrolling, where you hold the middle button and scroll with the trackpoint.

The main problem is that the clickpad is driven by synaptics and the trackpoint by evdev, and they can't communicate to generate the scroll events. Bae Taegil patched the evdev driver to basically include the synaptics driver. I've taken that patch and generated a package for Ubuntu 14.04. I've only added a package for Trusty, but I could add packages for other releases if needed. I will most likely add one for Utopic, when it becomes more stable.

Read more
facundo

Meat!


My family is quite carnivorous (as tends to happen with Argentinians in general). When I lived with my parents I had no real say in how, where or when the meat was bought (although there is the anecdote of me, as a kid, following a recipe and going to the neighbourhood butcher's to buy "a kilo of veal", with the butcher asking me "but which cut?").

When I flew the nest, one of the things I had to decide and take charge of was, obviously: where to buy the meat?

It's not a simple question. Well, the question is simple; what's not simple is the answer. To keep it short, I'll only describe my latest experience.

Many people like to build a "special relationship" with the neighbourhood butcher (I couldn't find a way to write that without it sounding a bit pornographic). That way they always try to get a nice cut, better service, and so on. My problem is that in the area where I live meat is expensive. That sent me hunting for prices here and there, and I eventually ended up at Chalín, a meat wholesaler my dad discovered over by Mataderos.

Mataderos is far from home, though. For a long time Chalín did home delivery, so they would bring the order to my house (in which case I couldn't care less how far away it was). For a while now, however, they no longer offer that service, but you can place an order and pick it up. Although you can't choose the exact piece of meat (that particular tira de asado, that little piece of vacío), picking up a pre-placed order has the advantage that you don't have to queue.

And not queuing matters! Chalín draws a crowd. Normally you're in for a two or three hour wait. But once I went on a busy date (just before the end-of-year holidays), arrived at 7:50 in the morning, and found 270 people ahead of me!! I ended up leaving at 12:40 :/

Now, does it make sense to go all the way over there for the meat? From Olivos to Mataderos is a respectable distance, and although it's not that long time-wise, there's the cost of fuel and so on. Once, at the beginning of the year, I compared prices and worked out that, for the purchase I had made, at Chalín I spent $998, while at the neighbourhood butcher's I would have spent $1309. Yes, a difference of more than 300 pesos.

Anyway. The point is that I've been buying from Chalín for a long time now: for everyday home consumption, for birthday asados, for my geek asados. I even have a price history going back four and a half years, which I'm sharing here.

Meat

I keep looking at alternatives, though, in case it stops being the best option. The other day an acquaintance whose culinary judgement I greatly respect recommended Frigorífico Las Heras, which does home delivery, and I decided to try them.

They offer several boxes, pre-assembled packages of cuts. For example, I bought the "Familiar", which came with "milanesa cuts, mince, and peceto" (according to what they told me over the phone). When the box arrived, I saw it contained 3.5kg of mince; the milanesa cuts were 1.5kg of nalga, 1.6kg of cuadrada and 1.3kg of bola de lomo; and finally the peceto, 1.4kg.

The main advantages of this place are that they bring the order to your door, and that it arrives vacuum-packed, very convenient to handle and put in the freezer.

But although the meat is slightly better than Chalín's, they charged me $70 per kilo (whereas at Chalín, for those quantities of those cuts, I would have averaged $51 per kg). And there's no difference in quality that justifies that difference in price.

Yes, I have to go and fetch it from Chalín. But I get to buy the cuts I want! Finally, one non-trivial detail: Chalín *always* gave me a valid AFIP receipt, while Frigorífico Las Heras brought no receipt at all.

Read more
niemeyer

Announcing qml v1 for Go

After a few weeks of slow progress on the qml package for Go, action is starting again.

The first important change is the release of the new v1 API, which is motivated by conversations and debugging sessions we’ve had at GopherCon back in April. The problem being solved is that Mac OS requires its graphic activities to be held in the first thread in the running process, and that’s incompatible with the API provided by the qml package in v0.

In practice, code in v0 looked like this:

func main() {
        qml.Init(nil)
        engine := qml.NewEngine()
        ...
}

This interface implies that logic must continue running in the initial goroutine after the GUI initialization is performed by qml.Init. Internally, qml.Init spawns a goroutine which takes over a process thread to hold the main GUI loop. This works fine on most OSes, and used to work okay on Mac OS too, probably by chance (the thread locked may happen to be the first one), but recently it started to fail more consistently, which allowed the problem to be clearly spotted and solved.

The solution requires a minor incompatible change. In v1, the initialization procedure takes the following form:

func main() {
        err := qml.Run(run)
        ...
}

func run() error {
        engine := qml.NewEngine()
        ...
}

This subtle change allows the qml package to deterministically take over the initial process thread for running the GUI loop, and is all that needs to change for an application to be initialized under the new API.
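The thread-pinning idea behind qml.Run can be illustrated with the standard library alone. The sketch below is a simplified analogy, not the actual qml implementation: the initial goroutine locks itself to its OS thread (so a GUI loop could own the first thread of the process, as Mac OS requires) while the application logic runs in a separate goroutine, mirroring how the run function is handed off in the v1 API. The function name and message are made up for illustration.

```go
package main

import (
	"fmt"
	"runtime"
)

// runOffMainThread mimics, very roughly, what qml.Run arranges: the
// initial goroutine pins itself to its OS thread, and the application
// logic runs elsewhere, free of the thread constraint.
func runOffMainThread() string {
	runtime.LockOSThread() // pin this goroutine to the current OS thread
	defer runtime.UnlockOSThread()

	done := make(chan string)
	go func() {
		// Application logic (the "run" function) lives here.
		done <- "app logic ran in its own goroutine"
	}()
	// In the real package the GUI event loop would run here on the
	// locked thread; we just wait for the app goroutine to finish.
	return <-done
}

func main() {
	fmt.Println(runOffMainThread())
}
```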

This issue also affects testing, but qml.Run isn’t suitable in those cases because the “go test” machinery doesn’t provide an appropriate place for calling it. As a good workaround, the GUI loop may be set up from an init function within any *_test.go file with:

func init() { qml.SetupTesting() }

With that in place, all tests can assume the GUI loop is running and may then use the qml package functionality arbitrarily.

Besides these changes related to initialization, the v1 API will also hold a new GL interface that is still being worked on. The intention was to have that interface ready by the announcement date, but developing the new API in an isolated branch is hindering collaboration on the new package, and is also unfortunate for people that depend on the Mac OS fixes, so v1 is being released with an unstable gl package under the path "gopkg.in/qml.v1/work-in-progress/gl". This package will suffer significant changes in the coming weeks, so it’s best to avoid publishing any applications that depend on it before it is moved to the final import path.

The package source code, documentation, and import path are referenced from gopkg.in, and to talk about any of these changes please join the mailing list.

Thanks to Andrew Gerrand for the help debugging and testing the new initialization procedure.

Read more
Joseph Salisbury

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140812 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to v3.16 final and uploaded to the
archive, i.e. linux-3.16.0-7.12. Please test and let us know your
results.
—–
Important upcoming dates:
Thurs Aug 21 – Utopic Feature Freeze (~1 week away)
Thurs Sep 25 – Utopic Final Beta (~6 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Schedule:

cycle: 08-Aug through 29-Aug
====================================================================
08-Aug Last day for kernel commits for this cycle
10-Aug – 16-Aug Kernel prep week.
17-Aug – 23-Aug Bug verification & Regression testing.
24-Aug – 29-Aug Regression testing & Release to -updates.

cycle: 29-Aug through 20-Sep
====================================================================
29-Aug Last day for kernel commits for this cycle
31-Aug – 06-Sep Kernel prep week.
07-Sep – 13-Sep Bug verification & Regression testing.
14-Sep – 20-Sep Regression testing & Release to -updates.

Status for the main kernels, until today (Aug. 12):

  • Lucid – Kernels being prep’d
  • Precise – Kernels being prep’d
  • Trusty – Kernels being prep’d

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Read more
Dustin Kirkland


If you're interested in learning how to more effectively use your terminal as your integrated devops environment, consider taking 10 minutes and watching this video while enjoying the finale of Mozart's Symphony No. 40, the Allegro assai (part of which is rumored to have inspired Beethoven's 5th).

I'm often asked for a quick-start guide to using Byobu effectively.  This wiki page is a decent start, as is the manpage, and the various links on the upstream website.  But it seems that some of the past screencast videos have left the longest-lasting impressions on Byobu users over the years.

I was on a long, international flight from Munich to Newark this past Saturday with a bit of time on my hands, and I cobbled together this instructional video. That recent international trip to Nuremberg inspired me to rediscover Mozart, and I particularly like this piece, which Mozart wrote in 1788, but sadly never heard performed. You can hear it now, and learn how to be more efficient in command line environments along the way :-)


Enjoy!
:-Dustin

Read more
Iain Farrell

Verónica Sousa's Cul de sac


Ubuntu was once described to me by a wise(ish ;) ) man as a train that leaves whether you’re on it or not. That’s the beauty of a 6-month release cycle. As many of you will already know, each release we include photos and illustrations produced by community members. We ask that you submit your images using the free photo-sharing site Flickr, and that you limit yourself to 2 images this time. The group won’t let you submit more than that, but if you change your mind after you’ve submitted, fear not: simply remove one and it’ll let you add another.

As with previous submission processes we’ve run, we’ve worked with the designers at Canonical to come up with the following tips for creating wallpaper images.

  1. Images shouldn’t be too busy and filled with too many shapes and colours, a similar tone throughout is a good rule of thumb.
  2. A single point of focus, a single area that draws the eye into the image, can also help you avoid something too cluttered.
  3. The left and top edges are home to Ubuntu’s Launcher and Panel so be careful to consider how your images look in place so as not to clash with the user interface. Try them out on your own desktop, see how they feel.
  4. Try your image at different aspect ratios to make sure something important isn’t cropped out on smaller/larger screens at different resolutions.
  5. Take a look at the wallpapers guidance on the Ubuntu Wiki regarding the size of images. Our target resolution is 2560 x 1600.
  6. Break all the rules except the resolution one! :D

To shortlist from this collection we’ll be going to the contributors whose images were selected last time around to act as our selection judges. In doing this we’ll hold a series of public IRC meetings on Freenode in #1410wallpaper to discuss the selection. In those sessions we’ll get the selection team to try out the images on their own Ubuntu machines to see what they look like on a range of displays and resolutions.

Anyone is welcome to come to these sessions, but please keep in mind that the volunteers need to reach an outcome in the time they’re giving, and there are usually a lot of images to get through, so we’d appreciate it if there isn’t too much additional debate.

Based on the Utopic release schedule, I think our schedule for this cycle should look like this:

  • 08/08/14 – Kick off 14.10 wallpaper submission process.
  • 22/08/14 – First get together on #1410wallpaper at 19:30 GMT.
  • 29/08/14 – Submissions deadline at 18:00 GMT – Flickr group will be locked and the selection process will begin.
  • 09/09/14 – Deliver final selection in zip format to the appropriate bug on Launchpad.
  • 11/09/14 – UI freeze for latest version of Ubuntu with our fantastic images in it!

As always, ping me if you have any questions; I’ll be lurking in #1410wallpaper on Freenode. Or leave a question in the Flickr group for wider discussion: that’s probably the fastest way to get an answer.

I’ll be posting updates on our schedule here from time to time but the Flickr group will serve as our hub.

Happy snapping and scribbling and on behalf of the community, thanks for contributing to Ubuntu! :)


Read more