Canonical Voices

Posts tagged with 'ubuntu'

pitti

I’m on my way home from Düsseldorf, where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised by how huge LinuxCon was: about 1,500 people attended, certainly many more than last year in New Orleans.

Containers (in both LXC and Docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where they weren’t mentioned, and what felt like half of the talks were about either improving these technologies or using them to solve problems. Some people and companies really take LXC to the max and try to do everything in containers, including tasks which in the past you would only have considered full VMs for, like untrusted third-party tenants. There was an interesting talk on how to secure networking for containers, and pretty much everyone now uses Docker or LXC to deploy workloads and run CI tests. There are projects like “fleet”, which manages systemd jobs across an entire cluster of containers (a distributed task scheduler), or project-builder.org, which auto-builds packages from each commit of a project.

Another common topic is the trend towards building/shipping complete (read-only) system images, atomic updates, and all that goodness. The central thing here was certainly “Stateless systems, factory reset, and golden images”, which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go: our current solution on Ubuntu Touch (i. e. Ubuntu’s system-image) is still far too limited and static, and it doesn’t extend to desktops/servers/cloud workloads at all. It’s also a lot of work to implement this properly, so it’s understandable that we took that shortcut for prototyping and for the relatively limited Touch phone environment.

At Plumbers my main occupations were the highly interesting LXC track, to see what’s coming in the container world, and the systemd hackfest. At the latter I was again mostly listening (after all, I’m still learning most of the internals there) and was able to work on some cleanups and improvements, like getting rid of some of Debian’s patches and properly running the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some proposed new features. It looks like I’m the de-facto maintainer now, so I’ll need to spend some time soon to review, include, and clean up some much-requested little features and fixes.

All in all, it was a great week for meeting some fellows of the FOSS world again, getting to know a lot of new and interesting people and projects, and re-learning to drink beer in the evening (I hardly drink any at home :-P).

If you are interested you can also see my raw notes, but beware that they are mostly just scribbles.

Now, off to next week’s Canonical meeting in Washington, DC!

Read more
Michael Hall

So it’s finally happened: one of my first Ubuntu SDK apps has reached an official 1.0 release. And I think we all know what that means. Yup, it’s time to scrap the code and start over.

It’s a well-established mantra in software development, codified by Fred Brooks, that you will end up throwing away your first attempt at a new project. The releases between 0.1 and 0.9 are a written history of your education about the problem, the tools, and the language you are learning. And learn I did; I wrote a whole series of posts about my adventures in writing uReadIt. Now it’s time to put all of that learning to good use.

Oftentimes projects still spend an extremely long time in this 0.x stage, getting ever closer but never reaching that 1.0 release. This isn’t because they think 1.0 should wait until the codebase is perfect; I don’t think anybody expects 1.0 to be perfect. 1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.

So now I’m going to start on uReadIt 2.0, starting fresh, with the latest Ubuntu UI Toolkit and platform APIs. It won’t be just a feature-for-feature rewrite either, I plan to make this a great Reddit client for both the phone and desktop user. To that end, I plan to add the following:

  • A full JavaScript library for interacting with the Reddit API
  • User account support, which additionally will allow:
    • Posting articles & comments
    • Reading messages in your inbox
    • Upvoting and downvoting articles and comments
  • Convergence from the start, so it’s usable on the desktop as well
  • Re-introduce link sharing via Content-Hub
  • Take advantage of new features in the UITK such as UbuntuListView filtering & pull-to-refresh, and left/right swipe gestures on ListItems

Another change, which I talked about in a previous post, will be to the license of the application. Where uReadIt 1.0 is GPLv3, the next release will be under a BSD license.

Read more
pitti

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4,000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static, manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim "The Tool Man" Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.
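
For completeness, setting up such a controller instance is only a handful of steps. This is a rough sketch; the novarc file name is just an example of however you normally load your $OS_* credentials, and nova flavor-list is there merely as a quick sanity check:

  $ sudo apt-get install autopkgtest python-novaclient   # autopkgtest >= 3.5.5, or run it from a git checkout
  $ . ~/novarc                                            # export the usual $OS_* OpenStack variables
  $ nova flavor-list                                      # quick sanity check that the credentials work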

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored the entire knowledge of how to turn a pristine "default" vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: essentially, you pick a suitable "pristine" image, nova boot an instance from it, run adt-setup-vm in it through ssh, then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter it takes is the name of the pristine image to base it on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.
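
The sequence the script automates looks roughly like this (a sketch only; the instance and key names are illustrative, and your cloud may need additional network or keypair options on nova boot):

  $ nova boot --image "$PRISTINE_IMAGE" --flavor m1.small --key-name mykey adt-image-builder
  $ ssh ubuntu@$BUILDER_IP sudo /usr/share/autopkgtest/adt-setup-vm
  $ nova image-create adt-image-builder adt-utopic-amd64
  $ nova delete adt-image-builder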

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.

Finally, let's run the packages from above using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, which are due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Read more
Michael Hall

Ubuntu Mauritius Community

But it isn’t perfect.  And that, in my opinion, is okay.  I’m not perfect, and neither are you, but you are still wonderful too.

I was asked, not too long ago, what I hated about the community. The truth, then and now, is that I don’t hate anything about it. There is a lot I don’t like about what happens, of course, but nothing that I hate. I make an effort to understand people, to “grok” them if I may borrow the word from Heinlein. When you understand somebody, or in this case a community of somebodies, you understand the whole of them, the good and the bad. Now understanding the bad parts doesn’t make them any less bad, but it does provide opportunities for correcting or removing them that you don’t get otherwise.

You reap what you sow

People will usually respond in kind to the way they are treated. I try to treat everybody I interact with respectfully, kindly, and rationally, and I’ve found that I am treated that way back. But if somebody is prone to arrogance or cruelty or passion, they will find far more of that treatment given back to them than the positive kinds. They are quite often shocked when this happens. But when you are a source of negativity you drive away people who are looking for something positive, and attract people who are looking for something negative. It’s not absolute: nice people will have some unhappy followers, and grumpy people will have some delightful ones, but on average you will be surrounded by people who behave like you.

Don’t get even, get better

An eye for an eye makes the whole world blind, as the old saying goes. When somebody is rude or disrespectful to us, it’s easy to give in to the desire to be rude and disrespectful back. When somebody calls us out on something, especially in public, we want to call them out on their own problems to show everybody that they are just as bad. This might feel good in the short term, but it causes long term harm to both the person who does it and the community they are a part of. This ties into what I wrote above, because even if you aren’t naturally a negative person, if you respond to negativity with more of the same, you’ll ultimately share the same fate. Instead use that negativity as fuel to drive you forward in a positive way, respond with coolness, thoughtfulness and introspection and not only will you disarm the person who started it, you’ll attract far more of the kind of people and interactions that you want.

Know your audience

Your audience isn’t the person or people you are talking to. Your audience is the people who hear you. A common defense of Linus’ beratement of kernel contributors is that he only does it to people he knows can take it. This defense is almost always countered, quite properly, by somebody pointing out that his actions are seen by far more than just their intended recipient. Whenever you interact with any member of your community in a public space, such as a forum or mailing list, treat it as if you were interacting with every member, because you are. Again, if you perpetuate negativity in your community, you will foster negativity in your community, either directly in response to you or indirectly by driving away those who are more positive in nature. Linus’ actions might be seen as a joke, or necessary “tough love” to get the job done, but the LKML has a reputation of being inhospitable to potential contributors in no small part because of them. You can gather a large number of negative, or negativity-accepting, people into a community and get a lot of work done, but it’s easier and in my opinion better to have a large number of positive people doing it.

Monoculture is dangerous

I think all of us in the open source community know this, and most of us have said it at least once to somebody else. As noted security researcher Bruce Schneier says, "monoculture is bad; embrace diversity or die along with everyone else." But it’s not just dangerous for software and agriculture, it’s dangerous to communities too. Communities need, desperately need, diversity, and not just for the immediate benefits that various opinions and perspectives bring. Including minorities in your community will point out flaws you didn’t know existed, because they didn’t affect anyone else, but a distro-specific bug in upstream is still a bug, and a minority-specific flaw in your community is still a flaw. Communities that are almost all male, or white, or western, aren’t necessarily bad because of their monoculture, but they should certainly consider themselves vulnerable and deficient because of it. Bringing in diversity will strengthen it, and adding a minority contributor will ultimately benefit a project more than adding another to the majority. When somebody from a minority tells you there is a problem in your community that you didn’t see, don’t try to defend it by pointing out that it doesn’t affect you, but instead treat it like you would a normal bug report from somebody on different hardware than you.

Good people are human too

The appendix is a funny organ. Most of the time it’s just there, innocuous or maybe even slightly helpful. But every so often one happens to, for whatever reason, explode and try to kill the rest of the body. People in a community do this too.  I’ve seen a number of people that were good or even great contributors who, for whatever reason, had to explode and they threatened to take down anything they were a part of when it happened. But these people were no more malevolent than your appendix is, they aren’t bad, even if they do need to be removed in order to avoid lasting harm to the rest of the body. Sometimes, once whatever caused their eruption has passed, these people can come back to being a constructive part of your community.

Love the whole, not the parts

When you look at it, all of it, the open source community is a marvel of collaboration, of friendship and family. Yes, family. I know I’m not alone in feeling this way about people I may not have ever met in person. And just like family you love them during the good and the bad. There are some annoying, obnoxious people in our family. There are good people who are sometimes annoying and obnoxious. But neither of those truths changes the fact that we are still a part of an amazing, inspiring, wonderful community of open source contributors and enthusiasts.

Read more
Dustin Kirkland

A StackExchange question back in February of this year inspired a new feature in Byobu that I had been thinking about for quite some time:

Wouldn't it be nice to have a hot key in Byobu that would send a command to multiple splits (or windows)?
This feature was added and is available in Byobu 5.73 and newer (in Ubuntu 14.04 and newer, and available in the Byobu PPA for older Ubuntu releases).

I actually use this feature all the time, to update packages across multiple computers.  Of course, Landscape is a fantastic way to do this as well.  But if you don't have access to Landscape, you can always do this very simply with Byobu!

Create some splits, using Ctrl-F2 and Shift-F2, and in each split, ssh into a target Ubuntu (or Debian) machine.

Now, use Shift-F9 to open up the purple prompt at the bottom of your screen.  Here, you enter the command you want to run on each split.  First, you might want to run:

sudo true

This will prompt you for your password, if you don't already have root or sudo access.  You might need to use Shift-Up, Shift-Down, Shift-Left, Shift-Right to move around your splits, and enter passwords.

Now, update your package lists:

sudo apt-get update

And now, apply your updates:

sudo apt-get dist-upgrade

Here's a video to demonstrate!


In a related note, another user-requested feature has been added, to simultaneously synchronize this behavior among all splits.  You'll need the latest version of Byobu, 5.87, which will be in Ubuntu 14.10 (Utopic).  Here, you'll press Alt-F9 and just start typing!  Another demonstration video here...
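
If you ever want to toggle that behavior from a script rather than the key binding, the tmux backend (Byobu's default these days) exposes a similar switch directly. This is just the underlying tmux feature, not necessarily the exact mechanism Byobu binds to Alt-F9:

tmux set-window-option synchronize-panes on     # every keystroke now goes to all splits in the current window
tmux set-window-option synchronize-panes off    # back to normal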




Cheers,
Dustin

Read more
Ben Howard

Cloud Images and Bash Vulnerabilities

The Ubuntu Cloud Image team has been monitoring the bash vulnerabilities. Due to the scope, impact and high-profile nature of these vulnerabilities, we have published new images. New cloud images to address the latest bash USN-2364-1 [1, 8, 9] are being released with a build serial of 20140927. These images include code to address all prior CVEs, including CVE-2014-6271 [6] and CVE-2014-7169 [7], and supersede images published in the past week which addressed those CVEs.

Please note: Securing Ubuntu Cloud Images requires users to regularly apply updates[5]; using the latest Cloud Images is insufficient on its own.

Addressing the full scope of the Bash vulnerability has been an iterative process. The security team has worked with the upstream bash community to address multiple aspects of the bash issue. As these fixes have become available, the Cloud Image team has published daily builds[2]. New released images[3] have been made available at the request of the Ubuntu Security team.

Canonical has been in contact with our public Cloud Partners to make these new builds available as soon as possible.

Cloud image update timeline

Daily image builds are automatically triggered when new package versions become available in the public archives. New releases for Cloud Images are triggered automatically when a new kernel becomes available. The Cloud Image team will manually trigger new released images when requested by the Ubuntu Security team or when a significant defect requires it.

Please note: Securing Ubuntu cloud images requires that security updates be applied regularly [5]; using the latest available cloud image is not sufficient in itself. Cloud Images are built only after updated packages are made available in the public archives. Since it takes time to build the images, test/QA and finally promote them, there is time (sometimes considerable) between public availability of the package and updated Cloud Images. Users should consider this timing in their update strategy.
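
As a practical illustration of that advice, on any running instance you can apply pending updates and spot-check the original bash issue yourself. The one-liner below is the widely circulated check for CVE-2014-6271 only and is not a substitute for applying the updates:

$ sudo apt-get update && sudo apt-get dist-upgrade
$ env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'

A patched bash will not print "vulnerable"; an unpatched one will.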

[1] http://www.ubuntu.com/usn/usn-2364-1/
[2] http://cloud-images.ubuntu.com/daily/server/
[3] http://cloud-images.ubuntu.com/releases/
[4] https://help.ubuntu.com/community/Repositories/Ubuntu/
[5] https://wiki.ubuntu.com/Security/Upgrades/
[6] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-6271.html
[7] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7169.html
[8] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7187.html
[9] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7186.html

Read more
Nicholas Skaggs

Final Beta testing for Utopic

Can you believe final beta is here for Utopic already? Where has the summer gone? The milestone and images are already prepared for the final beta testing. This is the first round of image testing for Ubuntu this cycle. A final image will also be tested next month, but now is the time to try out the image on your system. Be sure to report any bugs you may find. This will help ensure there is time to fix them before the final release images are built.

To help make sure the final utopic image is in good shape, we need your help and test results! Please, head over to the milestone on the isotracker, select your favorite flavor and perform the needed tests against the images.

If you've never submitted test results for the iso tracker, check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the qa community or myself for help. Happy Testing!
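
If you prefer to fetch images from the command line, zsync keeps a local ISO up to date between respins and the published checksums let you verify the download. The URL below is only an example of the daily image layout; the tracker links to the exact builds under test for each flavor and architecture:

zsync http://cdimage.ubuntu.com/daily-live/current/utopic-desktop-amd64.iso.zsync
wget http://cdimage.ubuntu.com/daily-live/current/SHA256SUMS
grep utopic-desktop-amd64.iso SHA256SUMS | sha256sum -c -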

Read more
Michael Hall

Last week I attended FOSSETCON, a new open source convention here in central Florida, and I had the opportunity to give a couple of presentations on Ubuntu phones and app development. Anybody who knows me knows that I love talking about these things, but a lot fewer people know that doing it in front of a room of people I don’t know still makes me extremely nervous. I’m an introvert, and even though I have a public-facing job and work with the wider community all the time, I’m still an introvert.

I know there are a lot of other introverts out there who might find the idea of giving presentations to be overwhelming, but they don’t have to be.  Here I’m going to give my personal experiences and advice, in the hope that it’ll encourage some of you to step out of your comfort zones and share your knowledge and talent with the rest of us at meetups and conferences.

You will be bad at it…

Public speaking is like learning how to ride a bicycle, everybody falls their first time. Everybody falls a second time, and a third. You will fidget and stutter, you will lose your train of thought, your voice will sound funny. It’s not just you, everybody starts off being bad at it. Don’t let that stop you though, accept that you’ll have bruises and scrapes and keep getting back on that bike. Coincidentally, accepting that you’re going to be bad at the first ones makes it much less frightening going into them.

… until you are good at it

I read a lot of things about how to be a good and confident public speaker, the advice was all over the map, and a lot of it felt like pure BS.  I think a lot of people try different things and when they finally feel confident in speaking, they attribute whatever their latest thing was with giving them that confidence. In reality, you just get more confident the more you do it.  You’ll be better the second time than the first, and better the third time than the second. So keep at it, you’ll keep getting better. No matter how good or bad you are now, you will keep getting better if you just keep doing it.

Don’t worry about your hands

You’ll find a lot of suggestions about how to use your hands (or not use them), how to walk around (or not walk around) or other suggestions about what to do with yourself while you’re giving your presentation. Ignore them all. It’s not that these things don’t affect your presentation, I’ll admit that they do, it’s that they don’t affect anything after your presentation. Think back about all of the presentations you’ve seen in your life, how much do you remember about how the presenter walked or waved their hands? Unless those movements were integral to the subject, you probably don’t remember much. The same will happen for you, nobody is going to remember whether you walked around or not, they’re going to remember the information you gave them.

It’s not about you

This is the one piece of advice I read that actually has helped me. The reason nobody remembers what you did with your hands is because they’re not there to watch you, they’re there for the information you’re giving them. Unless you’re an actual celebrity, people are there to get information for their own benefit, you’re just the medium which provides it to them.  So don’t make it about you (again, unless you’re an actual celebrity), focus on the topic and information you’re giving out and what it can do for the audience. If you do that, they’ll be thinking about what they’re going to do with it, not what you’re doing with your hands or how many times you’ve said “um”. Good information is a good distraction from the things you don’t want them paying attention to.

It’s all just practice

Practicing your presentation isn’t nearly as stressful as giving it, because you’re not worried about messing up. If you mess up during practice you just correct it, make a note to not make the same mistake next time, and carry on. Well if you plan on doing more public speaking there will always be a next time, which means this time is your practice for that one. Keep your eye on the presentation after this one, if you mess up now you can correct it for the next one.

 

All of the above are really just different ways of saying the same thing: just keep doing it and worry about the content not you. You will get better, your content will get better, and other people will benefit from it, for which they will be appreciative and will gladly overlook any faults in the presentation. I guarantee that you will not be more nervous about it than I was when I started.

Read more
pitti

Last week’s autopkgtest 3.5 release (in Debian sid and Ubuntu Utopic) brings several new features which I’d like to announce.

Tests that reboot

For testing low-level packages like init or the kernel it is sometimes desirable to reboot the testbed in the middle of a test. For example, I added a new boot_and_services systemd autopkgtest which configures grub to boot with systemd as pid 1, reboots, and then checks that the most important services like lightdm, D-BUS, NetworkManager, and cron come up as expected. (This test will be expanded a lot in the future to cover other areas like the journal, logind, etc.)

In a testbed which supports rebooting (currently only QEMU) your test will now find an “autopkgtest-reboot” command which the test calls with an arbitrary “marker” string. autopkgtest will then reboot the testbed, save/restore any files it needs to (like the tests file tree or previously created artifacts), and then re-run the test with ADT_REBOOT_MARK=mymarker.

The new “Reboot during a test” section in README.package-tests explains this in detail with an example.
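
To give a rough idea of the shape of such a test, here is a minimal sketch of a debian/tests script using the new facility (the actual check is purely illustrative; README.package-tests has the authoritative example):

#!/bin/sh -e
case "$ADT_REBOOT_MARK" in
    "")            # first run: do any setup, then ask autopkgtest to reboot the testbed
        autopkgtest-reboot mymarker
        ;;
    mymarker)      # the test is re-run here after the reboot
        test "$(cat /proc/1/comm)" = systemd    # e.g. verify that pid 1 is systemd
        ;;
esac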

Implicit test metadata for similar packages

The Debian pkg-perl team recently discussed how to add package tests to the ~ 3,000 Perl packages. For most of these the test metadata looks pretty much the same, so they created a new pkg-perl-autopkgtest package which centralizes the logic. autopkgtest 3.5 now supports an implicit debian/tests/control control file to avoid having to modify several thousand packages with exactly the same file.

An initial run already looked quite promising: 65% of the packages pass their tests. There will be a few iterations to identify common failures and fix those in pkg-perl-autopkgtest and autopkgtest itself.

There is still some discussion about how implicit test control files go together with the DEP-8 specification, as other runners like sadt do not support them yet. Most probably we’ll declare those packages XS-Testsuite: autopkgtest-pkg-perl instead of the usual autopkgtest.

In the same vein, Debian’s Ruby maintainer (Antonio Terceiro) added implicit test control support for Ruby packages. We haven’t done a mass test run with those yet, but their structure will probably look very similar.

Read more
Dustin Kirkland


This little snippet of ~200 lines of YAML is the exact OpenStack that I'm deploying tonight, at the OpenStack Austin Meetup.

Anyone with a working Juju and MAAS setup, and 7 registered servers should be able to deploy this same OpenStack setup, in about 12 minutes, with a single command.


$ wget http://people.canonical.com/~kirkland/icehouseOB.yaml
$ juju-deployer -c icehouseOB.yaml
$ cat icehouseOB.yaml

icehouse:
  overrides:
    openstack-origin: "cloud:trusty-icehouse"
    source: "distro"
  services:
    ceph:
      charm: "cs:trusty/ceph-27"
      num_units: 3
      constraints: tags=physical
      options:
        fsid: "9e7aac42-4bf4-11e3-b4b7-5254006a039c"
        "monitor-secret": AQAAvoJSOAv/NRAAgvXP8d7iXN7lWYbvDZzm2Q==
        "osd-devices": "/srv"
        "osd-reformat": "yes"
      annotations:
        "gui-x": "2648.6688842773438"
        "gui-y": "708.3873901367188"
    keystone:
      charm: "cs:trusty/keystone-5"
      num_units: 1
      constraints: tags=physical
      options:
        "admin-password": "admin"
        "admin-token": "admin"
      annotations:
        "gui-x": "2013.905517578125"
        "gui-y": "75.58013916015625"
    "nova-compute":
      charm: "cs:trusty/nova-compute-3"
      num_units: 3
      constraints: tags=physical
      to: [ceph=0, ceph=1, ceph=2]
      options:
        "flat-interface": eth0
      annotations:
        "gui-x": "776.1040649414062"
        "gui-y": "-81.22811031341553"
    "neutron-gateway":
      charm: "cs:trusty/quantum-gateway-3"
      num_units: 1
      constraints: tags=virtual
      options:
        ext-port: eth1
        instance-mtu: 1400
      annotations:
        "gui-x": "329.0572509765625"
        "gui-y": "46.4658203125"
    "nova-cloud-controller":
      charm: "cs:trusty/nova-cloud-controller-41"
      num_units: 1
      constraints: tags=physical
      options:
        "network-manager": Neutron
      annotations:
        "gui-x": "1388.40185546875"
        "gui-y": "-118.01156234741211"
    rabbitmq:
      charm: "cs:trusty/rabbitmq-server-4"
      num_units: 1
      to: mysql
      annotations:
        "gui-x": "633.8120727539062"
        "gui-y": "862.6530151367188"
    glance:
      charm: "cs:trusty/glance-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "1147.3269653320312"
        "gui-y": "1389.5643157958984"
    cinder:
      charm: "cs:trusty/cinder-4"
      num_units: 1
      to: nova-cloud-controller
      options:
        "block-device": none
      annotations:
        "gui-x": "1752.32568359375"
        "gui-y": "1365.716194152832"
    "ceph-radosgw":
      charm: "cs:trusty/ceph-radosgw-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "2216.68212890625"
        "gui-y": "697.16796875"
    cinder-ceph:
      charm: "cs:trusty/cinder-ceph-1"
      num_units: 0
      annotations:
        "gui-x": "2257.5515747070312"
        "gui-y": "1231.2130126953125"
    "openstack-dashboard":
      charm: "cs:trusty/openstack-dashboard-4"
      num_units: 1
      to: "keystone"
      options:
        webroot: "/"
      annotations:
        "gui-x": "2353.6898193359375"
        "gui-y": "-94.2642593383789"
    mysql:
      charm: "cs:trusty/mysql-1"
      num_units: 1
      constraints: tags=physical
      options:
        "dataset-size": "20%"
      annotations:
        "gui-x": "364.4567565917969"
        "gui-y": "1067.5167846679688"
    mongodb:
      charm: "cs:trusty/mongodb-0"
      num_units: 1
      constraints: tags=physical
      annotations:
        "gui-x": "-70.0399979352951"
        "gui-y": "1282.8224487304688"
    ceilometer:
      charm: "cs:trusty/ceilometer-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "-78.13333225250244"
        "gui-y": "919.3128051757812"
    ceilometer-agent:
      charm: "cs:trusty/ceilometer-agent-0"
      num_units: 0
      annotations:
        "gui-x": "-90.9158582687378"
        "gui-y": "562.5347595214844"
    heat:
      charm: "cs:trusty/heat-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "494.94012451171875"
        "gui-y": "1363.6024169921875"
    ntp:
      charm: "cs:trusty/ntp-4"
      num_units: 0
      annotations:
        "gui-x": "-104.57728099822998"
        "gui-y": "294.6641273498535"
  relations:
    - - "keystone:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:amqp"
      - "rabbitmq:amqp"
    - - "nova-cloud-controller:image-service"
      - "glance:image-service"
    - - "nova-cloud-controller:identity-service"
      - "keystone:identity-service"
    - - "glance:shared-db"
      - "mysql:shared-db"
    - - "glance:identity-service"
      - "keystone:identity-service"
    - - "cinder:shared-db"
      - "mysql:shared-db"
    - - "cinder:amqp"
      - "rabbitmq:amqp"
    - - "cinder:cinder-volume-service"
      - "nova-cloud-controller:cinder-volume-service"
    - - "cinder:identity-service"
      - "keystone:identity-service"
    - - "neutron-gateway:shared-db"
      - "mysql:shared-db"
    - - "neutron-gateway:amqp"
      - "rabbitmq:amqp"
    - - "neutron-gateway:quantum-network-service"
      - "nova-cloud-controller:quantum-network-service"
    - - "openstack-dashboard:identity-service"
      - "keystone:identity-service"
    - - "nova-compute:shared-db"
      - "mysql:shared-db"
    - - "nova-compute:amqp"
      - "rabbitmq:amqp"
    - - "nova-compute:image-service"
      - "glance:image-service"
    - - "nova-compute:cloud-compute"
      - "nova-cloud-controller:cloud-compute"
    - - "cinder:storage-backend"
      - "cinder-ceph:storage-backend"
    - - "ceph:client"
      - "cinder-ceph:ceph"
    - - "ceph:client"
      - "nova-compute:ceph"
    - - "ceph:client"
      - "glance:ceph"
    - - "ceilometer:identity-service"
      - "keystone:identity-service"
    - - "ceilometer:amqp"
      - "rabbitmq:amqp"
    - - "ceilometer:shared-db"
      - "mongodb:database"
    - - "ceilometer-agent:container"
      - "nova-compute:juju-info"
    - - "ceilometer-agent:ceilometer-service"
      - "ceilometer:ceilometer-service"
    - - "heat:shared-db"
      - "mysql:shared-db"
    - - "heat:identity-service"
      - "keystone:identity-service"
    - - "heat:amqp"
      - "rabbitmq:amqp"
    - - "ceph-radosgw:mon"
      - "ceph:radosgw"
    - - "ceph-radosgw:identity-service"
      - "keystone:identity-service"
    - - "ntp:juju-info"
      - "neutron-gateway:juju-info"
    - - "ntp:juju-info"
      - "ceph:juju-info"
    - - "ntp:juju-info"
      - "keystone:juju-info"
    - - "ntp:juju-info"
      - "nova-compute:juju-info"
    - - "ntp:juju-info"
      - "nova-cloud-controller:juju-info"
    - - "ntp:juju-info"
      - "rabbitmq:juju-info"
    - - "ntp:juju-info"
      - "glance:juju-info"
    - - "ntp:juju-info"
      - "cinder:juju-info"
    - - "ntp:juju-info"
      - "ceph-radosgw:juju-info"
    - - "ntp:juju-info"
      - "openstack-dashboard:juju-info"
    - - "ntp:juju-info"
      - "mysql:juju-info"
    - - "ntp:juju-info"
      - "mongodb:juju-info"
    - - "ntp:juju-info"
      - "ceilometer:juju-info"
    - - "ntp:juju-info"
      - "heat:juju-info"
  series: trusty
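
Once juju-deployer returns, the services still take a little while to settle. A couple of ordinary Juju commands are enough to watch that happen and to poke at the result (nothing here is specific to this bundle):

$ watch juju status                 # wait for all units to report started and relations to settle
$ juju ssh openstack-dashboard/0    # e.g. hop onto the Horizon unit for a closer look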

:-Dustin

Read more
Dustin Kirkland

What would you say if I told you, that you could continuously upload your own Software-as-a-Service  (SaaS) web apps into an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, autonomically managed by an open source Orchestration-Service… right now, today?

“An idea is resilient. Highly contagious. Once an idea has taken hold of the brain it's almost impossible to eradicate.”

“Now, before you bother telling me it's impossible…”

“No, it's perfectly possible. It's just bloody difficult.” 

Perhaps something like this...

“How could I ever acquire enough detail to make them think this is reality?”

“Don’t you want to take a leap of faith???”
Sure, let's take a look!

Okay, this looks kinda neat, what is it?

This is an open source Java Spring web application, called Spring-Music, deployed as an app, running inside of Linux containers in CloudFoundry.


Cloud Foundry?

CloudFoundry is an open source Platform-as-a-Service (PAAS) cloud, deployed into Linux virtual machine instances in OpenStack, by Juju.


OpenStack?

Juju?

OpenStack is an open source Infrastructure-as-a-Service (IAAS) cloud, deployed by Juju and Landscape on top of MAAS.

Juju is an open source Orchestration System that deploys and scales complex services across many public clouds, private clouds, and bare metal servers.

Landscape?

MAAS?

Landscape is a systems management tool that automates software installation, updates, and maintenance in both physical and virtual machines. Oh, and it too is deployed by Juju.

MAAS is an open source bare metal provisioning system, providing a cloud-like API to physical servers. Juju can deploy services to MAAS, as well as public and private clouds.

"Ready for the kick?"

If you recall these concepts of nesting cloud technologies...

These are real technologies, which exist today!

These are Software-as-a-Service  (SaaS) web apps served by an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, managed by an open source Orchestration-Service.

Spring Music, served by CloudFoundry, running on top of OpenStack, deployed on MAAS, managed by Juju and Landscape!

“The smallest seed of an idea can grow…”

Oh, and I won't leave you hanging...you're not dreaming!


:-Dustin

Read more
Nicholas Skaggs

Autopilot Test Runners

In my next post, I will discuss notable autopilot features and talk about how autopilot has matured since it became an independent project.

In the meantime I would be remiss if I didn't also talk about the different test runners commonly used with autopilot tests. In addition to the autopilot binary which can be executed to run the tests, different tools have cropped up to make running tests easier.

autopilot-sandbox-run
This tool ships with autopilot itself and was developed as a way to run autopilot test suites on your desktop in a sane manner. Run the autopilot3-sandbox-run command with --help to see all the options available. By default, the tests will run in an Xvfb server, all completely behind the scenes with the results being reported to you upon completion. This is a great way to run tests with no interference on your desktop. If you are a visual person like me, you may instead wish to pass -X to enable the test runs to occur in a Xephyr window allowing you to see what's happening, but still retaining control of your mouse and keyboard.

I need this tool!
sudo apt-get install python3-autopilot

I want to run tests on my desktop without losing control of my mouse!
autopilot3-sandbox-run my_testsuite_name

I want to run tests on my desktop without losing control of my mouse, but I still want to see what's happening!
autopilot3-sandbox-run -X my_testsuite_name

Autopkgtest
Autopkgtest was developed as a means to automatically test Debian packages, "as-installed". Recently support was added to also test click packages and to run on phablet devices. Autopkgtest will take care of dependencies, setting up autopilot, and unlocking the device. You can literally plug in a device and wait for the results. You should really check out the README pages, including those on running tests. That said, here's a quick primer on running tests using autopkgtest.

I need this tool!
sudo apt-get install autopkgtest
If you are on trusty, grab and install the utopic deb from here.

I want to run tests for a click package installed on my device!
Awesome. This one is simple. Connect the device and then run:
adt-run --click my.click.name --- ssh -s adb

For example,
adt-run --click com.ubuntu.music --- ssh -s adb

will run the tests for the installed version of the music app on your device. You don't need to do anything else. For the curious, this works by reading the manifest file all click packages have. Read more here.

I want to run the tests I wrote/modified against an installed click package!
For this you need to also pass your local folder containing the tests. You will also want to make sure you installed the new version of the click package if needed.

adt-run my-folder/ --click my.click.name --- ssh -s adb

Autopkgtest can also run in an LXC container, QEMU, a chroot, and other fun targets. In the examples above, I passed --- ssh -s adb as the target, instructing autopkgtest to use ssh and adb and thus run the tests on a connected phablet device. If you want to run autopilot tests on a phablet device, I recommend using autopkgtest as it handles everything for you.

phablet-test-run
This tool is part of the greater phablet-tools package. It was originally developed as an easy way to execute tests on your phablet device. Note however that copying the tests and any dependencies to the phablet device is left to you. The phablet-tools package provides some other useful utilities to help you with this (check out phablet-click-test-setup, for example).

I need this tool!
sudo apt-get install phablet-tools

I want to run the tests I wrote/modified against an installed click package!
First copy the tests to the device. You can use the Ubuntu SDK or click-buddy for this, or even do it manually via adb. Then run phablet-test-run. It takes the same arguments as autopilot itself.

phablet-test-run -v my_testsuite

Note the tool looks for the testsuite and any dependencies of the testsuite inside the /home/phablet/autopilot folder. It's up to you to make sure everything that is needed to run your tests is located there, or else it will fail.
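
For the manual route, something along these lines is usually all it takes (the suite name is just an example; the target path is the autopilot folder mentioned above):

adb push my_testsuite /home/phablet/autopilot/my_testsuite
adb shell ls /home/phablet/autopilot/    # confirm the suite and its dependencies are in place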

other ways
There are of course other possible test runners that wrap around autopilot to make executing tests easier. Perhaps you've written a script yourself. Just remember, at the end of the day the autopilot binary will be running the tests. It simply needs to be able to find the testsuite and all of its dependencies in order to run. For this reason, don't be afraid to execute autopilot3 and run the tests yourself. Happy test runs!

Read more
Daniel Holbach

We just created a new Ubuntu mailing list called ubuntu-community-team.

As we didn’t have a place like this before, we created it so we can

  • have discussions around planning community events
  • start all kinds of initiatives around Ubuntu
  • allow enthusiasts of the Ubuntu community to kick around new ideas
  • bring people from all parts of our community together so we can learn from each other
  • hang out and have fun

We are looking forward to seeing you on the list as well, sign up on this page.

Read more
Dustin Kirkland



In case you missed the recent Cloud Austin MeetUp, you have another chance to see the Ubuntu Orange Box live and in action here in Austin!

This time, we're at the OpenStack Austin MeetUp, next Wednesday, September 10, 2014, at 6:30pm at Tech Ranch Austin, 9111 Jollyville Rd #100, Austin, TX!

If you join us, you'll witness all of OpenStack Icehouse, deployed in minutes to real hardware. Not an all-in-one DevStack; not a minimum viable set of components.  Real, rich, production-quality OpenStack!  Ceilometer, Ceph, Cinder, Glance, Heat, Horizon, Keystone, MongoDB, MySQL, Nova, NTP, Quantum, and RabbitMQ -- intelligently orchestrated and rapidly scaled across 10 physical servers sitting right up front on the podium.  Of course, we'll go under the hood and look at how all of this comes together on the fabulous Ubuntu Orange Box.

And like any good open source software developer, I generally like to make things myself, and share them with others.  In that spirit, I'll also bring a couple of growlers of my own home brewed beer, Ubrewtu ;-)  Free as in beer, of course!
Cheers,
Dustin

Read more
Robin Winslow

On release day we can get up to 8,000 requests a second to ubuntu.com from people trying to download the new release. In fact, last October (13.10) was the first release day in a long time that the site didn’t crash under the load at some point during the day (huge credit to the infrastructure team).

Ubuntu.com has been running on Drupal, but we’ve been gradually migrating it to a more bespoke Django based system. In March we started work on migrating the download section in time for the release of Trusty Tahr. This was a prime opportunity to look for ways to reduce some of the load on the servers.

Choosing geolocated download mirrors is hard work for an application

When someone downloads Ubuntu from ubuntu.com (on a thank-you page), they are actually sent to one of the 300 or so mirror sites that are nearby.

To pick a mirror for the user, the application has to:

  1. Decide from the client’s IP address what country they’re in
  2. Get the list of mirrors and find the ones that are in their country
  3. Randomly pick them a mirror, while sending more people to mirrors with higher bandwidth

This process is by far the most intensive operation on the whole site, not because these tasks are particularly complicated in themselves, but because this needs to be done for each and every user (potentially 8,000 a second), while every other page on the site can be aggressively cached to prevent most requests from hitting the application itself.

For the site to be able to handle this load, we’d need to load-balance requests across perhaps 40 VMs.

Can everything be done client-side?

Our first thought was to embed the entire mirror list in the thank-you page and use JavaScript in the users’ browsers to select an appropriate mirror. This would drastically reduce the load on the application, because the download page would then be effectively static and cache-able like every other page.

The only way to reliably get the user’s location client-side is with the geolocation API, which is only supported by 85% of users’ browsers. Another slight issue is that the user has to give permission before they can be assigned a mirror, which would slightly hinder their experience.

This solution would inconvenience users just a bit too much. So we found a trade-off:

A mixed solution – Apache geolocation

mod_geoip2 for Apache can apply server rules based on a user’s location and is much faster than doing geolocation at the application level. This means that we can use Apache to send users to a country-specific version of the download page (e.g. the British desktop thank-you page) by adding &country=GB to the end of the URL.

These country specific pages contain the list of mirrors for that country, and each one can now be cached, vastly reducing the load on the server. Client-side JavaScript randomly selects a mirror for the user, weighted by the bandwidth of each mirror, and kicks off their download, without the need for client-side geolocation support.
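
The production code does this selection client-side in JavaScript, but the weighting logic itself is tiny. Purely to illustrate the cumulative-weight approach, here it is as a shell sketch over a hypothetical mirrors.txt of "url bandwidth" pairs:

awk 'BEGIN { srand() }
     { total += $2; url[NR] = $1; cum[NR] = total }
     END { r = rand() * total
           for (i = 1; i <= NR; i++) if (r <= cum[i]) { print url[i]; exit } }' mirrors.txt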

This solution was successfully implemented shortly before the release of Trusty Tahr.

(This article was also posted on robinwinslow.co.uk)

Read more
Ben Howard

For years, the Ubuntu Cloud Images have been built on a timer (i.e. a cronjob or Jenkins). You can reasonably expect stable and LTS releases to be built twice a week, while our development build is built once a day.  Each of these builds is given a serial in the form of YYYYMMDD.
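
If you ever need to know which serial a running instance was built from, the image records it; on Ubuntu cloud images this typically lives in /etc/cloud/build.info:

$ cat /etc/cloud/build.info    # shows the build name and the YYYYMMDD serial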

While time-based building has proven to be reliable, different build serials may be functionally the same, just put together at a different point in time. Many of the builds that we do for stable and LTS releases are pointless.

When the whole Heartbleed fiasco hit, it put the Cloud Image team into overdrive, since it required manually triggering builds of the LTS releases. When we manually trigger builds, it takes roughly 12-16 hours to build, QA, test and release new Cloud Images. Sure, most of this is automated, but the process had to be manually started by a human. This got me thinking: there has to be a better way.

What if we build the Cloud Images when the package set changes?

With that, I changed the Ubuntu 14.10 (Utopic Unicorn) build process from time-based to archive trigger-based. Now, instead of building every day at 00:30 UTC, a build starts when the archive has been updated and the packages in the prior cloud image build are older than the archive versions. In the last three days, there were eight builds for Utopic. For a development version of Ubuntu, this just means that developers don't have to wait 24 hours for the latest package changes to land in a Cloud Image.

Over the next few weeks, I will be moving the 10.04 LTS, 12.04 LTS and 14.04 LTS build processes from time-based to archive trigger-based. While this might result in less frequent daily builds, the main advantage is that the daily builds will contain the latest package sets. And if you are trying to respond to the latest CVE, or waiting on a bug fix to land, it likely means that you'll have a fresh daily that you can use the following day.

Read more
Dustin Kirkland


Docker 1.0.1 is available for testing in Ubuntu 14.04 LTS!

Docker 1.0.1 has landed in the trusty-proposed archive, which we hope to SRU to trusty-updates very soon.  We would love to have your testing feedback, to ensure that both upgrades from Docker 0.9.1 and new installs of Docker 1.0.1 behave well, and are of the highest quality you have come to expect from Ubuntu's LTS (Long Term Support) releases!  Please file any bugs or issues here.

Moreover, this new version of the Docker package now installs the Docker binary to /usr/bin/docker, rather than /usr/bin/docker.io as in previous versions. This should help Ubuntu's Docker package more closely match the wealth of documentation and examples available from our friends upstream.

A big thanks to Paul Tagliamonte, James Page, Nick Stinemates, Tianon Gravi, and Ryan Harper for their help upstream in Debian and in Ubuntu to get this package updated in Trusty!  Also, it's probably worth mentioning that we're targeting Docker 1.1.2 (or perhaps 1.2.0) for Ubuntu 14.10 (Utopic), which will release on October 23, 2014.

Here are a few commands that might help your testing...

Check What Candidate Versions are Available

$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

If that shows 0.9.1~dfsg1-2 (as it should), then you need to enable the trusty-proposed pocket.

$ echo "deb http://archive.ubuntu.com/ubuntu/ trusty-proposed universe" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

And now you should see the new version, 1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1, available (probably in addition to 0.9.1~dfsg1-2).

Upgrades

Check if you already have Docker installed, using:

$ dpkg -l docker.io

If so, you can simply upgrade.

$ sudo apt-get upgrade
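
If you would rather pull just the Docker package from the proposed pocket instead of upgrading everything, apt can target the pocket explicitly (assuming trusty-proposed was enabled as shown above):

$ sudo apt-get install -t trusty-proposed docker.io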

And now, you can check your Docker version:

$ sudo dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

New Installations

You can simply install the new package with:

$ sudo apt-get install docker.io

And ensure that you're on the latest version with:

$ dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

Running Docker

If you're already a Docker user, you probably don't need these instructions.  But in case you're reading this, and trying Docker for the first time, here's the briefest of quick start guides :-)

$ sudo docker pull ubuntu
$ sudo docker run -i -t ubuntu /bin/bash

And now you're running a bash shell inside of an Ubuntu Docker container.  And only bash!

root@1728ffd1d47b:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 13:42 ? 00:00:00 /bin/bash
root 8 1 0 13:43 ? 00:00:00 ps -ef

If you want to do something more interesting in Docker, well, that's whole other post ;-)

:-Dustin

Read more
Michael Hall

Recognition is like money: it only really has value when it’s being passed between one person and another. Otherwise it’s just potential value, sitting idle.  Communication gives life to recognition, turning its potential value into real value.

As I covered in my previous post, Who do you contribute to?, recognition doesn’t have a constant value.  In that article I illustrated how the value of recognition differs depending on who it’s coming from, but that’s not the whole story.  The value of recognition also differs depending on the medium of communication.

Over at the Community Leadership Knowledge Base I started documenting different forms of communication that a community might choose, and how each medium has a balance of three basic properties: Speed, Thoughtfulness and Discoverability. Let’s call this the communication triangle. Each of these also plays a part in the value of recognition.

Speed

Again, much like money, recognition is something that is circulated.  Its usefulness is not simply created by the sender and consumed by the receiver, but rather passed from one person to another, and then another.  The faster you can communicate recognition around your community, the more utility you can get out of even a small amount of it. Fast communications, like IRC, phone calls or in-person meetups, let you give and receive a higher volume of recognition than slower forms, like email or blog posts. But speed is only one part, and faster isn’t necessarily better.

Thoughtfulness

Where speed emphasizes quantity, thoughtfulness is a measure of the quality of communication, and that directly affects the value of recognition given. Thoughtful communications require consideration upon both receiving and replying. Messages are typically longer, more detailed, and better presented than those that emphasize speed. As a result, they are also usually a good bit slower too, both in the time it takes for a reply to be made, and also the speed at which a full conversation happens. An IRC meeting can be done in an hour, where an email exchange can last for weeks, even if both end up with the same word-count at the end.

Discoverability

The third point on our communication triangle, discoverability, is a measure of how likely it is that somebody not immediately involved in a conversation can find out about it. Because recognition is a social good, most of its value comes from other people knowing who has given it to whom. Discoverability acts as a multiplier (or divisor, if done poorly) to the original value of recognition.

There are two factors to the discoverability of communication. The first, accessibility, is about how hard it is to find the conversation. Blog posts, or social media posts, are usually very easy to discover, while IRC chats and email exchanges are not. The second factor, longevity, is about how far into the future that conversation can still be discovered. A social media post disappears (or at least becomes far less accessible) after a while, but an IRC log or mailing list archive can stick around for years. Unlike the three properties of communication, however, these factors of discoverability do not require a trade-off; you can have something that is both very accessible and has high longevity.

Finding Balance

Most communities will have more than one method of communication, and a healthy one will have a combination of them that complement each other. This is important because sometimes one will offer a more productive use of your recognition than another. Some contributors will respond better to lots of immediate recognition, rather than a single eloquent one. Others will respond better to formal recognition than informal.  In both cases, be mindful of the multiplier effect that discoverability gives you, and take full advantage of opportunities where that plays a larger than usual role, such as during an official meeting or when writing an article that will have higher than normal readership.

Read more
Dustin Kirkland


If you're interested in learning how to more effectively use your terminal as your integrated devops environment, consider taking 10 minutes and watching this video while enjoying the finale of Mozart's Symphony No. 40, Allegro Assai (part of which is rumored to have inspired Beethoven's 5th).

I'm often asked for a quick-start guide to using Byobu effectively.  This wiki page is a decent start, as is the manpage, and the various links on the upstream website.  But it seems that some of the past screencast videos have made the longest-lasting impressions on Byobu users over the years.

I was on a long, international flight from Munich to Newark this past Saturday with a bit of time on my hands, and I cobbled together this instructional video.    That recent international trip to Nuremberg inspired me to rediscover Mozart, and I particularly like this piece, which Mozart wrote in 1788, but sadly never heard performed.  You can hear it now, and learn how to be more efficient in command line environments along the way :-)


Enjoy!
:-Dustin

Read more
Iain Farrell

Verónica Sousa’s Cul de sac

Ubuntu was once described to me by a wise(ish ;) ) man as a train that was leaving whether you’re on it or not. That’s the beauty of a 6-month release cycle. As many of you will already know, each release we include photos and illustrations produced by community members. We ask that you submit your images using the free photo sharing site Flickr, and that you limit your images this time to 2. The group won’t let you submit more than that, but if you change your mind after you’ve submitted, fear not: simply remove one and it’ll let you add another.

As with previous submission processes we’ve run, and in conjunction with the designers at Canonical, we’ve come up with the following tips for creating wallpaper images.

  1. Images shouldn’t be too busy and filled with too many shapes and colours, a similar tone throughout is a good rule of thumb.
  2. A single point of focus, a single area that draws the eye into the image, can also help you avoid something too cluttered.
  3. The left and top edges are home to Ubuntu’s Launcher and Panel so be careful to consider how your images look in place so as not to clash with the user interface. Try them out on your own desktop, see how they feel.
  4. Try your image at different aspect ratios to make sure something important isn’t cropped out on smaller/larger screens at different resolutions.
  5. Take a look at the wallpapers guidance on the Ubuntu Wiki regarding the size of images. Our target resolution is 2560 x 1600.
  6. Break all the rules except the resolution one! :D

To shortlist from this collection we’ll be going to the contributors whose images were selected last time around to act as our selection judges. In doing this we’ll hold a series of public IRC meetings on Freenode in #1410wallpaper to discuss the selection. In those sessions we’ll get the selection team to try out the images on their own Ubuntu machines to see what they look like on a range of displays and resolutions.

Anyone is welcome to come to these sessions, but please keep in mind that an outcome is needed from the time people are volunteering, and there are usually a lot of images to get through, so we’d appreciate it if there isn’t too much additional debate.

Based on the Utopic release schedule, I think our schedule for this cycle should look like this:

  • 08/08/14 – Kick off 14.10 wallpaper submission process.
  • 22/08/14 – First get together on #1410wallpaper at 19:30 GMT.
  • 29/08/14 – Submissions deadline at 18:00 GMT – Flickr group will be locked and the selection process will begin.
  • 09/09/14 – Deliver final selection in zip format to the appropriate bug on Launchpad.
  • 11/09/14 – UI freeze for latest version of Ubuntu with our fantastic images in it!

As always, ping me if you have any questions; I’ll be lurking in #1410wallpaper on Freenode. Or leave a question in the Flickr group for wider discussion; that’s probably the fastest way to get an answer.

I’ll be posting updates on our schedule here from time to time but the Flickr group will serve as our hub.

Happy snapping and scribbling and on behalf of the community, thanks for contributing to Ubuntu! :)


Read more