Canonical Voices

Posts tagged with 'ubuntu'

Daniel Holbach

Ubuntu Online Summit: 12-14 November

Yet another Ubuntu Online Summit (UOS) is ahead of us. It’s going to happen from 12-14 November. Participation is open to everyone, so to attend simply:

If you still need to get a session on the schedule to discuss a topic related to your field, create the session soon!

What I love about the Ubuntu Online Summit is that people get together, invite some fresh sets of eyes and brains and figure out together where Ubuntu is going. The sessions are also not too long (1h), so you are forced to come to conclusions (and work items!) quickly.

Sessions I’m particularly looking forward to are:

  • 12 Nov
    • 15 UTC – Community Roundtable
    • 15 UTC – Testing Unity 8 Desktop
    • 16 UTC – App/Scope development training events
    • 18 UTC – Community events in Vivid cycle
    • 19 UTC – More appdev/scope code examples
  • 13 Nov
    • 16 UTC – Community Council Feedback
    • 16 UTC – Porting Apps To Ubuntu
    • 18 UTC – Ubuntu Women Vivid Goals
    • 19 UTC – Ubuntu Community Q&A
  • 14 Nov
    • 14 UTC – Transparency and participation
    • 15 UTC – Promoting the Ubuntu phone in LoCos
    • 16 UTC – LoCo Team Activity Review
    • 18 UTC – Ubuntu Touch Component Store

Please note: session times might still be changed, so keep an eye on the schedule. (Also: there’s lots more good stuff!)

Looking forward to seeing you all there! :-D

Read more
Michael Hall

A couple of weeks ago we announced the start of a contest to write new Unity Scopes. These are the Dash plugins that let you search for different kinds of content from different sources. Last week Alan Pope posted his Scopes Wishlist detailing the ones he would like to see. And while I think they’re all great ideas, they didn’t particularly resonate with my personal use cases. So I’ve decided to put together a wishlist of my own:

Ubuntu Community

I’ve started on one of these in the past, more to test-drive the Scope API and documentation (both of which have changed somewhat since then), but our community has a rather large amount of content available via open APIs or feeds that could be combined into one really great scope. My attempt used the LoCo Team Portal API, but there is also the Planet Ubuntu RSS feed (plus feeds from a number of other websites), iCal feeds from Summit, a Google calendar for UbuntuOnAir, etc. There’s a lot of community data out there just waiting to be surfaced to Ubuntu users.

Open States

My friend Paul Tagliamonte works for the Sunlight Foundation, which provides access to a huge amount of local law and political data (open culture + government, how cool is that?), including the Open States website, which provides more local information for those of us in the USA. Not only could a scope use these APIs to make it easy for us citizens to keep up with what’s going on in our governments, it’s also a great candidate to use the Location information to default you to local data no matter where you are.

Desktop

This really only has a purpose on Unity 8 on the desktop, and even then only for the short term until a normal desktop is implemented. But for now it would be a nice way to view your desktop files and such. I think that a Scope’s categories and departments might provide a unique opportunity to re-think how we use the desktop too, with the different files organized by type, sorted by date, and displayed in a way that suits its content.

There’s potential here to do some really interesting things, I’m just not sure what they are. If one of you intrepid developers has some good ideas, though, give it a shot.

Comics

Let’s be honest, I love web comics, you love web comics, we all love web comics. Wouldn’t it be super awesome if you got the newest, best webcomics on your Dash? Think about it: get your XKCD, SMBC or The Oatmeal delivered every day. Okay, it might be a productivity killer, but still, I’d install it.

Read more
Michael Hall

Next week we will be kicking off the November 2014 Ubuntu Online Summit where people from the Ubuntu community and Canonical will be hosting live video sessions talking about what is being worked on, what is currently available, and what the future holds across all of the Ubuntu ecosystem.

We are in the process of recruiting sessions and filling out the Summit Schedule for this event, which should be finalized at the start of next week. You can register that you are attending on the Summit website, where you can also mark specific sessions that you are interested in and get a personalized view of your schedule (and an available iCal feed too!). UOS is designed for participation, not just consumption. Every session will have an active IRC channel that goes along with it where you can speak directly to the people on video. For discussion sessions, you’re encouraged to join the video yourself when you want to join the conversation.

Moreover, we want you to host sessions! Anybody who has an idea for a good topic for conversation, presentation, or planning and is willing to host the video (meaning you need to run a Google On-Air Hangout) can propose a session. You don’t need to be a Canonical employee, project leader, or even an Ubuntu member to run a session; all you need is a topic and a willingness to be the person to drive it. And don’t worry, we have track leads who have volunteered to help you get it set up.

These sessions will be split into tracks, so you can follow along with the topics that interest you. Or you can jump from track to track to see what everybody else in the community is doing. And if you want to host a session yourself, you can contact any one of the friendly Track Leads, who will help you get it registered and on the schedule.

Ubuntu Development

Those who have participated in the Ubuntu Developer Summit (UDS) in the past will find the same kind of platform-focused topics and discussions in the Ubuntu Development track. This track covers everything from the kernel to packaging, desktops and all of the Ubuntu flavors.

The track leads are: Will Cooke, Łukasz Zemczak, Steve Langasek, Antonio Rosales, and Rohan Garg

App & Scope Development

For developers who are targeting the Ubuntu platform, for both apps and Unity scopes, we will be featuring a number of presentations on the current state of the tools, APIs and documentation, as well as gathering feedback from those who have been using them to help us improve upon them in Ubuntu 15.04. You will also see a lot of planning for the Ubuntu Core Apps, and some showcases of other apps or technologies that developers are creating.

The track leads are: Tim Peeters, Michael Hall, Alan Pope, and Nekhelesh Ramananthan

Cloud & DevOps

Going beyond the core and client side, Ubuntu is making a lot of waves in the cloud and server market these days, and there’s no better place to learn about what we’re building (and help us build it) than the Cloud & DevOps track. Whether you want to roll out your own OpenStack cloud, or make your web service easy to deploy and scale out, you will find topics here that interest you.

The track leads are: Antonio Rosales, Marco Ceppi, Patricia Gaughen, and José Antonio Rey

Community

The Ubuntu Online Summit is itself a community-coordinated event, and we’ve got a track dedicated to helping us improve and grow the whole community. You can use this to showcase the amazing work that your team has been doing, or plan out new events and projects for the coming cycle. The Community Team from Canonical will be there, as well as members of the various councils, flavors and boards that provide governance for the Ubuntu project.

The track leads are: David Planella, Daniel Holbach, Svetlana Belkin, and José Antonio Rey

Users

And of course we can’t forget about our millions of users; we have a whole track set up just to provide them with resources and presentations that will help them make the most out of Ubuntu. If you have been working on a project for Ubuntu, you should think about hosting a session on this track to show it off. We’ll also be hosting several feedback sessions to hear directly from users about what works, what doesn’t, and how we can improve.

The track leads are: Nicholas Skaggs, Elfy, and Scarlett Clark

Read more
Dustin Kirkland

Say it with me, out loud.  Lex.  See.  Lex-see.  LXC.

Now, change the "see" to a "dee".  Lex.  Dee.  Lex-dee.  LXD.

Easy!

Earlier this week, here in Paris, at the OpenStack Design Summit, Mark Shuttleworth and Canonical introduced our vision and proof of concept for LXD.

You can find the official blog post on Canonical Insights, and a short video introduction on Youtube (by yours truly).

Our Canonical colleague Stephane Graber posted a bit more technical design detail here on the lxc-devel mailing list, which was picked up by HackerNews. And LWN published a story yesterday covering another Canonical colleague of ours, Serge Hallyn, and his work on Cgroups and CGManager, all of which feeds into LXD. As it happens, Stephane and Serge are upstream co-maintainers of Linux Containers. Tycho Andersen, another colleague of ours, has been working on CRIU, which was the heart of his amazing demo this week: live migrating a container running the cult classic 1st person shooter, Doom!, between two hosts, back and forth.



Moreover, we've answered a few journalists' questions for excellent articles on ZDnet and SynergyMX.  Predictably, El Reg is skeptical (which isn't necessarily a bad thing).  But unfortunately, The Var Guy doesn't quite understand the technology (and unfortunately uses this article to conflate LXD with other random Canonical/Ubuntu complaints).

In any case, here's a bit more about LXD, in my own words...

Our primary design goal with LXD is to extend containers into process-based systems that behave like virtual machines.

We love KVM for its total machine abstraction, as a full virtualization hypervisor.  Moreover, we love what Docker does for application level development, confinement, packaging, and distribution.

But as an operating system and Linux distribution, our customers are, in fact, asking us for complete operating systems that boot and function within a Linux Container's execution space, natively.

Linux Containers are essential to our reference architecture for OpenStack, where we co-locate multiple services on each host. Nearly every host is a Nova compute node as well as a Ceph storage node, and also runs a couple of units of "OpenStack overhead", such as MySQL, RabbitMQ, MongoDB, etc. Rather than running each of those services all on the same physical system, we actually put each of them in their own container, with their own IP address, namespace, cgroup, etc. This gives us tremendous flexibility in the orchestration of those services. We're able to move (migrate, even live migrate) those services from one host to another. With that, it becomes possible to "evacuate" a given host by moving each contained set of services elsewhere, perhaps to a larger or smaller system, and then shut down the unit (perhaps to replace a hard drive or memory, or repurpose it entirely).

Containers also enable us to similarly confine services on virtual machines themselves!  Let that sink in for a second...  A contained workload is able, then, to move from one virtual machine to another, to a bare metal system.  Even from one public cloud provider, to another public or private cloud!

The last two paragraphs capture a few best practices that we've learned over the last few years implementing OpenStack for some of the largest telcos and financial services companies in the world. What we're hearing from Internet service and cloud providers is not too dissimilar... These customers have their own customers who want cloud instances that perform at bare metal equivalence. They also want to maximize the utilization of their server hardware, sometimes by more densely packing workloads on given systems.

As such, LXD is then a convergence of several different customer requirements, and our experience deploying some massively complex, scalable workloads (a la OpenStack, Hadoop, and others) in enterprises. 

The rapid evolution of a few key technologies under and around LXC has recently made this dream possible. Namely: user namespaces, Cgroups, SECCOMP, AppArmor, and CRIU, as well as the library abstraction that our external tools use to manage these containers as systems.

LXD is a new "hypervisor" in that it provides (REST) APIs that can manage Linux Containers. This is a step function beyond where we've been to date: able to start and stop containers with local commands and, to a limited extent, libvirt, but not much more. "Booting" a system in a container, running an init system, bringing up network devices (without nasty hacks in the container's root filesystem), etc. was challenging, but we've worked our way through all of these, and Ubuntu boots unmodified in Linux Containers today.

Moreover, LXD is a whole new semantic for turning any machine -- Intel, AMD, ARM, POWER, physical, or even a virtual machine (e.g. your cloud instances) -- into a system that can host and manage and start and stop and import and export and migrate multiple collections of services bundled within containers.

I've received a number of questions about the "hardware assisted" containerization slide in my deck.  We're under confidentiality agreements with vendors as to the details and timelines for these features.

What (I think) I can say, is that there are hardware vendors who are rapidly extending some of the key features that have made cloud computing and virtualization practical, toward the exciting new world of Linux Containers.  Perhaps you might read a bit about CPU VT extensions, No Execute Bits, and similar hardware security technologies.  Use your imagination a bit, and you can probably converge on a few key concepts that will significantly extend the usefulness of Linux Containers.

As soon as such hardware technology is enabled in Linux, you have our commitment that Ubuntu will bring those features to end users faster than anyone else!

If you want to play with it today, you can certainly see the primitives within Ubuntu's LXC. Launch Ubuntu containers within LXC and you'll start to get the general, low-level idea. If you want to view it from one layer above, give our new nova-compute-flex (flex was the code name before it was released as LXD) a try. It's publicly available as a tech preview in Ubuntu OpenStack Juno (authored by Chuck Short, Scott Moser, and James Page). Here, you can launch OpenStack instances as LXC containers (rather than KVM virtual machines), as "general purpose" system instances.
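
If it helps, here is a minimal sketch of those LXC primitives on a current Ubuntu host (the container name and release below are just illustrative examples):

    # create an Ubuntu container from the download template
    sudo lxc-create -t download -n demo -- -d ubuntu -r trusty -a amd64
    # boot it in the background, then attach a shell inside it
    sudo lxc-start -n demo -d
    sudo lxc-attach -n demo
    # stop and delete it when you are done
    sudo lxc-stop -n demo
    sudo lxc-destroy -n demo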

Finally, perhaps lost in all of the activity here, are a couple of things we're doing differently for the LXD project. We at Canonical have taken our share of criticism over the years about choice of code hosting (our own Bazaar and Launchpad.net), our preferred free software license (GPLv3/AGPLv3), and our contributor license agreement (Canonical CLA). [For the record: I love bzr/Launchpad, prefer GPL/AGPL, and am mostly ambivalent on the CLA; but I won't argue those points here.]
  1. This is a public, community project under LinuxContainers.org
  2. The code and design documents are hosted on Github
  3. Under an Apache License
  4. Without requiring signatures of the Canonical CLA
These have been very deliberate, conscious decisions, lobbied for and won by our engineers leading the project, in the interest of collaborating and garnering the participation of communities that have traditionally shunned Canonical-led projects, raising the above objections. I, for one, am eager to see contribution and collaboration that, too often, we don't see.

Cheers!
:-Dustin

Read more
beuno

As the pieces start to come together and we get closer to converging mobile and desktop in Ubuntu, Click packages running on the desktop start to feel like they will be a reality soon (Unity 8 brings us Click packages). I think it's actually very exciting, and I thought I'd talk a bit about why that is.

First off: security. The Ubuntu Security team have done some pretty mind-blowing work to ensure Click packages are confined in a safe, reliable but still flexible manner. Jamie has explained how and why in a very eloquent manner. This will only push further an OS that is already well known and respected for being a safe place to do computing for all levels of computer skills.
My second favorite thing: simplification for app developers. When we started sketching out how Clicks would work, there was a very sharp focus on enabling app developers to have more freedom to build and maintain their apps, while still making it very easy to build a package. Clicks, by design, can't express any external dependencies other than a base system (called a "framework"). That means that if your app depends on a fancy library that isn't shipped by default, you just bundle it into the Click package and you're set. You get to update it whenever it suits you as a developer, and have predictability over how it will run on a user's computer (or device!). That opens up the possibility of shipping newer versions of a library, or just sticking with one that works for you. We exchange that freedom for some minor theoretical memory usage increases and extra disk space (if 2 apps end up including the same library), but with today's computing power and disk space cost, it seems like a small price to pay to empower application developers.
Building on top of my first 2 favorite things comes the third: updating apps outside of the Ubuntu release cycle and gaining control as an app developer. Because Click packages are safer than traditional packaging systems, and dependencies are more self-contained, app developers can ship their apps directly to Ubuntu users via the software store without the need for specialized reviewers to review them first. It's also simpler to carry support for previous base systems (frameworks) in newer versions of Ubuntu, allowing app developers to ship the same version of their app both to Ubuntu users on the cutting edge of an Ubuntu development release and to those on the previous LTS from a year ago. There have been many cases over the years where this was an obvious problem, OwnCloud being the latest example of the tension that arises from the current approach where app developers don't have control over what gets shipped.
I have many more favorite things about Clicks; here are a few more:
- You can create "fat" packages where the same binary supports multiple architectures
- Updates between versions are transactional, so you never end up with a botched app update. No more holding your breath while an update installs, hoping your power doesn't drop mid-way
- Multi-user environments can have different versions of the same app without any problems
- Because Clicks are so easy to introspect and verify their proper confinement, the process for verifying them has been easy to automate, enabling the store to process new applications within minutes (if not seconds!) and make them available to users immediately

The future of Ubuntu is exciting and it has the scent of a new revolution.

Read more
Nicholas Skaggs

Ubuntu Online Summit: Vivid Edition

Ubuntu Online Summit is once again upon us. This is a community event by and for the community. It's all-encompassing and intends to cover a wide range of topics. You don't need to be a developer, project lead, member of a team, or even a member of Ubuntu to join and participate. The only requirement is your passion for Ubuntu and a desire to discuss its future with others.

The dates are set as November 12th-14th, from 1400 UTC to 2000 UTC. I am once again privileged to be a track lead for the Users track. In my opinion, this is the best track as it's the one the largest number of us within the community can easily feel a part of (just don't let Michael, David, Daniel or Alan know I said that). Do you use Ubuntu? Awesome, this is the track for you.

What I'm asking for is sessions. Have an idea for a session? Please propose it! Everything you need to know about participating can be found here. If you've attended things like Ubuntu Open Week or a classroom session in the past, all of those types of sessions are welcome and encouraged.

"The focus of the Users track is to highlight ways to get the most out of Ubuntu, on your laptop, your phone or your server. From detailed how-to sessions, to tips and tricks, and more, this track can provide something for everybody, regardless of skill level."

Regardless of your desire to contribute a session, I would encourage everyone to take a look at the schedule as it evolves and consider joining sessions they find interesting. In addition, it's not yet too late to offer up ideas for sessions (though I would encourage you to find a way to host the session).

Ready to propose a session? Check out this page and feel free to ping me or any track lead for help. Don't forget to register to attend and check out the currently scheduled sessions!

Read more
Nicholas Skaggs

Autopilot Feature Primer

Autopilot celebrated its two-year anniversary as an independent project this summer. During that time it has developed into a useful tool for testing application UIs for the GTK and Qt toolkits. Support has also been extended to Mir as well as phablet devices.

With this in mind, I thought it would be useful to bring attention to some new and under-used features of autopilot, along with providing a brief explanation of some companion tools you might find useful. Thus I present to you, an autopilot primer. Let's talk through some new features shall we?

Python3 Support
Autopilot started as a Python 2 tool but has since migrated to Python 3, and you should too! For now the entire source tree remains Python 2 compatible, but you really should migrate your tests to Python 3. You'll notice the autopilot3 binary in newer releases, which should be used to run autopilot with Python 3.

Scenario Support
Scenarios are a wonderful way to keep your tests simple and easy to read while allowing you to test with many different inputs. In short, you might need to test several edge cases as part of your acceptance testing. This is most easily accomplished by keeping the test itself generic and utilizing a scenario to vary your inputs. You can check out more information on scenarios specific to autopilot in the autopilot documentation.

Screenshots / Video
Autopilot allows you to get a video recording of a test failure. To make sure autopilot records failures, install recordmydesktop and pass the -r argument to your autopilot3 run command. However, at the moment this requires X, so for now it doesn't work with the Mir backend (which things like the Ubuntu phone utilize). Fortunately a screenshot at the point of failure, when combined with the log, is generally sufficient to solve your issue. Getting those screenshots brings us to subunit support.
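
Before moving on, a recording run with the -r flag might look roughly like this (the suite name is made up, and -rd simply chooses where the videos land):

    # record failing tests as videos into /tmp/autopilot-videos (needs recordmydesktop and X)
    autopilot3 run -r -rd /tmp/autopilot-videos my_app.tests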

Subunit Support
By default autopilot generates the test output and logs straight to your console in a text format. However, autopilot also supports outputting to XML and subunit. Subunit support is what I would like to highlight, for a few reasons. When you set the output format to subunit, you get a few niceties. One of these is an easier-to-grok format for tools, and another is screenshots of the application when failures occur. To get a subunit stream, pass the -f subunit argument to your autopilot3 run command. You will also want to pass -O with a filename to save the output to a file, as the subunit stream contains binary data.
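
A subunit run might look something like this (again, the suite and file names are only examples):

    # emit a binary subunit stream, with screenshots attached on failure, to a file
    autopilot3 run -f subunit -O results.subunit my_app.tests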

Test Result Viewer
So, with this subunit test results file, how can you enjoy all of its goodness? Enter trv, a simple Python UI that will let you organize the test run in an easy-to-view manner, including screenshots. The tool is the creation of Thomi Richards, who describes it as a quick hack (:p), and has a YouTube video demonstrating its use. It's perfect for viewing the subunit stream and visualizing your test results. For now, it's not packaged but can be easily obtained via Launchpad. Grab it with bzr branch lp:trv.

autopilot3 vis
The vis tool allows you to visually interact with the introspection tree after launching an application using autopilot launch. What this means is you can visualize the application in the same way autopilot does at runtime, with live tree updates. It lets you see what autopilot sees, allowing you to interactively build your testcase.

I'll refer you to the official tutorial for more information, as well as a YouTube video by yours truly. It's from a livestream, but covers what you need to know. autopilot3 vis also contains a search box and a highlight tool, which didn't exist in the original version, so it's even nicer now than before. Give it a whirl!

autopilot3-sandbox-run
I talked about this utility when I covered the test runners for autopilot. Still, I would be remiss if I didn't mention it again. Everything I said in the test runners for autopilot post still applies, so go have a quick read about how to use the tool if you need more information. This tool enables you to easily run autopilot tests on the desktop in a nested X server. What that means to you as a test author is that you can run tests without giving up your desktop session. No more waiting for autopilot to hand back control of your mouse after a test. If you are writing tests, you should be using this tool along with autopilot vis mentioned above during your test writing process.
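
A minimal invocation, with the suite name again being illustrative, is simply:

    # run the suite in a nested X server so your own session stays untouched
    autopilot3-sandbox-run my_app.tests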

Per test timeout
Although we all only write "good tests", sometimes you may find your test misbehaves. When this happens the test may not exit cleanly, or may get stuck in a loop. The result is that autopilot and the system under test wait forever for the test to finish. To prevent a rogue test from killing a test suite run, autopilot is introducing support for per-test timeouts. This has landed in vivid; you'll need version 1.5.0+15.04.20141031-0ubuntu1 or later. To use the feature, add the --test-timeout argument to autopilot run and give it a timeout in seconds.
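
For example (the 300-second value is arbitrary):

    # abort any single test that runs for longer than 300 seconds
    autopilot3 run --test-timeout 300 my_app.tests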

In conclusion
Autopilot has gotten many new features along the way, and these are but a few of the most recent and important ones. I hope this helps you take another look at what autopilot might be able to help you test. Happy Testing!

Read more
Ben Howard

We are pleased to announce that Ubuntu 12.04 LTS, 14.04 LTS, and 14.10 are now in beta on Google Compute Engine [1, 2, 3].

These images support both the traditional user-data and the Google Compute Engine startup scripts. We have included the Google Cloud SDK pre-installed as well. Users coming from other clouds can expect to have the same great experience as on those clouds, while enjoying the features of Google Compute Engine.

From an engineering perspective, a lot of us are excited to see this launch. While we don't expect too many rough edges, it is a beta, so feedback is welcome. Please file bugs or join us in #ubuntu-server on Freenode to report any issues (ping me, utlemming, rcj or Odd_Bloke).

Finally, I wanted to thank those that have helped on this project. Launching a cloud is not an easy engineering task. You have to build infrastructure to support the new cloud, create tooling to build and publish, write QA stacks, and do packaging work. All of this spans multiple teams and disciplines. The support from Google and Canonical's Foundations and Kernel teams has been instrumental in this launch, as have the engineers on the Certified Public Cloud team.

Getting the Google Cloud SDK:

As part of the launch, Canonical and Google have been working together on packaging a version of the Google Cloud SDK. At this time, we are unable to bring it into the main archives. However, you can find it in our partner archive.

To install it run the following:

  • echo "deb http://archive.canonical.com/ubuntu $(lsb_release -c -s) partner" | sudo tee /etc/apt/sources.list.d/partner.list
  • sudo apt-get update
  • sudo apt-get -y install google-cloud-sdk


Then follow the instructions for using the Cloud SDK at [4].
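
As a rough sketch of those first steps (the project ID and image name below are placeholders; take the real image name from [1]):

    gcloud auth login                          # authenticate the SDK with your Google account
    gcloud config set project YOUR-PROJECT-ID  # select the project your instances will run in
    # boot an Ubuntu 14.04 LTS instance; substitute the image name published at [1]
    gcloud compute instances create my-ubuntu --zone us-central1-a --image UBUNTU-14-04-IMAGE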


[1] https://cloud.google.com/compute/docs/operating-systems#ubuntu
[2] http://googlecloudplatform.blogspot.co.uk/2014/11/curated-ubuntu-images-now-available-on.html
[3] http://insights.ubuntu.com/2014/11/03/certified-ubuntu-images-available-on-google-cloud-platform/
[4] https://cloud.google.com/sdk/gcloud/

Read more
Steph

Last week was a week of firsts for me: my first trip to America, my first Sprint and my first chili-dog.

Introducing myself as the new (and only) Editorial and Web Publisher, I dove head first into the world of developers, designers and Community members. It was a very absorbing week, which afterwards felt more like a marathon than a sprint.

After being grilled by Customs, we finally arrived at Tyson’s Corner, where 200 or so other developers, designers and Community members had gathered for the Devices Sprint. It was a great opportunity for me to see how people from each corner of the world contribute to Ubuntu, and share their passion for open source. I especially found it interesting to see how designers and developers work together, given their different mindsets, and how they collaborated.

The highlight for me was talking to some of the Community guys; it was really interesting to hear why and how they contribute from all corners of the world.

(From left to right: Riccardo, Andrew, Filippo and Victor)

(The main Ballroom)

(Design Team dinner. From the left: TingTing, Andrew, John, Giorgio, Marcus, Olga, James, Florian, Bejan and Jouni)

I caught up with Olga and Giorgio to share their thoughts and experiences from the Sprint:

So how did the Sprint go for you guys?

Olga: “It was very busy and productive in terms of having face time with development, which was the main reason we went, as we don’t get to see them that often.

For myself personally, I have a better understanding of things in terms of what the issues are and what is needed, and also what can or cannot be done in certain ways. I was very pleased with the whole sprint. There was a lot of running around between meetings, where I tried to use the time in-between to catch up with people. On the other hand, Development also approached the Design Team for guidance, opinions and a general catch-up/chat, which was great!”

Steph: “I agree, I found it especially productive in terms of getting the right people in the same room and working face-to-face, as it was a lot more productive than sharing a document or talking on IRC.”

Giorgio: “Working remotely with the engineers works well for certain tasks, but the Design Team sometimes needs to achieve a higher bandwidth through other means of communication, so these sprints every 3 months are incredibly useful.

What a Sprint allows us to do is to put a face to the name and start to understand each other’s needs, expectations and problems, as stuff gets lost in translation.

I agree with Olga, this Sprint was a massive opportunity to shift to a much higher level of collaboration with the engineers.”

What was your best moment?

Giorgio: “My best moment was when the engineers’ perception towards the efforts of the Design Team changed. My goal is to better this collaboration process with each Sprint.”

Did anything come up that you didn’t expect?

Giorgio: “Gaming was an underground topic that came up during the Sprint. There was a nice workshop on Wednesday on it, which was really interesting.”

Steph: “Andrew, a Community developer I interviewed, actually made two games one evening during the Sprint!”

Olga: “They love what they do, they’re very passionate and care deeply.”

Do you feel as a whole the Design Team gave off a good vibe?

Giorgio: “We got a good vibe, but it’s still a work in progress, as we need to raise our game and become even better. This has been a long process, as the design of the Platform and Apps wasn’t simply done overnight. However, now we are in a mature stage of the process where we can afford to engage with the Community more. We are all in this journey together.

Canonical has a very strong engineering nature, as it was founded by engineers and driven by them, and it has evolved because of this. As a result, over the last few years the design culture has begun to complement that. Now they expect a steer from the Design Team on a number of things, for example responsive design and convergence.

The Sprint was good, as we finally got more of a perception of what the other parties expect from us. It’s like a relationship: you suddenly have a moment of clarity and enlightenment, where you start to see what you actually need to do, and that will make the relationship better.”

Olga: “The other parties and the Development Team started to understand that initiating communication is not just the responsibility of the Design Team, but an engagement we all need to be involved in.”

All in all it was a very productive week, as everyone worked hard to push for the first release of the BQ phone; together with some positive feedback and shout-outs for the Design Team :)

(Unicorn hard at work)

There was a bit of time for some sightseeing too…

It would have been rude not to see what the capital had to offer, so on the weekend before the sprint we checked out some of Washington’s iconic sceneries.

(The Washington Monument)

We saw most of the important buildings and monuments, like the White House, the Washington Monument and the Lincoln Memorial. Seeing them in the flesh was spectacular; however, I half expected a UFO to appear over the Monument like in ‘Independence Day’, and for Abraham Lincoln to suddenly get up off his chair like in the movie ‘Night at the Museum’ - unfortunately none of that happened.

(The White House)

D.C. isn’t as buzzing as London, but it definitely has a lot of character, as it embodies an array of thriving ethnic pockets representing African, Asian and Latin American cultures, and also a lot of Italians. Washington is known for getting its sax on, so a few of the Design Team and I decided to check out the night scene and hit a local jazz club in Georgetown.

(Twins Jazz Club)

On the Sunday, we decided to leave the hustle and bustle of the city and venture out to the beautiful Great Falls Park, which was only 10-15 minutes from the hotel. The park is located in northern Fairfax County along the banks of the Potomac River and is an integral part of the George Washington Memorial Parkway. Its creeks and rapids made for some great selfie opportunities…

(Great Falls Park)

Read more
Daniel Holbach

In the Community Q&A with Alan and Michael yesterday, I talked a bit about the sprint in Washington already, but I thought I’d write up a bit more about it again.

First of all: it was great to see a lot of old friends and new faces at the sprint. Especially with the two events (14.10 release and upcoming phone release) coming together, it was good to lock people up in various rooms and let them figure it out when nobody could run away easily. For me it was a great time to chat with lots of people and figure out if we’re still on track and if our old assumptions still made sense.  :-)

We were all locked up in a room as well…

What was pretty fantastic was the general vibe there. Everyone was crazy busy, but everybody seemed happy to see that their work of the last months and years is slowly coming together. There are still bugs to be fixed but we are close to getting the first Ubuntu phone ever out the door. Who would have thought that a couple of years ago?

It was great to catch up with people about our App Development story. There were a number of things we looked at during the sprint:

  • Up until now we had a VirtualBox image with Ubuntu and the SDK installed for people at training (or App Dev School) events who didn’t have Ubuntu installed. This was a clunky solution; my beta testing at xda:devcon confirmed that. I sat down with Michael Vogt, who encouraged me to look into providing something more akin to an “official ISO” and showed me the ropes in terms of creating seeds and how livecd-rootfs is used.
  • I had a number of conversations with XiaoGuo Liu, who works for Canonical as well, and has been testing our developer site and our tools for the last few months. He also wrote lots and lots of great articles about Ubuntu development in Chinese. We talked about providing our developer site in Chinese as well, how we could integrate code snippets more easily and many other things.
  • I had many chats at the breakfast buffet with Zoltan and Zsombor of the SDK team (it always looked like we were there at the same time). We talked about making fat packages easier to generate, my experiences with kits and many other things.
  • It was also great to catch up with David Callé who is working on scopes documentation. He’s just great!

What I also liked a lot was being able to debug issues with the phone on the spot. I changed to the proposed channel, set it to read-write and installed debug symbols, and voilà, grabbing the developer was never easier. My personal recommendation: make sure the problem happens around 12:00, stand in the hallway with your laptop attached to the phone and wait for the developer in charge to grab lunch. This way I could find out more about a couple of issues which are being fixed now.

It was also great to meet the non-Canonical folks at the sprint who worked on the Core Apps like crazy.

What I liked as well was our Berlin meet-up: we basically invited Berliners, ex-Berliners and honorary Berliners and went to a Mexican place. I wish I could meet those guys more often.

I also got my Ubuntu Pioneers T-Shirt. Thanks a lot! I’ll make sure to post a selfie (as everyone else :-)) soon.

Thanks a lot for a great sprint, now I’m looking forward to the upcoming Ubuntu Online Summit (12-14 Nov)! Make sure you register and add your sessions to the schedule!

Read more
Nicholas Skaggs

Sprinting in DC: Friday

This week, my team and I are sprinting with many of the core app developers and other folks inside of Ubuntu Engineering. Each day I'm attempting to give you a glimpse of what's happening.

Friday brings an end to an exciting week, and the faces of myself and those around me reflect the discussions, excitement, fun and lack of sleep this week has entailed.

Bubbles!
The first session of the day involved hanging out with the QA team while they heard feedback from various teams on issues with quality and process within their projects. Always fun to hear about what causes different teams the most issues when it comes to testing.

Next I spent some time interviewing a couple folks for publishing later. In my case I interviewed Thomi from the QA team and Zoltan from the SDK team about the work going on within their teams and how the last cycle went. The team as a whole has been conducting interviews all week. Look for these interviews to appear on YouTube in the coming weeks.

Thursday night, while having a look through a book store, I came across an ad for Ubuntu in Linux Voice magazine. It made me smile. The dream of running Ubuntu on all my devices is becoming closer every day.


I'd like to thank all the community core app developers who joined us this week. Thanks for hanging out with us, providing feedback, and most of all for creating the wonderful apps we have for the Ubuntu phone. Your work has helped shape the device and turn it into what it is today.

Looking back over the schedule there were sessions I wish I had been able to attend, and it was wonderful catching up with everyone. Sadly my flight home prevented me from attending the closing session and presumably getting a summary of some of these sessions. I can say I was delighted to talk and interact with the unity8 team on the next steps for unity8 on the desktop. I trust next cycle we as a community can do more around testing their work.

As I head to the airport for home, it's time to celebrate the release of the Utopic Unicorn!

Read more
ssweeny

Ubuntu 14.10

I’m at a sprint in Washington, DC with my fellow Canonicalers gearing up for the commercial release of our phone OS (more on that later) but that doesn’t mean we’ve forgotten about the desktop and cloud.

Yesterday was another Ubuntu release day! We released Ubuntu 14.10, codenamed the Utopic Unicorn. Look for lots of subtle improvements to the desktop as we prepare some big things to come soon.

As usual, you can take a tour or go straight to the download page.

And while we’re at it, here’s to another 10 years of Ubuntu!

Read more
Nicholas Skaggs

Sprinting in DC: Wednesday

This week, my team and I are sprinting with many of the core app developers and other folks inside of Ubuntu Engineering. Each day I'm attempting to give you a glimpse of what's happening.

To kick off the day, I led a session on something that has been wreaking havoc for application test writers within the core apps -- environment setup. In theory, setting up the environment to run your test should be easy. In practice, I've found it increasingly difficult. The music, calendar, clock, reminders, file manager and other teams have all been quite affected by this, and the Canonical QA team and I have all pitched in to help, but struggled as well. In short, a test should be easy to launch, be well behaved and not delete any user data, and be easy to set up and feed test data into for the test process. I'm happy to report that we've settled on the idea for a permanent solution. Now we must implement it of course, but the result should be drastically easier and more reliable test setup for you, the test author.

I also had the chance to list some grievances for application developers with the QA team. We spoke about wanting to expand the documentation on testing, and specifically targeted the need to create better templates in the Ubuntu SDK for new projects. When you start a new project you should have well-functioning tests, and we should teach you how to run them too!



Just before lunch the community core app developers were able to discuss post-RTM plans and features. A review of the apps was undertaken and desires for new designs or features were discussed. Terminal is being rebuilt to be more aligned with upstream. Music is currently undergoing a re-design which is coming along great. Calculator is anxious to get some design love. Reminders' potential for offline note-taking, as well as potential name changes, was also discussed. Overall, an amazing accomplishment by all the developers!

After lunch, I spent time confirming the fix for a longstanding bug within autopilot. The merge proposal for fixing this bug has been simmering all summer and it's time to get it fixed. The current test suites for calendar and clock have been impacted by this and have already had regressions occur that could have been caught had tests been able to be written for this area. Having myself, the autopilot team, and the calendar developers in one place made fixing this possible.

To end the day, I spent some time attending sessions on changes to CI and learning more about the coming changes to CI within Ubuntu. In summary, the news is wonderful. CI will test using autopkgtest, and all of Ubuntu will come under this umbrella -- phone, desktop, everything. If it's a package and it has tests, we will do all of the autopkgtest goodness currently being done for the distro.

The evening closed with a bit of fun provided by a game-making hackathon using bacon2d and the hilariously horrible "Turkish Star Wars". We could always use more games in the Ubuntu app store, and I hear there might even still be a Pioneers t-shirt or two left if you get it in early!

Read more
Nicholas Skaggs

Sprinting in DC: Tuesday

This week, my team and I are sprinting with many of the core app developers and other folks inside of Ubuntu Engineering. Each day I'm attempting to give you a glimpse of what's happening.

On Tuesday I was finally able to sit down with the team and plan our week. In addition I was able to plan some of the work I had in mind with the community folks working on the core apps. Being obsessed with testing, my primary goals this week are centered around quality. Namely I want to make it easier for developers to write tests. Asking them to write tests is much easier when it's easy to do so. Fortunately, I think (hope?) all of the community core apps developers recognize the benefits to tests and thus are motivated to drive maturity into the testing story.

I'm also keen to work on the manual testing story. The community is imperative in helping test images for not only Ubuntu, but also all of its flavors. Seriously, you should say thank you to those folks helping make sure your install of Ubuntu works well. They are busy this week helping make sure Utopic is as good as it can be. Rock on, image testers! But the tools and process used weigh on my mind, and I'm keen to chat later in the week with the Canonical QA team and get their feedback.

During the day I attended sessions regarding changes and tweaks to the CI process. For core apps developers, errors in Jenkins should be easier to replicate after these changes. CI will be moving to utilizing adt-run (autopkgtest) for their test execution (and you should too!). They will also provide the exact commands used to run the test. That means you can easily duplicate the results on the dashboard locally and fix the issues found. No more "works on my box" excuses!
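
As an illustration of what such a command might look like (the package and schroot names here are invented):

    # run a package's autopkgtests locally in a utopic schroot, mirroring what CI does
    adt-run my-source-package --- schroot utopic-amd64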

I also met the team responsible for the application store and gave them feedback on the application submission process. Submitting apps is already so simple, but even more cool things are happening on this front.

The end of the evening found us shuffling into cabs for a team dinner. We had a long table of folks eating Italian food and getting to know each other better.


After dinner, I pressured a few folks into having some dessert and ordered a sorbet for myself. After receiving no less than 4 fruit sorbets due to a misunderstanding, I began carving the fruits and sending plates of sorbet down the table. My testcase failed however when the plates all came back :-(



Read more
pitti

I’m on my way home from Düsseldorf where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised how huge LinuxCon was; there were about 1,500 people there! Certainly much more than last year in New Orleans.

Containers (in both LXC and Docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where these weren’t mentioned at all, and (what felt like) half of the presentations were either about how to improve them, or how to use these technologies to solve problems. For example, some people/companies really take LXC to the max and try to do everything in them, including tasks which in the past you had only considered full VMs for, like untrusted third-party tenants. There was an interesting talk about how to secure networking for containers, and pretty much everyone uses Docker or LXC now to deploy workloads and run CI tests. There are projects like “fleet” which manage systemd jobs across an entire cluster of containers (a distributed task scheduler), or like project-builder.org which auto-builds packages from each commit of projects.

Another common topic is the trend towards building/shipping complete (r/o) system images, atomic updates and all that goodness. The central thing here was certainly “Stateless systems, factory reset, and golden images” which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go, as our current solution on Ubuntu Touch (i. e. Ubuntu’s system-image) is far too limited and static yet, it doesn’t extend to desktops/servers/cloud workloads at all. It’s also a lot of work to implement this properly, so it’s certainly understandable that we took that shortcut for prototyping and the relatively limited Touch phone environment.

At Plumbers my main occupations were the highly interesting LXC track, to see what’s coming in the container world, and the systemd hackfest. At the latter I was again mostly listening (after all, I’m still learning most of the internals there..) and was able to work on some cleanups and improvements, like getting rid of some of Debian’s patches and properly running the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some particular proposed new features. Looks like I’m the de-facto maintainer now, so I’ll need to spend some time soon to review/include/clean up some much-requested little features and some fixes.

All in all a great week to meet some fellows of the FOSS world again, getting to know a lot of new interesting people and projects, and re-learning to drink beer in the evening (I hardly drink any at home :-P).

If you are interested you can also see my raw notes, but beware that they are mostly just scribblings.

Now, off to next week’s Canonical meeting in Washington, DC!

Read more
Michael Hall

So it’s finally happened: one of my first Ubuntu SDK apps has reached an official 1.0 release. And I think we all know what that means. Yup, it’s time to scrap the code and start over.

It’s a well-established mantra in software development, codified by Fred Brooks, that you will end up throwing away your first attempt at a new project. The releases between 0.1 and 0.9 are a written history of your education about the problem, the tools, or the language you are learning. And learn I did: I wrote a whole series of posts about my adventures in writing uReadIt. Now it’s time to put all of that learning to good use.

Oftentimes projects spend an extremely long time in this 0.x stage, getting ever closer but never reaching that 1.0 release. This isn’t because they think 1.0 should wait until the codebase is perfect; I don’t think anybody expects 1.0 to be perfect. 1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.

So now I’m going to start on uReadIt 2.0, starting fresh, with the latest Ubuntu UI Toolkit and platform APIs. It won’t be just a feature-for-feature rewrite either, I plan to make this a great Reddit client for both the phone and desktop user. To that end, I plan to add the following:

  • A full Javascript library for interacting with the Reddit API (see the sketch just after this list)
  • User account support, which additionally will allow:
    • Posting articles & comments
    • Reading messages in your inbox
    • Upvoting and downvoting articles and comments
  • Convergence from the start, so it’s usable on the desktop as well
  • Re-introduce link sharing via Content-Hub
  • Take advantage of new features in the UITK such as UbuntuListView filtering & pull-to-refresh, and left/right swipe gestures on ListItems
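
To give a rough idea of the data that planned JavaScript library will be wrapping, here is a minimal sketch against Reddit’s public JSON listings from the command line; the subreddit, limit and user-agent string are just examples, and jq is only used here for readability:

  $ curl -s -A 'ureadit-sketch/0.1' 'https://www.reddit.com/r/Ubuntu/new.json?limit=5' \
      | jq -r '.data.children[].data.title'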

Another change, which I talked about in a previous post, will be to the license of the application. Where uReadIt 1.0 is GPLv3, the next release will be under a BSD license.

Read more
pitti

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~4,000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static, manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.
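
As a rough sketch, preparing such a controller node could look like the following; the package names are real, but the credential values and Keystone endpoint are placeholders for whatever your cloud’s openrc file provides:

  $ sudo apt-get install autopkgtest python-novaclient
  $ export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
  $ export OS_USERNAME=myuser
  $ export OS_PASSWORD=secret
  $ export OS_TENANT_NAME=myproject
  $ export OS_REGION_NAME=myregion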

The most important preparation step for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: you need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to the local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored all the knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific “daily” image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter it takes is the name of the pristine image to base it on. This was tested on Canonical’s Bootstack cloud, so it might need some adjustments on other clouds.
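
For illustration, the manual steps which that script automates might look roughly like this; the instance name, flavor, key name and IP are placeholders, and the real script may differ in the details:

  $ nova boot --image ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img \
        --flavor m1.small --key-name mykey adt-image-builder
  $ ssh ubuntu@<instance IP> sudo /usr/share/autopkgtest/adt-setup-vm
  $ nova image-create adt-image-builder adt-utopic-amd64
  $ nova delete adt-image-builder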

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the “most demanding” tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an “always need this constant to select a non-default network” option that is specific to Canonical Bootstack.

Finally, let’s run the packages from above using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, both due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Read more
Michael Hall

But it isn’t perfect. And that, in my opinion, is okay. I’m not perfect, and neither are you, but you are still wonderful too.

I was asked, not too long ago, what I hated about the community. The truth, then and now, is that I don’t hate anything about it. There is a lot I don’t like about what happens, of course, but nothing that I hate. I make an effort to understand people, to “grok” them if I may borrow the word from Heinlein. When you understand somebody, or in this case a community of somebodies, you understand the whole of them, the good and the bad. Now understanding the bad parts doesn’t make them any less bad, but it does provide opportunities for correcting or removing them that you don’t get otherwise.

You reap what you sow

People will usually respond in kind with the way they are treated. I try to treat everybody I interact with respectfully, kindly, and rationally, and I’ve found that I am treated that way back. But if somebody is prone to arrogance or cruelty or passion, they will find far more of that treatment given back to them than the positive kinds. They are quite often shocked when this happens. But when you are a source of negativity you drive away people who are looking for something positive, and attract people who are looking for something negative. It’s not absolute: nice people will have some unhappy followers and grumpy people will have some delightful ones, but on average you will be surrounded by people who behave like you.

Don’t get even, get better

An eye for an eye makes the whole world blind, as the old saying goes. When somebody is rude or disrespectful to us, it’s easy to give in to the desire to be rude and disrespectful back. When somebody calls us out on something, especially in public, we want to call them out on their own problems to show everybody that they are just as bad. This might feel good in the short term, but it causes long-term harm to both the person who does it and the community they are a part of. This ties into what I wrote above, because even if you aren’t naturally a negative person, if you respond to negativity with more of the same, you’ll ultimately share the same fate. Instead, use that negativity as fuel to drive you forward in a positive way; respond with coolness, thoughtfulness and introspection, and not only will you disarm the person who started it, you’ll attract far more of the kind of people and interactions that you want.

Know your audience

Your audience isn’t the person or people you are talking to. Your audience is the people who hear you. A common defense of Linus’ berating of kernel contributors is that he only does it to people he knows can take it. This defense is almost always countered, quite properly, by somebody pointing out that his actions are seen by far more than just their intended recipient. Whenever you interact with any member of your community in a public space, such as a forum or mailing list, treat it as if you were interacting with every member, because you are. Again, if you perpetuate negativity in your community, you will foster negativity in your community, either directly in response to you or indirectly by driving away those who are more positive in nature. Linus’ actions might be seen as a joke, or as necessary “tough love” to get the job done, but the LKML has a reputation of being inhospitable to potential contributors in no small part because of them. You can gather a large number of negative, or negativity-accepting, people into a community and get a lot of work done, but it’s easier, and in my opinion better, to have a large number of positive people doing it.

Monoculture is dangerous

I think all of us in the open source community know this, and most of us have said it at least once to somebody else. As noted security researcher Bruce Schneier says, “monoculture is bad; embrace diversity or die along with everyone else.” But it’s not just dangerous for software and agriculture, it’s dangerous to communities too. Communities need, desperately need, diversity, and not just for the immediate benefits that various opinions and perspectives bring. Including minorities in your community will point out flaws you didn’t know existed because they didn’t affect anyone else; a distro-specific bug in upstream is still a bug, and a minority-specific flaw in your community is still a flaw. Communities that are almost all male, or white, or western, aren’t necessarily bad because of their monoculture, but they should certainly consider themselves vulnerable and deficient because of it. Bringing in diversity will strengthen your community, and adding a minority contributor will ultimately benefit a project more than adding another member of the majority. When somebody from a minority tells you there is a problem in your community that you didn’t see, don’t try to defend it by pointing out that it doesn’t affect you; instead, treat it like you would a normal bug report from somebody on different hardware than yours.

Good people are human too

The appendix is a funny organ. Most of the time it’s just there, innocuous or maybe even slightly helpful. But every so often one happens, for whatever reason, to explode and try to kill the rest of the body. People in a community do this too. I’ve seen a number of people who were good or even great contributors who, for whatever reason, exploded and threatened to take down anything they were a part of when it happened. But these people are no more malevolent than your appendix is; they aren’t bad, even if they do need to be removed in order to avoid lasting harm to the rest of the body. Sometimes, once whatever caused their eruption has passed, these people can come back to being a constructive part of your community.

Love the whole, not the parts

When you look at it, all of it, the open source community is a marvel of collaboration, of friendship and family. Yes, family. I know I’m not alone in feeling this way about people I may not have ever met in person. And just like family you love them during the good and the bad. There are some annoying, obnoxious people in our family. There are good people who are sometimes annoying and obnoxious. But neither of those truths changes the fact that we are still a part of an amazing, inspiring, wonderful community of open source contributors and enthusiasts.

Read more
Dustin Kirkland

A StackExchange question back in February of this year inspired a new feature in Byobu that I had been thinking about for quite some time:

Wouldn't it be nice to have a hot key in Byobu that would send a command to multiple splits (or windows)?

This feature was added and is available in Byobu 5.73 and newer (in Ubuntu 14.04 and newer, and available in the Byobu PPA for older Ubuntu releases).

I actually use this feature all the time, to update packages across multiple computers.  Of course, Landscape is a fantastic way to do this as well.  But if you don't have access to Landscape, you can always do this very simply with Byobu!

Create some splits, using Ctrl-F2 and Shift-F2, and in each split, ssh into a target Ubuntu (or Debian) machine.
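
For example, each split would get one connection like the following (the hostnames here are just placeholders for your own machines):

ssh ubuntu@web1.example.com
ssh ubuntu@web2.example.com
ssh ubuntu@db1.example.com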

Now, use Shift-F9 to open up the purple prompt at the bottom of your screen.  Here, you enter the command you want to run on each split.  First, you might want to run:

sudo true

This will prompt you for your password, if you don't already have root or sudo access.  You might need to use Shift-Up, Shift-Down, Shift-Left, Shift-Right to move around your splits, and enter passwords.

Now, update your package lists:

sudo apt-get update

And now, apply your updates:

sudo apt-get dist-upgrade

Here's a video to demonstrate!


On a related note, another user-requested feature has been added to simultaneously synchronize this behavior among all splits.  You'll need the latest version of Byobu, 5.87, which will be in Ubuntu 14.10 (Utopic).  Here, you'll press Alt-F9 and just start typing!  Another demonstration video here...




Cheers,
Dustin

Read more