Canonical Voices

Posts tagged with 'ubuntu'

Daniel Holbach

I’m very happy that folks took notes during and after the meeting to bring up their ideas, thoughts, concerns and plans. It got a bit unwieldy, so Elfy put up a pad which summarises it and is meant to discuss actions and proposals.

Today we are going to have a meeting to discuss what’s on the “actions” pad. That’s why I thought it’d be handy to put together a bit of a summary of what people generally brought up. They’re not my thoughts, I’m just putting them up for further discussion.

Problem statements

  • Feeling that people innovate *with* Ubuntu, not *in* Ubuntu.
  • Perception of contributor drop in “older” parts of the community.
    • Less activity at UDS/vUDS/UOS events (this was discussed at UOS too; maybe we need a committee which finds a new vision for Ubuntu Community Planning?)
    • Less activity in LoCos (lacking a sense of purpose?)
    • No drop in members/developers.
  • Less activity in Canonical-led projects.
  • We don’t spend marketing money on social media. Build a pavement online.
  • Downloading a CD image is too much of a barrier for many.
  • Our “community infrastructure” did not scale with the amount of users.
  • Some discussion about it being hard becoming a LoCo team. Bureaucracy from the LoCo Council.
  • We don’t have enough time to train newcomers.
  • Language barriers make it hard for some to get involved.
  • Canonical does a bad job announcing their presence at events.


Questions

  • Why are fewer people innovating in Ubuntu? Is Canonical driving too much of Ubuntu?
  • Why aren’t more folks stepping up into leadership positions? Mentoring? Lack of opportunities? More delegation? Do leaders just come in and lead because they’re interested?
  • Lack of planning? Do we re-plan things at UOS events, because some stuff never gets done? Need more follow-through? More assessment?


Proposals

  • More clearly indicate Canonical-led projects? Detail active projects, with point of contact, etc.? Clean up moribund projects.
  • Make Ubuntu events more about “doing things with Ubuntu”?
  • Ubuntu Leadership Mentoring programme.
  • Form more of an Ubuntu ecosystem, allowing people to earn money with Ubuntu.

Join the hangout on Friday, 12th December 2014, 16:00 UTC.

Read more
Daniel Holbach

It’s fantastic that we have more discussion about where we want our community to go. We get ideas out of it, people communicate and get a common understanding of issues. Jono’s blog post and the ubuntu-community-team mailing list have generated a lot of good stuff already. Last week we had an IRC meeting with the CC and discussed governance and leadership there.

We took quite a few notes, and Elfy set up a doc where we note down actions. For the meeting, I would like to suggest the following:


  • use Elfy’s actions doc for submitting agenda items,
  • make sure your agenda item is a concrete proposal or something which could be turned into work items,
  • make sure you’re there,
  • add your name to it!

Looking forward to seeing you there! :-)

Read more
Dustin Kirkland

A couple of months ago, I re-introduced an old friend -- Ubuntu JeOS (Just enough OS) -- the smallest (merely 63MB compressed!) functional OS image that we can still call “Ubuntu”.  In fact, we call it Ubuntu Core.

That post was a prelude to something we’ve been actively developing at Canonical for most of 2014 -- Snappy Ubuntu Core!  Snappy Ubuntu combines the best of the ground-breaking image-based Ubuntu remix known as Ubuntu Touch for phones and tablets with the base Ubuntu server operating system trusted by millions of instances in the cloud.

Snappy introduces transactional updates and atomic, image based workflows -- old ideas implemented in databases for decades -- adapted to Ubuntu cloud and server ecosystems for the emerging cloud design patterns known as microservice architectures.

The underlying, base operating system is a very lean Ubuntu Core installation, running on a read-only system partition, much like your iOS, Android, or Ubuntu phone.  One or more “frameworks” can be installed through the snappy command, which is an adaptation of the click packaging system we developed for the Ubuntu Phone.  Perhaps the best sample framework is Docker.  Applications are also packaged and installed using snappy, but apps run within frameworks.  This means that any of the thousands of Docker images available in DockerHub are trivially installable as snap packages, running on the Docker framework in Snappy Ubuntu.

Take Snappy for a Drive

You can try Snappy for yourself in minutes!

You can download Snappy and launch it in a local virtual machine like this:

$ wget
$ kvm -m 512 -redir :2222::22 -redir :4443::443 ubuntu-core-alpha-01.img

Then, SSH into it with password 'ubuntu':

$ ssh -p 2222 ubuntu@localhost

At this point, you might want to poke around the system.  Take a look at the mount points, and perhaps try to touch or modify some files.

$ sudo rm /sbin/init
rm: cannot remove ‘/sbin/init’: Permission denied
$ sudo touch /foo
touch: cannot touch ‘/foo’: Permission denied
$ apt-get install docker
apt-get: command not found

Rather, let's have a look at the new snappy package manager:

$ sudo snappy --help

And now, let’s install the Docker framework:

$ sudo snappy install docker

At this point, we can do essentially anything available in the Docker ecosystem!

We’ve created some sample Snappy apps using existing Docker containers. For one example, let’s install OwnCloud:

$ sudo snappy install owncloud

This will take a little while to install, but eventually, you can point a browser at your own private OwnCloud image, running within a Docker container, on your brand new Ubuntu Snappy system.

We can also update the entire system with a simple command and a reboot:

$ sudo snappy versions
$ sudo snappy update
$ sudo reboot

And we can roll back to the previous version!

$ sudo snappy rollback
$ sudo reboot

Here's a short screencast of all of the above...

While the downloadable image is available for your local testing today, you will very soon be able to launch Snappy Ubuntu instances in your favorite public (Azure, GCE, AWS) and private clouds (OpenStack).


Read more

Ubuntu Core with Snappy was recently announced, and a key ingredient of snappy is security. Snappy applications are confined by AppArmor, and the confinement story for snappy is an evolution of the security model for Ubuntu Touch. The basic concepts for confined applications and the app store model apply to snappy applications as well. In short, snappy applications are confined using AppArmor by default, and this is achieved through an easy-to-understand, developer-friendly system. Read the snappy security specification for all the nitty-gritty details.
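To give a flavour of what this looks like in practice, a confined app on Ubuntu Touch declares its AppArmor policy in a small JSON file along these lines (a sketch: the policy group shown and the policy_version value are illustrative and release-dependent, so check the security specification for the current details):

```json
{
    "policy_groups": [
        "networking"
    ],
    "policy_version": 1.3
}
```

The packaging tools translate this high-level declaration into a full AppArmor profile, which is why developers rarely need to write raw AppArmor rules themselves.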

A developer doc will be published soon.

Filed under: canonical, security, ubuntu, ubuntu-server

Read more
Nicholas Skaggs

I thought I would add a little festivity to the holiday season, quality style. In case your holidays just are not the same without a little quality in your life, allow me to share how you can get involved.

There are opportunities for every role listed on the QA wiki. Testers and test writers are both needed. Testing and writing manual tests can be learned by anyone; no coding is required. That said, if you have skills or interest in technical work, I would encourage you to help out. You will learn by doing and get help from others while you do it.

Now onto the good stuff! What can you do to help ubuntu this cycle from a quality perspective?

There is an ever present need for brave folks willing to simply run the development version of ubuntu and use it as a daily machine throughout the cycle. It's one of the best ways for us as a community to uncover bugs and issues, in particular things that regress from the previous release. Upgrade to vivid today and see what you can break!

This tool is written in Drupal 7 and runs the QA tracker sites, which are used to record and view the results of all of our manual testing efforts. Currently dkessel is leading the effort on implementing some needed UI changes. The code and more information about the project can be found on launchpad. The tracker is one of our primary tools and needs your help to become friendly for everyone to use.

In addition, a charm would be useful to simplify setting up a development environment. The charm can be based upon the existing drupal charm. At the moment this work is ready for someone to jump in.

Running unity8 as a full-time desktop is a personal goal I have for this cycle. I hope some others might also want to be early adopters and join me in this goal. For now you can help by testing the unity8 desktop. Have a look at running unity in lxc for an easy way to run unity8 today on your machine. Use it, test it, and offer feedback. I'll be talking more about unity8 as the cycle progresses and opportunities to test new features aimed at the desktop appear.

Core Apps
The core apps project is an excellent way to get involved. These applications have been lovingly developed by community members just like you. Many of the teams are looking for help in writing tests and for someone who can bring a testing mindset and eye to the work. As of this writing, the docviewer, terminal and calculator teams would specifically love your help. The core apps hackdays are happening this week; drop by and introduce yourself to get started!

Manual Tests
Like the sound of writing tests but the idea of writing code turns you off? Manual tests are needed as well! They are written in English and are easy to understand and write. Manual tests include everything you see on the qatracker and are managed as a launchpad project. This means you can pick a bug and "fix it" by submitting a merge request. The bugs involve both fixing existing tests as well as requests for new testcases.

As always there are images that need testing. Testing milestones occur later in the cycle which involve everyone helping to test a specific set of images. In the meantime, daily images are generated that have made it through the automated tests and are ready for manual testing. Booting an image in a live session is a great way to check for regressions on your machine. Doing this early in the cycle can help make sure your hardware and others like it experience a regression free upgrade when the time comes.

After subjecting software to testing, bugs are naturally found. These bugs then need to be verified and triaged. The bugsquadders, as they are called, would be happy to help you learn to categorize or triage bugs and do other tasks.

No matter how you choose to get involved, feel free to contact me for help if needed. Most of all, Happy Testing!

Read more
Daniel Holbach

The call for an Ubuntu Foundation has come up again. It has been discussed many times before, ever since an announcement was made many years ago which left a number of people confused about the state of things.

The way I understood the initial announcement was that a trust had been set up, so that if aliens ever kidnapped our fearless leader, or if he decided that beekeeping was more interesting than Ubuntu, we could still go on and bring the best flavour of linux to the world.

Ok, now back to the current discussion. An Ubuntu Foundation seems to have quite an appeal to some. The question to me is: which problems would it solve?

Looking at it from a very theoretical point of view, an Ubuntu foundation could be a place where you separate “commercial” from “public” interests, but how would this separation work? Who would work for which of the entities? Would people working for the Ubuntu foundation have to review Canonical’s paperwork before they can close deals? Would there be a board where decisions have to be pre-approved? Which separation would generally happen?

Right now, Ubuntu’s success is closely tied to Canonical’s success. I consider this a good thing. With every business win of Canonical, Ubuntu gets more exposure in the world. Canonical’s great work in the support team, in the OEM sector or when closing deals with governments benefits Ubuntu to a huge degree. It’s like two sides of a coin right now. Also: Canonical pays the bills for Ubuntu’s operations. Data centers, engineers, designers and others have to be paid.

In theory it all sounds fine: “you get to have a say”, “more transparency”, etc. I don’t think many realise, though, that this will mean that additional people will have to sift through legal and other documents, that more people will be busy writing reports and summarising discussions, that there will be more need for admin, that customers will have to wait longer, and that this will in general cost more time and money.

I believe that bringing in a new layer will bring incredible amounts of work and open up endless possibilities for politics and easily bring things to a stand-still.

Will this fix Ubuntu’s problems? I absolutely don’t think so. Could we be more open, more inspiring and more inviting? Sure, but demanding more transparency and more separation is not going to bring that.

Read more

To Win in Toulouse

Now the only thing a gambler needs
Is a suitcase and a trunk.
– Animals, The House of the Rising Sun

So, as many others, I have been to the LibreOffice Hackfest in Toulouse which — unlike many of our other Hackfests — was part of a bigger event: Capitole du Libre. As we had our own area and were not 30+ hackers, this also had the advantage that we got to work quicker. And while I still had some boring administrative work to do, this is a Hackfest where I actually got to do some coding. I looked for some bookmark-related bugs in Writer, but the first bugs I looked at were just too well suited to be Easy Hacks: fdo#51741 (“Deleting bookmark is not seen as modification of document”) and fdo#56116 (“Names of bookmarks should allow all characters which are valid in HTML anchor names (missing: ‘:’ and ‘.’)”). Both were made Easy Hacks and both are fixed on master now. I then fixed fdo#85542 (“DOCX import of overlapping bookmarks”), which proved slightly more work than expected, and provided a unittest for it to never come back. I later learned that the second part was entirely nonoptional, as Markus promised he would not have let me leave Toulouse without writing a unittest for committed code. I have to admit that that is a supportable position.

Toulouse Hackfest Room

Toulouse Hackfest Room

Scenes like the above were actually rather rare, as we were mostly working on our notebooks. One thing I came up with at the Hackfest, but didn’t finish there, was some clang plugins for finding cascading conditional ops and conditional ops that have assignments as a side effect in their midst. While I found nothing as mindboggling as the tweet that gave inspiration to these plugins in sw (Writer), I found some impressive expressions that certainly wouldn’t be a joy to step through in gdb (or even better: set a breakpoint in) when debugging, and fixed those. We probably could make a few Easy Hacks out of what these (or similar) plugins find outside of sw/ (I only looked there for now) — those are reasonably easy to refactor, but you don’t want to do that in the middle of a debugging session. While at it, I also looked at clang’s “value assigned, but never read” hints. Most were harmless, but also trivial to get rid of. On the other hand, some of those pointed to real logic errors that are otherwise hard to see. Like this one, which has been hiding — if git is to be believed — in plain sight ever since the code was originally open sourced in 2000. All in all, this experience is encouraging. Now that our coverity defect density is just a rounding error above zero, getting more fancy clang plugins might be promising.

Just one week after the Hackfest in Toulouse, there was another event LibreOffice took part in: the Bug Squashing Party in Munich. It’s encouraging to see Jonathan Riddell being a committer to LibreOffice too now. But that is not all; we have more events coming up: The Document Foundation and LibreOffice will have an assembly at 31c3 in Hamburg, and you are most welcome to drop by there! And then there will be FOSDEM 2015 in Brussels, where LibreOffice will be present as usual.

Read more
Nicholas Skaggs

Creating multi-arch click packages

Click packages are one of the pieces of new technology that drives the next version of ubuntu on the phone and desktop. In a nutshell, click packages allow application developers to easily package and deliver application updates independent of the distribution release or archive. Without going into the interesting technical merits and de-merits of click packages, this means the consumer can get faster application updates. But much of the discussion and usage of click packages until now has revolved around mobile. I wanted to talk about using click packages on the desktop and packaging clicks for multiple architectures.

The manifest file
Click packages follow a specific format. Click packages contain a payload of an application's libraries, code, artwork and resources, along with its needed external dependencies. The description of the package is found in the manifest file, which is what I'd like to talk about. The file must contain a few keys, but one of the recognized optional keys is architecture. This key allows specifying architectures the package will run on.

If an application contains no compiled code, simply use 'all' as the value for architecture. This accomplishes the goal of running on all supported architectures and many of the applications currently in the ubuntu touch store fall into this category. However, an increasing number of applications do contain compiled code. Here's how to enable support across architectures for projects with compiled code.
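As a sketch, the relevant part of a manifest.json for a pure-QML app with no compiled code might look like this (the name and version values are placeholders, and the other required manifest keys are omitted for brevity):

```json
{
    "name": "com.ubuntu.developer.example.myapp",
    "version": "0.1",
    "architecture": "all"
}
```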

Fat packages
The click format, along with the ubuntu touch store, fully supports specifying one or more values for specific architecture support inside the application manifest file. Those values follow the same format as dpkg architecture names. Now, in theory, if a project containing compiled code lists the architectures to support, click build should be able to build one package for all. However, for now this process requires a little manual intervention. So let’s talk about building a fat (or big boned!) package that contains support for multiple architectures inside a single click package.

Those who just want to skip ahead can check out the example package I put together using clock. This same package can be found in the store as multi-arch clock test. Feel free to install the click package on the desktop, the i386 emulator and an armhf device.

Building a click for a different architecture
To make a multi-arch package, a click package needs to be built for each desired architecture. Follow this tutorial for more information on how to create a click target for each architecture. Once all the targets are set up, use the ubuntu sdk to build a click for each target. The end result is a click file specific to each architecture.

For example in creating the clock package above, I built a click for amd64, i386 and armhf. Three files were generated:

Notice the handy naming scheme allows for easy differentiation as to which click belongs to which architecture. Next, extract the compiled code from each click package. This can be accomplished by utilizing dpkg. For example, substituting your click’s filename,

dpkg -x yourapp_amd64.click amd64

Do this for each package. The result should be a folder corresponding to each package architecture.

Next copy one version of the package for use as the base of multi-arch click package. In addition, remove all the compiled code under the lib folder. This folder will be populated with the extracted compiled code from the architecture specific click packages.

cp -r amd64 multi
rm -rf multi/lib/*

Now there is a folder for each click package, and a new folder named multi that contains the application, minus any compiled code.

Creating the multi-arch click
Inside the extracted click packages is a lib folder. The compiled modules should be arranged inside, potentially inside an architecture subfolder (depending on how the package is built).

Copy all of the compiled modules into a new folder inside the lib folder of the multi directory. The folder name should correspond to the architecture of the compiled code. Here’s the list of folder names for ARM, i386, and amd64 respectively:

  • arm-linux-gnueabihf
  • i386-linux-gnu
  • x86_64-linux-gnu

You can check the naming from an intended device by looking in the application-click.conf file.

grep ARCH /usr/share/upstart/sessions/application-click.conf

To use the clock package as an example again, here's a quick look at the folder structure:


The contents of lib/* from each click package I built earlier is under a corresponding folder inside the multi/lib directory. So for example, the lib folder from the i386 click became lib/i386-linux-gnu/.

Presto, magic package time! 
Finally the manifest.json file needs to be updated to reflect support for the desired architectures. Inside the manifest.json file under the multi directory, edit the architecture key values to list all supported architectures for the new package. For example to list support for ARM and x86 architectures,

"architecture": ["armhf", "i386", "amd64"],

To build the new package, execute click build multi. The resulting click should build and be named with a prefix. This click can be installed on any of the specified architectures and is ready to be uploaded to the store.

Caveats, nibbly bits and bugs
So apart from click not automagically building these packages, there is one other bug as of this writing. The resulting multi-arch click will fail the automated store review and instead enter manual review. To work around this, request a manual review. Upon approval, the application will enter the store as usual.

In summary, to create a multi-arch click package, build a click for each supported architecture. Then pull the compiled library code from each click and place it into a single click package. Next, modify the click manifest file to state all of the architectures supported. Finally, rebuild the click package!
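The merge step in that summary can be sketched end-to-end with a small shell script. Everything here is a mock stand-in: the per-architecture directories imitate what "dpkg -x" would produce, and libdemo.so is a placeholder for your app's real compiled modules.

```shell
set -e
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Pretend these directories came from "dpkg -x <click> <arch>":
for arch in amd64 i386 armhf; do
    mkdir -p "$arch/lib"
    touch "$arch/lib/libdemo.so"    # mock compiled module
done

# Use one extracted tree as the base, minus its compiled code:
cp -r amd64 multi
rm -rf multi/lib/*

# Re-add each architecture's modules under its dpkg triplet directory:
for pair in amd64:x86_64-linux-gnu i386:i386-linux-gnu armhf:arm-linux-gnueabihf; do
    arch=${pair%%:*}
    triplet=${pair#*:}
    mkdir -p "multi/lib/$triplet"
    cp -r "$arch/lib/." "multi/lib/$triplet/"
done

# multi/lib now holds one triplet directory per architecture
ls multi/lib
```

After editing multi/manifest.json as described below, the multi directory is what you would hand to click build.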

I trust this explanation and example provides encouragement to include support for x86 platforms when creating and uploading a click package to the store. Undoubtedly there are other ways to build a multi-arch click; simply ensure all the compiled code for each architecture is included inside the click package. Feel free to experiment!

If you have any questions as usual feel free to contact me. I look forward to seeing more applications in the store from my unity8 desktop!

Read more
Daniel Holbach

Despite being an “old” technology and having its problems, we still use mailing lists… a lot.  Some of the lists were cleaned up by the Community Council a while ago, especially ones which had been created and then forgotten.

We do have a number of mailing lists though which are still active, but have the problem of not having enough (or enough active) moderators on board. What then happens is this:

List moderation

… which sucks.

It’s not very nice if you have lots and lots of good discussion not happening just because you had no time to tend to the moderation queue.

Some mailing lists receive quite a bit of spam, others get a lot of mails from folks who are not subscribed yet, but this really shouldn’t be a problem. If you run a popular mailing list and moderation gets too much of a hassle, please consider adding more moderators – if you ask nicely a bunch of folks will be happy to help out.

So my advice:

  1. If you ever registered a mailing list, please have a look at its moderation queue and see if you need help.
  2. If yes, please add more moderators.
  3. If you don’t use it yet, use listadmin – it’s the best thing since sliced bread, and keeping up with moderation in the future will be no problem at all.
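For reference, listadmin is driven by a small ~/.listadmin.ini file. A minimal sketch might look like this (the address, password and list name are placeholders; check the listadmin man page for the directives your version supports):

```ini
# ~/.listadmin.ini (sketch; all values are placeholders)
username moderator@example.com
password hunter2
default skip
ubuntu-example-list@lists.ubuntu.com
```

Running listadmin then walks you through the held messages for each listed list, so clearing a queue takes seconds rather than a trip through the Mailman web UI.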

Read more
Daniel Holbach

I’m very happy that the ubuntu-community-team mailing list is seeing lots of discussion right now. It shows how many people deeply care about the direction of Ubuntu’s community and have ideas for how to improve things.

Looking back through the discussion of the last weeks, I can’t help but notice a few issues we are running into – issues all too common on open source project mailing lists. Maybe you all have some ideas on how we could improve the discussion?

  • Bikeshedding
    The term bikeshedding has a negative connotation, but it’s a very natural phenomenon. Rouven, a good friend of mine, recently pointed out that the recent proposal to change the statutes of the association behind our coworking space (which took a long time to put together) received no comments on the internal mailing list, whereas a change of the coffee brand seemed to invite comments from everyone.
    It is quite natural for this to happen. In a bigger proposal it’s natural for us to comment on whatever is tangible. In our community of more technical people, you will often see discussions about which technology to use, rather than answers which try to comment on all aspects.
  • Idea overload
    Being a creative community can sometimes be a bit of a curse. You end up with different proposals plus additional ideas and nobody or few to actually implement them.
  • Huge proposals
    Sometimes you see a mail on a list which lists a huge load of different things. Without somebody who tracks where the discussion is going, summing things up, making lists of work items, etc. it will be very hard to convert a discussion into an actual project.
  • Derailing the conversation
    You’ve all seen this happen: you start the conversation with a specific problem or proposal and end up discussing something entirely different.

All of the above are nothing new, but in a part of our project where discussions tend to be quite general and where we have contributors from many different parts of the community some of the above are even more true.

Personally I feel that all of the above are fine problems to have. We are creative and we have ideas on how to improve things – that’s great. In my mind I always treated the ubuntu-community-team mailing list as a place to kick around ideas, to chat and to hang out and see what others are doing.

I care a lot about our community, and I’d still like to figure out how we can avoid the risk of some of the better ideas falling through the cracks. What do you think would help?

Maybe a meeting every two weeks, to pick up some of the recent discussion and see together as a group if we can convert some of it into something which actually flies?

Read more
Michael Hall

The Ubuntu Core Apps project has proven that the Ubuntu community is not only capable of building fantastic software, but also of meeting the same standards, deadlines and requirements that are expected from projects developed by employees. One of the things that I think made Core Apps so successful was the project management support that they all received from Alan Pope.

Project management is common, even expected, for software developed commercially, but it’s just as often missing from community projects. It’s time to change that. I’m kicking off a new personal[1] project: I’m calling it the Ubuntu Incubator.

The purpose of the Incubator is to help community projects bootstrap themselves, obtain the resources they need to run their project, and put together a solid plan that will set them on a successful, sustainable path.

To that end I’m going to devote one month to a single project at a time. I will meet with the project members regularly (weekly or every-other week), help define a scope for their project, create a spec, define work items and assign them to milestones. I will help them get resources from other parts of the community and Canonical when they need them, promote their work and assist in recruiting contributors. All of the important things that a project needs, other than direct contributions to the final product.

I’m intentionally keeping the scope of my involvement very focused and brief. I don’t want to take over anybody’s project or be a co-founder. I will take on only one project at a time, so that project gets all of my attention during their incubation period. The incubation period itself is very short, just one month, so that I will focus on getting them setup, not on running them.  Once I finish with one project, I will move on to the next[2].

How will I choose which project to incubate? Since it’s my time, it’ll be my choice, but the most important factor will be whether or not a project is ready to be incubated. “Ready” means they are more than just an idea: they are both possible and feasible to accomplish with the person or people already involved, the implementation details have been mostly figured out, and they just need help getting the ball rolling. “Ready” also means it’s not an existing project looking for a boost; while we need to support those projects too, that’s not what the Incubator is for.

So, if you have a project that’s ready to go, but you need a little help taking that first step, you can let me know by adding your project’s information to this etherpad doc[3]. I’ll review each one and let you know if I think it’s ready, needs to be defined a little bit more, or isn’t a good candidate. Then each month I’ll pick one and reach out to them to get started.

Now, this part is important: don’t wait for me! I want to speed up community innovation, not slow it down, so even if I add your project to the “Ready” queue, keep on doing what you would do otherwise, because I have no idea when (or if) I will be able to get to yours. Also, if there are any other community leaders with project management experience who have the time and desire to help incubate one of these project, go ahead and claim it and reach out to that team.

[1] While this complements my regular job, it’s not something I’ve been asked to do by Canonical, and to be honest I have enough Canonical-defined tasks to consume my working hours. This is me with just my community hat on, and I’m inclined to keep it that way.

[2] I’m not going to forget about projects after their month is up, but you get 100% of the time I spend on incubation during your month, after that my time will be devoted to somebody else.

[3] I’m using Etherpad to keep the process as lightweight as possible, if we need something better in the future we’ll adopt it then.

Read more
Dustin Kirkland

Try These 7 Tips in Your Next Blog Post

In a presentation to my colleagues last week, I shared a few tips I've learned over the past 8 years, maintaining a reasonably active and read blog.  I'm delighted to share these with you now!

1. Keep it short and sweet

Too often, we spend hours or days working on a blog post, trying to create an epic tome.  I have dozens of draft posts I'll never finish, as they're just too ambitious, and I should really break them down into shorter, more manageable articles.

Above, you can see Abraham Lincoln's Gettysburg Address, from November 19, 1863.  It's merely 3 paragraphs, 10 sentences, and less than 300 words.  And yet it's one of the most powerful messages ever delivered in American history.  Lincoln wrote it himself on the train to Gettysburg, and delivered it as a speech in less than 2 minutes.

2. Use memorable imagery

Particularly, you need one striking image at the top of your post.  This is what most automatic syndicates or social media platforms will pick up and share, and will make the first impression on phones and tablets.

3. Pen a catchy, pithy title

More people will see or read your title than the post itself.  It's sort of like the chorus to that song you know, but you don't know the rest of the lyrics.  A good title attracts readers and invites re-shares.

4. Publish midweek

This is probably more applicable for professional, rather than hobbyist, topics, but the data I have on my blog (1.7 million unique page views over 8 years), is that the majority of traffic lands on Tuesday, Wednesday, and Thursday.  While I'm writing this very post on a rainy Saturday morning over a cup of coffee, I've scheduled it to publish at 8:17am (US Central time) on the following Tuesday morning.

5. Share to your social media circles

My posts are generally professional in nature, so I tend to share them on G+, Twitter, and LinkedIn.  Facebook is really more of a family-only thing for me, but you might choose to share your posts there too.  With the lamentable death of Google Reader a few years ago, it's more important than ever to share links to posts on your social media platforms.

6. Hope for syndication, but never expect it

So this is the one "tip" that's really out of your control.  If you ever wake up one morning to an overflowing inbox, congratulations -- your post just went "viral".  Unfortunately, this either "happens", or it "doesn't".  In fact, it almost always "doesn't" for most of us.

7. Engage with comments only when it makes sense

If you choose to use a blog platform that allows comments (and I do recommend you do), then be a little careful about when and how to engage in the comments.  You can easily find yourself overwhelmed with vitriol and controversy.  You might get a pat on the back or two.  More likely, though, you'll end up under a bridge getting pounded by a troll.  Rather than waste your time fighting a silly battle with someone who'll never admit defeat, start writing your next post.  I ignore trolls entirely.

A Case Study

As a case study, I'll take as an example the most successful post I've written: Fingerprints are Usernames, Not Passwords, with nearly a million unique page views.

  1. The entire post is short and sweet, weighing in at under 500 words and about 20 sentences
  2. One iconic, remarkable image at the top
  3. A succinct, expressive title
  4. Published on Tuesday, October 1, 2013
  5. 1561 +1's on G+, 168 retweets on Twitter
  6. Shared on Reddit and HackerNews (twice)
  7. 434 comments, some not so nice

Read more
Daniel Holbach

I Am Who I Am Because Of Who We All Are

I read the “We Are Not Loco” post a few days ago. I could understand that Randall wanted to further liberate his team in terms of creativity and everything else, but to me it feels like the wrong approach.

The post makes a simple promise: do away with bureaucracy, rename the team to use a less ambiguous name, JFDI! and things are going to be a lot better. This sounds compelling. We all like simplicity; in a faster and more complicated world we all would like things to be simpler again.

What I can also agree with is the general sense of empowerment. If you’re a member of a team somewhere or want to become part of one: go ahead and do awesome things – your team will appreciate your hard work and your ideas.

So what was it in the post that made me sad? It took me a while to find out what specifically it was. The feeling set in when I realised somebody turned their back on a world-wide community and said “all right, we’re doing our own thing – what we used to do together is, to us, just old baggage”.

Sure, it’s always easier not having to discuss things in a big team. Especially if you want to agree on something like a name or any other small detail this might take ages. On the other hand: the world-wide LoCo community has achieved a lot of fantastic things together: there are lots of coordinated events around the world, there’s the LoCo team portal, and most importantly, there’s a common understanding of what teams can do and we all draw inspiration from each other’s teams. By making this a global initiative we created numerous avenues where new contributors find like-minded individuals (who all live in different places on the globe, but share the same love for Ubuntu and organising local events and activities). Here we can learn from each other, experiment and find out together what the best practices for local community awesomeness are.

Going away and equating the global LoCo community with bureaucracy to me is desolidarisation – it’s quite the opposite of “I Am Who I Am Because Of Who We All Are”.

Personally I would have preferred a set of targeted discussions that try to fix processes, improve communication channels and inspire a new round of leaders for Ubuntu LoCo teams. Not everything you do in a LoCo team has to be approved by the entire set of other teams; the actual reality in the LoCo world is quite different from that.

If you have ideas or suggestions to discuss, feel free to join our loco-contacts mailing list and bring them up there! It’s your chance to hang out with a lot of fun people from around the globe. :-)

Read more
Dustin Kirkland

I had the great pleasure to deliver a 90 minute talk at the USENIX LISA14 conference, in Seattle, Washington.

During the course of the talk, we managed to:

  • Deploy OpenStack Juno across 6 physical nodes, on an Orange Box on stage
  • Explain all of the major components of OpenStack (Nova, Neutron, Swift, Cinder, Horizon, Keystone, Glance, Ceilometer, Heat, Trove, Sahara)
  • Explore the deployed OpenStack cloud's Horizon interface in depth
  • Configure Neutron networking with internal and external networks, as well as a gateway and a router
  • Set up our security groups to open ICMP and SSH ports
  • Upload an SSH keypair
  • Modify the flavor parameters
  • Update a bunch of quotas
  • Add multiple images to Glance
  • Launch some instances until we max out our hypervisor limits
  • Scale up the Nova Compute nodes from 3 units to 6 units
  • Deploy a real workload (Hadoop + Hive + Kibana + Elastic Search)
  • Then, we deleted the entire environment, and ran it all over again from scratch, non-stop
Slides and a full video are below.  Enjoy!


Read more

My setup:

Laptop with Ubuntu 14.04 LTS and a Nexus 4.

I also assume you are comfortable with the command line, as you will need to run some commands from the terminal.

If you are already running Android 5.0, you can skip Step 1 and go directly to Step 2 to root your device. In my case, I didn’t wait for the OTA update, but if you prefer to play it safe, get the Android 5.0 update and then start.


Make sure your laptop is charged or plugged into the power and your phone is charged too.

Take a full system backup, because these steps wipe Android clean.

And ensure your laptop doesn’t go into suspend mode while you do this.

Now install a few packages:

sudo apt-get install android-tools-adb android-tools-fastboot

sudo add-apt-repository ppa:phablet-team/tools
sudo apt-get update
sudo apt-get install phablet-tools

Step 1: Installing Android 5.0

NOTE: This will wipe the system clean so if you haven’t backed up, go and back up first.

Download the correct file from the Google site:

For the Nexus 4, it is the occam-lrx21t factory image.


Now extract the files; this will create a directory.

Change into that directory, occam-lrx21t.

Now boot into the bootloader. Remember this step, as you will need to do it a few times.

adb reboot bootloader

If adb is not able to find the device, you can boot into the bootloader manually.

For the Nexus 4, hold the volume-down button and press the power button.

Now unlock the device with the following command.

fastboot oem unlock

Now flash the Android 5.0 Image by running this command from the directory above.


After a few minutes, you will see the new Android 5.0 boot-up logo on the phone. This will take a few minutes to complete. Grab a coffee or your favourite beverage.

Once this is complete, wait – you still have to root the device.

Don’t do much on your Android yet, as the unlock process may wipe your data!

Step 2: Rooting Android.

Download the rooting script from Chainfire:

When you scroll down, you will see the actual download link.

This will download this file:

Extract the archive and change into the resulting directory.

Now boot into the bootloader with your preferred method. I used volume-down button + power on.

Now type these commands:

chmod +x 

Step 3: Installing MultiROM Manager

Find the app in Google Play and install it.

Start the app. It will ask you to install 3 things; go ahead and install them.


It will also boot into recovery mode.

Once it’s done, it will reboot and start Android.

Now start the MultiROM Manager again.

You should see the option to install Ubuntu Touch.


Step 4: Installing Ubuntu Touch

If you want demo files, select the -demo variant.

You can choose the stable or the development version; you can also install both, one by one.

This step took the longest for me. Go take a nap!

It will ask you to reboot once.


Once this is done, there is no notification that it has completed.

When you reboot, it will give you the option to boot internal (Android) or Ubuntu Touch. Select Ubuntu Touch to boot into it and set it up.

Final housekeeping: boot into the bootloader and lock it again:

fastboot oem lock

Click on Start to reboot your system.

Note: If you upgrade your Android, you will lose the dual boot and have to start again from Step 2, which may differ depending on your Android version.

Read more
Nicholas Skaggs

Virtual Hugs of appreciation!

Because I was asleep at the wheel (err, keyboard) yesterday I failed to express my appreciation for some folks. It's a day for hugging! And I missed it!

I gave everyone a shoutout on social media, but since planet looks best overrun with thank you posts, I shall blog it as well!

Thank you to:

David Planella for being the rock that has anchored the team.
Leo Arias for being super awesome and making testing what it is today on all the core apps.
Carla Sella for working tirelessly on many, many different things in the years I've known her. She never gives up (even when I've tried to!), and has many successes to her name for that reason.
Nekhelesh Ramananthan for always being willing to let clock app be the guinea pig
Elfy, for rocking the manual tests project. Seriously awesome work. Every time you use the tracker, just know Elfy has been a part of making that testcase happen.
Jean-Baptiste Lallement and Martin Pitt for making some of my many wishes come true over the years with quality community efforts. Autopkgtest is but one of these.

And many more. Plus some I've forgotten. I can't give hugs to everyone, but I'm willing to try!

To everyone in the Ubuntu community, thanks for making Ubuntu the wonderful community it is!

Read more
David Henningsson

PulseAudio buffers and protocol

This is a technical post about PulseAudio internals and the protocol improvements in the upcoming PulseAudio 6.0 release.

PulseAudio memory copies and buffering

PulseAudio is said to have a “zero-copy” architecture. So let’s look at what copies and buffers are involved in a typical playback scenario.

Client side

When the PulseAudio server and client run as the same user, PulseAudio enables shared memory (SHM) for audio data. (In other cases, SHM is disabled for security reasons.) Applications can use pa_stream_begin_write to get a pointer directly into the SHM buffer. When using pa_stream_write or the ALSA plugin, there will be one memory copy into the SHM.

Server resampling and remapping

On the server side, the server might need to convert the stream into a format that fits the hardware (and potential other streams that might be running simultaneously). This step is skipped if deemed unnecessary.

First, the samples are converted to a “work format”, either signed 16-bit or float 32-bit (mainly depending on resampler requirements).
Next, if resampling is necessary, it is performed using an external resampler library, the default being Speex.
Then, if remapping is necessary, e.g. if the input is mono and the output is stereo, that is performed as well. Finally, the samples are converted to a format that the hardware supports.

So, in the worst case, there might be up to four different buffers involved here (first: after converting to the “work format”; second: after resampling; third: after remapping; fourth: after converting to a hardware-supported format), and in the best case, this step is entirely skipped.

Mixing and hardware output

PulseAudio’s built-in mixer multiplies each channel of each stream with a volume factor and writes the result to the hardware. In case the hardware supports mmap (memory mapping), we write the mix result directly into the DMA buffers.


The best we can do is one copy in total, from the SHM buffer directly into the DMA hardware buffer. I hope this clears up any confusion about what PulseAudio’s advertised “zero copy” capability means in practice.

However, memory copies are not the only thing you want to avoid to get good performance, which brings us to the next point:

Protocol improvements in 6.0

PulseAudio does pretty well CPU-wise for high-latency loads (e.g. music playback), but a bit worse for low-latency loads (e.g. VoIP, gaming). Or to put it another way, PulseAudio has a low per-sample cost, but there is still some optimisation that can be done per packet.

For every playback packet, three messages are sent: one from server to client saying “I need more data”, one from client to server saying “here’s some data, I put it in SHM, at this address”, and then a third from server to client saying “thanks, I have no more use for this SHM data, please reclaim the memory”. The third message is not sent until the audio has actually been played back.
Every message means syscalls to write, read, and poll a unix socket. This overhead turned out to be significant enough to try to improve.

So instead of putting just the audio data into SHM, as of 6.0 we also put the messages into two SHM ringbuffers, one in each direction. For signalling we use eventfds. (There is also an optimisation layer on top of the eventfd that tries to avoid writing to the eventfd in case no one is currently waiting.) This is not so much for saving memory copies but to save syscalls.

From my own unscientific benchmarks (i.e., running “top”), this saves us ~10–25% of CPU power in low-latency use cases, half of that being on the client side.

Read more
Michael Hall

When things are moving fast and there’s still a lot of work to do, it’s sometimes easy to forget to stop and take the time to say “thank you” to the people that are helping you and the rest of the community. So every November 20th we in Ubuntu have a Community Appreciation Day, to remind us all of the importance of those two little words. We should of course all be saying it every day, but having a reminder like this helps when things get busy.

Like so many who have already posted their appreciation have said, it would be impossible for me to thank everybody I want to thank. Even if I spent all day on this post, I wouldn’t be able to mention even half of them.  So instead I’m going to highlight two people specifically.

First I want to thank Scarlett Clark from the Kubuntu community. In the lead up to this last Ubuntu Online Summit we didn’t have enough track leads on the Users track, which is one that I really wanted to see more active this time around. The track leads from the previous UOS couldn’t do it because of personal or work schedules, and as time was getting scarce I was really in a bind to find someone. I put out a general call for help in one of the Kubuntu IRC channels, and Scarlett was quick to volunteer. I really appreciated her enthusiasm then, and even more the work that she put in as a first-time track lead to help make the Users track a success. So thank you Scarlett.

Next, I really really want to say thank you to Svetlana Belkin, who seems to be contributing in almost every part of Ubuntu these days (including ones I barely know about, like Ubuntu Scientists). She was also a repeat track lead last UOS for the Community track, and has been contributing a lot of great feedback and ideas on ways to make our amazing community even better. Most importantly, in my opinion, is that she’s trying to re-start the Ubuntu Leadership team, which I think is needed now more than ever, and which I really want to become more active in once I get through with some deadline-bound work. I would encourage anybody else who is a leader in the community, or who wants to be one, to join her in that. And thank you, Svetlana, for everything that you do.

It is both a joy and a privilege to be able to work with people like Scarlett and Svetlana, and everybody else in the Ubuntu community. Today more than ever I am reminded about how lucky I am to be a part of it.

Read more
Daniel Holbach

Appreciation for Michael Hall

Today marks another Ubuntu Community Appreciation Day, one of Ubuntu’s beautiful traditions, where you publicly thank people for their work. It’s always hard to pick just one person or a group of people, but you know what – better to appreciate somebody’s work than nobody’s work at all.

One person I’d like to thank for their work is Michael Hall. He is always around, always working on a number of projects, always involved in discussions on social media and never shy to add yet another work item to his TODO list. Even with big projects on his plate, he still writes apps, blog entries, charms and hacks on a number of websites, and still stays on top of things like mailing list discussions.

I don’t know how he does it, but I’m astounded how he gets things done and still stays friendly. I’m glad he’s part of our team and tirelessly working on making Ubuntu a better place.

I also like this picture of him.


Mike: keep up the good work! :-)

Read more
Michael Hall

Last week was our second ever Ubuntu Online Summit, and it couldn’t have gone better. Not only was it a great chance for us in Canonical to talk about what we’re working on and get community members involved in the ongoing work, it was also an opportunity for the community to show us what they have been working on and give us an opportunity to get involved with them.

Community Track leads

This was also the second time we’ve recruited track leads from among the community. Traditionally leading a track was a responsibility given to one of the engineering managers within Canonical, and it was up to them to decide what sessions to put on the UDS schedule. We kept the same basic approach when we went to online vUDS. But starting with UOS 14.06, we asked leaders in the community to help us with that, and they’ve done a phenomenal job. This time we had Nekhelesh Ramananthan, José Antonio Rey, Svetlana Belkin, Rohan Garg, Elfy, and Scarlett Clark take up that call, and they were instrumental in getting even more of the community involved.

Community Session Hosts

More than a third of those who created sessions for this UOS were from the community, not Canonical. For comparison, in the last in-person UDS, less than a quarter of session creators were non-Canonical. The shift online has been disruptive, and we’ve tried many variations to try and find what works, but this metric shows that those efforts are starting to pay off. Community involvement, indeed community direction, is higher in these Online Summits than it was in UDS. This is becoming a true community event: community focused, community organized, and community run.

Community Initiatives

The Ubuntu Online Summit wasn’t just about the projects driven by Canonical, such as the Ubuntu desktop and phone; there were many sessions about projects started and driven by members of the community. Last week we were shown the latest development on Ubuntu MATE and KDE Plasma 5 from non-Canonical-led flavors. We saw a whole set of planning sessions for community-developed Core Apps and an exciting new Component Store for app developers to share bits of code with each other. For outreach there were sessions for providing localized ISOs for LoCo teams and expanding the scope of the community-led Start Ubuntu project. Finally we had someone from the community kick off a serious discussion about getting Ubuntu running on cars. Cars! All of these exciting sessions were thought up by, proposed by, and run by members of the community.

Community Improvements

This was a great Ubuntu Online Summit, and I was certainly happy with the increased level of community involvement in it, but we still have room to make it better. And we are going to make it better with help from the community. We will be sending out a survey to everyone who registered as attending this UOS to gather feedback and ideas; please take the time to fill it out when you get the link. If you attended but didn’t register there’s still time: go to the link above, log in and save your attendance record. Finally, it’s never too early to start thinking about the next UOS and what sessions you might want to lead for it, so that you’re prepared when those track leads come knocking at your door.

Read more