# Canonical Voices

Iain Farrell

## Ubuntu 14.10 wallpapers – we needs ‘em!

Verónica Sousa’s Cul de sac

Ubuntu was once described to me by a wise(ish ;) ) man as a train that’s leaving whether you’re on it or not. That’s the beauty of a six-month release cycle. As many of you will already know, each release we include photos and illustrations produced by community members. We ask that you submit your images using the free photo-sharing site Flickr, and that you limit your images this time to two. The group won’t let you submit more than that, but if you change your mind after you’ve submitted, fear not: simply remove one and it’ll let you add another.

As with previous submission processes we’ve run, we’ve worked with the designers at Canonical to come up with the following tips for creating wallpaper images.

1. Images shouldn’t be too busy or filled with too many shapes and colours; a similar tone throughout is a good rule of thumb.
2. A single point of focus, a single area that draws the eye into the image, can also help you avoid something too cluttered.
3. The left and top edges are home to Ubuntu’s Launcher and Panel so be careful to consider how your images look in place so as not to clash with the user interface. Try them out on your own desktop, see how they feel.
4. Try your image at different aspect ratios to make sure something important isn’t cropped out on smaller or larger screens at different resolutions (there’s a quick command-line check after this list).
5. Take a look at the wallpapers guidance on the Ubuntu Wiki regarding the size of images. Our target resolution is 2560 x 1600.
6. Break all the rules except the resolution one! :D
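
If you’d like a quick sanity check before you upload, something like ImageMagick can confirm the resolution and preview how centre crops look at other aspect ratios. This is purely an illustration (any image tool will do, and the file name here is made up):

# report the dimensions of your candidate wallpaper
identify -format "%wx%h\n" my-wallpaper.jpg

# preview 16:9 and 4:3 centre crops to see what survives on other screens
convert my-wallpaper.jpg -gravity center -crop 2560x1440+0+0 +repage preview-16x9.jpg
convert my-wallpaper.jpg -gravity center -crop 1920x1440+0+0 +repage preview-4x3.jpg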

To shortlist from this collection, we’ll be asking the contributors whose images were selected last time around to act as our selection judges. To do this we’ll hold a series of public IRC meetings on Freenode in #1410wallpaper to discuss the selection. In those sessions we’ll get the selection team to try out the images on their own Ubuntu machines to see what they look like on a range of displays and resolutions.

Anyone is welcome to come to these sessions, but please keep in mind that the judges are volunteering their time and need to reach an outcome, and there are usually a lot of images to get through, so we’d appreciate it if there isn’t too much additional debate.

Based on the Utopic release schedule, I think our schedule for this cycle should look like this:

• 08/08/14 – Kick off 14.10 wallpaper submission process.
• 22/08/14 – First get together on #1410wallpaper at 19:30 GMT.
• 29/08/14 – Submissions deadline at 18:00 GMT – Flickr group will be locked and the selection process will begin.
• 09/09/14 – Deliver final selection in zip format to the appropriate bug on Launchpad.
• 11/09/14 – UI freeze for latest version of Ubuntu with our fantastic images in it!

As always, ping me if you have any questions; I’ll be lurking in #1410wallpaper on Freenode. Or leave a question in the Flickr group for wider discussion, which is probably the fastest way to get an answer.

I’ll be posting updates on our schedule here from time to time but the Flickr group will serve as our hub.

Happy snapping and scribbling and on behalf of the community, thanks for contributing to Ubuntu!

Dustin Kirkland

## Ubuntu OpenStack on an Orange Box, Live Demo at the Cloud Austin Meetup, August 19th

I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too.  I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less!

Be sure to RSVP, as space is limited.

http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Michael Hall

## Who do you contribute to?

When you contribute something as a member of a community, who are you actually giving it to? The simple answer of course is “the community” or “the project”, but those aren’t very specific.  On the one hand you have a nebulous group of people, most of whom you probably don’t even know about, and on the other you’ve got some cold, lifeless code repository or collection of web pages. When you contribute, who is it that you really care about? Who do you really want to see and use what you’ve made?

In my last post I talked about the importance of recognition, how it’s what contributors get in exchange for their contribution, and how human recognition is the kind that matters most. But which humans do our contributors want to be recognized by? Are you one of them and, if so, are you giving it effectively?

## Owners

The owner of a project has a distinct privilege in a community, they are ultimately the source of all recognition in that community.  Early contributions made to a project get recognized directly by the founder. Later contributions may only get recognized by one of those first contributors, but the value of their recognition comes from the recognition they received as the first contributors.  As the project grows, more generations of contributors come in, with recognition coming from the previous generations, though the relative value of it diminishes as you get further from the owner.

After the project owner, the next most important source of recognition is a project’s leaders. Leaders are people who gain authority and responsibility in a project; they can affect the direction of a project through decisions in addition to direct contributions. Many of those early contributors naturally become leaders in the project but many will not, and many others who come later will rise to this position as well. In both cases, it’s their ability to affect the direction of a project that gives their recognition added value, not their distance from the owner. Before a community can grow beyond a very small size it must produce leaders, either through a formal or an informal process; otherwise the availability of recognition will suffer.

## Legends

Leadership isn’t for everybody, and many of the early contributors who don’t become one still remain with the project, and end up making very significant contributions to it and the community over time.  Whenever you make contributions, and get recognition for them, you start to build up a reputation for yourself.  The more and better contributions you make, the more your reputation grows.  Some people have accumulated such a large reputation that even though they are not leaders, their recognition is still sought after more than most. Not all communities will have one of these contributors, and they are more likely in communities where heads-down work is valued more than very public work.

## Mentors

When any of us gets started with a community for the first time, we usually end up finding one or two people who help us learn the ropes.  These people help us find the resources we need, teach us what those resources don’t, and are instrumental in helping us make the leap from user to contributor. Very often these people aren’t the project owners or leaders.  Very often they have very little reputation themselves in the overall project. But because they take the time to help the new contributor, and because theirs is very likely to be the first, the recognition they give is disproportionately more valuable to that contributor than it otherwise would be.

Every member of a community can provide recognition, and every one should, but if you find yourself in one of the roles above it is even more important for you to be doing so. These roles are responsible both for setting the example, and for keeping a proper flow of recognition in a community. And without that flow of recognition, you will find that your flow of contributions will also dry up.

pitti

## vim config for Markdown+LaTeX pandoc editing

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look soo much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so that you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulae, XYTex and the other goodies. That’s how it always should have been! ☺
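
To give a flavour of the workflow (the file name and contents here are just an illustration), a Markdown source with a bit of raw LaTeX in it turns into slides or a PDF article with a single command:

$ cat example.md
Markdown for the *structure* and lists, LaTeX for the goodies:
an inline formula $E = mc^2$, or a whole environment:

\begin{itemize}
  \item pandoc passes raw LaTeX through untouched
\end{itemize}

$ pandoc -t beamer example.md -o slides.pdf    # slide deck via latex-beamer
$ pandoc -t latex example.md -o article.pdf    # plain LaTeX article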

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd FileType markdown :call <SID>MDSettings()


That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

Dustin Kirkland

## Improving Random Seeds in Ubuntu 14.04 LTS Cloud Instances

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!

### A: Because entropy is important!

• Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
• SSL keys
• SSH keys
• GPG keys
• TCP sequence numbers
• UUIDs
• dm-crypt keys
• eCryptfs keys
• Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

### A: Hardware, typically.

• Keyboards
• Mice
• Interrupt requests
• HDD seek timing
• Network activity
• Microphones
• Web cams
• Touch interfaces
• WiFi/RF
• TPM chips
• RdRand
• Entropy Keys
• Pricey IBM crypto cards
• Expensive RSA cards
• USB lava lamps
• Geiger Counters
• Seismographs
• Light/temperature sensors
• And so on

### A: Pseudo random number generators are our only viable alternative.

• In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
• Basically, endless streams of pseudo random bytes
• Some utilities and most programming languages implement their own PRNGs
• But they usually seed from /dev/random or /dev/urandom
• Sometimes, virtio-rng is available, for hosts to feed guests entropy
• But not always

### A: Yes, if they are properly seeded.

• See random(4)
• When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
• This reduces the actual amount of noise in the entropy pool below the estimate
• In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
• See /etc/init.d/urandom
...dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1 ...
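
Put roughly (this is a simplified sketch of what that init script does, not the script itself; the exact paths and byte counts on your system may differ):

SAVEDFILE=/var/lib/urandom/random-seed
POOLBYTES=512

# at boot: mix the seed saved last time back into the kernel pool
cat $SAVEDFILE > /dev/urandom

# at shutdown: save a fresh seed for the next boot
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1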

### A: Basically, it’s a small catalyst that primes the PRNG pump.

• Let’s pretend the digits of Pi are our random number generator
• The random seed would be a starting point, or “initialization vector”
• e.g. Pick a number between 1 and 20
• say, 18
• Now start reading random numbers

• Not bad...but if you always pick ‘18’...

#### XKCD on random numbers

 RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

### A: Yep, but computers are predictable, especially VMs.

• Computers are inherently deterministic
• And thus, bad at generating randomness
• Real hardware can provide quality entropy
• But virtual machines are basically clones of one another
• ie, The Cloud
• No keyboard or mouse
• IRQ based hardware is emulated
• Block devices are virtual and cached by hypervisor
• RTC is shared
• The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

### A: I’m afraid not...

#### Analysis of the LRNG (2006)

• Little prior documentation on Linux’s random number generator
• Random bits are a limited resource
• Very little entropy in embedded environments
• OpenWRT was the case study
• OS start up consists of a sequence of routine, predictable processes
• Very little demonstrable entropy shortly after boot
• http://j.mp/McV2gT

#### Black Hat (2009)

• iSec Partners designed a simple algorithm to attack cloud instance SSH keys
• Picked up by Forbes
• http://j.mp/1hcJMPu

#### Factorable.net (2012)

• Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
• Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
• Insecure or poorly seeded RNGs in widespread use
• 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
• They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
• They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
• http://j.mp/1iPATZx

#### Dual_EC_DRBG Backdoor (2013)

• Dual Elliptic Curve Deterministic Random Bit Generator
• Ratified NIST, ANSI, and ISO standard
• Possible backdoor discovered in 2007
• Bruce Schneier noted that it was “rather obvious”
• Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
• http://j.mp/1bJEjrB

### A: For starters, do a better job seeding our PRNGs.

• Securely
• With high quality, unpredictable data
• More sources are better
• As early as possible
• And certainly before generating
• SSH host keys
• SSL certificates
• Or any other critical system DNA
• /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

### A: Run Ubuntu!

Sorry, shameless plug...

### A: Meet pollinate.

• pollinate is a new security feature that seeds the PRNG.
• Introduced in Ubuntu 14.04 LTS cloud images
• Upstart job
• It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
• It’s GPLv3 free software
• Simple shell script wrapper around curl
• Fetches random seeds
• From 1 or more entropy servers in a pool
• Writes them into /dev/urandom
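
Stripped of its challenge/response handling, retries, and logging, the core of such a client is little more than the following sketch (the shape of it, not the actual pollinate source):

# fetch a seed from each configured server and stir it into the kernel pool
POOL="https://entropy.ubuntu.com/"
for server in $POOL; do
    curl --silent --show-error \
        --cacert /etc/pollinate/entropy.ubuntu.com.pem \
        "$server" > /dev/urandom
done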

### A: Introducing pollen.

• pollen is an entropy-as-a-service implementation
• Works over HTTP and/or HTTPS
• Supports a challenge/response mechanism
• Provides 512 bit (64 byte) random seeds
• It’s AGPL free software
• Implemented in golang
• Less than 50 lines of code
• Fast, efficient, scalable
• Returns the (optional) challenge sha512sum
• And 64 bytes of entropy

pollen.go

### A: Hello, entropy.ubuntu.com.

• Highly available pollen cluster
• TLS/SSL encryption
• Multiple physical servers
• Behind a reverse proxy
• Deployed and scaled with Juju
• Multiple sources of hardware entropy
• High network traffic is always stirring the pot
• AGPL, so source code always available
• Supported by Canonical
• Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

### A: Then use a different entropy service :-)

• bzr branch lp:pollen
• sudo apt-get install pollen
• juju deploy pollen
• Add your preferred server(s) to your $POOL
• In /etc/default/pollinate
• In your cloud-init user data
• In progress
• In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge

### Q: So does this increase the overall entropy on a system?

### A: No, no, no, no, no!

• pollinate seeds your PRNG, securely and properly and as early as possible
• This improves the quality of all random numbers generated thereafter
• pollen provides random seeds over HTTP and/or HTTPS connections
• This information can be fed into your PRNG
• The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
• Note that neither pollen nor pollinate directly affect this quantity estimate!!!

### Q: Why the challenge/response in the protocol?

### A: Think of it like the Heisenberg Uncertainty Principle.

• The pollinate challenge (via an HTTP POST submission) affects the pollen’s PRNG state machine
• pollinate can verify the response and ensure that the pollen server at least “did some work”
• From the perspective of the pollen server administrator, all communications are “stirring the pot”
• Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state

### Q: What if pollinate gets crappy or compromised or no random seeds?

### A: Functionally, it’s no better or worse than it was without pollinate in the mix.

• In fact, you can dd if=/dev/zero of=/dev/random if you like, without harming your entropy quality
• All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
• Of course it doesn’t help, but it doesn’t hurt either
• Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
• Note the permissions on /dev/*random
• crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
• crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
• It’s a bummer of course, but there’s no new compromise

### Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

### A: We are mitigating that by bundling the public certificates in the client.

• The pollinate package ships the public certificate of entropy.ubuntu.com
• /etc/pollinate/entropy.ubuntu.com.pem
• And curl uses this certificate exclusively by default
• If this really is your concern (and perhaps it should be!)
• Add more URLs to the $POOL variable in /etc/default/pollinate
• Put one of those behind your firewall
• You simply need to ensure that at least one of those is outside of the control of your attackers
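
For example (the internal host name is hypothetical), /etc/default/pollinate might end up with:

# /etc/default/pollinate
POOL="https://entropy.ubuntu.com/ https://entropy.internal.example.com/"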

### A: The usual web server debug info.

• The current timestamp
• The incoming client IP/port
• At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
• The browser user-agent string
• Basically, the exact same information that Chrome/Firefox/Safari sends
• You can override if you like in /etc/default/pollinate
• The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

### A: Yes, but more feedback is welcome!

• All of the source is available
• Service design and hardware specs are available
• The Ubuntu Security team has reviewed the design and implementation
• All feedback has been incorporated
• At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
• All feedback has been incorporated

Stay safe out there!
:-Dustin

Michael Hall

## Why do you contribute to open source?

It seems a fairly common, straightforward question.  You’ve probably been asked it before. We all have reasons why we hack, why we code, why we write or draw. If you ask somebody this question, you’ll hear things like “scratching an itch” or “making something beautiful” or “learning something new”.  These are all excellent reasons for creating or improving something.  But contributing isn’t just about creating, it’s about giving that creation away. Usually giving it away for free, with no or very few strings attached.  When I ask “Why do you contribute to open source”, I’m asking why you give it away.

This question is harder to answer, and the answers are often far more complex than the ones given for why people simply create something. What makes it worthwhile to spend your time, effort, and often money working on something, and then turn around and give it away? People often have different intentions or goals in mind when they contribute, from benevolent giving to a community they care about to personal pride in knowing that something they did is being used in something important or by somebody important. But when you strip away the details of the situation, these all hinge on one thing: Recognition.

If you read books or articles about community, one consistent theme you will find in almost all of them is the importance of recognizing the contributions that people make. In fact, if you look at a wide variety of successful communities, you would find that one common thing they all offer in exchange for contribution is recognition. It is the fuel that communities run on.  It’s what connects the contributor to their goal, both selfish and selfless. In fact, with open source, the only way a contribution can actually be stolen is by not allowing that recognition to happen.  Even the most permissive licenses require attribution, something that tells everybody who made it.

Now let’s flip that question around:  Why do people contribute to your project? If their contribution hinges on recognition, are you prepared to give it?  I don’t mean your intent, I’ll assume that you want to recognize contributions, I mean do you have the processes and people in place to give it?

We’ve gotten very good about building tools to make contribution easier, faster, and more efficient, often by removing the human bottlenecks from the process.  But human recognition is still what matters most.  Silently merging someone’s patch or branch, even if their name is in the commit log, isn’t the same as thanking them for it yourself or posting about their contribution on social media. Letting them know you appreciate their work is important, letting other people know you appreciate it is even more important.

If you are the owner or a leader in a project with a community, you need to be aware of how recognition is flowing out just as much as how contributions are flowing in. Too often communities are successful almost by accident, because the people in them are good at making sure contributions are recognized and that people know it, simply because that’s their nature. But it’s just as possible for communities to fail because the personalities involved didn’t have this natural tendency, not because of any lack of appreciation for the contributions, just a quirk of their personality. It doesn’t have to be this way; if we are aware of the importance of recognition in a community we can be deliberate in our approaches to making sure it flows freely in exchange for contributions.

pitti

## autopkgtest 3.2: CLI cleanup, shell command tests, click improvements

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

## Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) are now gone. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid


## Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]


This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests; this release brings it to deb packages as well.
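
For comparison, the traditional spelling of the same test would be a named test plus a tiny wrapper script, roughly like this (the wrapper name is just an illustration):

# debian/tests/control
Tests: smoke
Depends: @, xvfb, [...]

# debian/tests/smoke
#!/bin/sh
set -e
exec xvfb-run -a src/tests/run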

## Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI etc. has been documented in the manpages and the various READMEs. Enjoy!

Michael Hall

## When is a fork not a fork?

Technically a fork is any instance of a codebase being copied and developed independently of its parent.  But when we use the word it usually encompasses far more than that. Usually when we talk about a fork we mean splitting the community around a project, just as much as splitting the code itself. Communities are not like code, however, they don’t always split in consistent or predictable ways. Nor are all forks the same, and both the reasons behind a fork, and the way it is done, will have an effect on whether and how the community around it will split.

There are, by my observation, three different kinds of forks that can be distinguished by their intent and method.  These can be neatly labeled as Convergent, Divergent and Emergent forks.

## Convergent Forks

Most often when we talk about forks in open source, we’re talking about convergent forks. A convergent fork is one that shares the same goals as its parent, seeks to recruit the same developers, and wants to be used by the same users. Convergent forks tend to happen when a significant portion of the parent project’s developers are dissatisfied with the management or processes around the project, but otherwise happy with the direction of its development. The ultimate goal of a convergent fork is to take the place of the parent project.

Because they aim to take the place of the parent project, convergent forks must split the community in order to be successful. The community they need already exists, both the developers and the users, around the parent project, so that is their natural source when starting their own community.

## Divergent Forks

Less common than convergent forks, but still well known by everybody in open source, are the divergent forks.  These forks are made by developers who are not happy with the direction of a project’s development, even if they are generally satisfied with its management.  The purpose of a divergent fork is to create something different from the parent, with different goals and most often different communities as well. Because they are creating a different product, they will usually be targeting a different group of users, one that was not well served by the parent project.  They will, however, quite often target many of the same developers as the parent project, because most of the technology and many of the features will remain the same, as a result of their shared code history.

Divergent forks will usually split a community, but to a much smaller extent than a convergent fork, because they do not aim to replace the parent for the entire community. Instead they often focus more on recruiting those users who were not served well, or not served at all, by the existing project, and will grow a new community largely from sources other than the parent community.

## Emergent Forks

Emergent forks are not technically forks in the code sense, but rather new projects with new code which share the same goals and target the same users as an existing project.  Most of us know these as NIH, or “Not Invented Here”, projects. They come into being on their own, instead of splitting from an existing source, but with the intention of replacing an existing project for all or part of an existing user community. Emergent forks are not the result of dissatisfaction with either the management or direction of an existing project, but most often a dissatisfaction with the technology being used, or fundamental design decisions that can’t be easily undone with the existing code.

Because they share the same goals as an existing project, these forks will usually result in a split of the user community around an existing project, unless they differ enough in features that they can target users not already being served by those projects. However, because they do not share much code or technology with the existing project, they most often grow their own community of developers, rather than splitting them from the existing project as well.

All of these kinds of forks are common enough that we in the open source community can easily name several examples of them. But they are all quite different in important ways. Some, while forks in the literal sense, can almost be considered new projects in a community sense.  Others are not forks of code at all, yet result in splitting an existing community none the less. Many of these forks will fail to gain traction, in fact most of them will, but some will succeed and surpass those that came before them. All of them play a role in keeping the wider open source economy flourishing, even though we may not like them when they affect a community we’ve been involved in building.

Dustin Kirkland

## Scalable, Parallel Video Transcoding on Ubuntu

Transcoding video is a very resource intensive process.

It can take many minutes to process a small, 30-second clip, or even hours to process a full movie.  There are numerous, excellent, open source video transcoding and processing tools freely available in Ubuntu, including libav-tools, ffmpeg, mencoder, and handbrake.  Surprisingly, however, none of those support parallel computing easily or out of the box.  And disappointingly, I couldn't find any MPI support readily available either.

I happened to have an Orange Box for a few days recently, so I decided to tackle the problem and develop a scalable, parallel video transcoding solution myself.  I'm delighted to share the result with you today!

When it comes to commercial video production, it can take thousands of machines and hundreds of compute hours to render a full movie.  I had the distinct privilege some time ago to visit WETA Digital in Wellington, New Zealand and tour the render farm that processed The Lord of the Rings trilogy, Avatar, The Hobbit, etc.  And just a few weeks ago, I visited another quite visionary, cloud savvy digital film processing firm in Hollywood, called Digital Film Tree.

While Windows and Mac OS may be the first platforms that come to mind when you think about front-end video production, Linux is far more widely used for batch video processing, with Ubuntu, in particular, being used extensively at both WETA Digital and Digital Film Tree, among others.

While I could have worked with any of a number of tools, I settled on avconv (the successor(?) of ffmpeg), as it was the first one that I got working well on my laptop, before scaling it out to the cluster.

I designed an approach on my whiteboard, in fact quite similar to some work I did parallelizing and scaling the john-the-ripper password quality checker.

At a high level, the algorithm looks like this:
1. Create a shared network filesystem, simultaneously readable and writable by all nodes
2. Have the master node split the work into even sized chunks for each worker
3. Have each worker process their segment of the video, and raise a flag when done
4. Have the master node wait for each of the all-done flags, and then concatenate the result
And that's exactly what I implemented in a new transcode charm and transcode-cluster bundle.  It provides linear scalability and performance improvements as you add additional units to the cluster.  A transcode job that takes 24 minutes on a single node is down to 3 minutes on 8 worker nodes in the Orange Box, using Juju and MAAS against physical hardware nodes.

For the curious, the real magic is in the config-changed hook, which has decent inline documentation.

The trick, for anyone who might make their way into this by way of various StackExchange questions and (incorrect) answers, is in the command that splits up the original video (around line 54):

avconv -ss $start_time -i $filename -t $length -s $size -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y ${filename}.part${current_node}.ts

And the one that puts it back together (around line 72):

avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y ${filename}_${size}_x264_aac.${format}
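
Putting the two commands together, the flow between the master and the workers looks roughly like this (a simplified sketch rather than the charm's actual hook code; node_count stands in for however many worker units have joined, and the other variables come from the charm's configuration):

# worker: transcode the time slice handed out by the master, then raise a flag
avconv -ss $start_time -i "$filename" -t $length -s $size -vcodec libx264 -acodec aac \
    -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y "${filename}.part${current_node}.ts"
touch "${filename}.part${current_node}.DONE"   # the all-done flag on the shared filesystem

# master: wait for every DONE flag, then stitch the pieces back together
while [ "$(ls ${filename}.part*.DONE 2>/dev/null | wc -l)" -lt "$node_count" ]; do
    sleep 5
done
concat=$(ls ${filename}.part*.ts | sort -V | paste -sd'|')
avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y "${filename}_${size}_x264_aac.${format}"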

I found this post and this documentation particularly helpful in understanding and solving the problem.

In any case, once deployed, my cluster bundle looks like this.  8 units of transcoders, all connected to a shared filesystem, and performance monitoring too.

I was able to leverage the shared-fs relation provided by the nfs charm, as well as the ganglia charm to monitor the utilization of the cluster.  You can see the spikes in the cpu, disk, and network in the graphs below, during the course of a transcode job.

For my testing, I downloaded the movie Code Rush, freely available under the CC-BY-NC-SA 3.0 license.  If you haven't seen it, it's an excellent documentary about the open source software around Netscape/Mozilla/Firefox and the dotcom bubble of the late 1990s.

Oddly enough, the stock, 746MB high quality MP4 video doesn't play in Firefox, since it's an mpeg4 stream, rather than H264.  Fail.  (Yes, of course I could have used mplayer, vlc, etc., that's not the point ;-)

Perhaps one of the most useful, intriguing features of HTML5 is its support for embedding multimedia, video, and sound into webpages.  HTML5 even supports multiple video formats.  Sounds nice, right?  If it only were that simple...  As it turns out, different browsers have, and lack, support for the different formats.  While there is no one format to rule them all, MP4 is supported by the majority of browsers, including the two that I use (Chromium and Firefox).  This matrix from w3schools.com illustrates the mess.

 http://www.w3schools.com/html/html5_video.asp

The file format, however, is only half of the story.  The audio and video contents within the file also have to be encoded and compressed with very specific codecs, in order to work properly within the browsers.  For MP4, the video has to be encoded with H264, and the audio with AAC.

Among the various brands of phones, webcams, digital cameras, etc., the output format and codecs are seriously all over the map.  If you've ever wondered what's happening, when you upload a video to YouTube or Facebook, and it's a while before it's ready to be viewed, it's being transcoded and scaled in the background.

In any case, I find it quite useful to transcode my videos to MP4/H264/AAC format.  And for that, a scalable, parallel computing approach to video processing would be quite helpful.

During the course of the 3 minute run, I liked watching the avconv log files of all of the nodes, using Byobu and Tmux in a tiled split screen format, like this:

Also, the transcode charm installs an Apache2 webserver on each node, so you can expose the service and point a browser to any of the nodes, where you can find the input, output, and intermediary data files, as well as the logs and DONE flags.

Once the job completes, I can simply click on the output file, Code_Rush.mp4_1280x720_x264_aac.mp4, and see that it's now perfectly viewable in the browser!

In case you're curious, I have verified the same charm with a couple of other OGG, AVI, MPEG, and MOV input files, too.

Beyond transcoding the format and codecs, I have also added configuration support within the charm itself to scale the video frame size, too.  This is useful to take a larger video, and scale it down to a more appropriate size, perhaps for a phone or tablet.  Again, this resource intensive procedure perfectly benefits from additional compute units.

File format, audio/video codec, and frame size changes are hardly the extent of video transcoding workloads.  There are hundreds of options and thousands of combinations, as the manpages of avconv and mencoder attest.  All of my scripts and configurations are free software, open source.  Your contributions and extensions are certainly welcome!

In the mean time, I hope you'll take a look at this charm and consider using it, if you have the need to scale up your own video transcoding ;-)

Cheers,
Dustin

pitti

## deb, click, schroot, LXC, QEMU, phone, cloud: One autopkgtest to Rule Them All!

We currently use completely different methods and tools of building test beds and running tests for Debian vs. Click packages, for normal uploads vs. CI airline landings vs. upstream project merge proposal testing, and keep lots of knowledge about Click package test metadata external and not easily accessible/discoverable.

Today I released autopkgtest 3.0 (and 3.0.1 with a few minor updates) which is a major milestone in unifying how we run package tests both locally and in production CI. The goals of this are:

• Keep all test metadata, such as test dependencies, commands to run the test etc., in the project/package source itself instead of external. We have had that for a long time for Debian packages with DEP-8 and debian/tests/control, but not yet for Ubuntu’s Click packages.
• Use the same tools for Debian and Click packages to simplify what developers have to know about and to reduce the amount of test infrastructure code to maintain.
• Use the exact same testbeds and test runners in production CI as developers use locally, so that you can reproduce and investigate failures.
• Re-use the existing autopkgtest capabilities for using various kinds of testbeds, and conversely, making all new testbed types immediately available to all package formats.
• Stop putting tests into the Ubuntu archive as packages (such as mediaplayer-app-autopilot). This just adds packaging and archive space overhead and also makes updating tests a lot harder and taking longer than it should.

So, let’s dive into the new features!

We want to run tests on real hardware such as a laptop of a particular brand with a particular graphics card, or an Ubuntu phone. We also want to restructure our current CI machinery to run tests on a real OpenStack cloud and gradually get rid of our hand-maintained QA lab with its test machines. While these use cases seem rather different, they both have in common that there is an already existing machine which is pretty much only accessible with ssh. Once you have an ssh connection, they look pretty much the same, you just need different initial setup (like fiddling with adb, calling nova boot, etc.) to prepare them.

So the new adt-virt-ssh runner factorizes all the common bits such as communicating with adt-run, auto-detecting sudo availability, doing SSH connection sharing etc., and delegates the target specific bits to a “setup script”. E. g. we could specify --setup-script ssh-setup-nova or --setup-script ssh-setup-adb which would then get called with open at the appropriate time by adt-run; it calls the nova commands to create a VM, or run a few adb commands to install/start ssh and install the public key. Then autopkgtest does its thing, and eventually calls the script with cleanup again. The actual protocol is a bit more involved (see manpage), but that’s the general idea.

autopkgtest now ships readymade scripts for these two use cases. So you could e. g. run the libpng tests in a temporary cloud VM:

# if you don't have one, create it with "nova keypair-create"
$ nova keypair-list
[...]
| pitti | 9f:31:cf:78:50:4f:42:04:7a:87:d7:2a:75:5e:46:56 |

# find a suitable image
$ nova image-list
[...]
| ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 | Ubuntu Server 14.04 LTS (amd64 20140607.1) - Partner Image                         | ACTIVE |

$ nova flavor-list
[...]
| 100 | standard.xsmall | 1024 | 10 | 10 | | 1 | 1.0 | N/A |

# now run the tests: please be patient, this takes a few mins!
$ adt-run libpng --setup-commands="apt-get update" --- ssh -s /usr/share/autopkgtest/ssh-setup/nova -- \
-f standard.xsmall -i ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 -k pitti
[...]
adt-run [16:23:16]: test build:  - - - - - - - - - - results - - - - - - - - - -
build                PASS


Please see man adt-virt-ssh for details how to use it and how to write setup scripts. There is also a commented /usr/share/autopkgtest/ssh-setup/SKELETON template for writing your own for your use cases. You can also not use any setup script and just specify user and host name as options, but please remember that the ssh runner cannot clean up after itself, so never use this on important machines which you can’t reset/reinstall!

Ubuntu phones with system images have a read-only file system where you can’t install test dependencies with apt. A similar case is using the “null” runner without root. When apt-get install is not available, autopkgtest now has a reduced fallback mode: it downloads the required test dependencies, unpacks them into a temporary directory, and runs the tests with $PATH, $PYTHONPATH, $GI_TYPELIB_PATH, etc. pointing to the unpacked temp dir. Of course this only works for packages which are relocatable in that way, i. e. libraries, Python modules, or command line tools; it will totally fail for things which look for config files, plugins etc. in hardcoded directory paths. But it’s good enough for the purposes of Click package testing such as installing autopilot, libautopilot-qt etc. ## Click package support autopkgtest now recognizes click source directories and *.click package arguments, and introduces a new test metadata specification syntax in a click package manifest. This is similar in spirit and capabilities to DEP-8 debian/tests/control, except that it’s using JSON:  "x-test": { "unit": "tests/unittests", "smoke": { "path": "tests/smoketest", "depends": ["shunit2", "moreutils"], "restrictions": ["allow-stderr"] }, "another": { "command": "echo hello > /tmp/world.txt" } }  For convenience, there is also some magic to make running autopilot tests particularly simple. E. g. our existing click packages usually specify something like  "x-test": { "autopilot": "ubuntu_calculator_app" }  which is enough to “do what I mean”, i. e. implicitly add the autopilot test depends and run autopilot with the specified test module name. You can specify your own dependencies and/or commands, and restrictions etc., of course. So with this, and the previous support for non-apt test dependencies and the ssh runner, we can put all this together to run the tests for e. g. the Ubuntu calculator app on the phone: $ bzr branch lp:ubuntu-calculator-app
# built straight from that branch; TODO: where is the official download URL?
$ wget http://people.canonical.com/~pitti/tmp/com.ubuntu.calculator_1.3.283_all.click
$ adt-run ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- \
[..]
Traceback (most recent call last):
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 93, in test_divide_with_infinity_length_result_number
    self._assert_result("0.33333333")
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 63, in _assert_result
    self.main_view.get_result, Eventually(Equals(expected_result)))
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: After 10.0 seconds test failed: '0.33333333' != '0.3'

Ran 33 tests in 295.586s
FAILED (failures=1)


Note that the current adb ssh setup script deals with some things like applying the autopilot click AppArmor hooks and disabling screen dimming, but it does not do the first-time setup (connecting to network, doing the gesture intro) and unlocking the screen. These are still on the TODO list, but I need to find out how to do these properly. Help appreciated!

## Click app tests in schroot/containers

But, that’s not the only thing you can do! autopkgtest has all these other runners, so why not try and run them in a schroot or container? To emulate the environment of an Ubuntu Touch session I wrote a --setup-commands script:

adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- schroot utopic


This will actually work in the sense of running (and succeeding) the autopilot tests, but it will fail due to a lot of libust[11345/11358]: Error: Error opening shm /lttng-ust-wait... warnings on stderr. I don’t know what these mean, just that I also see them on the phone itself occasionally.

I also wrote another setup-commands script which emulates “read-only apt”, so that you can test the “unpack only” fallback. So you could prepare a container with click and the App framework preinstalled (so that it doesn’t always take ages to install them), starting from a standard adt-build-lxc container:

$ sudo lxc-clone -o adt-utopic -n click
$ sudo lxc-start -n click
# run "sudo apt-get install click ubuntu-sdk-libs ubuntu-app-launch-tools" there
# then "sudo powerdown"

# current apparmor profile doesn't allow remounting something read-only
$ echo "lxc.aa_profile = unconfined" | sudo tee -a /var/lib/lxc/click/config

Now that container has enough stuff preinstalled to be reasonably fast to set up, and the remaining test dependencies (mostly autopilot) work fine with the unpack/$*_PATH fallback:

$ adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
      --setup-commands /usr/share/autopkgtest/setup-commands/ro-apt \
      ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click \
      --- lxc -es click

This will successfully run all the tests, and provided you have apt-cacher-ng installed, it only takes a few seconds to set up. This might be a nice thing to do on merge proposals, if you don't have an actual phone at hand, or don't want to clutter it up.

autopkgtest 3.0.1 will be available in Utopic tomorrow (through autosyncs). If you can't wait to try it out, download it from my people.c.c page ☺. Feedback appreciated!

Dustin Kirkland

## The Yo Charm. It's that simple.

It was about 4pm on Friday afternoon, when I had just about wrapped up everything I absolutely needed to do for the day, and I decided to kick back and have a little fun with the remainder of my work day.  It's now 4:37pm on Friday, and I'm now done.  Done with what?  The Yo charm, of course!

The Internet has been abuzz this week about how the Yo app received a whopping $1 million dollars in venture funding.  (Forbes notes that this is a pretty surefire indication that there's another internet bubble about to burst...)

It's little more than the first program any kid writes -- hello world!

Subsequently I realized that we don't really have a "hello world" charm.  And so here it is, yo.

$ juju deploy yo

Deploying a webpage that says "Yo" is hardly the point, of course. Rather, this is a fantastic way to see the absolute simplest form of a Juju charm. Grab the source, and go explore it yo-self!

$ charm-get yo
$ tree yo
├── config.yaml
├── copyright
├── hooks
│   ├── config-changed
│   ├── install
│   ├── start
│   ├── stop
│   ├── upgrade-charm
│   └── website-relation-joined
├── icon.svg
├── metadata.yaml
└── README.md

1 directory, 11 files

• The config.yaml lets you set and dynamically change the service itself (the color and size of the font that renders "Yo").
• The copyright is simply boilerplate GPLv3
• The icon.svg is just a vector graphics "Yo."
• The metadata.yaml explains what this charm is, how it can relate to other charms
• The README.md is a simple getting-started document
• And the hooks...
• config-changed is the script that runs when you change the configuration -- basically, it uses sed to inline edit the index.html Yo webpage
• install simply installs apache2 and overwrites /var/www/index.html
• start and stop simply starts and stops the apache2 service
• upgrade-charm is currently a no-op
• website-relation-joined sets and exports the hostname and port of this system
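
To make that concrete, hooks in a charm this small are just a few lines of shell each. A sketch (not the charm's exact code) of install and website-relation-joined might look like:

#!/bin/sh
# hooks/install -- install apache2 and drop in the "Yo" page
set -e
apt-get install -y apache2
echo "<html><body><h1>Yo</h1></body></html>" > /var/www/index.html

#!/bin/sh
# hooks/website-relation-joined -- tell the related service where to find us
relation-set hostname=$(unit-get private-address) port=80
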
The website relation is very important here...  Declaring and defining this relation instantly lets me relate this charm with dozens of other services.  As you can see in the screenshot at the top of this post, I was able to easily relate the varnish website accelerator in front of the Yo charm.

Hopefully this simple little example might help you examine the anatomy of a charm for the first time, and perhaps write your own first charm!

Cheers,

Dustin

Michael Hall

## A Tale of Two Systems

Two years ago, my wife and I made the decision to home-school our two children.  It was the best decision we could have made: our kids are getting a better education, and with me working from home since joining Canonical I’ve been able to spend more time with them than ever before. We also get to try and do some really fun things, which is what sets the stage for this story.

Both my kids love science, absolutely love it, and it’s one of our favorite subjects to teach.  A couple of weeks ago my wife found an inexpensive USB microscope, which lets you plug it into a computer and take pictures using desktop software.  It’s not a scientific microscope, nor is it particularly powerful or clear, but for the price it was just right to add a new aspect to our elementary science lessons. All we had to do was plug it in and start exploring.

My wife has a relatively new (less than a year) laptop running Windows 8.  It’s not high-end, but it’s all new hardware, new software, etc.  So when we plugged in our simple USB microscope…….it failed.  As in, didn’t do anything.  Windows seemed to be trying to figure out what to do with it, over and over and over again, but to no avail.

My laptop, however, is running Ubuntu 14.04, the latest stable and LTS release.  My laptop is a couple of years old, but it’s a classic Lenovo x220. It’s great hardware to go with Ubuntu and I’ve had nothing but good experiences with it.  So of course, when I decided to give our new USB microscope a try……it failed.  The connection was fine, the log files clearly showed that it was being identified, but nothing was able to see it as a video input device or make use of it.

Now, if that’s where our story ended, it would fall right in line with a Shakespearean tragedy. But while both Windows and Ubuntu failed to “just work” with this microscope, both failures were not equal. Because the Windows drivers were all closed source, my options ended with that failure.

But on Ubuntu, the drivers were open, all I needed to do was find a fix. It took a while, but I eventually found a 2.5 year old bug report for an identical chipset to my microscope, and somebody proposed a code fix in the comments.  Now, the original reporter never responded to say whether or not the fix worked, and it was clearly never included in the driver code, but it was an opportunity.  Now I’m no kernel hacker, nor driver developer, in fact I probably shouldn’t be trusted to write any amount of C code at all.  But because I had Ubuntu, getting the source code of my current driver, as well as all the tools and dependencies needed to build it, took only a couple of terminal commands.  The patch was too old to cleanly apply to the current code, but it was easy enough to figure out where its changes should go, and after a couple of tries to properly build just the driver (and not the full kernel or every driver in it), I had a new binary kernel module that would load without error.  Then, when I plugged my USB microscope in again, it worked!
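
For anyone curious what “a couple of terminal commands” looks like in practice, the rough shape of it is below. The package, directory, and module names are illustrative placeholders; the exact ones depend on your kernel and the driver in question:

# grab the build tools, kernel headers, and the source of the running kernel
sudo apt-get install build-essential linux-headers-$(uname -r)
apt-get source linux-image-$(uname -r)

# ... hand-apply the hunks from the old patch to the driver's source ...

# rebuild just that one driver directory against the running kernel, then load it
cd linux-*/drivers/media/usb/<the-driver>
make -C /lib/modules/$(uname -r)/build M=$PWD modules
sudo insmod ./<the-driver>.ko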

People use open source for many reasons.  Some people use it because it’s free as in beer, for them it’s on the same level as freeware or shareware, only the cost matters. For others it’s about ethics, they would choose open source even if it cost them money or didn’t work as well, because they feel it’s morally right, and that proprietary software is morally wrong. I use open source because of USB microscopes. Because when they don’t work, open source gives me a chance to change that.

David Murphy (schwuk)

## Hands-on with Canonical’s Orange Box

Ars Technica has a great write up by Lee Hutchinson on our Orange Box demo and training unit.

You can’t help but have your attention grabbed by it!

As the comments are quick to point out – at the expense of the rest of the piece – the hardware isn’t the compelling story here. While you can buy your own, you can almost certainly hand-build an equivalent-or-better setup for less money¹, but Ars recognises this:

Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it’s making them to use as revenue driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud.

The Orange Box is about showing off MAAS & Juju, and – usually – OpenStack.

To see what Ars think of those, you should read the article.

I definitely echo Lee’s closing statement:

I wish my closet had an Orange Box in it. That thing is hella cool.

1. Or make one out of wood like my colleague Gavin did!

Dustin Kirkland

## Elon Musk, Tesla Motors, and My Own Patent Apologies

It's hard for me to believe that I have sat on this draft blog post for almost 6 years.  But I'm stuck on a plane this evening, inspired by Elon Musk and Tesla's (cleverly titled) announcement, "All Our Patents Are Belong To You."  Musk writes:

Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.
When I get home, I'm going to take down a plaque that has proudly hung in my own home office for nearly 10 years now.  In 2004, I was named an IBM Master Inventor, recognizing sustained contributions to IBM's patent portfolio.

Musk continues:
When I started out with my first company, Zip2, I thought patents were a good thing and worked hard to obtain them. And maybe they were good long ago, but too often these days they serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors. After Zip2, when I realized that receiving a patent really just meant that you bought a lottery ticket to a lawsuit, I avoided them whenever possible.
And I feel the exact same way!  When I was an impressionable newly hired engineer at IBM, I thought patents were wonderful expressions of my own creativity.  IBM rewarded me for the work, and recognized them as important contributions to my young career.  Remember, in 2003, IBM was defending the Linux world against evil SCO.  (Confession: I think I read Groklaw every single day.)

Yeah, I filed somewhere around 75 patents in about 4 years, 47 of which have been granted by the USPTO to date.

I'm actually really, really proud of a couple of them.  I was the lead inventor on a couple of early patents defining the invention you might know today as Swype (Android) or Shapewriter (iPhone) on your mobile devices.  In 2003, I called it QWERsive, as it was basically applying "cursive handwriting" to a "qwerty keyboard."  Along with one of my co-inventors, we actually presented a paper at the 27th UNICODE conference in Berlin in 2005, and IBM sold the patent to Lenovo a year later.  (To my knowledge, thankfully that patent has never been enforced, as I use Swype every single day.)

 QWERsive

But that enthusiasm evaporated very quickly between 2005 and 2007, as I reviewed thousands of invention disclosures by my IBM colleagues, and hundreds of software patents by IBM competitors in the industry.

I spent most of 2005 working onsite at Red Hat in Westford, MA, and came to appreciate how much more efficiently innovation happened in a totally open source world, free of invention disclosures, black out periods, gag orders, and software patents.  I met open source activists in the free software community, such as Jon maddog Hall, who explained the wake of destruction behind, and the impending doom ahead, in a world full of software patents.

Finally, in 2008, I joined an amazing little free software company called Canonical, which was far too busy releasing Ubuntu every 6 months on time, and building an amazing open source software ecosystem, to fart around with software patents.  To my delight, our founder, Mark Shuttleworth, continues to share the same enlightened view, as he states in this TechCrunch interview (2012):
“People have become confused,” Shuttleworth lamented, “and think that a patent is incentive to create at all.” No one invents just to get a patent, though — people invent in order to solve problems. According to him, patents should incentivize disclosure. Software is not something you can really keep secret, and as such Shuttleworth’s determination is that “society is not benefited by software patents at all.” Software patents, he said, are a bad deal for society. The remedy is to shorten the duration of patents, and reduce the areas people are allowed to patent. “We’re entering a third world war of patents,” Shuttleworth said emphatically. “You can’t do anything without tripping over a patent!” One cannot possibly check all possible patents for your invention, and the patent arms race is not about creation at all.
And while I'm still really proud of some of my ideas today, I'm ever so ashamed that they're patented.

If I could do what Elon Musk did with Tesla's patent portfolio, you have my word, I absolutely would.  However, while my name is listed as the "inventor" on four dozen patents, all of them are "assigned" to IBM (or Lenovo).  That is to say, they're not mine to give, or open up.

What I can do is speak up, and formally apologize.  I'm sorry I filed software patents.  A lot of them.  I have no intention of ever doing so again.  The system desperately needs a complete overhaul.  Both the technology and business worlds are healthier, better, and more innovative environments without software patents.

I do take some consolation that IBM seems to be "one of the good guys", insomuch as our modern-day IBM has not been as litigious as others, and hasn't, to my knowledge, used any of the patents for which I'm responsible in an offensive manner.

 No longer hanging on my wall.  Tucked away in a box in the attic.
But there are certainly those that do.  Patent trolls.

Another former employer of mine, Gazzang, was acquired earlier this month (June 3rd) by Cloudera -- a super sharp, up-and-coming big data open source company with very deep pockets and tremendous market potential.  Want to guess what happened 3 days later?  A super shady patent infringement lawsuit was filed, of course!
Protegrity Corp v. Gazzang, Inc.
Complaint for Patent Infringement. Civil Action No. 3:14-cv-00825; no judge yet assigned. Filed on June 6, 2014 in the U.S. District Court for the District of Connecticut. Patents in case: 7,305,707, “Method for intrusion detection in a database system” by Mattsson. Prosecuted by Neuner; George W. Cohen; Steven M. Edwards Angell Palmer & Dodge LLP. Includes 22 claims (2 indep.). Was application 11/510,185. Granted 12/4/2007.
Yuck.  And the reality is that happens every single day, and in places where the stakes are much, much higher.  See: Apple v. Google, for instance.

Musk concludes his post:
Technology leadership is not defined by patents, which history has repeatedly shown to be small protection indeed against a determined competitor, but rather by the ability of a company to attract and motivate the world’s most talented engineers. We believe that applying the open source philosophy to our patents will strengthen rather than diminish Tesla’s position in this regard.
What a brave, bold, ballsy, responsible assertion!

I've never been more excited to see someone back up their own rhetoric against software patents, with such a substantial, palpable, tangible assertion.  Kudos, Elon.

Moreover, I've also never been more interested in buying a Tesla.   Coincidence?

Maybe it'll run an open source operating system and apps, too.  Do that, and I'm sold.

:-Dustin

bmichaelsen

## Free Four

The memories of a man in his old age

are the deeds of a man in his prime

– Free Four, Obscured by Clouds, Pink Floyd

I just donated to:

• the Wikimedia Foundation
• the OpenBSD project

and became:

Being involved in a project that is heavily driven by donations, I keep reminding myself of the importance of putting my money where my mouth is.

Some of these donations were triggered by recent events and initiatives in these projects. GNOME’s outreach for women program, for example. Or OpenBSD’s bold initiative in starting LibreSSL, which is doing what needed to be done and revitalizing an overlooked area of open source development. Watching them explain the status quo and how they are attacking it reminds me of LibreOffice — beyond the name. Plus, I don’t want to be compared to a My Little Pony character again.

goals of LibreSSL — they remind me of something

Others are already working examples of the long tail, crowdfunding and the meshed society (Wikipedia), or are trailblazing to become one (Krautreporter) beyond the world of software. The latter might also have been influenced by one of the last wishes of a man who unexpectedly died way too early. May he rest in peace.

jdstrand

## Application isolation with AppArmor – part IV

Last time I discussed AppArmor, I talked about new features in Ubuntu 13.10 and a bit about ApplicationConfinement for Ubuntu Touch. With the release of Ubuntu 14.04 LTS, several improvements were made:

• Mediation of signals
• Mediation of ptrace
• Various policy updates for 14.04, including new tunables, better support for XDG user directories, and Unity7 abstractions
• Parser policy compilation performance improvements
• Google Summer of Code (SUSE sponsored) python rewrite of the userspace tools

## Signal and ptrace mediation

Prior to Ubuntu 14.04 LTS, a confined process could send signals to other processes (subject to DAC) and ptrace other processes (subject to DAC and YAMA). AppArmor on 14.04 LTS adds mediation of both signals and ptrace which brings important security improvements for all AppArmor confined applications, such as those in the Ubuntu AppStore and qemu/kvm machines as managed by libvirt and OpenStack.

When developing policy for signal and ptrace rules, it is important to remember that AppArmor performs a cross check, verifying that:

• the process sending the signal/performing the ptrace is allowed to send the signal to/ptrace the target process
• the target process receiving the signal/being ptraced is allowed to receive the signal from/be ptraced by the sender process

Signal(7) permissions use the ‘signal’ rule with the ‘receive/send’ permissions governing signals. PTrace permissions use the ‘ptrace’ rule with the ‘trace/tracedby’ permissions governing ptrace(2) and the ‘read/readby’ permissions governing certain proc(5) filesystem accesses, kcmp(2), futexes (get_robust_list(2)) and perf trace events.

Consider the following denial:

Jun 6 21:39:09 localhost kernel: [221158.831933] type=1400 audit(1402083549.185:782): apparmor="DENIED" operation="ptrace" profile="foo" pid=29142 comm="cat" requested_mask="read" denied_mask="read" peer="unconfined" 

This demonstrates that the ‘cat’ binary running under the ‘foo’ profile was unable to read the contents of a /proc entry (in my test, /proc/11300/environ). To allow this process to read /proc entries for unconfined processes, the following rule can be used:

ptrace (read) peer=unconfined, 

If the receiving process was confined, the log entry would say ‘peer=”<profile name>”‘ and you would adjust the ‘peer=unconfined’ in the rule to match that in the log denial. In this case, because unconfined processes implicitly can be readby all other processes, we don’t need to specify the cross check rule. If the target process was confined, the profile for the target process would need a rule like this:

ptrace (readby) peer=foo, 

Likewise for signal rules, consider this denial:

Jun 6 21:53:15 localhost kernel: [222005.216619] type=1400 audit(1402084395.937:897): apparmor="DENIED" operation="signal" profile="foo" pid=29069 comm="bash" requested_mask="send" denied_mask="send" signal=term peer="unconfined" 

This shows that ‘bash’ running under the ‘foo’ profile tried to send the ‘term’ signal to an unconfined process (in my test, I used ‘kill 11300’) and was blocked. Signal rules use ‘receive’ and ‘send’ to determine access, so we can add a rule like so to allow sending of the signal:

signal (send) set=("term") peer=unconfined, 

Like with ptrace, a cross-check is performed with signal rules but implicit rules allow unconfined processes to send and receive signals. If pid 11300 were confined, you would adjust the ‘peer=’ in the rule of the foo profile to match the denial in the log, and then adjust the target profile to have something like:

signal (receive) set=("term") peer=foo, 

Signal and ptrace rules are very flexible and the AppArmor base abstraction in Ubuntu 14.04 LTS has several rules to help make profiling and transitioning to the new mediation easier:

# Allow other processes to read our /proc entries, futexes, perf tracing and
# kcmp for now
ptrace (readby),

# Allow other processes to trace us by default (they will need
# 'trace' in the first place). Administrators can override with:
#   deny ptrace (tracedby) ...
ptrace (tracedby),

# Allow unconfined processes to send us signals by default
signal (receive) peer=unconfined,

# Allow us to signal ourselves
signal peer=@{profile_name},

# Checking for PID existence is quite common so add it by default for now
signal (receive, send) set=("exists"),

Note the above uses the new ‘@{profile_name}’ AppArmor variable, which is particularly handy with ptrace and signal rules. See man 5 apparmor.d for more details and examples.
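To see both halves of the cross check in one place, here is a minimal, purely illustrative sketch; the profile names, binary paths and the particular rules are invented for the example, and a real profile would of course need file and other rules as well:

profile foo /usr/bin/foo-tool {
  #include <abstractions/base>

  # foo may read the target's /proc entries and send it SIGTERM
  ptrace (read) peer=target,
  signal (send) set=("term") peer=target,
}

profile target /usr/bin/target-daemon {
  #include <abstractions/base>

  # ... and the target must agree to be read and signalled by foo
  ptrace (readby) peer=foo,
  signal (receive) set=("term") peer=foo,
}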

## 14.10

Work still remains and some of the things we’d like to do for 14.10 include:

• Finishing mediation for non-networking forms of IPC (e.g., abstract sockets). This will be done in time for the phone release.
• Have services integrate with AppArmor and the upcoming trust-store to become trusted helpers (also for the phone release)
• Continue work on networking IPC (for 15.04)
• Continue to work with the upstream kernel on kdbus
• Work continues on LXC stacking and we hope to have stacked profiles within the current namespace for 14.10. Full support for stacked profiles, where different host and container policy can apply to the same binary at the same time, should be ready by 15.04
• Various fixes to the Python userspace tools for remaining bugs. These will also be backported to 14.04 LTS

Until next time, enjoy!


David Murphy (schwuk)

## Enabling Students in a Digital Age: Charlie Reisinger at TEDxLancaster

This is really inspiring to me, on several levels: as an Ubuntu member, as a Canonical employee, and as a school governor.

Not only are they deploying Ubuntu and other open-source software to their students, they are encouraging those students to tinker with their laptops, and – better yet – some of those same students are directly involved in the development and distribution of that software, and in providing support for their peers. All of those students will take incredibly valuable experience with them into their future careers.

Well done.

David Planella

## Internationalizing your apps at the Ubuntu App Developer Week

As part of the Ubuntu App Developer Week, I just ran a live on-air session on how to internationalize your Ubuntu apps. Some of the participants on the live chat asked me if I could share the slides somewhere online.

So here they are for your viewing pleasure :) If you’ve got any questions on i18n or on Ubuntu app development in general, feel free to ask in the comments or ping me (dpm) on IRC.

## The slides

Enjoy!


David Planella

## A new era for the Ubuntu community team, or business as usual

A sample of the wider Ubuntu Community team, with Canonicalers and volunteer core app developers

After the recent news of Jono stepping down as the Ubuntu Community Manager to seek new challenges at XPRIZE, a new era in Ubuntu begins. Jono’s leadership, passion and drive to continually push the boundaries have been contagious over the years, and have been the catalyst for growing the unique community of individuals that defines Ubuntu today.

Jono is now joining the ranks of non-Canonical Ubuntu members, and while this will change the angle of participation, I’m certain that it won’t change his energy and dedication one bit. But most importantly, it’s a testament to his work that his former team will continue to thrive and take up the torch in pushing those boundaries.

For us, it will be business as usual in the sense of implementing our roadmap, continuing to grow a strong and open community, being innovative in how we do it, and coordinating the logistics around our plans. So not much will be different in that regard, but obviously some organizational bits will change.

As part of the transition, the Ubuntu Community Team at Canonical in full, that is, Michael Hall, Daniel Holbach, Alan Pope, Nicholas Skaggs and myself, will now be hosting the weekly Ubuntu Q&A, starting today at 18:00 UTC on Ubuntu On Air (click here for the time at your location).

## The Ubuntu Community Team Q&A

Openness, both in being transparent and in being a welcoming community, is one of the core values of Ubuntu, and we believe the channels should always be open for a healthy flow of information and to help contributors get involved.

As such, the Ubuntu Community Team Q&A will continue to provide a weekly, 1-hour-long session open for participation to anyone who wants to ask their questions about Ubuntu. In fact, as in former editions, you can ask the Community Team just about anything about Free Software, Technology, or whatever you come up with. As before, the only questions we won’t answer are those related to technical support, where you’ll be much better served using Ask Ubuntu, the Ubuntu forums or IRC.

Join the Ubuntu Community Team Q&A at 18:00 UTC today and ask your questions >

## The Ubuntu Online Summit is coming soon!

Also, following the thread of events and participation, the new Ubuntu Online Summit (UOS) is coming up very soon, and it’s an excellent opportunity to learn about getting involved in Ubuntu and to organize or present the plans of the different Ubuntu teams for the coming months.

UOS will be held on June 10th – 12th and it will be a combination of the former Ubuntu Developer Summit and the more user-facing events we’ve been organizing in the past. This opens the door to a wider audience that can follow a richer mix of developer and user or contributor content.

If you want to learn about the details, check out Michael’s UOS post on how it’s going to work. If you want to contribute and make a difference in Ubuntu, do register a session too!

Looking forward to seeing you soon!


mark

## #9 – Canonical’s cloud-init saves you from image soup, on every cloud

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago, and the resulting code, cloud-init, is one of the more visible pieces Canonical has designed and built, and it is very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, so most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
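As a deliberately simplified sketch of that workflow (the package and file names here are just examples, not anything cloud-init requires), the “DNA” of a small web server image could be nothing more than a user-data script like this, handed over when you ask the cloud for a new machine:

#!/bin/sh
# Hypothetical first-boot customisation, run once by cloud-init against a
# pristine Ubuntu cloud image.
set -e
apt-get update
apt-get -y upgrade          # pick up any security updates newer than the image
apt-get -y install nginx    # the one thing this "image" exists to provide
echo "customised by cloud-init on $(date)" > /etc/motd

On an OpenStack cloud you might attach it with something like nova boot --image <fresh Ubuntu image> --flavor m1.small --user-data ./webserver.sh my-instance; other clouds have their own way of passing user data, but cloud-init consumes it the same way everywhere.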

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.