# Canonical Voices

Michael Hall

## Public speaking for introverts

Last week I attended FOSSETCON, a new open source convention here in central Florida, and I had the opportunity to give a couple of presentations on Ubuntu phones and app development. Anybody who knows me knows that I love talking about these things, but a lot fewer people know that doing it in front of a room of people I don’t know still makes me extremely nervous. I’m an introvert, and even though I have a public-facing job and work with the wider community all the time, I’m still an introvert.

I know there are a lot of other introverts out there who might find the idea of giving a presentation overwhelming, but it doesn’t have to be. Here I’m going to give my personal experiences and advice, in the hope that it’ll encourage some of you to step out of your comfort zones and share your knowledge and talent with the rest of us at meetups and conferences.

## You will be bad at it…

Public speaking is like learning how to ride a bicycle: everybody falls their first time. Everybody falls a second time, and a third. You will fidget and stutter, you will lose your train of thought, and your voice will sound funny. It’s not just you; everybody starts off being bad at it. Don’t let that stop you, though: accept that you’ll have bruises and scrapes, and keep getting back on that bike. Incidentally, accepting that your first presentations are going to be bad makes going into them much less frightening.

## … until you are good at it

I read a lot about how to be a good and confident public speaker, and the advice was all over the map; a lot of it felt like pure BS. I think a lot of people try different things, and when they finally feel confident speaking, they credit whatever their latest technique was with giving them that confidence. In reality, you just get more confident the more you do it. You’ll be better the second time than the first, and better the third time than the second. So keep at it: no matter how good or bad you are now, you will keep getting better if you just keep doing it.

You’ll find a lot of suggestions about how to use your hands (or not use them), how to walk around (or not walk around) or other suggestions about what to do with yourself while you’re giving your presentation. Ignore them all. It’s not that these things don’t affect your presentation, I’ll admit that they do, it’s that they don’t affect anything after your presentation. Think back about all of the presentations you’ve seen in your life, how much do you remember about how the presenter walked or waved their hands? Unless those movements were integral to the subject, you probably don’t remember much. The same will happen for you, nobody is going to remember whether you walked around or not, they’re going to remember the information you gave them.

This is the one piece of advice I read that actually has helped me. The reason nobody remembers what you did with your hands is because they’re not there to watch you, they’re there for the information you’re giving them. Unless you’re an actual celebrity, people are there to get information for their own benefit, you’re just the medium which provides it to them.  So don’t make it about you (again, unless you’re an actual celebrity), focus on the topic and information you’re giving out and what it can do for the audience. If you do that, they’ll be thinking about what they’re going to do with it, not what you’re doing with your hands or how many times you’ve said “um”. Good information is a good distraction from the things you don’t want them paying attention to.

## It’s all just practice

Practicing your presentation isn’t nearly as stressful as giving it, because you’re not worried about messing up. If you mess up during practice, you just correct it, make a note not to make the same mistake next time, and carry on. And if you plan on doing more public speaking, there will always be a next time, which means this time is your practice for that one. Keep your eye on the presentation after this one; if you mess up now, you can correct it for the next one.

All of the above are really just different ways of saying the same thing: just keep doing it and worry about the content not you. You will get better, your content will get better, and other people will benefit from it, for which they will be appreciative and will gladly overlook any faults in the presentation. I guarantee that you will not be more nervous about it than I was when I started.

pitti

## autopkgtest 3.5: Reboot support, Perl/Ruby implicit tests

Last week’s autopkgtest 3.5 release (in Debian sid and Ubuntu Utopic) brings several new features which I’d like to announce.

## Tests that reboot

For testing low-level packages like init or the kernel it is sometimes desirable to reboot the testbed in the middle of a test. For example, I added a new boot_and_services systemd autopkgtest which configures grub to boot with systemd as pid 1, reboots, and then checks that the most important services like lightdm, D-BUS, NetworkManager, and cron come up as expected. (This test will be expanded a lot in the future to cover other areas like the journal, logind, etc.)

In a testbed which supports rebooting (currently only QEMU) your test will now find an “autopkgtest-reboot” command which the test calls with an arbitrary “marker” string. autopkgtest will then reboot the testbed, save/restore any files it needs to (like the test’s file tree or previously created artifacts), and then re-run the test with ADT_REBOOT_MARK=mymarker.

The new “Reboot during a test” section in README.package-tests explains this in detail with an example.
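As a concrete illustration, a minimal DEP-8 test script using this interface might look like the sketch below. The marker name (`after-setup`), the phase labels, and the fallback stub are all invented for illustration; outside a real testbed the `autopkgtest-reboot` helper doesn't exist, so the sketch substitutes a plain `echo` for it:

```shell
#!/bin/sh
# Sketch of a test that reboots the testbed once mid-test.
# On the first run ADT_REBOOT_MARK is empty; after the reboot,
# autopkgtest re-runs this script with ADT_REBOOT_MARK set to the
# marker string that was passed to autopkgtest-reboot.
set -e

# Fall back to an echo when running outside a real testbed, so this
# sketch stays readable and runnable standalone (illustrative stub).
reboot_cmd=autopkgtest-reboot
command -v "$reboot_cmd" >/dev/null 2>&1 || reboot_cmd="echo would-reboot:"

case "${ADT_REBOOT_MARK:-}" in
    "")
        # First run: do any setup that requires a reboot, then ask for one.
        phase=setup
        $reboot_cmd after-setup
        ;;
    after-setup)
        # Second run, after the reboot: verify the system came up correctly.
        phase=verify
        ;;
esac
echo "ran phase: $phase"
```

The key point is that the script is re-entrant: it must inspect ADT_REBOOT_MARK at the top and branch, because autopkgtest runs the same script again from the beginning after the reboot.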

## Implicit test metadata for similar packages

The Debian pkg-perl team recently discussed how to add package tests to the ~3,000 Perl packages. For most of these the test metadata looks pretty much the same, so they created a new pkg-perl-autopkgtest package which centralizes the logic. autopkgtest 3.5 now supports an implicit debian/tests/control file, to avoid having to modify several thousand packages with exactly the same file.

An initial run already looked quite promising: 65% of the packages pass their tests. There will be a few iterations to identify common failures and fix those in pkg-perl-autopkgtest and autopkgtest itself now.

There is still some discussion about how implicit test control files go together with the DEP-8 specification, as other runners like sadt do not support them yet. Most probably we’ll declare those packages XS-Testsuite: autopkgtest-pkg-perl instead of the usual autopkgtest.
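To make that concrete, instead of each package shipping an identical debian/tests/control, an affected source package would carry a single field in its debian/control source stanza. A hypothetical stanza might look like this (the package name is invented for illustration):

```
Source: libfoo-bar-perl
Maintainer: Debian Perl Group <pkg-perl-maintainers@lists.alioth.debian.org>
XS-Testsuite: autopkgtest-pkg-perl
```

The `autopkgtest-pkg-perl` value tells the test runner to pull the shared test logic from pkg-perl-autopkgtest rather than expecting a per-package debian/tests/control.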

In the same vein, Debian’s Ruby maintainer (Antonio Terceiro) added implicit test control support for Ruby packages. We haven’t done a mass test run with those yet, but their structure will probably look very similar.

Dustin Kirkland

## Deploy OpenStack IceHouse like a Boss!

This little snippet of ~200 lines of YAML is the exact OpenStack that I'm deploying tonight, at the OpenStack Austin Meetup.

Anyone with a working Juju and MAAS setup, and 7 registered servers should be able to deploy this same OpenStack setup, in about 12 minutes, with a single command.

$ wget http://people.canonical.com/~kirkland/icehouseOB.yaml
$ juju-deployer -c icehouseOB.yaml
$ cat icehouseOB.yaml

```yaml
icehouse:
  overrides:
    openstack-origin: "cloud:trusty-icehouse"
    source: "distro"
  services:
    ceph:
      charm: "cs:trusty/ceph-27"
      num_units: 3
      constraints: tags=physical
      options:
        fsid: "9e7aac42-4bf4-11e3-b4b7-5254006a039c"
        "monitor-secret": AQAAvoJSOAv/NRAAgvXP8d7iXN7lWYbvDZzm2Q==
        "osd-devices": "/srv"
        "osd-reformat": "yes"
      annotations:
        "gui-x": "2648.6688842773438"
        "gui-y": "708.3873901367188"
    keystone:
      charm: "cs:trusty/keystone-5"
      num_units: 1
      constraints: tags=physical
      options:
        "admin-password": "admin"
        "admin-token": "admin"
      annotations:
        "gui-x": "2013.905517578125"
        "gui-y": "75.58013916015625"
    "nova-compute":
      charm: "cs:trusty/nova-compute-3"
      num_units: 3
      constraints: tags=physical
      to: [ceph=0, ceph=1, ceph=2]
      options:
        "flat-interface": eth0
      annotations:
        "gui-x": "776.1040649414062"
        "gui-y": "-81.22811031341553"
    "neutron-gateway":
      charm: "cs:trusty/quantum-gateway-3"
      num_units: 1
      constraints: tags=virtual
      options:
        ext-port: eth1
        instance-mtu: 1400
      annotations:
        "gui-x": "329.0572509765625"
        "gui-y": "46.4658203125"
    "nova-cloud-controller":
      charm: "cs:trusty/nova-cloud-controller-41"
      num_units: 1
      constraints: tags=physical
      options:
        "network-manager": Neutron
      annotations:
        "gui-x": "1388.40185546875"
        "gui-y": "-118.01156234741211"
    rabbitmq:
      charm: "cs:trusty/rabbitmq-server-4"
      num_units: 1
      to: mysql
      annotations:
        "gui-x": "633.8120727539062"
        "gui-y": "862.6530151367188"
    glance:
      charm: "cs:trusty/glance-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "1147.3269653320312"
        "gui-y": "1389.5643157958984"
    cinder:
      charm: "cs:trusty/cinder-4"
      num_units: 1
      to: nova-cloud-controller
      options:
        "block-device": none
      annotations:
        "gui-x": "1752.32568359375"
        "gui-y": "1365.716194152832"
    "ceph-radosgw":
      charm: "cs:trusty/ceph-radosgw-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "2216.68212890625"
        "gui-y": "697.16796875"
    cinder-ceph:
      charm: "cs:trusty/cinder-ceph-1"
      num_units: 0
      annotations:
        "gui-x": "2257.5515747070312"
        "gui-y": "1231.2130126953125"
    "openstack-dashboard":
      charm: "cs:trusty/openstack-dashboard-4"
      num_units: 1
      to: "keystone"
      options:
        webroot: "/"
      annotations:
        "gui-x": "2353.6898193359375"
        "gui-y": "-94.2642593383789"
    mysql:
      charm: "cs:trusty/mysql-1"
      num_units: 1
      constraints: tags=physical
      options:
        "dataset-size": "20%"
      annotations:
        "gui-x": "364.4567565917969"
        "gui-y": "1067.5167846679688"
    mongodb:
      charm: "cs:trusty/mongodb-0"
      num_units: 1
      constraints: tags=physical
      annotations:
        "gui-x": "-70.0399979352951"
        "gui-y": "1282.8224487304688"
    ceilometer:
      charm: "cs:trusty/ceilometer-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "-78.13333225250244"
        "gui-y": "919.3128051757812"
    ceilometer-agent:
      charm: "cs:trusty/ceilometer-agent-0"
      num_units: 0
      annotations:
        "gui-x": "-90.9158582687378"
        "gui-y": "562.5347595214844"
    heat:
      charm: "cs:trusty/heat-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "494.94012451171875"
        "gui-y": "1363.6024169921875"
    ntp:
      charm: "cs:trusty/ntp-4"
      num_units: 0
      annotations:
        "gui-x": "-104.57728099822998"
        "gui-y": "294.6641273498535"
  relations:
    - - "keystone:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:amqp"
      - "rabbitmq:amqp"
    - - "nova-cloud-controller:image-service"
      - "glance:image-service"
    - - "nova-cloud-controller:identity-service"
      - "keystone:identity-service"
    - - "glance:shared-db"
      - "mysql:shared-db"
    - - "glance:identity-service"
      - "keystone:identity-service"
    - - "cinder:shared-db"
      - "mysql:shared-db"
    - - "cinder:amqp"
      - "rabbitmq:amqp"
    - - "cinder:cinder-volume-service"
      - "nova-cloud-controller:cinder-volume-service"
    - - "cinder:identity-service"
      - "keystone:identity-service"
    - - "neutron-gateway:shared-db"
      - "mysql:shared-db"
    - - "neutron-gateway:amqp"
      - "rabbitmq:amqp"
    - - "neutron-gateway:quantum-network-service"
      - "nova-cloud-controller:quantum-network-service"
    - - "openstack-dashboard:identity-service"
      - "keystone:identity-service"
    - - "nova-compute:shared-db"
      - "mysql:shared-db"
    - - "nova-compute:amqp"
      - "rabbitmq:amqp"
    - - "nova-compute:image-service"
      - "glance:image-service"
    - - "nova-compute:cloud-compute"
      - "nova-cloud-controller:cloud-compute"
    - - "cinder:storage-backend"
      - "cinder-ceph:storage-backend"
    - - "ceph:client"
      - "cinder-ceph:ceph"
    - - "ceph:client"
      - "nova-compute:ceph"
    - - "ceph:client"
      - "glance:ceph"
    - - "ceilometer:identity-service"
      - "keystone:identity-service"
    - - "ceilometer:amqp"
      - "rabbitmq:amqp"
    - - "ceilometer:shared-db"
      - "mongodb:database"
    - - "ceilometer-agent:container"
      - "nova-compute:juju-info"
    - - "ceilometer-agent:ceilometer-service"
      - "ceilometer:ceilometer-service"
    - - "heat:shared-db"
      - "mysql:shared-db"
    - - "heat:identity-service"
      - "keystone:identity-service"
    - - "heat:amqp"
      - "rabbitmq:amqp"
    - - "ceph-radosgw:mon"
      - "ceph:radosgw"
    - - "ceph-radosgw:identity-service"
      - "keystone:identity-service"
    - - "ntp:juju-info"
      - "neutron-gateway:juju-info"
    - - "ntp:juju-info"
      - "ceph:juju-info"
    - - "ntp:juju-info"
      - "keystone:juju-info"
    - - "ntp:juju-info"
      - "nova-compute:juju-info"
    - - "ntp:juju-info"
      - "nova-cloud-controller:juju-info"
    - - "ntp:juju-info"
      - "rabbitmq:juju-info"
    - - "ntp:juju-info"
      - "glance:juju-info"
    - - "ntp:juju-info"
      - "cinder:juju-info"
    - - "ntp:juju-info"
      - "ceph-radosgw:juju-info"
    - - "ntp:juju-info"
      - "openstack-dashboard:juju-info"
    - - "ntp:juju-info"
      - "mysql:juju-info"
    - - "ntp:juju-info"
      - "mongodb:juju-info"
    - - "ntp:juju-info"
      - "ceilometer:juju-info"
    - - "ntp:juju-info"
      - "heat:juju-info"
  series: trusty
```

:-Dustin

Dustin Kirkland

## Dream a little dream (in a dream within another dream) with me!
What would you say if I told you that you could continuously upload your own Software-as-a-Service (SaaS) web apps into an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, autonomically managed by an open source Orchestration-Service… right now, today?

“An idea is resilient. Highly contagious. Once an idea has taken hold of the brain it's almost impossible to eradicate.”

“Now, before you bother telling me it's impossible…”

“No, it's perfectly possible. It's just bloody difficult.”

Perhaps something like this...

“How could I ever acquire enough detail to make them think this is reality?”

“Don’t you want to take a leap of faith???”

Sure, let's take a look!

Okay, this looks kinda neat, what is it? This is an open source Java Spring web application, called Spring-Music, deployed as an app, running inside of Linux containers in Cloud Foundry.

Cloud Foundry? Cloud Foundry is an open source Platform-as-a-Service (PaaS) cloud, deployed into Linux virtual machine instances in OpenStack, by Juju.

OpenStack? Juju? OpenStack is an open source Infrastructure-as-a-Service (IaaS) cloud, deployed by Juju and Landscape on top of MAAS. Juju is an open source orchestration system that deploys and scales complex services across many public clouds, private clouds, and bare metal servers.

Landscape? MAAS? Landscape is a systems management tool that automates software installation, updates, and maintenance in both physical and virtual machines. Oh, and it too is deployed by Juju. MAAS is an open source bare metal provisioning system, providing a cloud-like API to physical servers. Juju can deploy services to MAAS, as well as public and private clouds.

"Ready for the kick?"

If you recall these concepts of nesting cloud technologies... these are real technologies, which exist today!
These are Software-as-a-Service (SaaS) web apps served by an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, managed by an open source Orchestration-Service. Spring Music, served by Cloud Foundry, running on top of OpenStack, deployed on MAAS, managed by Juju and Landscape!

“The smallest seed of an idea can grow…”

Oh, and I won't leave you hanging... you're not dreaming!

:-Dustin

Nicholas Skaggs

## Autopilot Test Runners

In my next post, I will discuss notable autopilot features and talk about how autopilot has matured since it became an independent project. In the meantime, I would be remiss if I didn't also talk about the different test runners commonly used with autopilot tests. In addition to the autopilot binary, which can be executed to run the tests, different tools have cropped up to make running tests easier.

### autopilot-sandbox-run

This tool ships with autopilot itself and was developed as a way to run autopilot test suites on your desktop in a sane manner. Run the autopilot3-sandbox-run command with --help to see all the options available. By default, the tests will run in an Xvfb server, all completely behind the scenes, with the results being reported to you upon completion. This is a great way to run tests with no interference on your desktop. If you are a visual person like me, you may instead wish to pass -X to enable the test runs to occur in a Xephyr window, allowing you to see what's happening, but still retaining control of your mouse and keyboard.

I need this tool!

$ sudo apt-get install python3-autopilot

I want to run tests on my desktop without losing control of my mouse!

$ autopilot3-sandbox-run my_testsuite_name

I want to run tests on my desktop without losing control of my mouse, but I still want to see what's happening!

$ autopilot3-sandbox-run -X my_testsuite_name

### Autopkgtest

Autopkgtest was developed as a means to automatically test Debian packages, "as-installed". Recently support was added to also test click packages and to run on phablet devices. Autopkgtest will take care of dependencies, setting up autopilot, and unlocking the device. You can literally plug in a device and wait for the results. You should really check out the README pages, including those on running tests. That said, here's a quick primer on running tests using autopkgtest.

I need this tool!

$ sudo apt-get install autopkgtest

If you are on trusty, grab and install the utopic deb from here.

I want to run tests for a click package installed on my device! Awesome. This one is simple. Connect the device and then run:

$ adt-run --click my.click.name --- ssh -s adb

For example, adt-run --click com.ubuntu.music --- ssh -s adb will run the tests for the installed version of the music app on your device. You don't need to do anything else. For the curious, this works by reading the manifest file all click packages have. Read more here.

I want to run the tests I wrote/modified against an installed click package! For this you need to also pass your local folder containing the tests. You will also want to make sure you installed the new version of the click package if needed.

$ adt-run my-folder/ --click my.click.name --- ssh -s adb

Autopkgtest can also run in a lxc container, QEMU, a chroot, and other fun targets. In the examples above, I passed --- ssh -s adb as the target, instructing autopkgtest to use ssh and adb and thus run the tests on a connected phablet device. If you want to run autopilot tests on a phablet device, I recommend using autopkgtest as it handles everything for you.

### phablet-test-run

This tool is part of the greater phablet-tools package. It was originally developed as an easy way to execute tests on your phablet device.
Note, however, that copying the tests and any dependencies to the phablet device is left to you. The phablet-tools package provides some other useful utilities to help you with this (check out phablet-click-test-setup, for example).

I need this tool!

$ sudo apt-get install phablet-tools

I want to run the tests I wrote/modified against an installed click package! First copy the tests to the device. You can use the Ubuntu SDK or click-buddy for this, or even do it manually via adb. Then run phablet-test-run. It takes the same arguments as autopilot itself.

$ phablet-test-run -v my_testsuite

Note the tool looks for the testsuite and any dependencies of the testsuite inside the /home/phablet/autopilot folder. It's up to you to make sure everything that is needed to run your tests is located there, or else it will fail.

### Other ways

There are of course other possible test runners that wrap around autopilot to make executing tests easier. Perhaps you've written a script yourself. Just remember that at the end of the day the autopilot binary will be running the tests. It simply needs to be able to find the testsuite and all of its dependencies in order to run. For this reason, don't be afraid to execute autopilot3 and run the tests yourself.

Happy test runs!

Daniel Holbach

## ubuntu-community-team list: Hang out, discuss new ideas

We just created a new Ubuntu mailing list called ubuntu-community-team. As we didn’t have a place like this before, we created it so we can

• have discussions around planning community events
• start all kinds of initiatives around Ubuntu
• allow enthusiasts of the Ubuntu community to kick around new ideas
• bring people from all parts of our community together so we can learn from each other
• hang out and have fun

We are looking forward to seeing you on the list as well; sign up on this page.

Dustin Kirkland

## OpenStack Austin Meetup, with an Orange Box and Home Brew Beer!
In case you missed the recent Cloud Austin MeetUp, you have another chance to see the Ubuntu Orange Box live and in action here in Austin! This time, we're at the OpenStack Austin MeetUp, next Wednesday, September 10, 2014, at 6:30pm at Tech Ranch Austin, 9111 Jollyville Rd #100, Austin, TX!

If you join us, you'll witness all of OpenStack Icehouse, deployed in minutes to real hardware. Not an all-in-one DevStack; not a minimum viable set of components. Real, rich, production-quality OpenStack! Ceilometer, Ceph, Cinder, Glance, Heat, Horizon, Keystone, MongoDB, MySQL, Nova, NTP, Quantum, and RabbitMQ -- intelligently orchestrated and rapidly scaled across 10 physical servers sitting right up front on the podium. Of course, we'll go under the hood and look at how all of this comes together on the fabulous Ubuntu Orange Box.

And like any good open source software developer, I generally like to make things myself, and share them with others. In that spirit, I'll also bring a couple of growlers of my own home brewed beer, Ubrewtu ;-) Free as in beer, of course!

Cheers,
Dustin

Robin Winslow

## Saving ubuntu.com on download day: Caching location-specific pages

On release day we can get up to 8,000 requests a second to ubuntu.com from people trying to download the new release. In fact, last October (13.10) was the first release day in a long time that the site didn’t crash under the load at some point during the day (huge credit to the infrastructure team).

Ubuntu.com has been running on Drupal, but we’ve been gradually migrating it to a more bespoke Django-based system. In March we started work on migrating the download section in time for the release of Trusty Tahr. This was a prime opportunity to look for ways to reduce some of the load on the servers.
## Choosing geolocated download mirrors is hard work for an application

When someone downloads Ubuntu from ubuntu.com (on a thank-you page), they are actually sent to one of the 300 or so mirror sites that’s nearby. To pick a mirror for the user, the application has to:

1. Decide from the client’s IP address what country they’re in
2. Get the list of mirrors and find the ones that are in their country
3. Randomly pick them a mirror, while sending more people to mirrors with higher bandwidth

This process is by far the most intensive operation on the whole site, not because these tasks are particularly complicated in themselves, but because this needs to be done for each and every user – potentially 8,000 a second – while every other page on the site can be aggressively cached to prevent most requests from hitting the application itself. For the site to be able to handle this load, we’d need to load-balance requests across perhaps 40 VMs.

## Can everything be done client-side?

Our first thought was to embed the entire mirror list in the thank-you page and use JavaScript in the users’ browsers to select an appropriate mirror. This would drastically reduce the load on the application, because the download page would then be effectively static and cacheable like every other page.

The only way to reliably get the user’s location client-side is with the geolocation API, which is only supported by 85% of users’ browsers. Another slight issue is that the user has to give permission before they can be assigned a mirror, which would slightly hinder their experience. This solution would inconvenience users just a bit too much. So we found a trade-off:

## A mixed solution – Apache geolocation

mod_geoip2 for Apache can apply server rules based on a user’s location and is much faster than doing geolocation at the application level. This means that we can use Apache to send users to a country-specific version of the download page (e.g. the German desktop thank-you page) by adding &country=GB to the end of the URL. These country-specific pages contain the list of mirrors for that country, and each one can now be cached, vastly reducing the load on the server. Client-side JavaScript randomly selects a mirror for the user, weighted by the bandwidth of each mirror, and kicks off their download, without the need for client-side geolocation support.

This solution was successfully implemented shortly before the release of Trusty Tahr. (This article was also posted on robinwinslow.co.uk)

Ben Howard

## Archive-triggered Cloud Image Builds

For years, the Ubuntu Cloud Images have been built on a timer (i.e. cronjob or Jenkins). You can reasonably expect stable and LTS releases to be built twice a week, while our development build is built once a day. Each of these builds is given a serial in the form of YYYYMMDD. While time-based building has proven to be reliable, different build serials may be functionally the same, just put together at a different point in time. Many of the builds that we do for stable and LTS releases are pointless.

When the whole Heartbleed fiasco hit, it put the Cloud Image team into overdrive, since it required manually triggering builds of the LTS releases. When we manually trigger builds, it takes roughly 12-16 hours to build, QA, test and release new Cloud Images. Sure, most of this is automated, but the process had to be manually started by a human. This got me thinking: there has to be a better way. What if we built the Cloud Images when the package set changes?

With that, I changed the Ubuntu 14.10 (Utopic Unicorn) build process from time-based to archive-trigger-based. Now, instead of building every day at 00:30 UTC, the build starts when the archive has been updated and the packages in the prior cloud image build are older than the archive versions. In the last three days, there were eight builds for Utopic.
For a development version of Ubuntu, this just means that developers don't have to wait 24 hours for the latest package changes to land in a Cloud Image. Over the next few weeks, I will be moving the 10.04 LTS, 12.04 LTS and 14.04 LTS build processes from time-based to archive-trigger-based. While this might result in less frequent daily builds, the main advantage is that the daily builds will contain the latest package sets. And if you are trying to respond to the latest CVE, or waiting on a bug fix to land, it likely means that you'll have a fresh daily that you can use the following day.

Dustin Kirkland

## Call for Testing: Docker 1.0.1 in Ubuntu 14.04 LTS (Trusty)

## Docker 1.0.1 is available for testing, in Ubuntu 14.04 LTS!

Docker 1.0.1 has landed in the trusty-proposed archive, which we hope to SRU to trusty-updates very soon. We would love to have your testing feedback, to ensure both upgrades from Docker 0.9.1, as well as new installs of Docker 1.0.1, behave well, and are of the highest quality you have come to expect from Ubuntu's LTS (Long Term Support) releases! Please file any bugs or issues here.

Moreover, this new version of the Docker package now installs the Docker binary to /usr/bin/docker, rather than /usr/bin/docker.io as in previous versions. This should help Ubuntu's Docker package more closely match the wealth of documentation and examples available from our friends upstream.

A big thanks to Paul Tagliamonte, James Page, Nick Stinemates, Tianon Gravi, and Ryan Harper for their help upstream in Debian and in Ubuntu to get this package updated in Trusty! Also, it's probably worth mentioning that we're targeting Docker 1.1.2 (or perhaps 1.2.0) for Ubuntu 14.10 (Utopic), which will release on October 23, 2014.

Here are a few commands that might help your testing...

### Check What Candidate Versions are Available

$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

If that shows 0.9.1~dfsg1-2 (as it should), then you need to enable the trusty-proposed pocket.

$ echo "deb http://archive.ubuntu.com/ubuntu/ trusty-proposed universe" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

And now you should see the new version, 1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1, available (probably in addition to 0.9.1~dfsg1-2).

Check if you already have Docker installed, using:

$ dpkg -l docker.io

If so, you can simply upgrade.

$ sudo apt-get upgrade
And now, you can check your Docker version:

$ sudo dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
0.9.1~dfsg1-2

### New Installations

You can simply install the new package with:

$ sudo apt-get install docker.io

And ensure that you're on the latest version with:

$ dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

### Running Docker

If you're already a Docker user, you probably don't need these instructions. But in case you're reading this, and trying Docker for the first time, here's the briefest of quick start guides :-)

$ sudo docker pull ubuntu
$ sudo docker run -i -t ubuntu /bin/bash

And now you're running a bash shell inside of an Ubuntu Docker container. And only bash!

root@1728ffd1d47b:/# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 13:42 ?        00:00:00 /bin/bash
root         8     1  0 13:43 ?        00:00:00 ps -ef

If you want to do something more interesting in Docker, well, that's a whole other post ;-)

:-Dustin

Michael Hall

## Communicating Recognition

Recognition is like money: it only really has value when it's being passed between one person and another. Otherwise it's just potential value, sitting idle. Communication gives life to recognition, turning its potential value into real value.

As I covered in my previous post, Who do you contribute to?, recognition doesn't have a constant value. In that article I illustrated how the value of recognition differs depending on who it's coming from, but that's not the whole story. The value of recognition also differs depending on the medium of communication.

Over at the Community Leadership Knowledge Base I started documenting different forms of communication that a community might choose, and how each medium has a balance of three basic properties: speed, thoughtfulness and discoverability. Let's call this the communication triangle. Each of these also plays a part in the value of recognition.

## Speed

Again, much like money, recognition is something that is circulated. Its usefulness is not simply created by the sender and consumed by the receiver, but rather passed from one person to another, and then another.
The faster you can communicate recognition around your community, the more utility you can get out of even a small amount of it. Fast communications, like IRC, phone calls or in-person meetups, let you give and receive a higher volume of recognition than slower forms, like email or blog posts. But speed is only one part, and faster isn't necessarily better.

## Thoughtfulness

Where speed emphasizes quantity, thoughtfulness is a measure of the quality of communication, and that directly affects the value of recognition given. Thoughtful communications require consideration upon both receiving and replying. Messages are typically longer, more detailed, and better presented than those that emphasize speed. As a result, they are also usually a good bit slower, both in the time it takes for a reply to be made and in the speed at which a full conversation happens. An IRC meeting can be done in an hour, where an email exchange can last for weeks, even if both end up with the same word count at the end.

## Discoverability

The third point on our communication triangle, discoverability, is a measure of how likely it is that somebody not immediately involved in a conversation can find out about it. Because recognition is a social good, most of its value comes from other people knowing who has given it to whom. Discoverability acts as a multiplier (or divisor, if done poorly) to the original value of recognition.

There are two factors to the discoverability of communication. The first, accessibility, is about how hard it is to find the conversation. Blog posts, or social media posts, are usually very easy to discover, while IRC chats and email exchanges are not. The second factor, longevity, is about how far into the future that conversation can still be discovered. A social media post disappears (or at least becomes far less accessible) after a while, but an IRC log or mailing list archive can stick around for years.
Unlike the three properties of communication, however, these two factors of discoverability do not require a trade-off; you can have something that is both very accessible and has high longevity.

## Finding Balance

Most communities will have more than one method of communication, and a healthy one will have a combination of them that complement each other. This is important because sometimes one will offer a more productive use of your recognition than another. Some contributors will respond better to lots of immediate recognition, rather than a single eloquent one. Others will respond better to formal recognition than informal. In both cases, be mindful of the multiplier effect that discoverability gives you, and take full advantage of opportunities where that plays a larger than usual role, such as during an official meeting or when writing an article that will have higher than normal readership.

Read more

Dustin Kirkland

## Learn Byobu in 10 minutes while listening to Mozart

If you're interested in learning how to more effectively use your terminal as your integrated devops environment, consider taking 10 minutes and watching this video while enjoying the finale of Mozart's Symphony No. 40, Allegro assai (part of which is rumored to have inspired Beethoven's 5th).

I'm often asked for a quick-start guide to using Byobu effectively. This wiki page is a decent start, as is the manpage, and the various links on the upstream website. But it seems that some of the past screencast videos have made the longest-lasting impressions on Byobu users over the years. I was on a long, international flight from Munich to Newark this past Saturday with a bit of time on my hands, and I cobbled together this instructional video. That recent international trip to Nuremberg inspired me to rediscover Mozart, and I particularly like this piece, which Mozart wrote in 1788, but sadly never heard performed.
You can hear it now, and learn how to be more efficient in command line environments along the way :-) Enjoy!

:-Dustin

Read more

Iain Farrell

## Ubuntu 14.10 wallpapers – we needs ‘em!

Verónica Sousa’s Cul de sac

Ubuntu was once described to me by a wise(ish ;) ) man as a train that was leaving whether you’re on it or not. That’s the beauty of a 6 month release cycle. As many of you will already know, each release we include photos and illustrations produced by community members. We ask that you submit your images using the free photo sharing site Flickr, and that you limit your submissions this time to 2. The group won’t let you submit more than that, but if you change your mind after you’ve submitted, fear not: simply remove one and it’ll let you add another.

As with previous submission processes we’ve run, and in conjunction with the designers at Canonical, we’ve come up with the following tips for creating wallpaper images.

1. Images shouldn’t be too busy or filled with too many shapes and colours; a similar tone throughout is a good rule of thumb.
2. A single point of focus, a single area that draws the eye into the image, can also help you avoid something too cluttered.
3. The left and top edges are home to Ubuntu’s Launcher and Panel, so be careful to consider how your images look in place so as not to clash with the user interface. Try them out on your own desktop, see how they feel.
4. Try your image at different aspect ratios to make sure something important isn’t cropped out on smaller/larger screens at different resolutions.
5. Take a look at the wallpapers guidance on the Ubuntu Wiki regarding the size of images. Our target resolution is 2560 x 1600.
6. Break all the rules except the resolution one! :D

To shortlist from this collection we’ll be going to the contributors whose images were selected last time around to act as our selection judges. In doing this we’ll hold a series of public IRC meetings on Freenode in #1410wallpaper to discuss the selection.
In those sessions we’ll get the selection team to try out the images on their own Ubuntu machines to see what they look like on a range of displays and resolutions. Anyone is welcome to come to these sessions, but please keep in mind that the judges are volunteering their time and there are usually a lot of images to get through, so we’d appreciate it if there isn’t too much additional debate.

Based on the Utopic release schedule, I think our schedule for this cycle should look like this:

• 08/08/14 – Kick off 14.10 wallpaper submission process.
• 22/08/14 – First get together on #1410wallpaper at 19:30 GMT.
• 29/08/14 – Submissions deadline at 18:00 GMT – Flickr group will be locked and the selection process will begin.
• 09/09/14 – Deliver final selection in zip format to the appropriate bug on Launchpad.
• 11/09/14 – UI freeze for latest version of Ubuntu with our fantastic images in it!

As always, ping me if you have any questions. I’ll be lurking in #1410wallpaper on freenode, or leave a question in the Flickr group for wider discussion; that’s probably the fastest way to get an answer. I’ll be posting updates on our schedule here from time to time, but the Flickr group will serve as our hub.

Happy snapping and scribbling, and on behalf of the community, thanks for contributing to Ubuntu!

Read more

Dustin Kirkland

## Ubuntu OpenStack on an Orange Box, Live Demo at the Cloud Austin Meetup, August 19th

I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too. I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less! Be sure to RSVP, as space is limited.
http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Read more

Michael Hall

## Who do you contribute to?

When you contribute something as a member of a community, who are you actually giving it to? The simple answer of course is “the community” or “the project”, but those aren’t very specific. On the one hand you have a nebulous group of people, most of whom you probably don’t even know about, and on the other you’ve got some cold, lifeless code repository or collection of web pages. When you contribute, who is it that you really care about, who do you really want to see and use what you’ve made?

In my last post I talked about the importance of recognition, how it’s what contributors get in exchange for their contribution, and how human recognition is the kind that matters most. But which humans do our contributors want to be recognized by? Are you one of them and, if so, are you giving it effectively?

## Owners

The owner of a project has a distinct privilege in a community: they are ultimately the source of all recognition in that community. Early contributions made to a project get recognized directly by the founder. Later contributions may only get recognized by one of those first contributors, but the value of their recognition comes from the recognition they received as the first contributors. As the project grows, more generations of contributors come in, with recognition coming from the previous generations, though the relative value of it diminishes as you get further from the owner.

## Leaders

After the project owner, the next most important source of recognition is a project’s leaders. Leaders are people who gain authority and responsibility in a project; they can affect the direction of a project through decisions in addition to direct contributions. Many of those early contributors naturally become leaders in the project, but many will not, and many others who come later will rise to this position as well.
In both cases, it’s their ability to affect the direction of a project that gives their recognition added value, not their distance from the owner. Before a community can grow beyond a very small size it must produce leaders, either through a formal or an informal process, otherwise the availability of recognition will suffer.

## Legends

Leadership isn’t for everybody, and many of the early contributors who don’t become leaders still remain with the project, and end up making very significant contributions to it and the community over time. Whenever you make contributions, and get recognition for them, you start to build up a reputation for yourself. The more and better contributions you make, the more your reputation grows. Some people have accumulated such a large reputation that even though they are not leaders, their recognition is still sought after more than most. Not all communities will have one of these contributors, and they are more likely in communities where heads-down work is valued more than very public work.

## Mentors

When any of us gets started with a community for the first time, we usually end up finding one or two people who help us learn the ropes. These people help us find the resources we need, teach us what those resources don’t, and are instrumental in helping us make the leap from user to contributor. Very often these people aren’t the project owners or leaders. Very often they have very little reputation themselves in the overall project. But because they take the time to help the new contributor, and because theirs is very likely to be the first recognition that contributor receives, the recognition they give is disproportionately more valuable to that contributor than it otherwise would be.

Every member of a community can provide recognition, and every one should, but if you find yourself in one of the roles above it is even more important for you to be doing so. These roles are responsible both for setting the example, and for keeping a proper flow of recognition in a community.
And without that flow of recognition, you will find that your flow of contributions will also dry up.

Read more

pitti

## vim config for Markdown+LaTeX pandoc editing

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look so much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That’s how it always should have been! ☺

So last night I finally sat down and created a vim config for it:

```
"-- pandoc Markdown+LaTeX -------------------------------------------
function s:MDSettings()
    inoremap <buffer> <Leader>n \note[item]{}<Esc>i
    noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
    noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
    noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

    " adjust syntax highlighting for LaTeX parts
    "   inline formulas:
    syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
    "   environments:
    syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
    "   commands:
    syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd BufRead,BufNewFile *.md setfiletype markdown
autocmd FileType markdown :call <SID>MDSettings()
```

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

Read more

Dustin Kirkland

## Improving Random Seeds in Ubuntu 14.04 LTS Cloud Instances

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images. You can view my slides here (PDF), or you can read on below. Enjoy!

### Q: Why should I care about randomness?

### A: Because entropy is important!

• Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
  • SSL keys
  • SSH keys
  • GPG keys
  • /etc/shadow salts
  • TCP sequence numbers
  • UUIDs
  • dm-crypt keys
  • eCryptfs keys
• Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

### Q: Where does entropy come from?

### A: Hardware, typically.
• Keyboards
• Mouses
• Interrupt requests
• HDD seek timing
• Network activity
• Microphones
• Web cams
• Touch interfaces
• WiFi/RF
• TPM chips
• RdRand
• Entropy Keys
• Pricey IBM crypto cards
• Expensive RSA cards
• USB lava lamps
• Geiger Counters
• Seismographs
• Light/temperature sensors
• And so on

### Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

### A: Pseudo random number generators are our only viable alternative.

• In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
  • Basically, endless streams of pseudo random bytes
• Some utilities and most programming languages implement their own PRNGs
  • But they usually seed from /dev/random or /dev/urandom
• Sometimes, virtio-rng is available, for hosts to feed guests entropy
  • But not always

### Q: Are Linux PRNGs secure enough?

### A: Yes, if they are properly seeded.

• See random(4)
• When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
• In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
• See /etc/init.d/urandom

```
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1
...
```

### Q: And what exactly is a random seed?

### A: Basically, it's a small catalyst that primes the PRNG pump.

• Let’s pretend the digits of Pi are our random number generator
• The random seed would be a starting point, or “initialization vector”
• e.g. Pick a number between 1 and 20
  • say, 18
• Now start reading random numbers
• Not bad...but if you always pick ‘18’...

#### XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

### Q: So my OS generates an initial seed at first boot?

### A: Yep, but computers are predictable, especially VMs.
• Computers are inherently deterministic
  • And thus, bad at generating randomness
• Real hardware can provide quality entropy
• But virtual machines are basically clones of one another
  • i.e., The Cloud
  • No keyboard or mouse
  • IRQ based hardware is emulated
  • Block devices are virtual and cached by hypervisor
  • RTC is shared
  • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

#### Dilbert on random numbers

### Q: Surely you're just being paranoid about this, right?

### A: I’m afraid not...

#### Analysis of the LRNG (2006)

• Little prior documentation on Linux’s random number generator
• Random bits are a limited resource
• Very little entropy in embedded environments
  • OpenWRT was the case study
• OS start up consists of a sequence of routine, predictable processes
• Very little demonstrable entropy shortly after boot
• http://j.mp/McV2gT

#### Black Hat (2009)

• iSec Partners designed a simple algorithm to attack cloud instance SSH keys
• Picked up by Forbes
• http://j.mp/1hcJMPu

#### Factorable.net (2012)

• Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
• Comprehensive, Internet-wide scan of public SSH host keys and TLS certificates
• Insecure or poorly seeded RNGs in widespread use
  • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
  • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
  • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
• http://j.mp/1iPATZx

#### Dual_EC_DRBG Backdoor (2013)

• Dual Elliptic Curve Deterministic Random Bit Generator
• Ratified NIST, ANSI, and ISO standard
• Possible backdoor discovered in 2007
• Bruce Schneier noted that it was “rather obvious”
• Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
• http://j.mp/1bJEjrB

### Q: Ruh roh...so what can we do about it?

### A: For starters, do a better job seeding our PRNGs.

• Securely
• With high quality, unpredictable data
• More sources are better
• As early as possible
• And certainly before generating
  • SSH host keys
  • SSL certificates
  • Or any other critical system DNA
• /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

### Q: But how do we ensure that in cloud guests?

### A: Run Ubuntu! Sorry, shameless plug...

### Q: And what is Ubuntu's solution?

### A: Meet pollinate.

• pollinate is a new security feature that seeds the PRNG
• Introduced in Ubuntu 14.04 LTS cloud images
• Upstart job
• It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
• It’s GPLv3 free software
• Simple shell script wrapper around curl
• Fetches random seeds
  • From 1 or more entropy servers in a pool
• Writes them into /dev/urandom
• https://launchpad.net/pollinate

### Q: What about the back end?

### A: Introducing pollen.

• pollen is an entropy-as-a-service implementation
• Works over HTTP and/or HTTPS
• Supports a challenge/response mechanism
• Provides 512 bit (64 byte) random seeds
• It’s AGPL free software
• Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
• Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
• https://launchpad.net/pollen

### Q: Golang, did you say? That sounds cool!

### A: Indeed. Around 50 lines of code, cool!

pollen.go

### Q: Is there a public entropy service available?

### A: Hello, entropy.ubuntu.com.
• Highly available pollen cluster
• TLS/SSL encryption
• Multiple physical servers
• Behind a reverse proxy
• Deployed and scaled with Juju
• Multiple sources of hardware entropy
• High network traffic is always stirring the pot
• AGPL, so source code always available
• Supported by Canonical
• Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

### Q: But what if I don't necessarily trust Canonical?

### A: Then use a different entropy service :-)

• Deploy your own pollen
  • bzr branch lp:pollen
  • sudo apt-get install pollen
  • juju deploy pollen
• Add your preferred server(s) to your $POOL
• In /etc/default/pollinate
• In your cloud-init user data
• In progress
• In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
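To make the $POOL configuration from the bullets above concrete, here is a hedged sketch: the variable name and the /etc/default/pollinate path come straight from this post, but the exact file syntax may differ between pollinate versions, so check the shipped default file. The example writes to a scratch copy rather than the real config so it is safe to run anywhere.

```shell
# Hedged sketch of a pollinate pool configuration. POOL and
# /etc/default/pollinate are named in the post above; verify the exact
# syntax against your installed package before using it for real.
conf=./pollinate.default.example
cat > "$conf" <<'EOF'
# Space-separated list of entropy servers to contact at first boot.
# entropy.example.internal is a hypothetical server behind your firewall.
POOL="https://entropy.example.internal/ https://entropy.ubuntu.com/"
EOF
grep '^POOL=' "$conf"
```

Listing a private server alongside the public one follows the post's later advice: you only need at least one server outside the control of your attackers.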

### A: No, no, no, no, no!

• pollinate seeds your PRNG, securely and properly and as early as possible
• This improves the quality of all random numbers generated thereafter
• pollen provides random seeds over HTTP and/or HTTPS connections
• This information can be fed into your PRNG
• The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
• Note that neither pollen nor pollinate directly affect this quantity estimate!!!
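Since the kernel exposes that conservative estimate in /proc/sys/kernel/random/entropy_avail, you can inspect it directly. A minimal Linux-only sketch; the 256-bit threshold is an illustrative choice of mine, not an official recommendation:

```shell
#!/bin/sh
# Read the kernel's conservative entropy estimate mentioned above.
# Linux-only; the 256-bit threshold is illustrative, not official.
avail=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$avail" -lt 256 ]; then
    echo "entropy estimate is low: $avail bits"
else
    echo "entropy estimate: $avail bits"
fi
```

Running pollinate will not move this number, per the bullet above, because seeds written into the pool are mixed in without being credited to the estimate.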

### A: Think of it like the Heisenberg Uncertainty Principle.

• The pollinate challenge (via an HTTP POST submission) affects the pollen server's PRNG state machine
• pollinate can verify the response and ensure that the pollen server at least “did some work”
• From the perspective of the pollen server administrator, all communications are “stirring the pot”
• Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state

### A: Functionally, it’s no better or worse than it was without pollinate in the mix.

• In fact, you can dd if=/dev/zero of=/dev/random if you like, without harming your entropy quality
• All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
• Of course it doesn’t help, but it doesn’t hurt either
• Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
• Note the permissions on /dev/*random
• crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
• crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
• It's a bummer of course, but there's no new compromise
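That point is easy to check for yourself. The sketch below writes a few zero bytes into /dev/urandom (world-writable, per the permissions shown above) and prints the kernel's entropy estimate before and after; the write is hashed and mixed into the pool but never credited, so it cannot inflate the estimate:

```shell
# Linux-only demonstration: writes to the pool are mixed in but not
# credited with entropy, so they neither help nor hurt the estimate.
before=$(cat /proc/sys/kernel/random/entropy_avail)
dd if=/dev/zero of=/dev/urandom bs=64 count=1 2>/dev/null
after=$(cat /proc/sys/kernel/random/entropy_avail)
echo "estimate before=$before after=$after"
```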

### A: We are mitigating that by bundling the public certificates in the client.

• The pollinate package ships the public certificate of entropy.ubuntu.com
• /etc/pollinate/entropy.ubuntu.com.pem
• And curl uses this certificate exclusively by default
• If this really is your concern (and perhaps it should be!)
• Add more URLs to the $POOL variable in /etc/default/pollinate
• Put one of those behind your firewall
• You simply need to ensure that at least one of those is outside of the control of your attackers

### Q: What information gets logged by the pollen server?

### A: The usual web server debug info.

• The current timestamp
• The incoming client IP/port
  • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
• The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override it if you like in /etc/default/pollinate
• The challenge/response, and the generated seed, are never logged!

```
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]
```

### Q: Have the code or design been audited?

### A: Yes, but more feedback is welcome!

• All of the source is available
• Service design and hardware specs are available
• The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
• At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
  • All feedback has been incorporated

### Q: Where can I find more information?

### A: Read Up!

Stay safe out there!

:-Dustin

Read more

Michael Hall

## Why do you contribute to open source?

It seems a fairly common, straightforward question. You’ve probably been asked it before. We all have reasons why we hack, why we code, why we write or draw.
If you ask somebody this question, you’ll hear things like “scratching an itch” or “making something beautiful” or “learning something new”. These are all excellent reasons for creating or improving something. But contributing isn’t just about creating, it’s about giving that creation away. Usually giving it away for free, with no or very few strings attached. When I ask “Why do you contribute to open source”, I’m asking why you give it away. This question is harder to answer, and the answers are often far more complex than the ones given for why people simply create something.

What makes it worthwhile to spend your time, effort, and often money working on something, and then turn around and give it away? People often have different intentions or goals in mind when they contribute, from benevolent giving to a community they care about, to personal pride in knowing that something they did is being used in something important or by somebody important. But when you strip away the details of the situation, these all hinge on one thing: recognition.

If you read books or articles about community, one consistent theme you will find in almost all of them is the importance of recognizing the contributions that people make. In fact, if you look at a wide variety of successful communities, you will find that one common thing they all offer in exchange for contribution is recognition. It is the fuel that communities run on. It’s what connects the contributor to their goal, both selfish and selfless. In fact, with open source, the only way a contribution can actually be stolen is by not allowing that recognition to happen. Even the most permissive licenses require attribution: something that tells everybody who made it.

Now let’s flip that question around: why do people contribute to your project? If their contribution hinges on recognition, are you prepared to give it?
I don’t mean your intent; I’ll assume that you want to recognize contributions. I mean: do you have the processes and people in place to give it?

We’ve gotten very good about building tools to make contribution easier, faster, and more efficient, often by removing the human bottlenecks from the process. But human recognition is still what matters most. Silently merging someone’s patch or branch, even if their name is in the commit log, isn’t the same as thanking them for it yourself or posting about their contribution on social media. Letting them know you appreciate their work is important; letting other people know you appreciate it is even more important.

If you are the owner or a leader of a project with a community, you need to be aware of how recognition is flowing out just as much as how contributions are flowing in. Too often communities are successful almost by accident, because the people in them are good at making sure contributions are recognized, and that people know it, simply because that’s their nature. But it’s just as possible for communities to fail because the personalities involved didn’t have this natural tendency; not because of any lack of appreciation for the contributions, just a quirk of their personality. It doesn’t have to be this way: if we are aware of the importance of recognition in a community, we can be deliberate in our approaches to making sure it flows freely in exchange for contributions.

Read more

pitti

## autopkgtest 3.2: CLI cleanup, shell command tests, click improvements

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

## Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) are now gone.
Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).

The --help output now is a lot easier to read, both due to the above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

```
$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid
```
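A saved option file like this can then be expanded on the command line. The sketch below rebuilds the file and shows the intended invocation as a comment, hedged on two counts: the @-prefix is argparse's usual fromfile convention rather than something stated in this post, and adt-run itself is not invoked here since it may not be installed; consult the adt-run manpage for the exact form.

```shell
# Recreate the option file from the post above.
cat > adt_sid <<'EOF'
--output-dir=/tmp/out
-s
---
schroot
sid
EOF
# Hypothetical invocation (assumes argparse @file expansion):
# adt-run libpng @adt_sid
wc -l adt_sid
```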



## Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

```
Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]
```


This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests; this release brings it to deb packages as well.

## Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI, etc. has been documented in the manpages and the various READMEs. Enjoy!

Michael Hall

## When is a fork not a fork?

Technically a fork is any instance of a codebase being copied and developed independently of its parent.  But when we use the word it usually encompasses far more than that. Usually when we talk about a fork we mean splitting the community around a project, just as much as splitting the code itself. Communities are not like code, however, they don’t always split in consistent or predictable ways. Nor are all forks the same, and both the reasons behind a fork, and the way it is done, will have an effect on whether and how the community around it will split.

There are, by my observation, three different kinds of forks that can be distinguished by their intent and method.  These can be neatly labeled as Convergent, Divergent and Emergent forks.

## Convergent Forks

Most often when we talk about forks in open source, we’re talking about convergent forks. A convergent fork is one that shares the same goals as its parent, seeks to recruit the same developers, and wants to be used by the same users. Convergent forks tend to happen when a significant portion of the parent project’s developers are dissatisfied with the management or processes around the project, but otherwise happy with the direction of its development. The ultimate goal of a convergent fork is to take the place of the parent project.

Because they aim to take the place of the parent project, convergent forks must split the community in order to be successful. The community they need already exists, both the developers and the users, around the parent project, so that is their natural source when starting their own community.

## Divergent Forks

Less common than convergent forks, but still well known by everybody in open source, are the divergent forks.  These forks are made by developers who are not happy with the direction of a project’s development, even if they are generally satisfied with its management.  The purpose of a divergent fork is to create something different from the parent, with different goals and most often different communities as well. Because they are creating a different product, they will usually be targeting a different group of users, one that was not well served by the parent project.  They will, however, quite often target many of the same developers as the parent project, because most of the technology and many of the features will remain the same, as a result of their shared code history.

Divergent forks will usually split a community, but to a much smaller extent than a convergent fork, because they do not aim to replace the parent for the entire community. Instead they often focus more on recruiting those users who were not served well, or not served at all, by the existing project, and will grow a new community largely from sources other than the parent community.

## Emergent Forks

Emergent forks are not technically forks in the code sense, but rather new projects with new code, which share the same goals and target the same users as an existing project.  Most of us know these as NIH, or “Not Invented Here”, projects. They come into being on their own, instead of splitting from an existing source, but with the intention of replacing an existing project for all or part of an existing user community. Emergent forks are not the result of dissatisfaction with either the management or direction of an existing project, but most often a dissatisfaction with the technology being used, or fundamental design decisions that can’t be easily undone with the existing code.

Because they share the same goals as an existing project, these forks will usually result in a split of the user community around an existing project, unless they differ enough in features that they can target users not already being served by those projects. However, because they do not share much code or technology with the existing project, they most often grow their own community of developers, rather than splitting them from the existing project as well.

All of these kinds of forks are common enough that we in the open source community can easily name several examples of them. But they are all quite different in important ways. Some, while forks in the literal sense, can almost be considered new projects in a community sense.  Others are not forks of code at all, yet result in splitting an existing community none the less. Many of these forks will fail to gain traction, in fact most of them will, but some will succeed and surpass those that came before them. All of them play a role in keeping the wider open source economy flourishing, even though we may not like them when they affect a community we’ve been involved in building.