Canonical Voices

Posts tagged with 'canonical'

Michael Hall

This is a guest post from Will Cooke, the new Desktop Team manager at Canonical. It’s being posted here while we work to get a blog set up on unity.ubuntu.com, which is where you can find out more about Unity 8 and how to get involved with it.

Intro

Understandably, most of the Ubuntu news recently has focused on phones. There is a lot of excitement and anticipation building around the imminent release of the first devices.  However, the Ubuntu Desktop has not been dormant during this time.  A lot of thought and planning has gone into what the desktop will become in the future: who will use it and what they will use it for.  All the work going into the phone will be directly applicable to the desktop as well, since they will share the same code: the apps, the UI tweaks, and everything which makes applications secure and stable.  The plan is to have the single converged operating system ready for use on the desktop by 16.04.

The plan

We learned some lessons during the early development of Unity 7. Here’s what happened:

  • 11.04: New Unity as default
  • 11.10: New Unity version
  • 12.04: Unity in First LTS

What we’ve decided to do this time is to keep the same, stable Unity 7 desktop as the default, while offering an opt-in option for users who want to run the Unity 8 desktop instead.  As development continues, the Unity 8 desktop will get better and better.  It will benefit from a lot of the advances which have come about through the development of the phone OS, and from continual improvements as the releases happen.

  • 14.04 LTS: Unity 7 default / Unity 8 option for the first time
  • 14.10: Unity 7 default / Unity 8 new rev as an option
  • 15.04: Unity 7 default / Unity 8 new rev as an option
  • 15.10: Potentially Unity 8 default / Unity 7 as an option
  • 16.04 LTS: Unity 8 default / Unity 7 as an option

As you can see, this gives us a full 2 cycles (in addition to the one we’ve already done) to really nail Unity 8 with the level of quality that people expect. So what do we have?

How will we deliver Unity 8 with better quality than 7?

Continuous Integration is the best way for us to achieve and maintain the highest quality possible.  We have put a lot of effort into automating as much of the testing as we can; the best testing is that which is performed easily.  Before every commit the changes get reviewed and approved – this is the first line of defense against bugs.  Every merge request triggers a run of the tests – the second line of defense against bugs and regressions – so if a change broke something we find out about it before it gets into the build.

The CI process builds everything in a “silo”, a self contained & controlled environment where we find out if everything works together before finally landing in the image.

And finally, we have a large number of tests which run against those images. This really is a “belt and braces” approach to software quality, and it all happens automatically.  As you can see, we are taking the quality of our software very seriously.

What about Unity 7?

Unity 7 and Compiz have a team dedicated to maintenance and bug fixes, so their quality continues to improve with every release.  For example: windows no longer switch workspaces when a monitor gets unplugged, mice with six buttons now work, and support for the new version of Metacity (in case you want to use the GNOME 2 desktop) has been added.  Incidentally, a lot of that work was done by a community contributor – thanks Alberts!

Unity 7 is the desktop environment for a lot of software developers, devops gurus, cloud platform managers and millions of users who rely on it to help them with their everyday computing.  We don’t want to stop you being able to get work done.  This is why we continue to maintain Unity 7 while we develop Unity 8.  If you want to take Unity 8 for a spin and see how it’s coming along, you can; if you want to get your work done, we’re making that experience better for you every day.  Best of all, both of these options are available to you with no detriment to the other.

Things that we’re getting in the new Ubuntu Desktop

  1. Applications decoupled from the OS updates.  Traditionally a given release of Ubuntu has shipped with the versions of the applications available at the time of release.  Important updates and security fixes are back-ported to older releases where required, but generally you had to wait for the next release to get the latest and greatest set of applications.  The new desktop packaging system means that application developers can push updates out when they are ready and the user can benefit right away.
  2. Application isolation.  Traditionally applications can access anything the user can access; photos, documents, hardware devices, etc.  On other platforms this has led to data being stolen or rendered otherwise unusable.  Isolation means that without explicit permission any Click packaged application is prevented from accessing data you don’t want it to access.
  3. A full SDK for writing Ubuntu apps.  The SDK which many people are already using to write apps for the phone will allow you to write apps for the desktop as well.  In fact, your apps will be write once, run anywhere – you don’t need to write a “desktop” app or a “phone” app, just an Ubuntu app.

What we have now

The easiest way to try out the Unity 8 Desktop Preview is to use the daily Ubuntu Desktop Next live image:   http://cdimage.ubuntu.com/ubuntu-desktop-next/daily-live/current/   This will allow you to boot into a Unity 8 session without touching your current installation.  An easy 10-step way to write this image to a USB stick is:

  1. Download the ISO
  2. Insert your USB stick in the knowledge that it’s going to get wiped
  3. Open the “Disks” application
  4. Choose your USB stick and click on the cog icon on the righthand side
  5. Choose “Restore Disk Image”
  6. Browse to and select the ISO you downloaded in #1
  7. Click “Start restoring”
  8. Wait
  9. Boot and select “Try Ubuntu….”
  10. Done *

* Please note – there is currently a bug affecting the Unity 8 greeter which means you are not automatically logged in when you boot the live image.  To log in you need to:

  1. Switch to vt1 (Ctrl-Alt-F1)
  2. Type “passwd” and press Enter
  3. Press Enter again to set the current password to blank
  4. Enter a new password twice
  5. Check that the password has been successfully changed
  6. Switch back to vt7 (Ctrl-Alt-F7)
  7. Enter the new password to log in
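
For reference, the exchange on vt1 looks roughly like this (a sketch only; the exact prompts and the live-session username may differ slightly on your image):

$ passwd
Changing password for ubuntu.
(current) UNIX password:     <press Enter; the current password is blank>
Enter new UNIX password:     <type a new password>
Retype new UNIX password:    <type it again>
passwd: password updated successfully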

 

Here are some screenshots showing what Unity 8 currently looks like on the desktop:


The team

The people working on the new desktop come from a few different disciplines.  We have a team dedicated to Unity 7 maintenance and bug fixes who are also responsible for Unity 8 on the desktop and feed in a lot of support to the main Unity 8 & Mir teams. We have the Ubuntu Desktop team, who are responsible for many aspects of the underlying technologies used, such as GNOME libraries, settings, printing etc., as well as the key desktop applications such as LibreOffice and Chromium.  The Ubuntu desktop team has some of the longest serving members of the Ubuntu family, with some people having been here for the best part of ten years.

How you can help

We need to log all the bugs which need to be fixed in order to make Unity 8 the best desktop there is.  Firstly, we need people to test the images and log bugs.  If developers want to help fix those bugs, so much the better.  Right now we are focusing on identifying where the work done for the phone doesn’t work as expected on the desktop.  Once those bugs are logged and fixed we can rely on the CI system described above to make sure that they stay fixed.

Link to daily ISOs:  http://cdimage.ubuntu.com/ubuntu-desktop-next/daily-live/current/

Bugs:  https://bugs.launchpad.net/ubuntu/+source/unity8-desktop-session

IRC:  #ubuntu-desktop on Freenode

Read more
Michael Hall

So it’s finally happened, one of my first Ubuntu SDK apps has reached an official 1.0 release. And I think we all know what that means. Yup, it’s time to scrap the code and start over.

It’s a well-established mantra in software development, codified by Fred Brooks, that you will end up throwing away the first attempt at a new project. The releases between 0.1 and 0.9 are a written history of your education about the problem, the tools, or the language you are learning. And learn I did: I wrote a whole series of posts about my adventures in writing uReadIt. Now it’s time to put all of that learning to good use.

Often, projects spend an extremely long time in this 0.x stage, getting ever closer but never reaching that 1.0 release.  This isn’t because they think 1.0 should wait until the codebase is perfect; I don’t think anybody expects 1.0 to be perfect. 1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.

So now I’m going to start on uReadIt 2.0, starting fresh, with the latest Ubuntu UI Toolkit and platform APIs. It won’t be just a feature-for-feature rewrite either, I plan to make this a great Reddit client for both the phone and desktop user. To that end, I plan to add the following:

  • A full Javascript library for interacting with the Reddit API
  • User account support, which additionally will allow:
    • Posting articles & comments
    • Reading messages in your inbox
    • Upvoting and downvoting articles and comments
  • Convergence from the start, so it’s usable on the desktop as well
  • Re-introduce link sharing via Content-Hub
  • Take advantage of new features in the UITK such as UbuntuListView filtering & pull-to-refresh, and left/right swipe gestures on ListItems

Another change, which I talked about in a previous post, will be to the license of the application. Where uReadIt 1.0 is GPLv3, the next release will be under a BSD license.

Read more
Michael Hall

But it isn’t perfect.  And that, in my opinion, is okay.  I’m not perfect, and neither are you, but you are still wonderful too.

I was asked, not too long ago, what I hated about the community. The truth, then and now, is that I don’t hate anything about it. There is a lot I don’t like about what happens, of course, but nothing that I hate. I make an effort to understand people, to “grok” them if I may borrow the word from Heinlein. When you understand somebody, or in this case a community of somebodies, you understand the whole of them, the good and the bad. Now understanding the bad parts doesn’t make them any less bad, but it does provide opportunities for correcting or removing them that you don’t get otherwise.

You reap what you sow

People will usually respond in kind to the way they are treated. I try to treat everybody I interact with respectfully, kindly, and rationally, and I’ve found that I am treated that way back. But if somebody is prone to arrogance or cruelty or passion, they will find far more of that treatment given back to them than the positive kinds. They are quite often shocked when this happens. But when you are a source of negativity you drive away people who are looking for something positive, and attract people who are looking for something negative. It’s not absolute: nice people will have some unhappy followers, and grumpy people will have some delightful ones, but on average you will be surrounded by people who behave like you.

Don’t get even, get better

An eye for an eye makes the whole world blind, as the old saying goes. When somebody is rude or disrespectful to us, it’s easy to give in to the desire to be rude and disrespectful back. When somebody calls us out on something, especially in public, we want to call them out on their own problems to show everybody that they are just as bad. This might feel good in the short term, but it causes long term harm to both the person who does it and the community they are a part of. This ties into what I wrote above, because even if you aren’t naturally a negative person, if you respond to negativity with more of the same, you’ll ultimately share the same fate. Instead use that negativity as fuel to drive you forward in a positive way, respond with coolness, thoughtfulness and introspection and not only will you disarm the person who started it, you’ll attract far more of the kind of people and interactions that you want.

Know your audience

Your audience isn’t the person or people you are talking to. Your audience is the people who hear you. A common defense of Linus’ berating of kernel contributors is that he only does it to people he knows can take it. This defense is almost always countered, quite properly, by somebody pointing out that his actions are seen by far more than just their intended recipient. Whenever you interact with any member of your community in a public space, such as a forum or mailing list, treat it as if you were interacting with every member, because you are. Again, if you perpetuate negativity in your community, you will foster negativity in your community, either directly in response to you or indirectly by driving away those who are more positive in nature. Linus’ actions might be seen as a joke, or necessary “tough love” to get the job done, but the LKML has a reputation of being inhospitable to potential contributors in no small part because of them. You can gather a large number of negative, or negativity-accepting, people into a community and get a lot of work done, but it’s easier and in my opinion better to have a large number of positive people doing it.

Monoculture is dangerous

I think all of us in the open source community know this, and most of us have said it at least once to somebody else. As noted security researcher Bruce Schneier says, “monoculture is bad; embrace diversity or die along with everyone else.” But it’s not just dangerous for software and agriculture, it’s dangerous to communities too. Communities need, desperately need, diversity, and not just for the immediate benefits that various opinions and perspectives bring. Including minorities in your community will point out flaws you didn’t know existed, because they didn’t affect anyone else, but a distro-specific bug in upstream is still a bug, and a minority-specific flaw in your community is still a flaw. Communities that are almost all male, or white, or western, aren’t necessarily bad because of their monoculture, but they should certainly consider themselves vulnerable and deficient because of it. Bringing in diversity will strengthen it, and adding a minority contributor will ultimately benefit a project more than adding another to the majority. When somebody from a minority tells you there is a problem in your community that you didn’t see, don’t try to defend it by pointing out that it doesn’t affect you, but instead treat it like you would a normal bug report from somebody on different hardware than you.

Good people are human too

The appendix is a funny organ. Most of the time it’s just there, innocuous or maybe even slightly helpful. But every so often one happens to, for whatever reason, explode and try to kill the rest of the body. People in a community do this too.  I’ve seen a number of people that were good or even great contributors who, for whatever reason, had to explode and they threatened to take down anything they were a part of when it happened. But these people were no more malevolent than your appendix is, they aren’t bad, even if they do need to be removed in order to avoid lasting harm to the rest of the body. Sometimes, once whatever caused their eruption has passed, these people can come back to being a constructive part of your community.

Love the whole, not the parts

When you look at it, all of it, the open source community is a marvel of collaboration, of friendship and family. Yes, family. I know I’m not alone in feeling this way about people I may not have ever met in person. And just like family you love them during the good and the bad. There are some annoying, obnoxious people in our family. There are good people who are sometimes annoying and obnoxious. But neither of those truths changes the fact that we are still a part of an amazing, inspiring, wonderful community of open source contributors and enthusiasts.

Read more
Louis

I have seen this setup documented a few places, but not for Ubuntu so here it goes.

I have used this setup many times to verify or diagnose Device Mapper Multipath (DM-MPIO), since it is rather easy to fail a path by switching off one of the network interfaces. Nowadays, I use two KVM virtual machines with two NICs each.

Those steps have been tested on Ubuntu 12.04 (Precise) and Ubuntu 14.04 (Trusty). The DM-MPIO section is mostly a cut and paste of the Ubuntu Server Guide.

The virtual machine that will act as the iSCSI target provider is called PreciseS-iscsitarget. The VM that will connect to the target is called PreciseS-iscsi. Each one is configured with two network interfaces (NICs) that get their IP addresses from DHCP. Here is an example of the network configuration file:

$ cat /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
#
auto eth1
iface eth1 inet dhcp

The second NIC resolves to the same hostname with a "2" appended (i.e. PreciseS-iscsitarget2 and PreciseS-iscsi2).

Setting up the iSCSI Target VM

This is done by installing the following packages:

$ sudo apt-get install iscsitarget iscsitarget-dkms

Edit /etc/default/iscsitarget and change the following line to enable the service:

ISCSITARGET_ENABLE=true

We now proceed to create an iSCSI target (aka a disk). This is done by creating a 50 GB sparse file that will act as our disk:

$ sudo dd if=/dev/zero of=/home/ubuntu/iscsi_disk.img count=0 obs=1 seek=50G

This container is used in the definition of the iSCSI target. Edit the file /etc/iet/ietd.conf. At the bottom, add:

Target iqn.2014-09.PreciseS-iscsitarget:storage.sys0
        Lun 0 Path=/home/ubuntu/iscsi_disk.img,Type=fileio,ScsiId=lun0,ScsiSN=lun0

The iSCSI target service must be restarted for the new target to be accessible:

$ sudo service iscsitarget restart


Setting up the iSCSI initiator

To be able to access the iSCSI target, only one package is required:

$ sudo apt-get install open-iscsi

Edit /etc/iscsi/iscsid.conf changing the following:

node.startup = automatic

This will ensure that the iSCSI targets that we discover are enabled automatically upon reboot.

Now we will proceed to discover and connect to the device that we set up in the previous section:

$ sudo iscsiadm -m discovery -t st -p PreciseS-iscsitarget
$ sudo iscsiadm -m node --login
$ dmesg | tail
[   68.461405] iscsid (1458): /proc/1458/oom_adj is deprecated, please use /proc/1458/oom_score_adj instead.
[  189.989399] scsi2 : iSCSI Initiator over TCP/IP
[  190.245529] scsi 2:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
[  190.245785] sd 2:0:0:0: Attached scsi generic sg0 type 0
[  190.249413] sd 2:0:0:0: [sda] 104857600 512-byte logical blocks: (53.6 GB/50.0 GiB)
[  190.250487] sd 2:0:0:0: [sda] Write Protect is off
[  190.250495] sd 2:0:0:0: [sda] Mode Sense: 77 00 00 08
[  190.251998] sd 2:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[  190.257341]  sda: unknown partition table
[  190.258535] sd 2:0:0:0: [sda] Attached SCSI disk

We can see in the dmesg output that the new device /dev/sda has been discovered. Format the new disk & create a file system. Then verify that everything is correct by mounting and unmounting the new file system.

$ fdisk /dev/sda
n
p
1
<ret>
<ret>
w
$  mkfs -t ext4 /dev/sda1
$ mount /dev/sda1 /mnt
$ umount /mnt

 

Setting up DM-MPIO

Since each of our virtual machines has been configured with two network interfaces, it is possible to reach the iSCSI target through the second interface:

$ iscsiadm -m discovery -t st -p
192.168.1.193:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0
192.168.1.43:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0
$ iscsiadm -m node -T iqn.2014-09.PreciseS-iscsitarget:storage.sys0 --login

Now that we have two paths toward our iSCSI target, we can proceed to set up DM-MPIO.
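
Before configuring multipath, it is worth confirming that both iSCSI sessions are established. This is just a quick sanity check; the IP addresses and session IDs shown below are illustrative:

$ sudo iscsiadm -m session
tcp: [1] 192.168.1.193:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0
tcp: [2] 192.168.1.43:3260,1 iqn.2014-09.PreciseS-iscsitarget:storage.sys0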

First of all, an /etc/multipath.conf file must exist.  Then we install the needed package:

$ sudo -s
# cat << EOF > /etc/multipath.conf
defaults {
        user_friendly_names yes
}
EOF
# exit
$ sudo apt-get -y install multipath-tools

Two paths to the iSCSI device created previously need to exist for the multipath device to be seen.

# multipath -ll
mpath0 (149455400000000006c756e30000000000000000000000000) dm-2 IET,VIRTUAL-DISK
size=50G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sda 8:0   active ready  running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdb 8:16  active ready  running

The two paths are indeed visible. We can move forward and verify that the partition table created previously is accessible:

$ sudo fdisk -l /dev/mapper/mpath0

Disk /dev/mapper/mpath0: 53.7 GB, 53687091200 bytes
64 heads, 32 sectors/track, 51200 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0e5e5db1

              Device Boot      Start         End      Blocks   Id  System
/dev/mapper/mpath0p1            2048   104857599    52427776   83  Linux

All that remains is to add an entry to the /etc/fstab file so the file system that we created is mounted automatically at boot.  Notice the _netdev entry: this is required, otherwise the iSCSI device will not be mounted.

$ sudo -s
# cat << EOF >> /etc/fstab
/dev/mapper/mpath0-part1        /mnt    ext4    defaults,_netdev        0 0
EOF
# exit
$ sudo mount -a
$ df /mnt
Filesystem               1K-blocks   Used Available Use% Mounted on
/dev/mapper/mpath0-part1  51605116 184136  48799592   1% /mnt

Read more
Dustin Kirkland

A StackExchange question back in February of this year inspired a new feature in Byobu that I had been thinking about for quite some time:

Wouldn't it be nice to have a hot key in Byobu that would send a command to multiple splits (or windows)?
This feature was added and is available in Byobu 5.73 and newer (in Ubuntu 14.04 and newer, and available in the Byobu PPA for older Ubuntu releases).

I actually use this feature all the time, to update packages across multiple computers.  Of course, Landscape is a fantastic way to do this as well.  But if you don't have access to Landscape, you can always do this very simply with Byobu!

Create some splits, using Ctrl-F2 and Shift-F2, and in each split, ssh into a target Ubuntu (or Debian) machine.

Now, use Shift-F9 to open up the purple prompt at the bottom of your screen.  Here, you enter the command you want to run on each split.  First, you might want to run:

sudo true

This will prompt you for your password, if your sudo credentials aren't already cached.  You might need to use Shift-Up, Shift-Down, Shift-Left, Shift-Right to move around your splits, and enter passwords.

Now, update your package lists:

sudo apt-get update

And now, apply your updates:

sudo apt-get dist-upgrade

Here's a video to demonstrate!


On a related note, another user-requested feature has been added, to simultaneously synchronize this behavior among all splits.  You'll need the latest version of Byobu, 5.87, which will be in Ubuntu 14.10 (Utopic).  Here, you'll press Alt-F9 and just start typing!  Another demonstration video here...




Cheers,
Dustin

Read more
Ben Howard

Cloud Images and Bash Vulnerabilities

The Ubuntu Cloud Image team has been monitoring the bash vulnerabilities. Due to the scope, impact and high-profile nature of these vulnerabilities, we have published new images. New cloud images to address the latest bash USN-2364-1 [1, 8, 9] are being released with a build serial of 20140927. These images include code to address all prior CVEs, including CVE-2014-6271 [6] and CVE-2014-7169 [7], and supersede images published in the past week which addressed those CVEs.

Please note: Securing Ubuntu Cloud Images requires users to regularly apply updates[5]; using the latest Cloud Images is insufficient.
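
As a quick sanity check on a running instance (a sketch; the exact version strings vary by release and date), you can confirm which bash package is installed and pull in any outstanding security updates:

$ dpkg -s bash | grep ^Version:
$ sudo apt-get update && sudo apt-get dist-upgrade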

Addressing the full scope of the Bash vulnerability has been an iterative process. The security team has worked with the upstream bash community to address multiple aspects of the bash issue. As these fixes have become available, the Cloud Image team has published daily builds[2]. New released images[3] have been made available at the request of the Ubuntu Security team.

Canonical has been in contact with our public Cloud Partners to make these new builds available as soon as possible.

Cloud image update timeline

Daily image builds are automatically triggered when new package versions become available in the public archives. New releases for Cloud Images are triggered automatically when a new kernel becomes available. The Cloud Image team will manually trigger new released images when either requested by the Ubuntu Security team or when a significant defect requires it.

Please note: Securing Ubuntu cloud images requires that security updates be applied regularly [5]; using the latest available cloud image is not sufficient in itself.  Cloud Images are built only after updated packages are made available in the public archives. Since it takes time to build the images, test/QA and finally promote them, there is a delay (sometimes considerable) between public availability of the package and updated Cloud Images. Users should consider this timing in their update strategy.

[1] http://www.ubuntu.com/usn/usn-2364-1/
[2] http://cloud-images.ubuntu.com/daily/server/
[3] http://cloud-images.ubuntu.com/releases/
[4] https://help.ubuntu.com/community/Repositories/Ubuntu/
[5] https://wiki.ubuntu.com/Security/Upgrades/
[6] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-6271.html
[7] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7169.html
[8] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7187.html
[9] http://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-7186.html

Read more
Michael Hall

Last week I attended FOSSETCON, a new open source convention here in central Florida, and I had the opportunity to give a couple of presentations on Ubuntu phones and app development. Anybody who knows me knows that I love talking about these things, but a lot fewer people know that doing it in front of a room of people I don’t know still makes me extremely nervous. I’m an introvert, and even though I have a public-facing job and work with the wider community all the time, I’m still an introvert.

I know there are a lot of other introverts out there who might find the idea of giving presentations to be overwhelming, but they don’t have to be.  Here I’m going to give my personal experiences and advice, in the hope that it’ll encourage some of you to step out of your comfort zones and share your knowledge and talent with the rest of us at meetups and conferences.

You will be bad at it…

Public speaking is like learning how to ride a bicycle: everybody falls their first time. Everybody falls a second time, and a third. You will fidget and stutter, you will lose your train of thought, your voice will sound funny. It’s not just you, everybody starts off being bad at it. Don’t let that stop you though; accept that you’ll have bruises and scrapes and keep getting back on that bike. Coincidentally, accepting that you’re going to be bad at the first ones makes it much less frightening going into them.

… until you are good at it

I read a lot of things about how to be a good and confident public speaker; the advice was all over the map, and a lot of it felt like pure BS.  I think a lot of people try different things and, when they finally feel confident in speaking, they attribute whatever their latest thing was with giving them that confidence. In reality, you just get more confident the more you do it.  You’ll be better the second time than the first, and better the third time than the second. So keep at it, you’ll keep getting better. No matter how good or bad you are now, you will keep getting better if you just keep doing it.

Don’t worry about your hands

You’ll find a lot of suggestions about how to use your hands (or not use them), how to walk around (or not walk around) or other suggestions about what to do with yourself while you’re giving your presentation. Ignore them all. It’s not that these things don’t affect your presentation, I’ll admit that they do, it’s that they don’t affect anything after your presentation. Think back about all of the presentations you’ve seen in your life, how much do you remember about how the presenter walked or waved their hands? Unless those movements were integral to the subject, you probably don’t remember much. The same will happen for you, nobody is going to remember whether you walked around or not, they’re going to remember the information you gave them.

It’s not about you

This is the one piece of advice I read that actually has helped me. The reason nobody remembers what you did with your hands is because they’re not there to watch you, they’re there for the information you’re giving them. Unless you’re an actual celebrity, people are there to get information for their own benefit, you’re just the medium which provides it to them.  So don’t make it about you (again, unless you’re an actual celebrity), focus on the topic and information you’re giving out and what it can do for the audience. If you do that, they’ll be thinking about what they’re going to do with it, not what you’re doing with your hands or how many times you’ve said “um”. Good information is a good distraction from the things you don’t want them paying attention to.

It’s all just practice

Practicing your presentation isn’t nearly as stressful as giving it, because you’re not worried about messing up. If you mess up during practice you just correct it, make a note to not make the same mistake next time, and carry on. Well, if you plan on doing more public speaking there will always be a next time, which means this time is your practice for that one. Keep your eye on the presentation after this one; if you mess up now, you can correct it for the next one.

 

All of the above are really just different ways of saying the same thing: just keep doing it and worry about the content not you. You will get better, your content will get better, and other people will benefit from it, for which they will be appreciative and will gladly overlook any faults in the presentation. I guarantee that you will not be more nervous about it than I was when I started.

Read more
Dustin Kirkland


This little snippet of ~200 lines of YAML is the exact OpenStack that I'm deploying tonight, at the OpenStack Austin Meetup.

Anyone with a working Juju and MAAS setup, and 7 registered servers should be able to deploy this same OpenStack setup, in about 12 minutes, with a single command.


$ wget http://people.canonical.com/~kirkland/icehouseOB.yaml
$ juju-deployer -c icehouseOB.yaml
$ cat icehouseOB.yaml

icehouse:
  overrides:
    openstack-origin: "cloud:trusty-icehouse"
    source: "distro"
  services:
    ceph:
      charm: "cs:trusty/ceph-27"
      num_units: 3
      constraints: tags=physical
      options:
        fsid: "9e7aac42-4bf4-11e3-b4b7-5254006a039c"
        "monitor-secret": AQAAvoJSOAv/NRAAgvXP8d7iXN7lWYbvDZzm2Q==
        "osd-devices": "/srv"
        "osd-reformat": "yes"
      annotations:
        "gui-x": "2648.6688842773438"
        "gui-y": "708.3873901367188"
    keystone:
      charm: "cs:trusty/keystone-5"
      num_units: 1
      constraints: tags=physical
      options:
        "admin-password": "admin"
        "admin-token": "admin"
      annotations:
        "gui-x": "2013.905517578125"
        "gui-y": "75.58013916015625"
    "nova-compute":
      charm: "cs:trusty/nova-compute-3"
      num_units: 3
      constraints: tags=physical
      to: [ceph=0, ceph=1, ceph=2]
      options:
        "flat-interface": eth0
      annotations:
        "gui-x": "776.1040649414062"
        "gui-y": "-81.22811031341553"
    "neutron-gateway":
      charm: "cs:trusty/quantum-gateway-3"
      num_units: 1
      constraints: tags=virtual
      options:
        ext-port: eth1
        instance-mtu: 1400
      annotations:
        "gui-x": "329.0572509765625"
        "gui-y": "46.4658203125"
    "nova-cloud-controller":
      charm: "cs:trusty/nova-cloud-controller-41"
      num_units: 1
      constraints: tags=physical
      options:
        "network-manager": Neutron
      annotations:
        "gui-x": "1388.40185546875"
        "gui-y": "-118.01156234741211"
    rabbitmq:
      charm: "cs:trusty/rabbitmq-server-4"
      num_units: 1
      to: mysql
      annotations:
        "gui-x": "633.8120727539062"
        "gui-y": "862.6530151367188"
    glance:
      charm: "cs:trusty/glance-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "1147.3269653320312"
        "gui-y": "1389.5643157958984"
    cinder:
      charm: "cs:trusty/cinder-4"
      num_units: 1
      to: nova-cloud-controller
      options:
        "block-device": none
      annotations:
        "gui-x": "1752.32568359375"
        "gui-y": "1365.716194152832"
    "ceph-radosgw":
      charm: "cs:trusty/ceph-radosgw-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "2216.68212890625"
        "gui-y": "697.16796875"
    cinder-ceph:
      charm: "cs:trusty/cinder-ceph-1"
      num_units: 0
      annotations:
        "gui-x": "2257.5515747070312"
        "gui-y": "1231.2130126953125"
    "openstack-dashboard":
      charm: "cs:trusty/openstack-dashboard-4"
      num_units: 1
      to: "keystone"
      options:
        webroot: "/"
      annotations:
        "gui-x": "2353.6898193359375"
        "gui-y": "-94.2642593383789"
    mysql:
      charm: "cs:trusty/mysql-1"
      num_units: 1
      constraints: tags=physical
      options:
        "dataset-size": "20%"
      annotations:
        "gui-x": "364.4567565917969"
        "gui-y": "1067.5167846679688"
    mongodb:
      charm: "cs:trusty/mongodb-0"
      num_units: 1
      constraints: tags=physical
      annotations:
        "gui-x": "-70.0399979352951"
        "gui-y": "1282.8224487304688"
    ceilometer:
      charm: "cs:trusty/ceilometer-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "-78.13333225250244"
        "gui-y": "919.3128051757812"
    ceilometer-agent:
      charm: "cs:trusty/ceilometer-agent-0"
      num_units: 0
      annotations:
        "gui-x": "-90.9158582687378"
        "gui-y": "562.5347595214844"
    heat:
      charm: "cs:trusty/heat-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "494.94012451171875"
        "gui-y": "1363.6024169921875"
    ntp:
      charm: "cs:trusty/ntp-4"
      num_units: 0
      annotations:
        "gui-x": "-104.57728099822998"
        "gui-y": "294.6641273498535"
  relations:
    - - "keystone:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:amqp"
      - "rabbitmq:amqp"
    - - "nova-cloud-controller:image-service"
      - "glance:image-service"
    - - "nova-cloud-controller:identity-service"
      - "keystone:identity-service"
    - - "glance:shared-db"
      - "mysql:shared-db"
    - - "glance:identity-service"
      - "keystone:identity-service"
    - - "cinder:shared-db"
      - "mysql:shared-db"
    - - "cinder:amqp"
      - "rabbitmq:amqp"
    - - "cinder:cinder-volume-service"
      - "nova-cloud-controller:cinder-volume-service"
    - - "cinder:identity-service"
      - "keystone:identity-service"
    - - "neutron-gateway:shared-db"
      - "mysql:shared-db"
    - - "neutron-gateway:amqp"
      - "rabbitmq:amqp"
    - - "neutron-gateway:quantum-network-service"
      - "nova-cloud-controller:quantum-network-service"
    - - "openstack-dashboard:identity-service"
      - "keystone:identity-service"
    - - "nova-compute:shared-db"
      - "mysql:shared-db"
    - - "nova-compute:amqp"
      - "rabbitmq:amqp"
    - - "nova-compute:image-service"
      - "glance:image-service"
    - - "nova-compute:cloud-compute"
      - "nova-cloud-controller:cloud-compute"
    - - "cinder:storage-backend"
      - "cinder-ceph:storage-backend"
    - - "ceph:client"
      - "cinder-ceph:ceph"
    - - "ceph:client"
      - "nova-compute:ceph"
    - - "ceph:client"
      - "glance:ceph"
    - - "ceilometer:identity-service"
      - "keystone:identity-service"
    - - "ceilometer:amqp"
      - "rabbitmq:amqp"
    - - "ceilometer:shared-db"
      - "mongodb:database"
    - - "ceilometer-agent:container"
      - "nova-compute:juju-info"
    - - "ceilometer-agent:ceilometer-service"
      - "ceilometer:ceilometer-service"
    - - "heat:shared-db"
      - "mysql:shared-db"
    - - "heat:identity-service"
      - "keystone:identity-service"
    - - "heat:amqp"
      - "rabbitmq:amqp"
    - - "ceph-radosgw:mon"
      - "ceph:radosgw"
    - - "ceph-radosgw:identity-service"
      - "keystone:identity-service"
    - - "ntp:juju-info"
      - "neutron-gateway:juju-info"
    - - "ntp:juju-info"
      - "ceph:juju-info"
    - - "ntp:juju-info"
      - "keystone:juju-info"
    - - "ntp:juju-info"
      - "nova-compute:juju-info"
    - - "ntp:juju-info"
      - "nova-cloud-controller:juju-info"
    - - "ntp:juju-info"
      - "rabbitmq:juju-info"
    - - "ntp:juju-info"
      - "glance:juju-info"
    - - "ntp:juju-info"
      - "cinder:juju-info"
    - - "ntp:juju-info"
      - "ceph-radosgw:juju-info"
    - - "ntp:juju-info"
      - "openstack-dashboard:juju-info"
    - - "ntp:juju-info"
      - "mysql:juju-info"
    - - "ntp:juju-info"
      - "mongodb:juju-info"
    - - "ntp:juju-info"
      - "ceilometer:juju-info"
    - - "ntp:juju-info"
      - "heat:juju-info"
  series: trusty

:-Dustin

Read more
Dustin Kirkland

What would you say if I told you, that you could continuously upload your own Software-as-a-Service  (SaaS) web apps into an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, autonomically managed by an open source Orchestration-Service… right now, today?

“An idea is resilient. Highly contagious. Once an idea has taken hold of the brain it's almost impossible to eradicate.”

“Now, before you bother telling me it's impossible…”

“No, it's perfectly possible. It's just bloody difficult.” 

Perhaps something like this...

“How could I ever acquire enough detail to make them think this is reality?”

“Don’t you want to take a leap of faith???”
Sure, let's take a look!

Okay, this looks kinda neat, what is it?

This is an open source Java Spring web application, called Spring-Music, deployed as an app, running inside of Linux containers in CloudFoundry.


Cloud Foundry?

CloudFoundry is an open source Platform-as-a-Service (PAAS) cloud, deployed into Linux virtual machine instances in OpenStack, by Juju.


OpenStack?

Juju?

OpenStack is an open source Infrastructure-as-a-Service (IAAS) cloud, deployed by Juju and Landscape on top of MAAS.

Juju is an open source Orchestration System that deploys and scales complex services across many public clouds, private clouds, and bare metal servers.

Landscape?

MAAS?

Landscape is a systems management tool that automates software installation, updates, and maintenance in both physical and virtual machines. Oh, and it too is deployed by Juju.

MAAS is an open source bare metal provisioning system, providing a cloud-like API to physical servers. Juju can deploy services to MAAS, as well as public and private clouds.

"Ready for the kick?"

If you recall these concepts of nesting cloud technologies...

These are real technologies, which exist today!

These are Software-as-a-Service  (SaaS) web apps served by an open source Platform-as-a-Service (PaaS) framework, running on top of an open source Infrastructure-as-a-Service (IaaS) cloud, deployed on an open source Metal-as-a-Service provisioning system, managed by an open source Orchestration-Service.

Spring Music, served by CloudFoundry, running on top of OpenStack, deployed on MAAS, managed by Juju and Landscape!

“The smallest seed of an idea can grow…”

Oh, and I won't leave you hanging...you're not dreaming!


:-Dustin

Read more
Dustin Kirkland



In case you missed the recent Cloud Austin MeetUp, you have another chance to see the Ubuntu Orange Box live and in action here in Austin!

This time, we're at the OpenStack Austin MeetUp, next Wednesday, September 10, 2014, at 6:30pm at Tech Ranch Austin, 9111 Jollyville Rd #100, Austin, TX!

If you join us, you'll witness all of OpenStack Icehouse, deployed in minutes to real hardware. Not an all-in-one DevStack; not a minimum viable set of components.  Real, rich, production-quality OpenStack!  Ceilometer, Ceph, Cinder, Glance, Heat, Horizon, Keystone, MongoDB, MySQL, Nova, NTP, Quantum, and RabbitMQ -- intelligently orchestrated and rapidly scaled across 10 physical servers sitting right up front on the podium.  Of course, we'll go under the hood and look at how all of this comes together on the fabulous Ubuntu Orange Box.

And like any good open source software developer, I generally like to make things myself, and share them with others.  In that spirit, I'll also bring a couple of growlers of my own home brewed beer, Ubrewtu ;-)  Free as in beer, of course!
Cheers,
Dustin

Read more
Ben Howard

For years, the Ubuntu Cloud Images have been built on a timer (i.e. a cronjob or Jenkins). You can reasonably expect stable and LTS releases to be built twice a week, while our development build is built once a day.  Each of these builds is given a serial in the form of YYYYMMDD.

While time-based building has proven to be reliable, different build serials may be functionally the same, just put together at a different point in time. Many of the builds that we do for stable and LTS releases are pointless.

When the whole heartbleed fiasco hit, it put the Cloud Image team into over-drive, since it required manually triggering builds of the LTS releases. When we manually trigger builds, it takes roughly 12-16 hours to build, QA, test and release new Cloud Images. Sure, most of this is automated, but the process had to be manually started by a human. This got me thinking: there has to be a better way.

What if we build the Cloud Images when the package set changes?

With that, I changed the Ubuntu 14.10 (Utopic Unicorn) build process from time-based to archive trigger-based. Now, instead of building every day at 00:30 UTC, the build starts when the archive has been updated and the packages in the prior cloud image build are older than the archive versions. In the last three days, there were eight builds for Utopic. For a development version of Ubuntu, this just means that developers don't have to wait 24 hours for the latest package changes to land in a Cloud Image.

Over the next few weeks, I will be moving the 10.04 LTS, 12.04 LTS and 14.04 LTS build processes from time-based to archive trigger-based. While this might result in less frequent daily builds, the main advantage is that the daily builds will contain the latest package sets. And if you are trying to respond to the latest CVE, or waiting on a bug fix to land, it likely means that you'll have a fresh daily that you can use the following day.

Read more
Dustin Kirkland


Docker 1.0.1 is available for testing, in Ubuntu 14.04 LTS!

Docker 1.0.1 has landed in the trusty-proposed archive, which we hope to SRU to trusty-updates very soon.  We would love to have your testing feedback, to ensure that both upgrades from Docker 0.9.1 and new installs of Docker 1.0.1 behave well, and are of the highest quality you have come to expect from Ubuntu's LTS (Long Term Support) releases!  Please file any bugs or issues here.

Moreover, this new version of the Docker package now installs the Docker binary to /usr/bin/docker, rather than /usr/bin/docker.io as in previous versions. This should help Ubuntu's Docker package more closely match the wealth of documentation and examples available from our friends upstream.
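
A quick way to verify the new binary path after installing or upgrading (a minimal check; the expected output assumes the 1.0.1 package described above):

$ which docker
/usr/bin/docker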

A big thanks to Paul Tagliamonte, James Page, Nick Stinemates, Tianon Gravi, and Ryan Harper for their help upstream in Debian and in Ubuntu to get this package updated in Trusty!  Also, it's probably worth mentioning that we're targeting Docker 1.1.2 (or perhaps 1.2.0) for Ubuntu 14.10 (Utopic), which will release on October 23, 2014.

Here are a few commands that might help your testing...

Check What Candidate Versions are Available

$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

If that shows 0.9.1~dfsg1-2 (as it should), then you need to enable the trusty-proposed pocket.

$ echo "deb http://archive.ubuntu.com/ubuntu/ trusty-proposed universe" | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ apt-cache show docker.io | grep ^Version:

And now you should see the new version, 1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1, available (probably in addition to 0.9.1~dfsg1-2).

Upgrades

Check if you already have Docker installed, using:

$ dpkg -l docker.io

If so, you can simply upgrade.

$ sudo apt-get upgrade

And now, you can check your Docker version:

$ sudo dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

New Installations

You can simply install the new package with:

$ sudo apt-get install docker.io

And ensure that you're on the latest version with:

$ dpkg -l docker.io | grep -m1 ^ii | awk '{print $3}'
1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1

Running Docker

If you're already a Docker user, you probably don't need these instructions.  But in case you're reading this, and trying Docker for the first time, here's the briefest of quick start guides :-)

$ sudo docker pull ubuntu
$ sudo docker run -i -t ubuntu /bin/bash

And now you're running a bash shell inside of an Ubuntu Docker container.  And only bash!

root@1728ffd1d47b:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 13:42 ? 00:00:00 /bin/bash
root 8 1 0 13:43 ? 00:00:00 ps -ef

If you want to do something more interesting in Docker, well, that's a whole other post ;-)

:-Dustin

Read more
Michael Hall

Recognition is like money, it only really has value when it’s being passed between one person and another. Otherwise it’s just potential value, sitting idle.  Communication gives life to recognition, turning its potential value into real value.

As I covered in my previous post, Who do you contribute to?, recognition doesn’t have a constant value.  In that article I illustrated how the value of recognition differs depending on who it’s coming from, but that’s not the whole story.  The value of recognition also differs depending on the medium of communication.

Over at the Community Leadership Knowledge Base I started documenting different forms of communication that a community might choose, and how each medium has a balance of three basic properties: Speed, Thoughtfulness and Discoverability. Let’s call this the communication triangle. Each of these also plays a part in the value of recognition.

Speed

Again, much like money, recognition is something that is circulated.  Its usefulness is not simply created by the sender and consumed by the receiver, but rather passed from one person to another, and then another.  The faster you can communicate recognition around your community, the more utility you can get out of even a small amount of it. Fast communications, like IRC, phone calls or in-person meetups, let you give and receive a higher volume of recognition than slower forms, like email or blog posts. But speed is only one part, and faster isn’t necessarily better.

Thoughtfulness

Where speed emphasizes quantity, thoughtfulness is a measure of the quality of communication, and that directly affects the value of recognition given. Thoughtful communications require consideration upon both receiving and replying. Messages are typically longer, more detailed, and better presented than those that emphasize speed. As a result, they are also usually a good bit slower too, both in the time it takes for a reply to be made, and also the speed at which a full conversation happens. An IRC meeting can be done in an hour, where an email exchange can last for weeks, even if both end up with the same word-count at the end.

Discoverability

The third point on our communication triangle, discoverability, is a measure of how likely it is that somebody not immediately involved in a conversation can find out about it. Because recognition is a social good, most of its value comes from other people knowing who has given it to whom. Discoverability acts as a multiplier (or divisor, if done poorly) to the original value of recognition.

There are two factors to the discoverability of communication. The first, accessibility, is about how hard it is to find the conversation. Blog posts, or social media posts, are usually very easy to discover, while IRC chats and email exchanges are not. The second factor, longevity, is about how far into the future that conversation can still be discovered. A social media post disappears (or at least becomes far less accessible) after a while, but an IRC log or mailing list archive can stick around for years. Unlike the three properties of communication, however, these factors of discoverability do not require a trade-off; you can have something that is both very accessible and has high longevity.

Finding Balance

Most communities will have more than one method of communication, and a healthy one will have a combination of them that complement each other. This is important because sometimes one will offer a more productive use of your recognition than another. Some contributors will respond better to lots of immediate recognition, rather than a single eloquent one. Others will respond better to formal recognition than informal.  In both cases, be mindful of the multiplier effect that discoverability gives you, and take full advantage of opportunities where that plays a larger than usual role, such as during an official meeting or when writing an article that will have higher than normal readership.

Read more
Dustin Kirkland


If you're interested in learning how to more effectively use your terminal as your integrated devops environment, consider taking 10 minutes and watching this video while enjoying the finale of Mozart's Symphony No. 40, Allegro Assai (part of which is rumored to have inspired Beethoven's 5th).

I'm often asked for a quick-start guide to using Byobu effectively.  This wiki page is a decent start, as is the manpage, and the various links on the upstream website.  But it seems that some of the past screencast videos have left the longest lasting impressions on Byobu users over the years.

I was on a long, international flight from Munich to Newark this past Saturday with a bit of time on my hands, and I cobbled together this instructional video.    That recent international trip to Nuremberg inspired me to rediscover Mozart, and I particularly like this piece, which Mozart wrote in 1788, but sadly never heard performed.  You can hear it now, and learn how to be more efficient in command line environments along the way :-)


Enjoy!
:-Dustin

Read more
Dustin Kirkland



I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too.  I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less!

Be sure to RSVP, as space is limited.

http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Read more
beuno

I'm a few days away from hitting 6 years at Canonical, and I've ended up doing a lot more management than anything else in that time. Before that I did a solid 8 years at my own company, doing everything from development, project management, product management and engineering management to sales and accounting.
This time of the year is performance review time at Canonical, so it's gotten me thinking a lot about my role and how my view on engineering management has evolved over the years.

A key insight I got from a former boss, Elliot Murphy, was viewing it as a support role that helps others do their job, rather than a follow-the-leader approach. I had heard the phrase "As a manager, I work for you" a few times over the years, but it rarely seemed true and felt mostly like a good concept to make people happy but not really applied in practice in any meaningful way.

Of all the approaches I've taken or seen, I believe the best one is a role where you're there to unblock developers more than anything else. And unless you're a bit power-hungry on some level, it's probably the most enjoyable way of being a manager.

It's not to be applied blindly, though; I think a few conditions have to be met:
1) The team has to be fairly experienced/senior/smart; I think if it isn't, this approach breaks down too often
2) You need to understand very clearly what needs doing and why, and need to invest heavily and frequently in communicating it to the team, both the global context as well as how it applies to them individually
3) You need to build a relationship of trust with each person and need to trust them, because trust is always a 2-way street
4) You need to be enough of an engineer to understand problems in depth when explained, know when to defer to others' judgments (which should be the common case when the team is generally smart and experienced) and be capable of tie-breaking in a technically savvy way
5) Have anyone whose ego doesn't fit in a small, 100ml container leave it at home

There are many more things to do, but I think if you don't have those five, everything else is hard to hold together. In general, if the team is smart and experienced, understands what needs doing and why, and likes their job, almost everything else self-organizes.
If it isn't self-organizing well enough, walk through those 5 points; one or several must be misaligned. More often than not, it's 2). Communication is hard, expensive and more of an art than a science. Most of the times things have seemed to stumble a bit, it's been a failure of how I understood what we should be doing as a team, or a failure of how I communicated it to everyone else as it evolved over time.
Second most frequent, I think, is 1), but that may vary more depending on your team, company and project.

Oh, and actually caring about people and what you do helps a lot, but that helps a lot in life in general, so do that anyway regardless of your role  :)

Read more
Michael Hall

When you contribute something as a member of a community, who are you actually giving it to? The simple answer of course is “the community” or “the project”, but those aren’t very specific.  On the one hand you have a nebulous group of people, most of whom you probably don’t even know about, and on the other you’ve got some cold, lifeless code repository or collection of web pages. When you contribute, who is it that you really care about, who do you really want to see and use what you’ve made?

In my last post I talked about the importance of recognition, how it’s what contributors get in exchange for their contribution, and how human recognition is the kind that matters most. But which humans do our contributors want to be recognized by? Are you one of them and, if so, are you giving it effectively?

Owners

The owner of a project has a distinct privilege in a community, they are ultimately the source of all recognition in that community.  Early contributions made to a project get recognized directly by the founder. Later contributions may only get recognized by one of those first contributors, but the value of their recognition comes from the recognition they received as the first contributors.  As the project grows, more generations of contributors come in, with recognition coming from the previous generations, though the relative value of it diminishes as you get further from the owner.

Leaders

After the project owner, the next most important source of recognition is a project’s leaders. Leaders are people who gain authority and responsibility in a project, they can affect the direction of a project through decisions in addition to direct contributions. Many of those early contributors naturally become leaders in the project but many will not, and many others who come later will rise to this position as well. In both cases, it’s their ability to affect the direction of a project that gives their recognition added value, not their distance from the owner. Before a community can grow beyond a very small size it must produce leaders, either through a formal or informal process, otherwise the availability of recognition will suffer.

Legends

Leadership isn’t for everybody, and many of the early contributors who don’t become one still remain with the project, and end up making very significant contributions to it and the community over time.  Whenever you make contributions, and get recognition for them, you start to build up a reputation for yourself.  The more and better contributions you make, the more your reputation grows.  Some people have accumulated such a large reputation that even though they are not leaders, their recognition is still sought after more than most. Not all communities will have one of these contributors, and they are more likely in communities where heads-down work is valued more than very public work.

Mentors

When any of us gets started with a community for the first time, we usually end up finding one or two people who help us learn the ropes.  These people help us find the resources we need, teach us what those resources don’t, and are instrumental in helping us make the leap from user to contributor. Very often these people aren’t the project owners or leaders.  Very often they have very little reputation themselves in the overall project. But because they take the time to help the new contributor, and because theirs is very likely the first recognition that contributor receives, the recognition they give is disproportionately more valuable than it otherwise would be.

Every member of a community can provide recognition, and every one should, but if you find yourself in one of the roles above it is even more important for you to be doing so. These roles are responsible both for setting the example and for keeping a proper flow of recognition in a community. And without that flow of recognition, you will find that the flow of contributions will also dry up.

Read more
Dustin Kirkland

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!


Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mice
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always
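
As a quick illustration (not part of the original slides), these interfaces behave like ordinary files; for example, you could pull a handful of pseudo-random bytes from a shell like this:

# Print 32 pseudo-random bytes from the kernel PRNG as hex
head -c 32 /dev/urandom | od -An -tx1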

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...
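
To sketch the idea behind that script (an illustration only; the real logic lives in /etc/init.d/urandom, and the seed file location here is an assumption): the saved seed is mixed back into the pool at boot and then immediately replaced, so the same seed is never reused.

# Illustrative sketch of carrying a random seed across reboots
SAVEDFILE=/var/lib/urandom/random-seed                        # location is an assumption
POOLBYTES=$(( $(cat /proc/sys/kernel/random/poolsize) / 8 ))  # kernel reports pool size in bits

# At boot: feed the saved seed back into the pool, then refresh the file
cat "$SAVEDFILE" > /dev/urandom 2>/dev/null
dd if=/dev/urandom of="$SAVEDFILE" bs=$POOLBYTES count=1 >/dev/null 2>&1

# At shutdown: the same dd runs again, saving a fresh seed for the next boot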

Q: And what exactly is a random seed?

A: Basically, it’s a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • i.e., The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


http://j.mp/1dHAK4V


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating
    • SSH host keys
    • SSL certificates
    • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded
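
As a practical aside (a general Ubuntu idiom, not something from the slides): if you suspect a machine generated its SSH host keys before the pool was properly seeded, you can regenerate them once it has been.

# Regenerate SSH host keys after the PRNG has been properly seeded
sudo rm /etc/ssh/ssh_host_*key*
sudo dpkg-reconfigure openssh-server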

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!


Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG.
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl
  • Fetches random seeds
    • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom (see the sketch after this list)
  • https://launchpad.net/pollinate
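
To make that concrete, here is a stripped-down sketch of the fetch-and-write idea. This is not the actual pollinate script: the plain GET is an assumption (the real client uses the challenge/response POST described below), and only the default pool is listed.

# Sketch only -- see lp:pollinate for the real implementation
POOL="https://entropy.ubuntu.com/"          # one or more servers, space-separated
for server in $POOL; do
    # Whatever the server returns is hashed and mixed into the kernel
    # entropy pool; writing to /dev/urandom can help but never hurts.
    curl --silent --show-error "$server" > /dev/urandom || true
done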

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy (see the example exchange after this list)
  • https://launchpad.net/pollen
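
A rough illustration of that exchange from the client side might look like the following. The parameter name and the two-line response format are assumptions here, so check the pollen source for the actual protocol.

# Hypothetical pollen exchange (wire format is an assumption -- see lp:pollen)
CHALLENGE=$(head -c 64 /dev/urandom | sha512sum | cut -d' ' -f1)
RESPONSE=$(curl --silent --data "challenge=$CHALLENGE" https://entropy.ubuntu.com/)

# First line: sha512 of our challenge, proof the server "did some work"
[ "$(echo "$RESPONSE" | head -n 1)" = "$(printf '%s' "$CHALLENGE" | sha512sum | cut -d' ' -f1)" ] \
    && echo "challenge verified"

# Second line: the 64-byte seed, mixed into our own pool
echo "$RESPONSE" | tail -n 1 > /dev/urandom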

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code, cool!

pollen.go

Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL
    • In /etc/default/pollinate (see the example after this list)
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
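
For example, a pool that includes both an in-house server and Canonical's might look like this in /etc/default/pollinate (the internal hostname is hypothetical):

# /etc/default/pollinate
# POOL is a space-separated list of entropy servers queried at boot
POOL="https://entropy.internal.example.com/ https://entropy.ubuntu.com/"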

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affect this quantity estimate!!!
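
For reference, you can read that estimate yourself at any time; consistent with the point above, running pollinate does not directly change the number reported here.

# The kernel's conservative estimate of available entropy, in bits
cat /proc/sys/kernel/random/entropy_avail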

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (via an HTTP POST submission) affects the pollen server's PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state

Q: What if pollinate gets crappy or compromised seeds, or no random seeds at all?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was at when your cloud or virtual machine booted with an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.


  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!


Stay safe out there!
:-Dustin

Read more
Michael Hall

It seems a fairly common, straightforward question.  You’ve probably been asked it before. We all have reasons why we hack, why we code, why we write or draw. If you ask somebody this question, you’ll hear things like “scratching an itch” or “making something beautiful” or “learning something new”.  These are all excellent reasons for creating or improving something.  But contributing isn’t just about creating, it’s about giving that creation away. Usually giving it away for free, with no or very few strings attached.  When I ask “Why do you contribute to open source”, I’m asking why you give it away.

This question is harder to answer, and the answers are often far more complex than the ones given for why people simply create something. What makes it worthwhile to spend your time, effort, and often money working on something, and then turn around and give it away? People often have different intentions or goals in mind when they contribute, from benevolent giving to a community they care about to personal pride in knowing that something they did is being used in something important or by somebody important. But when you strip away the details of the situation, these all hinge on one thing: Recognition.

If you read books or articles about community, one consistent theme you will find in almost all of them is the importance of recognizing the contributions that people make. In fact, if you look at a wide variety of successful communities, you would find that one common thing they all offer in exchange for contribution is recognition. It is the fuel that communities run on.  It’s what connects the contributor to their goal, both selfish and selfless. In fact, with open source, the only way a contribution can actually be stolen is by not allowing that recognition to happen.  Even the most permissive licenses require attribution, something that tells everybody who made it.

Now let’s flip that question around:  Why do people contribute to your project? If their contribution hinges on recognition, are you prepared to give it?  I don’t mean your intent; I’ll assume that you want to recognize contributions. I mean, do you have the processes and people in place to give it?

We’ve gotten very good at building tools to make contribution easier, faster, and more efficient, often by removing the human bottlenecks from the process.  But human recognition is still what matters most.  Silently merging someone’s patch or branch, even if their name is in the commit log, isn’t the same as thanking them for it yourself or posting about their contribution on social media. Letting them know you appreciate their work is important; letting other people know you appreciate it is even more important.

If you are the owner or a leader in a project with a community, you need to be aware of how recognition is flowing out just as much as how contributions are flowing in. Too often communities are successful almost by accident, because the people in them happen to be good at making sure contributions are recognized, and that people know it, simply because that’s their nature. But it’s just as possible for communities to fail because the personalities involved didn’t have this natural tendency; not because of any lack of appreciation for the contributions, just a quirk of their personality. It doesn’t have to be this way: if we are aware of the importance of recognition in a community, we can be deliberate in our approaches to making sure it flows freely in exchange for contributions.

Read more
Michael Hall

Technically a fork is any instance of a codebase being copied and developed independently of its parent.  But when we use the word it usually encompasses far more than that. Usually when we talk about a fork we mean splitting the community around a project, just as much as splitting the code itself. Communities are not like code, however; they don’t always split in consistent or predictable ways. Nor are all forks the same: both the reasons behind a fork and the way it is done will have an effect on whether and how the community around it splits.

There are, by my observation, three different kinds of forks that can be distinguished by their intent and method.  These can be neatly labeled as Convergent, Divergent and Emergent forks.

Convergent Forks

Most often when we talk about forks in open source, we’re talking about convergent forks. A convergent fork is one that shares the same goals as its parent, seeks to recruit the same developers, and wants to be used by the same users. Convergent forks tend to happen when a significant portion of the parent project’s developers are dissatisfied with the management or processes around the project, but otherwise happy with the direction of its development. The ultimate goal of a convergent fork is to take the place of the parent project.

Because they aim to take the place of the parent project, convergent forks must split the community in order to be successful. The community they need already exists, both the developers and the users, around the parent project, so that is their natural source when starting their own community.

Divergent Forks

Less common than convergent forks, but still well known by everybody in open source, are the divergent forks.  These forks are made by developers who are not happy with the direction of a project’s development, even if they are generally satisfied with its management.  The purpose of a divergent fork is to create something different from the parent, with different goals and most often different communities as well. Because they are creating a different product, they will usually be targeting a different group of users, one that was not well served by the parent project.  They will, however, quite often target many of the same developers as the parent project, because most of the technology and many of the features will remain the same, as a result of their shared code history.

Divergent forks will usually split a community, but to a much smaller extent than a convergent fork, because they do not aim to replace the parent for the entire community. Instead they often focus more on recruiting those users who were not served well, or not served at all, by the existing project, and will grow a new community largely from sources other than the parent community.

Emergent Forks

Emergent forks are not technically forks in the code sense, but rather new projects with new code, which share the same goals and target the same users as an existing project.  Most of us know these as NIH, or “Not Invented Here”, projects. They come into being on their own, instead of splitting from an existing source, but with the intention of replacing an existing project for all or part of an existing user community. Emergent forks are not the result of dissatisfaction with either the management or direction of an existing project, but most often a dissatisfaction with the technology being used, or fundamental design decisions that can’t be easily undone with the existing code.

Because they share the same goals as an existing project, these forks will usually result in a split of the user community around an existing project, unless they differ enough in features that they can target users not already being served by those projects. However, because they do not share much code or technology with the existing project, they most often grow their own community of developers, rather than splitting them from the existing project as well.

All of these kinds of forks are common enough that we in the open source community can easily name several examples of them. But they are all quite different in important ways. Some, while forks in the literal sense, can almost be considered new projects in a community sense.  Others are not forks of code at all, yet result in splitting an existing community none the less. Many of these forks will fail to gain traction, in fact most of them will, but some will succeed and surpass those that came before them. All of them play a role in keeping the wider open source economy flourishing, even though we may not like them when they affect a community we’ve been involved in building.

Read more