Canonical Voices

Posts tagged with 'ubuntu'

Michael Hall

As most of you know by now, Ubuntu 16.04 will be dropping the old Ubuntu Software Center in favor of the newer Gnome Software as the graphical front-end to both the Ubuntu archives and the 3rd party application store.

Gnome Software

Gnome Software provides a lot of the same enhancements over simple package managers that USC did, and it does this using a new metadata format standard called AppStream. While much of the needed AppStream data can be extracted from the existing packages in the archives, sometimes that’s not sufficient, and that’s when we need people to help fill the gaps.

It turns out that the bulk of the missing or incorrect data is caused by the application icons being used by app packages. While most apps already have an icon, it was never strictly enforced before, and the size and format allowed by the desktop specs were more lenient than what’s needed now.  These lower resolution icons might have been fine for a menu item, but they don’t work very well for a nice, beautiful App Store interface like Gnome Software. And that’s where you can help!

Don’t worry, contributing icons isn’t hard, and it doesn’t require any knowledge of programming or packaging to do. Best of all, you’ll not only be helping Ubuntu, you’ll also be contributing to any other distro that uses the AppStream standard. In the steps below I will walk you through the process of finding an app in need, getting the correct icon for it, and contributing it to the upstream project and Ubuntu.

1) Pick an App

Because the AppStream data is being automatically extracted from the contents of existing packages, we are able to tell which apps are in need of new icons, and we’ve generated a list of them, sorted by popularity (based on PopCon stats) so you can prioritize your contributions to where they will help the most users. To start working on one, first click the “Create” link to file a new bug report against the package in Ubuntu. Then replace that link in the wiki with a link to your new bug, and put your name in the “Claimed” column so that others know you’ve already started work on it.

Note that a package can contain multiple .desktop files, each of which has its own icon, and your bug report will be specific to just that one metadata file. You will also need to be a member of the ~ubuntu-etherpad team (or a sub-team like ~ubuntumembers) in order to edit the wiki; you will be asked to verify that membership as part of the login process with Ubuntu SSO.

2) Verify that an AppStream icon is needed

While the extraction process is capable of identifying which packages have a missing or unsupported image in them, it’s not always smart enough to know which packages should have this AppStream data in the first place. So before you start working on icons, it’s best to make sure that the metadata file you picked should actually be part of the AppStream index.

Because AppStream was designed to be application-centric, the metadata extraction process only looks at those with Type=Application in their .desktop file. It will also ignore any .desktop files with NoDisplay=True in them. If you find a file in the list that shouldn’t be indexed by AppStream, chances are one or both of these values are set incorrectly. In that case you should change your bug description to state that, rather than attaching an icon to it.
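
If you want to double-check those keys yourself before updating the bug, a quick look at the installed .desktop file will show them (the path below is only an example; substitute the file actually shipped by the package you picked):

$ grep -E '^(Type|NoDisplay|Icon)=' /usr/share/applications/example-app.desktop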

3) Contact Upstream

Since there is nothing Ubuntu-specific about AppStream data or icons, you really should be sending your contribution upstream to the originating project. Not only is this best for Ubuntu (carrying patches wastes resources), but it’s just the right thing to do in the open source community. So after you’ve chosen an app to work on and verified that it does in fact need a new icon for AppStream, the very next thing you should do is start talking to the upstream project developers.

Start by letting them know that you want to contribute to their project so that it integrates better with AppStream enabled stores (you can reference these Guidelines if they’re not familiar with it), and by opening a similar bug report in their bug tracker if they don’t have one already. Finally, be sure to include a link to that upstream bug report in the Ubuntu bug you opened previously so that the Ubuntu developers know the work is also going upstream (your contribution might be rejected otherwise).

4) Find or Create an Icon

Chances are the upstream developers already have an icon that meets the AppStream requirements, so ask them about it before trying to find one on your own. If not, look for existing artwork assets that can be used as a logo, and remember that it needs to be at least 64×64 pixels (this is where SVGs are ideal, as they can be exported to any size). Whatever you use, make sure that it matches the application’s current branding; we’re not out to create a new logo for them, after all. If you do create a new image file, you will need to make it available under the CC-BY-SA license.

While AppStream only requires a 64×64 pixel image, many desktops (including Unity) will benefit from having even higher resolution icons, and it’s always easier to scale them down than up. So if you have the option, try to provide a 256×256 icon image (or again, just an SVG).
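
If all you have is an SVG, exporting correctly sized PNGs only takes a couple of commands. Here is a rough sketch using either rsvg-convert or Inkscape (the file names are placeholders):

$ rsvg-convert -w 64 -h 64 icon.svg -o icon-64x64.png
$ rsvg-convert -w 256 -h 256 icon.svg -o icon-256x256.png
$ inkscape -w 256 -h 256 icon.svg --export-png=icon-256x256.png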

5) Submit your icon

Now that you’ve found (or created) an appropriate icon, it’s time to get it into both the upstream project and Ubuntu. Because each upstream project will have its own way of accepting contributions, you will need to ask them for guidance (and possibly assistance). Just make sure that you update the upstream bug report with your work, so that the Ubuntu developers can see that it’s been done.

Ubuntu 16.04 has already synced with Debian, so it’s too late for these changes in the upstream project to make their way into this release on their own. In order to get them into 16.04, the Ubuntu packages will have to carry a patch until the changes that land upstream have time to make their way into the Ubuntu archives. That’s why it’s so important to get your contribution accepted into the upstream project first; the Ubuntu developers want to know that the patches to their packages will eventually be replaced by the same change from upstream.

To submit your image to Ubuntu, all you need to do is attach the image file to the bug report you created way back in step #1.

Then, subscribe the "ubuntu-sponsors" team to the bug; these are the Ubuntu developers who will review and apply your icon to the target package and get it into the Ubuntu archives.

6) Talk about it!

Congratulations, you’ve just made a contribution that is likely to affect millions of people and benefit the entire open source community! That’s something to celebrate, so take to Twitter, Google+, Facebook or your own blog and talk about it. Not only is it good to see people doing these kinds of contributions, it’s also highly motivating to others who might not otherwise get involved. So share your experience, help others who want to do the same, and if you enjoyed it feel free to grab another app from the list and do it again.

Read more
Nicholas Skaggs

Reflections

The joys of Spring (or Fall for our friends in the Southern Hemisphere) are now upon us. The change of seasons spurs us to implement our own changes, to start anew. It's a time to reflect on the past, appreciate it, and then do a little Spring cleaning.

As I write this post to you, I'm doing my own reflecting. It's been quite a journey we've undertaken within the QA community. It's not always been easy, but I think we are poised for even greater success with Xenial than with the Trusty and Precise LTSes. We have continued ramping up our quality efforts to test new platforms, such as the phone and IoT devices, while also implementing automated testing via things like autopkgtest and autopilot. Nevertheless, the desktop images have continued to release like clockwork. We're testing more things, more often, while still managing to raise our quality bar.

I want to thank all of the volunteers who've helped make each of those releases a reality. Oftentimes quality can be a background job, with thank-yous going unsaid, while complaints are easy to find. Truly, it's been wonderful learning and hacking on quality efforts with you. So thank you!

So if this post sounds a bit like a farewell, that's because it is. At least in a way. Moving forward, I'll be transitioning to working on a new challenge. Don't worry, I'm keeping my QA hat on, and staying firmly within the realm of ubuntu! However, the time has come to try my hand at a different side of ubuntu. That's right, it's time to head to the last frontier, juju!

I'll be working on improving the quality story for juju, but I believe juju has real opportunities to enable the testing story within ubuntu too. I'm looking forward to the new challenges, and to sharing best practices. We're all working on ubuntu at its heart, no matter our focus.

Moving forward, I'll still be around in my usual haunts. You'll still be able to poke me on IRC, or send me a mail, and I'm certainly still going to be watching what happens within quality with interest. That said, you are much more likely to find me discussing juju, servers and charms in #juju.

As with anything, please feel free to contact me directly if you have any concerns or questions. I plan to wind down my involvement during the next few weeks. I'll be handing off any lingering project roles, and stepping down gracefully. Ubuntu 'Y' will begin anew, with fresh challenges and fresh opportunities. I know there are folks waiting to tackle them!

Read more
Dustin Kirkland


We at Canonical have conducted a legal review, including discussion with the industry's leading software freedom legal counsel, of the licenses that apply to the Linux kernel and to ZFS.

And in doing so, we have concluded that we are acting within the rights granted by, and in compliance with the terms of, both of those licenses.  Others have independently reached the same conclusion.  Differing opinions exist, but please bear in mind that these are opinions.

While the CDDL and GPLv2 are both "copyleft" licenses, they have different scope.  The CDDL applies to all files under the CDDL, while the GPLv2 applies to derivative works.

The CDDL cannot apply to the Linux kernel because zfs.ko is a self-contained file system module -- the kernel itself is quite obviously not a derivative work of this new file system.

And zfs.ko, as a self-contained file system module, is clearly not a derivative work of the Linux kernel but rather quite obviously a derivative work of OpenZFS and OpenSolaris.  Equivalent exceptions have existed for many years, for various other stand alone, self-contained, non-GPL kernel modules.

Our conclusion is good for Ubuntu users, good for Linux, and good for all of free and open source software.

As we have already reached the conclusion, we are not interested in debating license compatibility, but of course welcome the opportunity to discuss the technology.

Cheers,
Dustin

EDIT: This post was updated to link to the supportive position paper from Eben Moglen of the SFLC, an amicus brief from James Bottomley, as well as the contrarian position from Bradley Kuhn and the SFC.

Read more
Dustin Kirkland



I had the opportunity to speak at Container World 2016 in Santa Clara yesterday.  Thanks in part to the Netflix guys who preceded me, the room was absolutely packed!

You can download a PDF of my slides here, or flip through them embedded below.

I'd really encourage you to try the demo instructions of LXD toward the end!


:-Dustin

Read more
Dustin Kirkland


Ubuntu 16.04 LTS (Xenial) is only a few short weeks away, and with it comes one of the most exciting new features Linux has seen in a very long time...

ZFS -- baked directly into Ubuntu -- supported by Canonical.

What is ZFS?

ZFS is a combination of a volume manager (like LVM) and a filesystem (like ext4, xfs, or btrfs).

ZFS is one of the most beloved features of Solaris, universally coveted by every Linux sysadmin with a Solaris background.  To our delight, we're happy to make OpenZFS available on every Ubuntu system.  Ubuntu's reference guide for ZFS can be found here, and these are a few of the killer features:
  • snapshots
  • copy-on-write cloning
  • continuous integrity checking against data corruption
  • automatic repair
  • efficient data compression.
These features truly make ZFS the perfect filesystem for containers.
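
If you want a quick feel for a couple of those features before wiring up LXD, here's a minimal sketch using made-up dataset names (substitute a pool and dataset of your own):

$ sudo zfs set compression=lz4 lxd/example
$ sudo zfs snapshot lxd/example@clean
$ sudo zfs clone lxd/example@clean lxd/example-copy
$ sudo zfs list -t all -r lxd

The clone is a copy-on-write duplicate, so it consumes almost no additional space until you start changing data inside it.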

What does "support" mean?

  • You'll find zfs.ko automatically built and installed on your Ubuntu systems.  No more DKMS-built modules!
$ locate zfs.ko
/lib/modules/4.4.0-4-generic/kernel/zfs/zfs/zfs.ko
  • You'll see the module loaded automatically if you use it.

$ lsmod | grep zfs
zfs 2801664 11
zunicode 331776 1 zfs
zcommon 57344 1 zfs
znvpair 90112 2 zfs,zcommon
spl 102400 3 zfs,zcommon,znvpair
zavl 16384 1 zfs

  • The user space zfsutils-linux package will be included in Ubuntu Main, with security updates provided by Canonical (as soon as this MIR is completed).
  • As always, industry leading, enterprise class technical support is available from Canonical with Ubuntu Advantage services.

How do I get started?

It's really quite simple!  Here are a few commands to get you up and running with ZFS and LXD in 60 seconds or less.

First, make sure you're running Ubuntu 16.04 (Xenial).

$ head -n1 /etc/issue
Ubuntu Xenial Xerus (development branch) \n \l

Now, let's install lxd and zfsutils-linux, if you haven't already:

$ sudo apt install lxd zfsutils-linux

Next, let's use the interactive lxd init command to set up LXD and ZFS.  In the example below, I'm simply using a sparse, loopback file for the ZFS pool.  For best results (and what I use on my laptop and production servers), it's best to use a raw SSD partition or device.

$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 2
Would you like LXD to be available over the network (yes/no)? no
LXD has been successfully configured.

We can check our ZFS pool now:

$ sudo zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
lxd    1.98G   450K  1.98G         -     0%     0%  1.00x  ONLINE  -

$ sudo zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        lxd                       ONLINE       0     0     0
          /var/lib/lxd/zfs.img    ONLINE       0     0     0

errors: No known data errors

$ lxc config get storage.zfs_pool_name
storage.zfs_pool_name: lxd

Finally, let's import the Ubuntu LXD image, and launch a few containers.  Note how fast containers launch, which is enabled by the ZFS cloning and copy-on-write features:

$ newgrp lxd
$ lxd-images import ubuntu --alias ubuntu
Downloading the GPG key for http://cloud-images.ubuntu.com
Progress: 48 %
Validating the GPG signature of /tmp/tmpa71cw5wl/download.json.asc
Downloading the image.
Image manifest: http://cloud-images.ubuntu.com/server/releases/trusty/release-20160201/ubuntu-14.04-server-cloudimg-amd64.manifest
Image imported as: 54c8caac1f61901ed86c68f24af5f5d3672bdc62c71d04f06df3a59e95684473
Setup alias: ubuntu

$ for i in $(seq 1 5); do lxc launch ubuntu; done
...
$ lxc list
+-------------------------+---------+-------------------+------+-----------+-----------+
| NAME                    | STATE   | IPV4              | IPV6 | EPHEMERAL | SNAPSHOTS |
+-------------------------+---------+-------------------+------+-----------+-----------+
| discordant-loria        | RUNNING | 10.0.3.130 (eth0) |      | NO        | 0         |
+-------------------------+---------+-------------------+------+-----------+-----------+
| fictive-noble           | RUNNING | 10.0.3.91 (eth0)  |      | NO        | 0         |
+-------------------------+---------+-------------------+------+-----------+-----------+
| interprotoplasmic-essie | RUNNING | 10.0.3.242 (eth0) |      | NO        | 0         |
+-------------------------+---------+-------------------+------+-----------+-----------+
| nondamaging-cain        | RUNNING | 10.0.3.9 (eth0)   |      | NO        | 0         |
+-------------------------+---------+-------------------+------+-----------+-----------+
| untreasurable-efrain    | RUNNING | 10.0.3.89 (eth0)  |      | NO        | 0         |
+-------------------------+---------+-------------------+------+-----------+-----------+

Super easy, right?

Cheers,
:-Dustin

Read more
Nicholas Skaggs

Prepping for a Summer of Code!

The time to apply is here! Ubuntu has applied for GSOC 2016, but we need project ideas for prospective students, and mentors to mentor them.

What is GSOC?
GSOC stands for Google Summer of Code. The event brings together university students and open source organizations like Ubuntu. It happens over the course of the summer, and mentors work with students on a one-to-one basis. Mentors give project ideas, and students select them, pairing up with the mentor to make the idea a reality.

I'll be a mentor!
Mentors need to be around to help a student from May - August. You'll be mentoring a student on the project you propose, so you'll need to be capable of completing the project. As the time commitment is long, it's helpful to have a friend who can pitch in if needed. We've put together all the information you need to know as a mentor on community.u.c, including links to some mentoring guides. This will help give you more details about what to expect.

I'm in. What do I need to do?
To make sure your ideas are included in our application, you need to have them on the Ideas wiki by February 19th, 2016. When you are ready, simply add your idea. It's that simple. Assuming we are accepted as an organization, students will read our ideas, and we'll have a period of time to finalize the details with interested students.

I have a question!
If you have questions about what all this mentoring might entail, feel free to reach out to myself or anyone on the community team. This is a great way to make some needed ideas a reality and grow the community at the same time!

Read more
Nicholas Skaggs

Google Code In 2015: Complete!

Google Code In 2015 is now complete! Overall, we had a total of 215 students finish more than 500 tasks for ubuntu! The students made contributions to documentation, created wallpapers and other art, fixed Unity 7 issues, hacked on the core apps for the phone, performed tests, wrote automated and manual tests, and worked on tools like the qatracker. A big thank you to all of the students and mentors who helped out.

Here's our winners!

 * Daniyaal Rasheed
 * Matthew Allen

And our Finalists

 * Evan McIntire
 * Girish Rawat
 * Malena Vasquez Currie

The students amazed everyone, myself included, with the level of skill they displayed in their work. You all should be very proud. It was lovely to have you as part of the community, and I've been delighted to see some of your faces sticking around and still contributing! Thank you, and welcome to the community!

    Read more
    Dustin Kirkland


    There's no shortage of excitement, controversy, and readership, any time you can work "Docker" into a headline these days.  Perhaps a bit like "Donald Trump", but for CIO tech blogs and IT news -- a real hot button.  Hey, look, I even did it myself in the title of this post!

    Sometimes an article even starts out about CoreOS, but gets diverted into a discussion about Docker, like this one, where shykes (Docker's founder and CTO) announced that Docker's default image would be moving away from Ubuntu to Alpine Linux.


    I have personally been Canonical's business and technical point of contact with Docker Inc. since September of 2013, when I co-presented at an OpenStack Meetup in Austin, Texas, with Ben Golub and Nick Stinemates of Docker.  I can tell you that this casual declaration in an unrelated Hacker News thread came as a surprise to nearly all of us, along with most of the rest of the Docker community!

    Docker's default container image is certainly Docker's decision to make.  But it would be prudent to examine a few facts:

    (1) Check DockerHub and you may notice that while Busybox (Alpine Linux) has surpassed Ubuntu in the number of downloads (66M to 40M), Ubuntu is still by far the most "popular" by number of "stars" -- likes, favorites, +1's, whatever (3.2K to 499).

    (2) Ubuntu's compressed, minimal root tarball is 59 MB, which is what is downloaded over the Internet.  That's different from the 188 MB uncompressed root filesystem, which has been quoted a number of times in the press.

    (3) The real magic of Docker is such that you only ever download that base image, one time!  And you only store one copy of the uncompressed root filesystem on your disk! Just once, sudo docker pull ubuntu, on your laptop at home or work, and then launch thousands of images at a coffee shop or airport lounge with its spotty wifi.  Build derivative images, FROM ubuntu, etc. and you only ever store the incremental differences.
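
    To see that layer sharing in action, here's a rough sketch (the image tag and extra package are arbitrary examples, not anything official):

    $ printf 'FROM ubuntu\nRUN apt-get update && apt-get install -y curl\n' > Dockerfile
    $ sudo docker build -t ubuntu-curl .
    $ sudo docker history ubuntu-curl

    The history output shows that the base layers are the very same ones the "ubuntu" image already pulled; only the new apt-get layer takes additional space on disk.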

    Actually, I encourage you to test that out yourself...  I just launched a t2.micro -- Amazon's cheapest instance type with the lowest networking bandwidth.  It took 15.938s to sudo apt install docker.io.  And it took 9.230s to sudo docker pull ubuntu.  It takes less time to download Ubuntu than to install Docker!

    ubuntu@ip-172-30-0-129:~⟫ time sudo apt install docker.io -y
    ...
    real 0m15.938s
    user 0m2.146s
    sys 0m0.913s

    As compared to...

    ubuntu@ip-172-30-0-129:~⟫ time sudo docker pull ubuntu
    latest: Pulling from ubuntu
    f15ce52fc004: Pull complete
    c4fae638e7ce: Pull complete
    a4c5be5b6e59: Pull complete
    8693db7e8a00: Pull complete
    ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
    Digest: sha256:457b05828bdb5dcc044d93d042863fba3f2158ae249a6db5ae3934307c757c54
    Status: Downloaded newer image for ubuntu:latest
    real 0m9.230s
    user 0m0.021s
    sys 0m0.016s

    Now, sure, it takes even less than that to download Alpine Linux (0.747s by my test), but again you only ever do that once!  After you have your initial image, launching Docker containers takes the exact same amount of time (0.233s), and the incremental storage differences are identical.  See:

    ubuntu@ip-172-30-0-129:/tmp/docker⟫ time sudo docker run alpine /bin/true
    real 0m0.233s
    user 0m0.014s
    sys 0m0.001s
    ubuntu@ip-172-30-0-129:/tmp/docker⟫ time sudo docker run ubuntu /bin/true
    real 0m0.234s
    user 0m0.012s
    sys 0m0.002s

    (4) I regularly communicate sincere, warm congratulations to our friends at Docker Inc. on its continued growth.  shykes publicly mentioned the hiring of the maintainer of Alpine Linux in that Hacker News post.  As a long time Linux distro developer myself, I have tons of respect for everyone involved in building a high quality Linux distribution.  In fact, Canonical employs over 700 people, in 44 countries, working around the clock, all calendar year, to make Ubuntu the world's most popular Linux OS.  Importantly, that includes a dedicated security team that has an outstanding track record over the last 12 years, keeping Ubuntu servers, clouds, desktops, laptops, tablets, and phones up-to-date and protected against the latest security vulnerabilities.  I don't know Natanael personally, but I'm intimately aware of what a spectacular amount of work it is to maintain and secure an OS distribution, as it makes its way into enterprise and production deployments.  Good luck!

    (5) There are currently 5,854 packages available via apk in Alpine Linux (sudo docker run alpine apk search -v).  There are 8,862 packages in Ubuntu Main (officially supported by Canonical), and 53,150 binary packages across all of Ubuntu Main, Universe, Restricted, and Multiverse, supported by the greater Ubuntu community.  Nearly all 50,000+ packages are updated every 6 months, on time, every time, and we release an LTS version of Ubuntu and the best of open source software in the world every 2 years.  Like clockwork.  Choice.  Velocity.  Stability.  That's what Ubuntu brings.

    Docker holds a special place in the Ubuntu ecosystem, and Ubuntu has been instrumental in Docker's growth over the last 3 years.  Where we go from here, is largely up to the cross-section of our two vibrant communities.

    And so I ask you honestly...what do you want to see?  How would you like to see Docker and Ubuntu operate together?

    I'm Canonical's Product Manager for Ubuntu Server, I'm responsible for Canonical's relationship with Docker Inc, and I will read absolutely every comment posted below.

    Cheers,
    :-Dustin

    p.s. I'm speaking at Container Summit in New York City today, and wrote this post from the top of the (inspiring!) One World Observatory at the World Trade Center this morning.  Please come up and talk to me, if you want to share your thoughts (at Container Summit, not the One World Observatory)!


    Read more
    Daniel Holbach

    It’s been a while since our last Snappy Clinic, so we asked for your input on which topics to cover. Thanks for the feedback so far.

    In our next session Sergio Schvezov is going to talk about what’s new in Snapcraft and the changes in the 2.x series. Be there and you are going to be up-to-date on how to publish your software on Snappy Ubuntu Core. There will be time for questions afterwards.

    Join us on the 12th February 2016 at 16:00 UTC on http://ubuntuonair.com.

    Read more
    Dustin Kirkland

    People of earth, waving at Saturn, courtesy of NASA.
    “It Doesn't Look Like Ubuntu Reached Its Goal Of 200 Million Users This Year”, says Michael Larabel of Phoronix, in a post that it seems he's been itching to post for months.

    Why the negativity?!? Are you sure? Did you count all of them?

    No one has.

    How many people in the world use Ubuntu?

    Actually, no one can count all of the Ubuntu users in the world!

    Canonical, unlike Apple, Microsoft, Red Hat, or Google, does not require each user to register their installation of Ubuntu.

    Of course, you can buy laptops preloaded with Ubuntu from Dell, HP, Lenovo, and Asus.  And there are millions of them out there.  And you can buy servers powered by Ubuntu from IBM, Dell, HP, Cisco, Lenovo, Quanta, and compatible with the OpenCompute Project.

    In 2011, hardware sales might have been how Mark Shuttleworth hoped to reach 200M Ubuntu users by 2015.

    But in reality, hundreds of millions of PCs, servers, devices, virtual machines, and containers have booted Ubuntu to date!

    Let's look at some facts...
    • Docker users have launched Ubuntu images over 35.5 million times.
    • HashiCorp's Vagrant images of Ubuntu 14.04 LTS 64-bit have been downloaded 10 million times.
    • At least 20 million unique instances of Ubuntu have launched in public clouds, private clouds, and on bare metal in 2015 alone.
      • That's Ubuntu in clouds like AWS, Microsoft Azure, Google Compute Engine, Rackspace, Oracle Cloud, VMware, and others.
      • And that's Ubuntu in private clouds like OpenStack.
      • And Ubuntu at scale on bare metal with MAAS, often managed with Chef.
    • In fact, over 2 million new Ubuntu cloud instances launched in November 2015.
      • That's 67,000 new Ubuntu cloud instances launched per day.
      • That's 2,800 new Ubuntu cloud instances launched every hour.
      • That's 46 new Ubuntu cloud instances launched every minute.
      • That's nearly one new Ubuntu cloud instance launched every single second of every single day in November 2015.
    • And then there are Ubuntu phones from Meizu.
    • And more Ubuntu phones from BQ.
    • Of course, anyone can install Ubuntu on their Google Nexus tablet or phone.
    • Or buy a converged tablet/desktop preinstalled with Ubuntu from BQ.
    • Oh, and the Tesla entertainment system?  All electric Ubuntu.
    • Google's self-driving cars?  They're self-driven by Ubuntu.
    • George Hotz's home-made self-driving car?  It's a homebrewed Ubuntu autopilot.
    • Snappy Ubuntu downloads and updates for Raspberry Pi's and Beagle Bone Blacks -- the response has been tremendous.  Download numbers are astounding.
    • Drones, robots, network switches, smart devices, the Internet of Things.  More Snappy Ubuntu.
    • How about Walmart?  Everyday low prices.  Everyday Ubuntu.  Lots and lots of Ubuntu.
    • Are you orchestrating containers with Kubernetes or Apache Mesos?  There's plenty of Ubuntu in there.
    • Kicking PaaS with Cloud Foundry?  App instances are Ubuntu LXC containers.  Pivotal has lots of serious users.
    • And Heroku?  You bet your PaaS those hosted application containers are Ubuntu.  Plenty of serious users here too.
    • Tianhe-2, the world's largest supercomputer.  Merely 80,000 Xeons, 1.4 TB of memory, 12.4 PB of disk, all number crunching on Ubuntu.
    • Ever watch a movie on Netflix?  You were served by Ubuntu.
    • Ever hitch a ride with Uber or Lyft?  Your mobile app is talking to Ubuntu servers on the backend.
    • Did you enjoy watching The Hobbit?  Hunger Games?  Avengers?  Avatar?  All rendered on Ubuntu at WETA Digital.  Among many others.
    • Do you use Instagram?  Say cheese!
    • Listen to Spotify?  Music to my ears...
    • Doing a deal on Wall Street?  Ubuntu is serious business for Bloomberg.
    • Paypal, Dropbox, Snapchat, Pinterest, Reddit. Airbnb.  Yep.  More Ubuntu.
    • Wikipedia and Wikimedia, among the busiest sites on the Internet with 8 - 18 billion page views per month, are hosted on Ubuntu.
    How many "users" of Ubuntu are there ultimately?  I bet there are over a billion people today, using Ubuntu -- both directly and indirectly.  Without a doubt, there are over a billion people on the planet benefiting from the services, security, and availability of Ubuntu today.
    • More people use Ubuntu than we know.
    • More people use Ubuntu than you know.
    • More people use Ubuntu than they know.
    More people use Ubuntu than anyone actually knows.

    Because of who we all are.

    :-Dustin

    Read more
    David Henningsson

    13 ways to PulseAudio

    All roads lead to Rome, but PulseAudio is not far behind! In fact, the way the PulseAudio client library determines how to connect to the PulseAudio server has no fewer than 13 different steps. Here they are, in priority order:

    1) As an application developer, you can specify a server string in your call to pa_context_connect. If you do that, that’s the server string used, nothing else.

    2) If the PULSE_SERVER environment variable is set, that’s the server string used, and nothing else.

    3) Next, it goes to X to check if there is an x11 property named PULSE_SERVER. If there is, that’s the server string, nothing else. (There is also a PulseAudio module called module-x11-publish that sets this property. It is loaded by the start-pulseaudio-x11 script.)

    4) It also checks client.conf, if such a file is found, for the default-server key. If that’s present, that’s the server string.
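
    If you want to try the higher-priority overrides yourself, steps 2 and 4 are easy to exercise from a shell. The address below is just an example, of course:

    PULSE_SERVER=tcp:192.168.0.10:4713 paplay something.wav
    echo "default-server = tcp:192.168.0.10:4713" >> ~/.config/pulse/client.conf

    The first line affects a single command only; the second makes the setting stick for your user (assuming your per-user client.conf lives in ~/.config/pulse, the XDG default).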

    So, if none of the four methods above gives any result, several items will be merged and tried in order.

    First up is trying to connect to a user-level PulseAudio, which means finding the right path where the UNIX socket exists. That in turn has several steps, in priority order:

    5) If the PULSE_RUNTIME_PATH environment variable is set, that’s the path.

    6) Otherwise, if the XDG_RUNTIME_DIR environment variable is set, the path is the “pulse” subdirectory below the directory specified in XDG_RUNTIME_DIR.

    7) If not, and the “.pulse” directory exists in the current user’s home directory, that’s the path. (This is for historical reasons – a few years ago PulseAudio switched from “.pulse” to using XDG compliant directories, but ignoring “.pulse” would throw away some settings on upgrade.)

    8) Failing that, if XDG_CONFIG_HOME environment variable is set, the path is the “pulse” subdirectory to the directory specified in XDG_CONFIG_HOME.

    9) Still no path? Then fall back to using the “.config/pulse” subdirectory below the current user’s home directory.

    Okay, so maybe we can connect to the UNIX socket inside that user-level PulseAudio path. But if it does not work, there are still a few more things to try:

    10) Using a path of a system-level PulseAudio server. This directory is /var/run/pulse on Ubuntu (and probably most other distributions), or /usr/local/var/run/pulse in case you compiled PulseAudio from source yourself.

    11) By checking client.conf for the key “auto-connect-localhost”. If it’s set, also try connecting to tcp4:127.0.0.1…

    12) …and tcp6:[::1], too. Of course we cannot leave IPv6-only systems behind.

    13) As the last straw of hope, the library checks client.conf for the key “auto-connect-display”. If it’s set, it checks the DISPLAY environment variable, and if it finds a hostname (i.e., something before the “:”), then that host will be tried too.

    To summarise, first the client library checks for a server string in steps 1-4; if there is none, it builds a server string out of one item from steps 5-9, and then up to four more items from steps 10-13.

    And that’s all. If you ever want to customize how you connect to a PulseAudio server, you have a smorgasbord of options to choose from!

    Read more
    Dustin Kirkland


    As always, I enjoyed speaking at the SCALE14x event, especially at the new location in Pasadena, California!

    What if you could adapt a package from a newer version of Ubuntu, onto your stable LTS desktop/server?

    Or, as a developer, what if you could provide your latest releases to your users running an older LTS version of Ubuntu?

    Introducing adapt!

    adapt is a lot like apt...  It’s a simple command that installs packages.

    But it “adapts” a requested version to run on your current system.

    It's a simple command that installs any package from any release of Ubuntu into any version of Ubuntu.

    How does adapt work?

    Simple… Containers!

    More specifically, LXD system containers.

    Why containers?

    Containers can run anywhere, physical, virtual, desktops, servers, and any CPU architecture.

    And containers are light and fast!  Zero latency and no virtualization overhead.

    Most importantly, system containers are perfect copies of the released distribution, the operating system itself.

    And all of that continuous integration testing we do perform on every single Ubuntu release?

    We leverage that!
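
    The core idea is easy to sketch by hand with nothing but LXD (release and package names below are arbitrary examples; this is not the adapt implementation itself):

    $ lxc launch ubuntu:wily newer
    $ lxc exec newer -- apt install -y hello
    $ lxc exec newer -- hello

    adapt's job, as described above, is essentially to wrap that pattern up behind a single apt-like command.
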
    You can download a PDF of the slides for my talk here, or flip through them here:



    I hope you enjoy some of the magic that LXD is making possible ;-)

    Cheers!
    Dustin

    Read more
    pitti

    This week from Tuesday to Thursday four Canonical Foundations team members held a virtual sprint about the proposed-migration infrastructure. It’s been a loooong three days and nightshifts, but it was absolutely worth it. Thanks to Brian, Barry, and Robert for your great work!

    I started the sprint on Tuesday with a presentation (slides) about the design and some details about the involved components, and showed how to deploy the whole thing locally in juju-local. I also prepared a handful of bite-size improvements which were good finger-exercises for getting familiar with the infrastructure and testing changes. I’m happy to report that all of those got implemented and are running in production!

    The big piece of work which we all collaborated on was providing a web-based test retry for all Ubuntu developers. Right now this is limited to a handful of Canonical employees, but we want Ubuntu developers to be able to retry autopkgtest regressions (which stop their package from landing in Ubuntu) by themselves. I don’t know the first thing about web applications and OpenID, so I’m really glad that Barry and Robert came up with a “hello world” kind of Flask webapp which uses Ubuntu SSO authentication to verify that the requester is an Ubuntu Developer. I implemented the input variable validation and sending the actual test requests over AMQP.

    Now we have a nice autopkgtest-retrier git with the required functionality and 100% (yes, complete!) test coverage. With that, requesting tests in a local deployment works! So what’s left to do for me now is to turn this into a CGI script, configure apache for it, enable SSL on autopkgtest.ubuntu.com, and update the charms to set this all up automatically. So this moved from “ugh, I don’t know where to start” to “should land next week” in these three days!

    We are going to have similar sprints for Brian’s error tracker, Robert’s CI train, and Barry’s system-image builder in the next weeks. Let’s increase all those bus factors from the current “1” to at least “4” ☺ . Looking forward to these!

    Read more
    Dustin Kirkland

    tl;dr

    • Put /tmp on tmpfs and you'll improve your Linux system's I/O, reduce your carbon foot print and electricity usage, stretch the battery life of your laptop, extend the longevity of your SSDs, and provide stronger security.
    • In fact, we should do that by default on Ubuntu servers and cloud images.
    • Having tested 502 physical and virtual servers in production at Canonical, 96.6% of them could immediately fit all of /tmp in half of the free memory available and 99.2% could fit all of /tmp in (free memory + free swap).

    Try /tmp on tmpfs Yourself

    $ echo "tmpfs /tmp tmpfs rw,nosuid,nodev" | sudo tee -a /etc/fstab
    $ sudo reboot
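
    After the reboot, it's easy to confirm the new mount (and, if you like, you can cap its size by adding an option such as size=2G to that fstab line; the 2G is just an example):

    $ findmnt -T /tmp
    $ df -h /tmp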

    Background

    In April 2009, I proposed putting /tmp on tmpfs (an in memory filesystem) on Ubuntu servers by default -- under certain conditions, like, well, having enough memory. The proposal was "approved", but got hung up for various reasons.  Now, again in 2016, I proposed the same improvement to Ubuntu here in a bug, and there's a lively discussion on the ubuntu-cloud and ubuntu-devel mailing lists.

    The benefits of /tmp on tmpfs are:
    • Performance: reads, writes, and seeks are insanely fast in a tmpfs; as fast as accessing RAM
    • Security: data leaks to disk are prevented (especially when swap is disabled), and since /tmp is its own mount point, we should add the nosuid and nodev options (and motivated sysadmins could add noexec, if they desire).
    • Energy efficiency: disk wake-ups are avoided
    • Reliability: fewer NAND writes to SSD disks
    In the interest of transparency, I'll summarize the downsides:
    • There's sometimes less space available in memory than in your root filesystem where /tmp may traditionally reside
    • Writing to tmpfs could evict other information from memory to make space
    You can learn more about Linux tmpfs here.

    Not Exactly Uncharted Territory...

    Fedora proposed and implemented this in Fedora 18 a few years ago, citing that Solaris has been doing this since 1994. I just installed Fedora 23 into a VM and confirmed that /tmp is a tmpfs in the default installation, and ArchLinux does the same. Debian debated doing so, in this thread, which starts with all the reasons not to put /tmp on a tmpfs; do make sure you read the whole thread, though, and digest both the pros and cons, as both are represented throughout the thread.

    Full Data Treatment

    In the current thread on ubuntu-cloud and ubuntu-devel, I was asked for some "real data"...

    In fact, across the many debates for and against this feature in Ubuntu, Debian, Fedora, ArchLinux, and others, there is plenty of supposition, conjecture, guesswork, and presumption.  But seeing as we're talking about data, let's look at some real data!

    Here's an analysis of a (non-exhaustive) set of 502 of Canonical's production servers that run Ubuntu.com, Launchpad.net, and hundreds of related services, including OpenStack, dozens of websites, code hosting, databases, and more. The servers sampled are slightly biased toward physical machines over virtual machines, but both are present in the survey, and a wide variety of uptime is represented, from less than a day of uptime to 1306 days of uptime (with live patched kernels, of course).  Note that this is not an exhaustive survey of all servers at Canonical.

    I humbly invite further study and analysis of the raw, tab-separated data, which you can find at:
    The column headers are:
    • Column 1: The host names have been anonymized to sequential index numbers
    • Column 2: `du -s /tmp` disk usage of /tmp as of 2016-01-17 (i.e., this is one snapshot in time)
    • Column 3-8: The output of the `free` command, memory in KB for each server
    • Column 9-11: The output of the `free` command, swap in KB for each server
    • Column 12: The number of inodes in /tmp
    I have imported it into a Google Spreadsheet to do some data treatment. You're welcome to do the same, or use the spreadsheet of your choice.

    For the numbers below, 1 MB = 1000 KB, and 1 GB = 1000 MB, per Wikipedia. (Let's argue MB and MiB elsewhere, shall we?)  The mean is the arithmetic average.  The median is the middle value in a sorted list of numbers.  The mode is the number that occurs most often.  If you're confused, this article might help.  All calculations are accurate to at least 2 significant digits.

    Statistical summary of /tmp usage:

    • Max: 101 GB
    • Min: 4.0 KB
    • Mean: 453 MB
    • Median: 16 KB
    • Mode: 4.0 KB
    Looking at all 502 servers, there are two extreme outliers in terms of /tmp usage. One server has 101 GB of data in /tmp, and the other has 42 GB. The latter is a very noisy django.log. There are 4 more servers using between 10 GB and 12 GB of /tmp. The remaining 496 servers surveyed (98.8%) are using less than 4.8 GB of /tmp. In fact, 483 of the servers surveyed (96.2%) use less than 1 GB of /tmp. 454 of the servers surveyed (90.4%) use less than 100 MB of /tmp. 414 of the servers surveyed (82.5%) use less than 10 MB of /tmp. And actually, 370 of the servers surveyed (73.7%) -- the overwhelming majority -- use less than 1 MB of /tmp.

    Statistical summary of total memory available:

    • Max: 255 GB
    • Min: 1.0 GB
    • Mean: 24 GB
    • Median: 10.2 GB
    • Mode: 4.1 GB
    All of the machines surveyed (100%) have at least 1 GB of RAM.  495 of the machines surveyed (98.6%) have at least 2 GB of RAM.  437 of the machines surveyed (87%) have at least 4 GB of RAM.  255 of the machines surveyed (50.8%) have at least 10 GB of RAM.  157 of the machines surveyed (31.3%) have more than 24 GB of RAM.  74 of the machines surveyed (14.7%) have at least 64 GB of RAM.

    Statistical summary of total swap available:

    • Max: 201 GB
    • Min: 0.0 KB
    • Mean: 13 GB
    • Median: 6.3 GB
    • Mode: 2.96 GB
    485 of the machines surveyed (96.6%) have at least some swap enabled, while 17 of the machines surveyed (3.4%) have zero swap configured. One of these swap-less machines is using 415 MB of /tmp; that machine happens to have 32 GB of RAM. All of the rest of the swap-less machines are using between 4 KB and 52 KB of /tmp (inconsequential), and have between 2 GB and 28 GB of RAM.  5 machines (1.0%) have over 100 GB of swap space.

    Statistical summary of swap usage:

    • Max: 19 GB
    • Min: 0.0 KB
    • Mean: 657 MB
    • Median: 18 MB
    • Mode: 0.0 KB
    476 of the machines surveyed (94.8%) are using less than 4 GB of swap. 463 of the machines surveyed (92.2%) are using less than 1 GB of swap. And 366 of the machines surveyed (72.9%) are using less than 100 MB of swap.  There are 18 "swappy" machines (3.6%), using 10 GB or more swap.

    Modeling /tmp on tmpfs usage

    Next, I took the total memory (RAM) in each machine, divided it in half (which is the default size of /tmp on tmpfs), and subtracted the total /tmp usage on each system, to determine "if" all of that system's /tmp could actually fit into its tmpfs using free memory alone (i.e., without swap and without evicting anything from memory).

    485 of the machines surveyed (96.6%) could store all of their /tmp in a tmpfs, in free memory alone -- i.e. without evicting anything from cache.

    Now, if we take each machine, and sum each system's "Free memory" and "Free swap", and check its /tmp usage, we'll see that 498 of the systems surveyed (99.2%) could store the entire contents of /tmp in tmpfs free memory + swap available. The remaining 4 are our extreme outliers identified earlier, with /tmp usages of [101 GB, 42 GB, 13 GB, 10 GB].
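
    If you'd like to reproduce that model against the raw data, the per-host check boils down to something like this (a sketch only; the file name is a placeholder, and the column positions are my assumptions based on the header description above, with column 2 as /tmp usage and column 3 as total memory, both in KB):

    $ awk -F'\t' '$2 <= $3/2 { fits++ } END { printf "%d of %d servers fit /tmp in half of RAM\n", fits, NR }' tmp-survey.tsv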

    Performance of tmpfs versus ext4-on-SSD

    Finally, let's look at some raw (albeit rough) read and write performance numbers, using a simple dd model.

    My /tmp is on a tmpfs:
    kirkland@x250:/tmp⟫ df -h .
    Filesystem Size Used Avail Use% Mounted on
    tmpfs 7.7G 2.6M 7.7G 1% /tmp

    Let's write 2 GB of data:
    kirkland@x250:/tmp⟫ dd if=/dev/zero of=/tmp/zero bs=2G count=1
    0+1 records in
    0+1 records out
    2147479552 bytes (2.1 GB) copied, 1.56469 s, 1.4 GB/s

    And let's write it completely synchronously:
    kirkland@x250:/tmp⟫ dd if=/dev/zero of=./zero bs=2G count=1 oflag=dsync
    0+1 records in
    0+1 records out
    2147479552 bytes (2.1 GB) copied, 2.47235 s, 869 MB/s

    Let's try the same thing to my Intel SSD:
    kirkland@x250:/local⟫ df -h .
    Filesystem Size Used Avail Use% Mounted on
    /dev/dm-0 217G 106G 100G 52% /

    And write 2 GB of data:
    kirkland@x250:/local⟫ dd if=/dev/zero of=./zero bs=2G count=1
    0+1 records in
    0+1 records out
    2147479552 bytes (2.1 GB) copied, 7.52918 s, 285 MB/s

    And let's redo it completely synchronously:
    kirkland@x250:/local⟫ dd if=/dev/zero of=./zero bs=2G count=1 oflag=dsync
    0+1 records in
    0+1 records out
    2147479552 bytes (2.1 GB) copied, 11.9599 s, 180 MB/s

    Let's go back and read the tmpfs data:
    kirkland@x250:~⟫ dd if=/tmp/zero of=/dev/null bs=2G count=1
    0+1 records in
    0+1 records out
    2147479552 bytes (2.1 GB) copied, 1.94799 s, 1.1 GB/s

    And let's read the SSD data:
    kirkland@x250:~⟫ dd if=/local/zero of=/dev/null bs=2G count=1
    0+1 records in
    0+1 records out
    2147479552 bytes (2.1 GB) copied, 2.55302 s, 841 MB/s

    Now, let's create 10,000 small files (1 KB) in tmpfs:
    kirkland@x250:/tmp/foo⟫ time for i in $(seq 1 10000); do dd if=/dev/zero of=$i bs=1K count=1 oflag=dsync ; done
    real 0m15.518s
    user 0m1.592s
    sys 0m7.596s

    And let's do the same on the SSD:
    kirkland@x250:/local/foo⟫ time for i in $(seq 1 10000); do dd if=/dev/zero of=$i bs=1K count=1 oflag=dsync ; done
    real 0m26.713s
    user 0m2.928s
    sys 0m7.540s

    For better or worse, I don't have any spinning disks, so I couldn't repeat the tests there.

    So on these rudimentary read/write tests via dd, I got 869 MB/s - 1.4 GB/s write to tmpfs and 1.1 GB/s read from tmpfs, and 180 MB/s - 285 MB/s write to SSD and 841 MB/s read from SSD.

    Surely there are more scientific ways of measuring I/O to tmpfs and physical storage, but I'm confident that, by any measure, you'll find tmpfs extremely fast when tested against even the fastest disks and filesystems.

    Summary

    • /tmp usage
      • 98.8% of the servers surveyed use less than 4.8 GB of /tmp
      • 96.2% use less than 1.0 GB of /tmp
      • 73.7% use less than 1.0 MB of /tmp
      • The mean/median/mode are [453 MB / 16 KB / 4 KB]
    • Total memory available
      • 98.6% of the servers surveyed have at least 2.0 GB of RAM
      • 88.0% have at least 4.0 GB of RAM
      • 57.4% have at least 8.0 GB of RAM
      • The mean/median/mode are [24 GB / 10 GB / 4 GB]
    • Swap available
      • 96.6% of the servers surveyed have some swap space available
      • The mean/median/mode are [13 GB / 6.3 GB / 3 GB]
    • Swap used
      • 94.8% of the servers surveyed are using less than 4 GB of swap
      • 92.2% are using less than 1 GB of swap
      • 72.9% are using less than 100 MB of swap
      • The mean/median/mode are [657 MB / 18 MB / 0 KB]
    • Modeling /tmp on tmpfs
      • 96.6% of the machines surveyed could store all of the data they currently have stored in /tmp, in free memory alone, without evicting anything from cache
      • 99.2% of the machines surveyed could store all of the data they currently have stored in /tmp in free memory + free swap
      • 4 of the 502 machines surveyed (0.8%) would need special handling, reconfiguration, or more swap

    Conclusion


    • Can /tmp be mounted as a tmpfs always, everywhere?
      • No, we did identify a few systems (4 out of 502 surveyed, 0.8% of total) consuming inordinately large amounts of data in /tmp (101 GB, 42 GB), and with insufficient available memory and/or swap.
      • But those were very much the exception, not the rule.  In fact, 96.6% of the systems surveyed could fit all of /tmp in half of the freely available memory in the system.
    • Is this the first time anyone has suggested or tried this as a Linux/UNIX system default?
      • Not even remotely.  Solaris has used tmpfs for /tmp for 22 years, and Fedora and ArchLinux for at least the last 4 years.
    • Is tmpfs really that much faster, more efficient, more secure?
      • Damn skippy.  Try it yourself!
    :-Dustin

    Read more
    Daniel Holbach

    I can’t wait for UbuCon Summit to start. The list of attendees is growing, and with some of the folks it’s been ages since I last met them in person. For me that’s the number one reason to be there. Catching up with everyone will be great.

    The schedule for UbuCon Summit is looking fantastic as well. We have many many great talks and demos lined up from a really broad spectrum; there’s going to be much to learn about, and there are going to be more surprises coming up in the unconference part of UbuCon.

    And there’s more:

    Anything I missed you’re looking forward to? Let me know in the comments.
    Read more

    Prakash

    A few days before Thanksgiving, George Hotz, a 26-year-old hacker, invites me to his house in San Francisco to check out a project he’s been working on. He says it’s a self-driving car that he had built in about a month. The claim seems absurd. But when I turn up that morning, in his garage there’s a white 2016 Acura ILX outfitted with a laser-based radar (lidar) system on the roof and a camera mounted near the rearview mirror. A tangle of electronics is attached to a wooden board where the glove compartment used to be, a joystick protrudes where you’d usually find a gearshift, and a 21.5-inch screen is attached to the center of the dash. “Tesla only has a 17-inch screen,” Hotz says.

    Read More: http://www.bloomberg.com/features/2015-george-hotz-self-driving-car/

    Read more
    Daniel Holbach


    I’m very excited about UbuCon Summit which will bring many many Ubuntu people from all parts of its community together in January. David Planella did a great job explaining why this event is going to be just fantastic.

    I look forward to meeting everyone and particularly look forward to what we’ve got to show in terms of Snappy Ubuntu Core.

    Manik Taneja and Sergio Schvezov

    We are going to have Manik Taneja and Sergio Schvezov there who are going to give the following talk:

    Internet of Things gets ‘snappy’ with Ubuntu Core

    Snappy Ubuntu Core is the new rendition of Ubuntu, designed from the ground up to power the next generation of IoT devices. The same Ubuntu and its vast ecosystem, but delivered in a leaner form, with state-of-the art security and reliable update mechanisms to ensure devices and apps are always up-to-date.

    This talk will introduce Ubuntu Core, the technologies of its foundations and the developer experience with Snapcraft. We will also discuss how public and branded stores can kickstart a thriving app ecosystem and how Ubuntu meets the needs of connected device manufacturers, entrepreneurs and innovators.

    And there’s more! Sergio Schvezov will also give the following workshop:

    Hands-on demo: creating Ubuntu snaps with Snapcraft

    An overview of the Snapcraft features and a demo of how easily a snap can be created using multiple parts from different sources. We will also show how to create a plugin for unhandled source types.

    In addition to that we are going to have a few nice things at our booth, so we can give you a Snappy experience there as well.

    If you want to find out more, like check the entire schedule or register for the event, do it at ubucon.org.

    I’m looking forward to seeing you there!
    Read more

    pitti

    The last two major autopkgtest releases (3.18 from November, and 3.19 fresh from yesterday) bring some new features that are worth spreading.

    New LXD virtualization backend

    3.19 debuts the new adt-virt-lxd virtualization backend. In case you missed it, LXD is an API/CLI layer on top of LXC which introduces proper image management, lets you seamlessly use images and containers on remote locations while intelligently caching them locally, automatically configures performant storage backends like zfs or btrfs, and just generally feels really clean and much simpler to use than the “classic” LXC.

    Setting it up is not complicated at all. Install the lxd package (possibly from the backports PPA if you are on 14.04 LTS), and add your user to the lxd group. Then you can add the standard LXD image server with

      lxc remote add lco https://images.linuxcontainers.org:8443
    

    and use the image to run e. g. the libpng test from the archive:

      adt-run libpng --- lxd lco:ubuntu/trusty/i386
      adt-run libpng --- lxd lco:debian/sid/amd64
    

    The adt-virt-lxd.1 manpage explains this in more detail, also how to use this to run tests in a container on a remote host (how cool is that!), and how to build local images with the usual autopkgtest customizations/optimizations using adt-build-lxd.

    I have btrfs running on my laptop, and LXD/autopkgtest automatically use that, so the performance really rocks. Kudos to Stéphane, Serge, Tycho, and the other LXD authors!

    The motivation for writing this was to make it possible to move our armhf testing into the cloud (which for $REASONS requires remote containers), but I now have a feeling that soon this will completely replace the existing adt-virt-lxc virt backend, as its much nicer to use.

    It is covered by the same regression tests as the LXC runner, and from the perspective of package tests that you run in it it should behave very similar to LXC. The one problem I’m aware of is that autopkgtest-reboot-prepare is broken, but hardly anything is using that yet. This is a bit complicated to fix, but I expect it will be in the next few weeks.

    MaaS setup script

    While most tests are not particularly sensitive about which kind of hardware/platform they run on, low-level software like the Linux kernel, GL libraries, X.org drivers, or Mir very much are. There is a plan for extending our automatic tests to real hardware for these packages, and being able to run autopkgtests on real iron is one important piece of that puzzle.

    MaaS (Metal as a Service) provides just that — it manages a set of machines and provides an API for installing, talking to, and releasing them. The new maas autopkgtest ssh setup script (for the adt-virt-ssh backend) brings together autopkgtest and real hardware. Once you have a MaaS setup, get your API key from the web UI, then you can run a test like this:

      adt-run libpng --- ssh -s maas -- \
         --acquire "arch=amd64 tags=touchscreen" -r wily \
         http://my.maas.server/MAAS 123DEADBEEF:APIkey
    

    The required arguments are the MaaS URL and the API key. Without any further options you will get any available machine installed with the default release. But usually you want to select a particular one by architecture and/or tags, and install a particular distro release, which you can do with the -r/--release and --acquire options.

    Note that this is not wired into Ubuntu’s production CI environment, but it will be.

    Selectively using packages from -proposed

    Up until a few weeks ago, autopkgtest runs in the CI environment were always seeing/using the entirety of -proposed. This often led to lockups where an application foo and one of its dependencies libbar got a new version in -proposed at the same time, and on test regressions it was not clear at all whose fault it was. This often led to perfectly good packages being stuck in -proposed for a long time, and a lot of manual investigation about root causes.


    These days we are using a more fine-grained approach: A test run is now specific for a “trigger”, that is, the new package in -proposed (e. g. a new version of libbar) that caused the test (e. g. for “foo”) to run. autopkgtest sets up apt pinning so that only the binary packages for the trigger come from -proposed, the rest from -release. This provides much better isolation between the mush of often hundreds of packages that get synced or uploaded every day.

    This new behaviour is controlled by an extension of the --apt-pocket option. So you can say

      adt-run --apt-pocket=proposed=src:foo,libbar1,libbar-data ...
    

    and then only the binaries from the foo source, libbar1, and libbar-data will come from -proposed, everything else from -release.
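
    Under the hood this is ordinary apt pinning inside the testbed. The snippet below is purely illustrative (the file name, suite name, priorities, and the expansion of src:foo into its binary packages are assumptions, not the exact file adt-run writes):

      # sketch of the pinning shape: keep -proposed at low priority by default,
      # but prefer the trigger's binaries from it
      cat > /etc/apt/preferences.d/adt-trigger <<EOF
      Package: *
      Pin: release a=wily-proposed
      Pin-Priority: 100

      Package: foo libbar1 libbar-data
      Pin: release a=wily-proposed
      Pin-Priority: 995
      EOF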

    Caveat: Unfortunately apt’s pinning is rather limited. As soon as any of the explicitly listed packages depends on a package or version that is only available in -proposed, apt falls over and refuses the installation instead of taking the required dependencies from -proposed as well. In that case, adt-run falls back to the previous behaviour of using no pinning at all. (This unfortunately got worse with apt 1.1; bug report to be done.) But it’s still helpful in many cases that don’t involve library transitions or other package sets that need to land in lockstep.

    Unified testbed setup script

    There are a number of changes that need to be made to testbeds so that tests can run with maximum performance (like running dpkg through eatmydata, disabling apt translations, or automatically using the host’s apt-cacher-ng), reliable apt sources, and in a minimal environment (to detect missing dependencies and avoid interference from unrelated services — these days the standard cloud images have a lot of unnecessary fat). There is also a choice whether to apply these only once (every day) to an autopkgtest specific base image, or on the fly to the current ephemeral testbed for every test run (via --setup-commands). Over time this led to quite a lot of code duplication between adt-setup-vm, adt-build-lxc, the new adt-build-lxd, cloud-vm-setup, and create-nova-image-new-release.

    I have now cleaned this up: there is just a single setup-commands/setup-testbed script which works for all kinds of testbeds (LXC, LXD, QEMU images, cloud instances), both for preparing an image with adt-buildvm-ubuntu-cloud, adt-build-lx[cd], or nova, and for preparing just the current ephemeral testbed via --setup-commands.
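
    For example, to apply it on the fly to an ephemeral QEMU testbed, something like this should do (a hedged sketch: the image name is whatever adt-buildvm-ubuntu-cloud produced for you, and a bare script name is normally looked up in autopkgtest's setup-commands directory, otherwise pass the full path):

      adt-run libpng --setup-commands=setup-testbed --- qemu adt-wily-amd64-cloud.img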

    While this is mostly an internal refactoring, it does impact users who previously used the adt-setup-vm script for e. g. building Debian images with vmdebootstrap. This script is now gone, and the generic setup-testbed entirely replaces it.
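
    If you relied on that, something along these lines should work instead (a rough sketch with indicative vmdebootstrap options rather than a tested recipe; adjust distribution, size, user, and paths to taste):

      vmdebootstrap --verbose --serial-console --distribution=sid \
         --user=adt/adt --size=10000000000 --grub --image=adt-sid.raw \
         --customize=/usr/share/autopkgtest/setup-commands/setup-testbed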

    Misc

    Aside from the above, every new version has a handful of bug fixes and minor improvements, see the git log for details. As always, if you are interested in helping out or contributing a new feature, don’t hesitate to contact me or file a bug report.

    Read more
    David Henningsson

    2.1 surround sound is (by a very unscientific measure) the third most popular surround speaker setup, after 5.1 and 7.1. Yet ALSA and PulseAudio have long supported more unusual setups such as 4.0 and 4.1, but not 2.1. It took until 2015 to get all the pieces in the stack ready for 2.1 as well.

    The problem

    So what made adding 2.1 surround more difficult than other setups? Well, first and foremost, because ALSA used to have a fixed mapping of channels. The first six channels were defined as:

    1. Front Left
    2. Front Right
    3. Rear Left
    4. Rear Right
    5. Front Center
    6. LFE / Subwoofer

    Thus, a four channel stream would default to the first four, which would then be a 4.0 stream, and a three channel stream would default to the first three. The only way to send a 2.1 channel stream would then be to send a six channel stream with three channels being silence.

    This was not good enough, because some cards, including laptops with internal subwoofers, would only support streaming four channels maximum.

    (To add further confusion, it seemed some cards wanted the subwoofer signal on the third channel of four, and others wanted the same signal on the fourth channel of four instead.)

    ALSA channel map API

    The first part of the solution was a new alsa-lib API for channel mapping, allowing drivers to advertise what channel maps they support, and alsa-lib to expose this information to programs (see snd_pcm_query_chmaps, snd_pcm_get_chmap and snd_pcm_set_chmap).
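
    You can poke at this from a terminal with speaker-test from a recent alsa-utils, which can request a specific channel map (a hedged example: the device name and the exact map syntax are assumptions for a 2.1-capable card, see speaker-test(1)):

      # ask for a front-left/front-right/LFE map on the first card (illustrative)
      speaker-test -D hw:0 -c 3 --chmap FL,FR,LFE -t wav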

    The second step was for the alsa-lib route plugin to make use of this information. With that, alsa-lib could itself determine whether the hardware was 5.1 or 2.1, and change the number of channels automatically.

    PulseAudio bass / treble filter

    With the alsa-lib additions, just adding another channel map was easy.
    However, there was another problem to deal with. When listening to stereo material, we would like the low frequencies, and only those, to be played back from the subwoofer. These frequencies should also be removed from the other channels. In some cases, the hardware would have a built-in filter to do this for us, so then it was just a matter of setting enable-lfe-remixing in daemon.conf. In other cases, this needed to be done in software.

    Therefore, we’ve integrated a crossover filter into PulseAudio. You can configure it by setting lfe-crossover-freq in daemon.conf.
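
    Put together, a software 2.1 setup boils down to a couple of lines in daemon.conf (a minimal sketch; 150 Hz is just an example crossover frequency, and if your hardware has its own low-pass filter the first setting alone is enough):

      # /etc/pulse/daemon.conf (or ~/.config/pulse/daemon.conf)
      enable-lfe-remixing = yes
      lfe-crossover-freq = 150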

    The hardware

    If you have a laptop with an internal subwoofer, chances are that it – with all these changes to the stack – still does not work, because the HDA standard (which is what your laptop very likely uses for analog audio) does not have much of a channel mapping standard either! So vendors might decide to do things differently, which means that every single hardware model might need a patch in the kernel.

    If you don’t have an internal subwoofer, but a separate external one, you might be able to use hdajackretask to reconfigure your headphone jack to an “Internal Speaker (LFE)” instead. But the downside of that is that you then can’t use the jack as a headphone jack…

    Do I have it?

    In Ubuntu, it’s been working since the 15.04 release (vivid). If you’re not running Ubuntu, you need alsa-lib 1.0.28, PulseAudio 7, and a kernel from, say, mid 2014 or later.

    Acknowledgements

    Takashi Iwai wrote the channel mapping API, and also provided help and fixes for the alsa-lib route plugin work.

    The crossover filter code was imported from CRAS (but after refactoring and cleanup, there was not much left of that code).

    Hui Wang helped me write and test the PulseAudio implementation.

    PulseAudio upstream developers, especially Alexander Patrakov, did a thorough review of the PulseAudio patch set.

    Read more
    David Planella

    Ubuntu is about people

    Ubuntu has been around for just over a decade. That’s a long time for a project built around a field that evolves at such a rapid pace as computing. And not just any computing – software made for (and by) human beings, who have also inevitably grown and evolved with Ubuntu.

    Over the years, Ubuntu has changed and has led change to keep thriving in such a competitive space. The first years were particularly exciting: there was so much to do, countless possibilities and plenty of opportunities to contribute.

    Everyone who has been around for a while has fond memories of the Ubuntu Developer Summit, UDS for short: an in-person event run every six months to plan the next version of the OS. Representatives of different areas of the community came together every half year, somewhere in the US or Europe, to discuss, design and lay out the next cycle, both in terms of community and technology.

    It was in this setting that Ubuntu governance and leadership were discussed, that decisions about which default apps to include were made, that the switch to Unity’s new UX happened, and much more. It was a particularly intense event, as discussions often continued into the hallways and sometimes up to the bar late at night.

    In a traditionally distributed community, where discussions and planning happen online and across timezones, getting physically together in one place helped us more effectively resolve complex issues, bring new ideas, and often agree to disagree in a respectful environment.

    Ubuntu Catalan team party

    This makes Ubuntu special

    Change takes courage, and it takes effort to think outside the box and follow it all the way through, but it is not always popular. I personally believe, though, that without disruptive changes we wouldn’t be where we are today: millions of devices shipped with Ubuntu pre-installed, leadership in the cloud space, Ubuntu phones shipped worldwide, the convergence story, Ubuntu on drones, IoT… and a strong, welcoming and thriving community.

    At some point, UDS morphed into UOS, an online-only event, which, despite its own merits and success, admittedly lacks the more personal component. This is where we are now, and this is not a write-up to hark back to the good old days, or to claim that all the decisions we’ve made were optimal – acknowledging those led by Canonical.

    Ubuntu has evolved, we’ve solved many of the technological issues we were facing in the early days, and in many areas Ubuntu as a platform “just works”. Where in the past we saw interest in contributing to the plumbing of the OS, today we see a trend where communities emerge to contribute by taking advantage of a platform to build upon.

    Ubuntu Convergence

    The full Ubuntu computer experience, in your pocket

    Yet Ubuntu is just as exciting as it was in those days. Think about carrying your computer running Ubuntu in your pocket and connecting it to your monitor at home for the full experience, think about a fresh and vibrant app developer community, think about an Open Source OS powering the next generation of connected devices and drones. The areas of opportunity to get involved are much more diverse than they have ever been.

    And while we have adapted to technological and social change in the project over the years, what hasn’t changed is one of the fundamental values of Ubuntu: its people.

    To me personally, when I put aside open source and exciting technical challenges, I am proud to be part of this community because it’s open and welcoming, it’s driven by collaboration, I keep meeting and learning from remarkable individuals, I’ve made friendships that have lasted years… and I could go on forever. We are essentially people who share a mission: that of bringing access to computing to everyone, via Free Software and open collaboration.

    And while over the years we have learnt to work productively in a remote environment, the need to socialize is still there and as important as ever to reaffirm the bonds that keep us together.

    Enter UbuCons.

    The rise of the UbuCons

    UbuCons are in-person conferences around the world, fully driven by teams of volunteers who are passionate about Ubuntu and about community. They are also a remarkable achievement, showing an exceptional commitment and organizational effort from Ubuntu advocates to make them happen.

    Unlike other big Ubuntu events such as release parties – which celebrate new releases every six months – UbuCons generally happen once a year. They vary in size, from tens to hundreds to thousands of attendees, and include talks by Ubuntu community members and cross-collaboration with other Open Source communities. Most importantly, they are always events to remember.

    UbuCons across the globe

    A network of UbuCons

    A few months back, at the Ubuntu Community Team we started thinking of how we could bring the community together in a similar way to what we used to do with a big central event, but also in a way that was sustainable and community-driven.

    The existing network of UbuCons came as the natural vehicle for this, and in this time we’ve been working closely with UbuCon organizers to take UbuCons up a notch. It is from this teamwork that initiatives such as the UbuContest leading up to UbuCon DE in Berlin were made possible, as well as more support for UbuCons worldwide in general: speakers and community donations to cover some of the organizational costs, for instance, or most recently the UbuCon site.

    It has been particularly rewarding for us to have played even a small part in this, where the full credit goes to the international teams of UbuCon organizers. Today, six UbuCons are running worldwide, with future plans for more.

    And enter the Summit

    Community power

    But we were not content yet. With each UbuCon covering a particular geographical area, we still felt a bigger, more centralized event was needed for the community to rally around.

    The idea of expanding to a bigger summit had already been brainstormed with members of the Ubuntu California LoCo in the months leading up to the last UbuCon @ SCALE in LA. Building on the initial concept, the vision for the Summit was penciled in at the Community Leadership Summit (CLS) 2015 together with representatives from the Ubuntu Community Council.

    An UbuCon Summit is a super-UbuCon, if you will: with some of the most influential members of the wider Ubuntu community, with first-class talk content, and with a space for discussions to help shape the future of particular areas of Ubuntu. It’s the evolution of an UbuCon.

    UbuCon Europe planning

    The usual suspects planning the next UbuCon Europe

    As a side note, I’m particularly happy to see that the US Summit organization has already set the wheels in motion for another summit in Europe next year. A couple of months ago I had the privilege to take part in one of the most reinvigorating online sessions I’ve attended in recent times, where a highly enthusiastic and highly capable team of organizers started laying out the plans for UbuCon Europe in Germany next year! But back to the topic…

    Today, the first UbuCon Summit in the US is brought to you by a passionate team of organizers in the Ubuntu California LoCo, the Ubuntu Community Team at Canonical and SCALE, who hope you enjoy it and contribute to the event as much as we are planning to :-)

    Jono Bacon, who we have to thank for participating in and facilitating the initial CLS discussions, wrote an excellent blog post on why you should go to UbuCon in LA in January, which I highly recommend you read.

    In a nutshell, here’s what to expect at the UbuCon Summit:
    – A two-day, two-track conference
    – User and developer talks by the best experts in the Ubuntu community
    – An environment to propose topics, participate and influence Ubuntu
    – Social events to network and get together with those who make Ubuntu
    – 100% free registration, although we encourage participants to also consider registering for the full 4 days of SCALE 14x, which hosts the UbuCon

    I’m really looking forward to meeting everyone there, to seeing old and new faces and getting together to keep the big Ubuntu wheels turning.

    The post Ubuntu is about people appeared first on David Planella.

    Read more