Canonical Voices

Posts tagged with 'ubuntu'

pitti

The last two major autopkgtest releases (3.18 from November, and 3.19 fresh from yesterday) bring some new features that are worth spreading the word about.

New LXD virtualization backend

3.19 debuts the new adt-virt-lxd virtualization backend. In case you missed it, LXD is an API/CLI layer on top of LXC which introduces proper image management, lets you seamlessly use images and containers on remote locations, intelligently caches them locally, automatically configures performant storage backends like zfs or btrfs, and just generally feels much cleaner and simpler to use than the “classic” LXC.

Setting it up is not complicated at all. Install the lxd package (possibly from the backports PPA if you are on 14.04 LTS), and add your user to the lxd group.
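
In shell terms, that boils down to something like this (a sketch; the PPA step applies to 14.04 only):

  sudo apt-get install lxd
  sudo adduser $USER lxd    # log out/in again (or use newgrp lxd) to pick up the new group

Then you can add the standard LXD image server with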

  lxc remote add lco https://images.linuxcontainers.org:8443

and use those images to run e. g. the libpng test from the archive:

  adt-run libpng --- lxd lco:ubuntu/trusty/i386
  adt-run libpng --- lxd lco:debian/sid/amd64

The adt-virt-lxd.1 manpage explains this in more detail, including how to use this to run tests in a container on a remote host (how cool is that!), and how to build local images with the usual autopkgtest customizations/optimizations using adt-build-lxd.

I have btrfs running on my laptop, and LXD/autopkgtest automatically use that, so the performance really rocks. Kudos to Stéphane, Serge, Tycho, and the other LXD authors!

The motivation for writing this was to make it possible to move our armhf testing into the cloud (which for $REASONS requires remote containers), but I now have a feeling that soon this will completely replace the existing adt-virt-lxc virt backend, as it’s much nicer to use.

It is covered by the same regression tests as the LXC runner, and from the perspective of the package tests you run in it, it should behave very similarly to LXC. The one problem I’m aware of is that autopkgtest-reboot-prepare is broken, but hardly anything uses that yet. It is a bit complicated to fix, but I expect it will be done in the next few weeks.

MaaS setup script

While most tests are not particularly sensitive about which kind of hardware/platform they run on, low-level software like the Linux kernel, GL libraries, X.org drivers, or Mir very much are. There is a plan for extending our automatic tests to real hardware for these packages, and being able to run autopkgtests on real iron is one important piece of that puzzle.

MaaS (Metal as a Service) provides just that — it manages a set of machines and provides an API for installing, talking to, and releasing them. The new maas autopkgtest ssh setup script (for the adt-virt-ssh backend) brings together autopkgtest and real hardware. Once you have a MaaS setup, get your API key from the web UI, then you can run a test like this:

  adt-run libpng --- ssh -s maas -- \
     --acquire "arch=amd64 tags=touchscreen" -r wily \
     http://my.maas.server/MAAS 123DEADBEEF:APIkey

The required arguments are the MaaS URL and the API key. Without any further options you will get any available machine installed with the default release. But usually you want to select a particular one by architecture and/or tags, and install a particular distro release, which you can do with the -r/--release and --acquire options.

Note that this is not wired into Ubuntu’s production CI environment, but it will be.

Selectively using packages from -proposed

Up until a few weeks ago, autopkgtest runs in the CI environment always saw/used the entirety of -proposed. This often led to lockups where an application foo and one of its dependencies libbar got a new version in -proposed at the same time, and on test regressions it was not at all clear whose fault it was. As a result, perfectly good packages were often stuck in -proposed for a long time, and a lot of manual investigation of root causes was required.


These days we are using a more fine-grained approach: a test run is now specific to a “trigger”, that is, the new package in -proposed (e. g. a new version of libbar) that caused the test (e. g. for “foo”) to run. autopkgtest sets up apt pinning so that only the binary packages for the trigger come from -proposed, and the rest from -release. This provides much better isolation between the mush of often hundreds of packages that get synced or uploaded every day.

This new behaviour is controlled by an extension of the --apt-pocket option. So you can say

  adt-run --apt-pocket=proposed=src:foo,libbar1,libbar-data ...

and then only the binaries from the foo source, libbar1, and libbar-data will come from -proposed, everything else from -release.
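
Under the hood this is plain apt pinning. A rough sketch of a preferences file with the same effect (illustrative only; the exact pins that autopkgtest generates may differ):

  # keep the release pocket preferred by default
  Package: *
  Pin: release a=wily-proposed
  Pin-Priority: 400

  # but take the trigger's binaries from -proposed
  Package: foo libbar1 libbar-data
  Pin: release a=wily-proposed
  Pin-Priority: 900

With apt's default priority of 500 for the release pocket, the 400 pin keeps -proposed below -release for everything except the explicitly listed binaries, for which 900 wins.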

Caveat: Unfortunately apt’s pinning is rather limited. As soon as any of the explicitly listed packages depends on a package or version that is only available in -proposed, apt falls over and refuses the installation instead of taking the required dependencies from -proposed as well. In that case, adt-run falls back to the previous behaviour of using no pinning at all. (This unfortunately got worse with apt 1.1; bug report to be done.) But it’s still helpful in the many cases that don’t involve library transitions or other package sets that need to land in lockstep.

Unified testbed setup script

There are a number of changes that need to be made to testbeds so that tests can run with maximum performance (like running dpkg through eatmydata, disabling apt translations, or automatically using the host’s apt-cacher-ng), with reliable apt sources, and in a minimal environment (to detect missing dependencies and avoid interference from unrelated services — these days the standard cloud images carry a lot of unnecessary fat). There is also a choice of whether to apply these only once (every day) to an autopkgtest-specific base image, or on the fly to the current ephemeral testbed for every test run (via --setup-commands). Over time this led to quite a lot of code duplication between adt-setup-vm, adt-build-lxc, the new adt-build-lxd, cloud-vm-setup, and create-nova-image-new-release.

I cleaned this up, and there is now just a single setup-commands/setup-testbed script which works for all kinds of testbeds (LXC, LXD, QEMU images, cloud instances), both for preparing an image with adt-buildvm-ubuntu-cloud, adt-build-lx[cd] or nova, and for preparing just the current ephemeral testbed via --setup-commands.
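
For example, applying it on the fly to an ephemeral testbed looks roughly like this (the script path depends on where autopkgtest is installed on your system):

  adt-run libpng --setup-commands=setup-commands/setup-testbed --- lxd lco:ubuntu/wily/amd64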

While this is mostly an internal refactoring, it does impact users who previously used the adt-setup-vm script for e. g. building Debian images with vmdebootstrap. That script is now gone, and the generic setup-testbed entirely replaces it.

Misc

Aside from the above, every new version has a handful of bug fixes and minor improvements, see the git log for details. As always, if you are interested in helping out or contributing a new feature, don’t hesitate to contact me or file a bug report.

Read more
David Henningsson

2.1 surround sound is (by a very unscientific measure) the third most popular surround speaker setup, after 5.1 and 7.1. Yet, ALSA and PulseAudio have long supported more unusual setups such as 4.0 and 4.1, but not 2.1. It took until 2015 to get all the pieces in the stack ready for 2.1 as well.

The problem

So what made adding 2.1 surround more difficult than other setups? Well, first and foremost, because ALSA used to have a fixed mapping of channels. The first six channels were decided to be:

1. Front Left
2. Front Right
3. Rear Left
4. Rear Right
5. Front Center
6. LFE / Subwoofer

Thus, a four channel stream would default to the first four, which would then be a 4.0 stream, and a three channel stream would default to the first three. The only way to send a 2.1 channel stream would then be to send a six channel stream with three channels being silence.

This was not good enough, because some cards, including laptops with internal subwoofers, would only support streaming four channels maximum.

(To add further confusion, it seemed some cards wanted the subwoofer signal on the third channel of four, and others wanted the same signal on the fourth channel of four instead.)

ALSA channel map API

The first part of the solution was a new alsa-lib API for channel mapping, allowing drivers to advertise what channel maps they support, and alsa-lib to expose this information to programs (see snd_pcm_query_chmaps, snd_pcm_get_chmap and snd_pcm_set_chmap).

The second step was for the alsa-lib route plugin to make use of this information. With that, alsa-lib could itself determine whether the hardware was 5.1 or 2.1, and change the number of channels automatically.
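
You can poke at channel maps from the command line, too. With a sufficiently recent alsa-utils, speaker-test accepts an explicit channel map (treat this as illustrative; option support and device names vary with your setup):

  # play a 3-channel test tone routed as Front Left, Front Right, LFE
  speaker-test -D hw:0 -c 3 -t sine --chmap=FL,FR,LFE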

PulseAudio bass / treble filter

With the alsa-lib additions, just adding another channel map was easy. However, there was another problem to deal with. When listening to stereo material, we would like the low frequencies, and only those, to be played back from the subwoofer. These frequencies should also be removed from the other channels. In some cases, the hardware would have a built-in filter to do this for us, so then it was just a matter of setting enable-lfe-remixing in daemon.conf. In other cases, this needed to be done in software.

Therefore, we’ve integrated a crossover filter into PulseAudio. You can configure it by setting lfe-crossover-freq in daemon.conf.
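
Put together, the relevant bits of daemon.conf might look like this (the frequency is just an example value, in Hz):

  ; /etc/pulse/daemon.conf
  enable-lfe-remixing = yes
  lfe-crossover-freq = 120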

The hardware

If you have a laptop with an internal subwoofer, chances are that it – with all these changes to the stack – still does not work, because the HDA standard (which is what your laptop very likely uses for analog audio) does not have much of a channel mapping standard either! So vendors might decide to do things differently, which means that every single hardware model might need a patch in the kernel.

If you don’t have an internal subwoofer but a separate external one, you might be able to use hdajackretask to reconfigure your headphone jack to an “Internal Speaker (LFE)” instead. The downside of that is that you then can’t use the jack as a headphone jack…

Do I have it?

In Ubuntu, it’s been working since the 15.04 release (vivid). If you’re not running Ubuntu, you need alsa-lib 1.0.28, PulseAudio 7, and a kernel from, say, mid 2014 or later.

Acknowledgements

Takashi Iwai wrote the channel mapping API, and also provided help and fixes for the alsa-lib route plugin work.

The crossover filter code was imported from CRAS (but after refactoring and cleanup, there was not much left of that code).

Hui Wang helped me write and test the PulseAudio implementation.

PulseAudio upstream developers, especially Alexander Patrakov, did a thorough review of the PulseAudio patch set.

Read more
David Planella

Ubuntu is about people

Ubuntu has been around for just over a decade. That’s a long time for a project built around a field that evolves at as rapid a pace as computing. And not just any computing – software made for (and by) human beings, who have also inevitably grown and evolved with Ubuntu.

Over the years, Ubuntu has changed and has led change to keep thriving in such a competitive space. The first years were particularly exciting: there was so much to do, countless possibilities and plenty of opportunities to contribute.

Everyone who has been around for a while has fond memories of the Ubuntu Developer Summit, UDS for short: an in-person event run every six months to plan the next version of the OS. Representatives of different areas of the community came together, somewhere in the US or Europe, to discuss, design and lay out the next cycle, both in terms of community and technology.

It was in this setting that Ubuntu governance and leadership were discussed, that decisions about which default apps to include were made, that the switch to Unity’s new UX happened, and much more. It was a particularly intense event, as discussions often continued into the hallways and sometimes up to the bar late at night.

In a traditionally distributed community, where discussions and planning happen online and across timezones, getting physically together in one place helped us more effectively resolve complex issues, bring new ideas, and often agree to disagree in a respectful environment.

Ubuntu Catalan team party

This makes Ubuntu special

Change takes courage; it takes effort to think outside the box and follow through, and it is not always popular. I personally believe, though, that without disruptive changes we wouldn’t be where we are today: millions of devices shipped with Ubuntu pre-installed, leadership in the cloud space, Ubuntu phones shipped worldwide, the convergence story, Ubuntu on drones, IoT… and a strong, welcoming and thriving community.

At some point, UDS morphed into UOS, an online-only event, which despite its own merits and success admittedly lacks the more personal component. This is where we are now, and this is not a write-up to hark back to the good old days, or to claim that all the decisions we’ve made were optimal – acknowledging those led by Canonical.

Ubuntu has evolved, we’ve solved many of the technological issues we were facing in the early days, and in many areas Ubuntu as a platform “just works”. Where in the past we saw interest in contributing to the plumbing of the OS, today we see a trend where communities emerge to contribute by taking advantage of a platform to build upon.

Ubuntu Convergence

The full Ubuntu computer experience, in your pocket

Yet Ubuntu is just as exciting as it was in those days. Think about carrying your computer running Ubuntu in your pocket and connecting it to your monitor at home for the full experience, think about a fresh and vibrant app developer community, think about an Open Source OS powering the next generation of connected devices and drones. The areas of opportunity to get involved are much more diverse than they have ever been.

And while we have adapted to technological and social change in the project over the years, what hasn’t changed is one of the fundamental values of Ubuntu: its people.

To me personally, when I put aside open source and exciting technical challenges, I am proud to be part of this community because it’s open and welcoming, it’s driven by collaboration, I keep meeting and learning from remarkable individuals, I’ve made friendships that have lasted years… and I could go on forever. We are essentially people who share a mission: that of bringing access to computing to everyone, via Free Software and open collaboration.

And while over the years we have learnt to work productively in a remote environment, the need to socialize is still there, and as important as ever to reaffirm the bonds that keep us together.

Enter UbuCons.

The rise of the UbuCons

UbuCons are in-person conferences around the world, fully driven by teams of volunteers who are passionate about Ubuntu and about community. They are also a remarkable achievement, showing an exceptional commitment and organizational effort from Ubuntu advocates to make them happen.

Unlike other big Ubuntu events such as release parties (celebrating new releases every six months), UbuCons generally happen once a year. They vary in size, going from tens to hundreds to thousands of attendees, and include talks by Ubuntu community members and cross-collaboration with other Open Source communities. Most importantly, they are always events to remember.

UbuCons across the globe

A network of UbuCons

A few months back, at the Ubuntu Community Team we started thinking about how we could bring the community together in a way similar to what we used to do with a big central event, but also in a way that was sustainable and community-driven.

The existing network of UbuCons came as the natural vehicle for this, and in this time we’ve been working closely with UbuCon organizers to take UbuCons up a notch. It was from this teamwork that initiatives such as the UbuContest leading up to UbuCon DE in Berlin were made possible, along with more support for worldwide UbuCons in general: in terms of speakers and community donations to cover some of the organizational costs, for instance, or most recently the UbuCon site.

It has been particularly rewarding for us to have played even a small part in this, where the full credit goes to the international teams of UbuCon organizers. Today, six UbuCons are running worldwide, with plans for more.

And enter the Summit

Community power

But we were not content yet. With each UbuCon covering a particular geographical area, we still felt a bigger, more centralized event was needed for the community to rally around.

The idea of expanding to a bigger summit had already been brainstormed with members of the Ubuntu California LoCo in the months leading up to the last UbuCon @ SCALE in LA. Building on the initial concept, the vision for the Summit was penciled in at the Community Leadership Summit (CLS) 2015 together with representatives from the Ubuntu Community Council.

An UbuCon Summit is a super-UbuCon, if you will: with some of the most influential members of the wider Ubuntu community, first-class talk content, and a space for discussions to help shape the future of particular areas of Ubuntu. It’s the evolution of an UbuCon.

UbuCon Europe planning

The usual suspects planning the next UbuCon Europe

As a side note, I’m particularly happy to see that the US Summit organization has already set the wheels in motion for another summit in Europe next year. A couple of months ago I had the privilege to take part in one of the most reinvigorating online sessions I’ve been in in recent times, where a highly enthusiastic and highly capable team of organizers started laying out the plans for UbuCon Europe in Germany next year! But back to the topic…

Today, the first UbuCon Summit in the US is brought to you by a passionate team of organizers from the Ubuntu California LoCo, the Ubuntu Community Team at Canonical and SCALE, who hope you enjoy it and contribute to the event as much as we are planning to :-)

Jono Bacon, who we have to thank for participating in and facilitating the initial CLS discussions, wrote an excellent blog post on why you should go to UbuCon in LA in January, which I highly recommend you read.

In a nutshell, here’s what to expect at the UbuCon Summit:
– A two-day, two-track conference
– User and developer talks by the best experts in the Ubuntu community
– An environment to propose topics, participate and influence Ubuntu
– Social events to network and get together with those who make Ubuntu
– 100% free registration, although we encourage participants to also consider registering for the full 4 days of SCALE 14x, which hosts the UbuCon

I’m really looking forward to meeting everyone there, to seeing old and new faces and getting together to keep the big Ubuntu wheels turning.


Read more
Prakash

If you purchased your computer in the last decade, it probably has a 64-bit-capable processor. The transition to 64-bit operating systems has been a long one, but Google is about to give Linux users another push. In March 2016, Google will stop releasing Chrome for 32-bit Linux distributions.

Read More: http://www.pcworld.com/article/3010404/browsers/googles-killing-chrome-support-for-32-bit-linux-ubuntu-1204-and-debian-7.html

Read more
Daniel Holbach

It’s been a while since our last Snappy Clinic (here’s a link to all videos) and since Ubuntu Online Summit a lot of great things have happened in Snapcraft:

Among the changes: a nil plugin, support for pip packages, support for globs in the copy plugin, a nodejs plugin, go-packages support in the go plugin, countless bugfixes and tests, a more beautiful interface and more documentation.

The above, and getting Sergio Schvezov on camera, are reasons enough for us to have another Snappy Clinic.

See you later!

Read more

Timo Jyrinki

This is a burst of notes that I wrote in an e-mail in June when asked about it, and I won’t have any better steps now since I don’t remember even that much any more. I figured it’s better to have it out than not.

So... if you want to use LUKS In-Place Conversion Tool, the notes below on converting a shipped-with-Ubuntu Dell XPS 13 Developer Edition (2015 Intel Broadwell model) may help you. There were a couple of small learnings to be had...
 
The page http://www.johannes-bauer.com/linux/luksipc/ itself is good and without errors, although it funnily uses reiserfs as an example. It was only a bit unclear why I saved the initial_keyfile.bin, since it was then removed in the next step (I guess it’s for the case where you want to have a recovery file hidden somewhere in case you forget the passphrase).

For using the tool I booted from a 14.04.2 LTS USB live image and operated there, including downloading and compiling luksipc in the live session. The exact reason for resizing before luksipc was a bit unclear to me at first, so I simply resized the main rootfs partition and left unallocated space in the partition table.


Then finally I ran ./luksipc -d /dev/sda4 etc.
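
Spelled out from memory, the conversion steps were roughly as follows (device names and sizes are from my machine; adapt them, and read the luksipc documentation before trying this):

sudo e2fsck -f /dev/sda4                 # required before shrinking
sudo resize2fs /dev/sda4 210G            # leave room for the LUKS header
sudo ./luksipc -d /dev/sda4              # the in-place conversion itself
sudo cryptsetup luksAddKey --key-file initial_keyfile.bin /dev/sda4   # add a passphrase
sudo cryptsetup luksOpen /dev/sda4 myroot
sudo mount /dev/mapper/myroot /mnt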


I realized I wanted /boot to be on an unencrypted partition, to be able to load the kernel + initrd from grub before entering into LUKS unlocking. I couldn’t resize the LUKS partition anymore since it was encrypted… So I resized what I think was the empty small DIAGS partition (maybe used for some system diagnostics, I don’t know), or possibly the next one, which is the actual recovery partition one can reinstall the pre-installed Ubuntu from. And naturally I had some problems, because the vfatresize tool didn’t do what I wanted it to do, and gparted simply crashed when I first tried to use it for the same. Anyway, when done with getting some extra free space somewhere, I used the remaining 350MB for /boot, to which I copied the rootfs’s /boot contents.

After adding the passphrase in LUKS I had everything encrypted and decryptable, but obviously I could only access it from a live session via manual cryptsetup luksOpen + mount /dev/mapper/myroot commands. I needed to configure GRUB, and I needed to do it with grub-efi-amd64, which was a bit unfamiliar to me. There’s also grub-efi-amd64-signed, which I have installed now, but I’m not sure if it was required for the configuration. Secure boot is not enabled by default in the BIOS, so maybe it isn’t needed.


I did the GRUB installation – I think inside the rootfs chroot, where I also mounted /dev/sda6 as /boot (inside the rootfs chroot), i.e. mounted dev and sys with -o bind under the chroot (from outside the chroot) and mount -t proc too. I did a lot of trial and error, so I surely also tried from outside the chroot, in the live session, using some parameters to point to the mounted rootfs’s directories…
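
Reconstructed from memory, the chroot preparation was along these lines (the partition numbers match the tables further down; yours will differ):

sudo mount /dev/mapper/myroot /mnt
sudo mount /dev/sda6 /mnt/boot
sudo mount /dev/sda1 /mnt/boot/efi
for d in dev sys; do sudo mount -o bind /$d /mnt/$d; done
sudo mount -t proc proc /mnt/proc
sudo chroot /mnt
# and inside the chroot:
grub-install
update-grub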


I definitely needed to install cryptsetup etc. inside the encrypted rootfs with apt, and I remember spending some time checking whether they made it into the initrd correctly after I executed mkinitramfs/update-initramfs inside the chroot.


At the end I had grub asking for the password correctly at bootup. Obviously I had edited the rootfs’s /etc/fstab to include the new /boot partition, I changed / to be “/dev/mapper/myroot / ext4 errors=remount-ro 0 1”, kept /boot/efi coming from /dev/sda1 and so on. I had also added “myroot /dev/sda4 none luks” to /etc/crypttab. I also have GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:myroot root=/dev/mapper/myroot" in /etc/default/grub.
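
Summarised, the configuration bits were along these lines (illustrative; UUIDs would be more robust than raw device names):

# /etc/fstab
/dev/mapper/myroot  /          ext4  errors=remount-ro  0  1
/dev/sda6           /boot      ext4  defaults           0  2
/dev/sda1           /boot/efi  vfat  umask=0077         0  1

# /etc/crypttab
myroot  /dev/sda4  none  luks

# /etc/default/grub
GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:myroot root=/dev/mapper/myroot"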


The only thing I did save from the live session was the original partition table if I want to revert.


So the original was:

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 500118192 sectors, 238.5 GiB
Logical sector size: 512 bytes
...
First usable sector is 34, last usable sector is 500118158
Partitions will be aligned on 2048-sector boundaries
Total free space is 6765 sectors (3.3 MiB)
 
Number  Start (sector)    End (sector)  Size       Code  Name
1            2048         1026047   500.0 MiB   EF00  EFI system partition
2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
3         1107968         7399423   3.0 GiB     0700  Basic data partition
4         7399424       467013631   219.2 GiB   8300
5       467017728       500117503   15.8 GiB    8200

And I now have:


Number  Start (sector)    End (sector)  Size       Code  Name
1            2048         1026047   500.0 MiB   EF00  EFI system partition
2         1026048         1107967   40.0 MiB    FFFF  Basic data partition
3         1832960         7399423   2.7 GiB     0700  Basic data partition
4         7399424       467013631   219.2 GiB   8300
5       467017728       500117503   15.8 GiB    8200
6         1107968         1832959   354.0 MiB   8300

So it seems I did not edit DIAGS (which was also originally just 40MB) but did something with the recovery partition while preserving its contents. It’s a FAT partition, so maybe I was able to somehow resize it after all.


The 16GB partition is the default swap partition. I have not encrypted it, at least not yet; I tend never to run into swap anyway in my normal use with 8GB of RAM.


If you go this route, good luck! :D

Read more
Daniel Holbach

The Ubuntu Community Appreciation Day is a really nice tradition and it’s always easy to think of somebody I could thank (thanks, Ahmed Shams, for setting it up in the first place!). Narrowing down my list of thank-yous to just one or two for a blog post is much harder for me.

Read more

Michael Hall

I’ve just published the most recent Community Donations Report highlighting where donations made to the Ubuntu community have been used by members of that community to promote and improve Ubuntu. In this report I’ve included links to write-ups detailing how those funds were put to use.

Over the past two years these donations have allowed Ubuntu Members to travel and speak at events, host local events, accelerate development and testing, and much more. Thank you to all who have donated to this fund. You can help us do more by donating to the fund and helping spread the word about it, both to potential donors and potential beneficiaries.

Read more
Daniel Holbach

Ubuntu Online Summit featured more than 70 sessions this time around and quite a big turnout. You can find the full schedule with links to session videos and session notes on summit.ubuntu.com.

Here’s a quick summary of what happened in Snappy Ubuntu Core land:

  • Testing Snappy: In this Show & Tell session Leo Arias showcased a lot of the QA work which has been done on Ubuntu Core, along with many useful techniques to run tests and easily bring up Snappy in a number of different scenarios.
  • Creating more Snappy frameworks: Frameworks are an effective way to bring functionality to Ubuntu Core which can then be shared by apps. The session attracted quite a few users of Snappy who wanted to know if their use-case could be addressed by a framework. We discussed some more technical difficulties, possible solutions and learned that bluetooth and connectivity (based on network-manager) frameworks are in the works.
  • Snappy Clinic: bringing ROS apps to Snappy Ubuntu Core: Ted Gould showed off the great work which has been put into the catkin plugin of Snapcraft. Taking a simple ROS app and bringing it to Ubuntu Core is very easy. The interest from members of the ROS community was great to see and their feedback will help us improve the support even further.
  • Snap packages for phone and desktop apps: Alejandro Cura and Kyle Fazzari brought up their analysis of snappy on the phone/desktop and discussed a plan on what would need to land to make snappy apps on the Ubuntu desktop and phone a reality.
  • Your feedback counts: the Snappy onboarding experience: This session brought together a number of different users of Snappy who shared their experience and what they would like to do. The feedback was great and will be factored into our upcoming documentation plans.
  • Snappy Developer Community Resources: In this session Thibaut Rouffineau and I had a chat about our online support options and community resources. We gathered a number of ideas and will look into creating workshop and presentation materials this cycle as well.
  • Porting popular apps/software to Snappy: Many interesting apps and appliances exist for a variety of boards, most notably the Raspberry Pi. We put together a plan on how we could start a community initiative for bringing them over to Snappy Ubuntu Core.

Thanks to everyone who participated and helped to make this such a great UOS!

Read more
Dustin Kirkland


Picture yourself containers on a server
With systemd trees and spawned tty's
Somebody calls you, you answer quite quickly
A world with the density so high

    - Sgt. Graber's LXD Smarts Club Band

Last week, we proudly released Ubuntu 15.10 (Wily) -- the final developer snapshot of the Ubuntu Server before we focus the majority of our attention on quality, testing, performance, documentation, and stability for the Ubuntu 16.04 LTS cycle in the next 6 months.

Notably, LXD has been promoted to the Ubuntu Main archive, now commercially supported by Canonical.  That has enabled us to install LXD by default on all Ubuntu Servers, from 15.10 forward.
Join us for an interactive, live webinar on November 12th at 5pm BST/12pm EST led by James Page, where he will demonstrate LXD as the fastest hypervisor in OpenStack!
That means that every Ubuntu server -- Intel, AMD, ARM, POWER, and even Virtual Machines in the cloud -- is now a full machine container hypervisor, capable of hosting hundreds of machine containers, right out of the box!

LXD in the Sky with Diamonds!  Well, LXD is in the Cloud with Diamond level support from Canonical, anyway.  You can even test it in your web browser here.

The development tree of Xenial (Ubuntu 16.04 LTS) has already inherited this behavior, and we will celebrate this feature broadly through our use of LXD containers in Juju, MAAS, and the reference platform of Ubuntu OpenStack, as well as the new nova-lxd hypervisor in the OpenStack Autopilot within Landscape.

While the young and the restless are already running Wily Ubuntu 15.10, the bold and the beautiful are still bound to their Trusty Ubuntu 14.04 LTS servers.

At Canonical, we understand both motivations, and this is why we have backported LXD to the Trusty archives, for safe, simple consumption and testing of this new generation of machine containers there, on your stable LTS.

Installing LXD on Trusty simply requires enabling the trusty-backports pocket, and installing the lxd package from there, with these 3 little commands:

sudo sed -i -e "/trusty-backports/ s/^# //" /etc/apt/sources.list   # enable the trusty-backports pocket
sudo apt-get update; sudo apt-get dist-upgrade -y
sudo apt-get -t trusty-backports install lxd                        # install lxd from backports

In minutes, you can launch your first LXD containers.  First, inherit your new group permissions, so you can execute the lxc command as your non-root user.  Then, import some images, and launch a new container named lovely-rita.  Shell into that container, and examine the process tree, install some packages, check the disk and memory and cpu available.  Finally, exit when you're done, and optionally delete the container.

newgrp lxd                                # pick up your new lxd group membership
lxd-images import ubuntu --alias ubuntu   # import an Ubuntu image
lxc launch ubuntu lovely-rita             # launch a container named lovely-rita
lxc list                                  # see it running
lxc exec lovely-rita bash                 # shell into the container
ps -ef                                    # examine the process tree
apt-get update                            # install some packages
df -h                                     # check the disk...
free                                      # ...and memory...
cat /proc/cpuinfo                         # ...and cpu available
exit                                      # leave the container
lxc delete lovely-rita                    # optionally delete it

I was able to run over 600 containers simultaneously on my Thinkpad (x250, 16GB of RAM), and over 60 containers on an m1.small in Amazon (1.6GB of RAM).

We're very interested in your feedback, as LXD is one of the most important features of the Ubuntu 16.04 LTS.  You can learn more about LXD, view the source code, file bugs, discuss on the mailing list, and peruse the Linux Containers upstream projects.

With a little help from my friends!
:-Dustin

Read more
Michael Hall

With the release of the Wily Werewolf (Ubuntu 15.10) we have entered the Xenial Xerus (to be Ubuntu 16.04) development cycle. This will be another big milestone for Ubuntu, not just because it will be another LTS, but because it will be the last LTS before we achieve convergence. What we do here will not only be supported for the next 5 years, it will set the stage for everything to come over that time as we bring the desktop, phone and internet-of-things together into a single comprehensive, cohesive platform.

To help get us there, we have a track dedicated to Convergence at this week’s Ubuntu Online Summit where we will be discussing plans for desktops, phones, IoT and how they are going to come together.

Tuesday

We’ll start the convergence track at 1600 UTC with the Ubuntu Desktop team talking about the QA (Quality Assurance) plans for the next LTS desktop, which will provide another 5 years of support for Ubuntu users. We’ll end the day with the Kubuntu team, who are planning for their 16.04 (Xenial Xerus) release, at 1900 UTC.

Wednesday

The second day kicks off at 1400 UTC with plans for which version of the Qt toolkit will ship in 16.04, something that now affects both the KDE and Unity 8 flavors of Ubuntu. That will be followed by development planning for the next Unity 7 desktop version of Ubuntu at 1500 UTC, and a talk on how legacy apps (.deb and X11 based) might be supported in the new Snappy versions of Ubuntu. We will end the day with a presentation by the Unity 8 developers at 1800 UTC about how you can get started working on and contributing to the next generation desktop interface for Ubuntu.

Thursday

The third and last day of the Online Summit will begin with a live Questions and Answers session at 1400 UTC about the convergence plans in general with the project and engineering managers who are driving it forward. At 1500 UTC we’ll take a look at how those plans are being realized in some of the apps already being developed for use on Ubuntu phones and desktops. Then at 1600 UTC members of the design team will be talking to independent app developers about how to design their apps with convergence in mind. We will then end the convergence track with a summary from KDE developers on the state and direction of their converged UI, Plasma Mobile.

Plenaries

Outside of the Convergence track, you’ll want to watch Mark Shuttleworth’s opening keynote at 1400 UTC on Tuesday, and Canonical CEO Jane Silber’s live Q&A session at 1700 UTC on Wednesday.

Read more
Daniel Holbach

With Ubuntu Online Summit happening 3-5 November, it is really just around the corner. Time to check out the schedule and see what’s planned.

UOS is our online planning and show-and-tell event. We use a mix of Hangouts-on-Air, IRC and Etherpad to organise ourselves. It’s a great opportunity to get to know people, have your say and find out what’s planned the next weeks and months.

Register for the event at http://summit.ubuntu.com/.

This is also where you find the schedule for all the individual tracks, and if you click on the sessions themselves you can register your attendance as well, which will make it easy for you to see “your schedule” on the site and help you plan your days.

Here is a quick roundup of the sessions coming straight from the world of Snappy Ubuntu Core:

  1. 2015-11-03 15:00 UTC Testing Snappy
    Leo and Federico will cover both automated and manual approaches to testing snappy, and the work that goes into making sure each new version of snappy is ready to release. They will also offer advice on how you can help make snappy better!
  2. 2015-11-03 16:00 UTC Creating more Snappy frameworks
    Frameworks extend the functionality of Snappy Ubuntu Core systems in a very practical way. Let’s discuss how we can bring more services to Snappy Ubuntu Core.
  3. 2015-11-03 18:00 UTC Snappy Clinic: bringing ROS apps to Snappy Ubuntu Core
    Snapcraft integrates building and packaging software and is what we recommend to bring software to Snappy Ubuntu Core. Snapcraft has recently seen the addition of a catkin plugin. This will make it very easy to bring ROS applications to Snappy Ubuntu Core. Check out this demo by Sergio and Ted and you’ll see just how easy it is.
  4. 2015-11-05 14:00 UTC Your feedback counts: the Snappy onboarding experience
    In this session we want your feedback on your Snappy and Snapcraft onboarding experience:
    – How were you welcomed into the world of Snappy? Was the documentation sufficient? Were you able to find your way around?
    – We are planning some changes to the documentation and would like to present them and get feedback.
    – If you are a device builder, we would specifically like to get your input as well, so we can improve our device builder documentation.
  5. 2015-11-05 15:00 UTC Snappy Developer Community Resources
    In this session we want to figure out how the Snappy developer community can interact and get support, particularly:
    – support of askubuntu/stackoverflow
    – which G+ communities/Twitter/etc to use
    – which presentation and workshop materials we want to create and share
    – how we can support people who want to represent Snappy Ubuntu Core at events/hackathons/workshops
  6. 2015-11-05 16:00 UTC Porting popular apps/software to Snappy
    With hardware becoming cheaper (e.g. the Raspberry Pi), a number of apps and appliances have been built which are very popular today. It’d be great if it were easy for app developers to bring their apps to Snappy Ubuntu Core as well. Let’s figure out how developers can port them over, and we can get feedback about what should be easier.

Please note: there might be last-minute changes to the schedule, so make sure to stay up to date. If you have any questions, let me know.

Read more
David Planella

Ubuntu Online Summit
Running from Tuesday the 3rd to Thursday the 5th of November, a new edition of the Ubuntu Online Summit takes place next week.

Three days of free and live content all around Ubuntu and Open Source: discussions, tutorials, demos, presentations and Q+As for anyone to get in touch with the latest news and technologies, and get started contributing to Ubuntu.

The tracks

As in previous editions, the sessions run along multiple tracks that group related topics as a theme:

  • App & scope development: the SDK and developer platform roadmaps, phone core apps planning, developer workshops
  • Cloud: Ubuntu Core on clouds, Juju, Cloud DevOps discussions, charm tutorials, the Charm, OpenStack
  • Community: governance discussions, community event planning, Q+As, how to get involved in Ubuntu
  • Convergence: the road to convergence, the Ubuntu desktop roadmap, requirements and use cases to bring the desktop and phone together
  • Core: snappy Ubuntu Core, snappy post-vivid plans, snappy demos and Q+As
  • Show & Tell: presentations, demos, lightning talks (read: things that break and explode) on a varied range of topics

The highlights

Here are some of my personal handpicks on sessions not to miss:

  • Opening keynote: Mark Shuttleworth, Canonical and Ubuntu founder will be opening the Online Summit with his keynote, on Tuesday 3rd Nov, 14:00 UTC
  • Ask the CEO: Jane Silber, Canonical’s CEO will be talking with the audience and answering questions from the community on her Q+A session, on Wednesday 4th Nov, 17:00 UTC
  • Snappy Clinic: join the snappy team on an interactive session about bringing robotics to Ubuntu – porting ROS apps to snappy Ubuntu Core, on Tuesday 3rd Nov, 18:00 UTC
  • JavaScript scopes hands-on: creating Ubuntu phone scopes is now easier than ever with JavaScript; learn all about it with resident scopes expert Marcus Tomlinson on Thursday 5th Nov, 15:00 UTC
  • An introduction to LXD: Stéphane Graber will be demoing LXD, the container hypervisor, and discussing features and upcoming plans on Thursday 5th Nov, 16:00 UTC
  • UbuCon Europe planning: a community team around the Ubuntu German LoCo will be getting together to plan the next in-person UbuCon Summit in Europe next year on Wednesday 4th Nov, 18:00 UTC

Check out the full schedule for more!

Participating

Joining the summit is easy: simply remember to register at http://summit.ubuntu.com/ first.

Once you’ve done that, there are different ways of taking part in the online event via video hangouts and IRC:

  • Participate or watch sessions – everyone is welcome to participate and join a discussion to provide input or offer contributions. If you prefer to take a back seat, that’s fine too. You can either subscribe to sessions, watch them in your browser or directly join a live hangout.
  • Propose a session – do you want to take a more active role in contributing to Ubuntu? Do you have a topic you’d like to discuss, or an idea you’d like to implement? Then you’ll probably want to propose a session to make it happen. There is still a week for accepting proposals, so why don’t you go ahead and propose a session?

Looking forward to seeing you all at the Summit!


Read more
David Planella

I am thrilled to announce the next big event in the Ubuntu calendar: the UbuCon Summit, taking place in Pasadena, CA, in the US, from the 21st to the 22nd of January 2016, hosted at SCALE and with Mark Shuttleworth delivering the opening keynote.

Taking UbuCons to the next level

UbuCons are a remarkable achievement from the Ubuntu community: a network of conferences across the globe, organized by volunteers passionate about Open Source and about collaborating, contributing, and socializing around Ubuntu. The UbuCon at SCALE has been one of the most successful ones, and this year we are kicking it up a notch.

Enter the UbuCon Summit. In discussions with the Community Council, and after the participation of some Ubuntu team members at the Community Leadership Summit a few months ago, one of the challenges we identified our community as facing was the lack of a global event to meet face to face after the UDS era. While UbuCons continue to thrive as regional conferences, one of the conclusions we reached was that we needed a way to bring everyone together in a bigger setting to complement the UbuCon fabric: the Summit.

The Summit is the expansion of the traditional UbuCon: more content and at a bigger scale, but at the same time maintaining the grass-roots spirit and the community-driven organization that have made these events successful.

Two days and two tracks of content

During these two days, the event will be structured as a traditional conference with presentations, demos and plenaries on the first day and as an unconference for the second one. The idea behind the unconference is simple: participants will propose a set of topics in situ, each one of which will be scheduled as a session. For each session the goal is to have a discussion and reach a set of conclusions and actions to address the topics. Some of you will be familiar with the setting :)

We will also have two tracks to group sessions by theme. The Users track is for those interested in learning about the non-tech, day-to-day part of using Ubuntu, but also covers how to contribute to Ubuntu as an advocate. The Developers track will cover the sessions for the technically minded, including app development, IoT, convergence, cloud and more. One of the exciting things about our community is that there is so much overlap between these themes that both tracks will be interesting to everyone.

All in all, the idea is to provide a space to showcase, learn about and discuss the latest Ubuntu technologies, but also to focus on new and vibrant parts of the community and talk about the challenges (and opportunities!) we are facing as a project.

A first-class team

In addition to the support and guidance from the Community Council, the true heroes of the story are Richard Gaskin, Nathan Haines and the Ubuntu California LoCo. Through the years, they have been the engines behind the UbuCon at SCALE in LA, and this time around they were again quick to jump in and drive the Summit wagon too.

This wouldn’t have been possible without the SCALE team either: an excellent host to UbuCon in the past and again on this occasion. In particular Gareth Greenaway and Ilan Rabinovitch, who are helping us with the logistics and organization all along the way. If you are joining the Summit, I very much recommend staying for SCALE as well!

More Summit news coming soon

Over the next few weeks we’ll be sharing more details about the Summit, revamping the global UbuCon site and updating the SCALE schedule with all the relevant information.

Stay tuned for more, including the session about the UbuCon Summit at the next Ubuntu Online Summit in two weeks.

Looking forward to seeing some known and new faces at the UbuCon Summit in January!

Picture from an original by cm-t arudy


Read more
Daniel Holbach

ROAR!

This morning I chatted with Laura Czajkowski and we quickly figured out that wily is our 23rd Ubuntu release. Crazy in a way – 23 releases, who would’ve thought? But on the other hand, Ubuntu is a constant evolution of great stuff becoming even better. Even after 11 years of Ubuntu I can still easily get excited about what’s new in Ubuntu and what is getting better. If you have read any of my recent blog entries you will know that snappy and snapcraft are a combination almost too good to be true. Shipping software on Ubuntu has never been this easy, and I can’t wait for snappy and snapcraft to reach into further parts of Ubuntu. The 16.04 (‘xenial‘) cycle is going to deliver much more of this. Awesome!

But for now: enjoy the great work wrapped up in our wily 15.10 package. Take it, install it, give it to friends and family and spread great open source software in the world.

Read more

Hardik Dalwadi

Recently I wrote a blog post about assembling the Ubuntu Orange Matchbox (an Ubuntu-branded Pibow for the Raspberry Pi 2 & PiGlow) and demonstrating Snappy Ubuntu Core with it.

Now, with the Make-Me-Glow LP project, we have built an application for controlling PiGlow from an Ubuntu Phone. The PiGlow is a small add-on board for the Raspberry Pi that provides 18 individually controllable LEDs. Recently Victor Tuson Palau released glowapi for Snappy Ubuntu Core, which allows us to control PiGlow over HTTP.

We decided to build a quick Ubuntu Phone application to control PiGlow during Ubuntu Hackathon India, using glowapi for Snappy Ubuntu Core. As a result, we came up with the Make-Me-Glow LP project. Big thanks!

What It Does:

It allows you to control any LED of the PiGlow from your Ubuntu Phone. You have to download the “Make Me Glow” application from the Ubuntu Phone Store. For example, if you want to turn the orange LED on leg 1 on or off with a specific intensity, the Make Me Glow UI will let you do exactly that. Just make sure you enter the correct IP address of your Ubuntu Orange Matchbox. We are assuming that you have already installed glowapi for Snappy Ubuntu Core on it.

[Screenshots: installing Make Me Glow from the Ubuntu Store on an Ubuntu Phone, launching it, and giving input to control the PiGlow LEDs]

How It Does:

As I said earlier, we are using glowapi for Snappy Ubuntu Core, which operates PiGlow according to HTTP POST requests. We issue POST requests over HTTP from the Ubuntu Phone application – Make Me Glow – as per the user’s configuration, and PiGlow operates accordingly. We have written a simple QML function for this, which requests a URL using POST over HTTP according to the user input. It is very, very easy to develop an Ubuntu Phone application using the Ubuntu SDK. Big thanks to XiaoGuo Liu, without whom it would not have been possible to execute this project.

function request(url) {
    // fire-and-forget POST to the glowapi endpoint
    var xhr = new XMLHttpRequest();
    xhr.open('POST', url, true);
    xhr.send('');
}
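
You can issue the same kind of request from any machine for testing, for example with curl (the port and endpoint path below are purely hypothetical; check the glowapi documentation for the actual URL scheme):

# illustrative URL only: turn on a PiGlow LED via glowapi
curl -X POST "http://<matchbox-ip>:8000/v1/leg/1/color/orange?intensity=5"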

Future Roadmap:

We are working on a few animations. In fact, I have already made a proof of concept using bash scripting, such as Fan and FadeOut animations. If you want to contribute, please visit the Make-Me-Glow LP project.

Conclusion:

The goal behind this demonstration is to show the immense possibilities of Snappy Ubuntu Core for controlling devices remotely, within the Ubuntu ecosystem. If you are planning to dive into IoT-based solutions, Snappy Ubuntu Core is a great start for you. After this demonstration, I am planning to connect my home lights to a device running Snappy Ubuntu Core and control them through an Ubuntu Phone. Stay tuned…

Read more
bmichaelsen

Warum hast Du mir das angetan?
Ich hab’s von einem Bekannten erfahren.
(“Why did you do this to me? I heard it from an acquaintance.”)

— Die Ärzte, Debil, Zu Spät

It’s been more than two years since the last Hackfest in Hamburg! So we are indeed much too late (German: zu spät) in repeating this wonderful event. Just a day after everyone updated his or her desktop to Wily Werewolf, we will meet for a weekend of happy hacking in Hamburg again!

Hamburg Hackfest 2013 – carelessly stolen from Eike’s retrospective

So now, we will meet again. You are invited to drop by this weekend: we will celebrate a bit on Friday evening (ignoring the German culinary advice in the song linked above about “Currywurst und Pommes frites” — I imagine we prefer Club Mate and Pizza) and hack on LibreOffice on Saturday and Sunday. Curious new faces are more than welcome!


Read more
Daniel Holbach

As announced earlier, we had an Ubuntu Snappy Core Clinic yesterday and we had a great time. Sergio Schvezov, Ted Gould and I talked about snapcraft in general and what’s new in the 0.3 release, and showed off a couple of examples of how to package software for Ubuntu Snappy Core. As you can see in the video, none of the snapcraft.yaml files exceeded 30 lines in length (and this file is all that’s required); compared to what packaging on various platforms usually looks like, that’s just beautiful.

We are going to have these clinics more regularly now. They will always revolve around the world of Snappy Ubuntu Core and there will always be room for questions, requests, feedback and whatever you want them to be.

ROS people might be interested in the next one: we are very likely going to talk about snapcraft’s catkin plugin.

If you missed the show yesterday, here it is in full length:

You might be wondering why I’m posting two videos. Unfortunately I accidentally pressed the “stop broadcast” button when I was actually looking for “stop screensharing”. Once I hit the button, we couldn’t find a way to resume the broadcast and we had to start a new one. I’m sorry about that.

If any of you knows a browser plugin which shows an “are you sure you want to stop the broadcast?” warning, that would be fantastic. I imagine I’m not the only one who has confused the two buttons while busy doing a demo, getting feedback on IRC and talking.

Read more

Louis

While testing the upcoming release of Ubuntu (15.10 Wily Werewolf), I ran into a bug that renders the kernel crash dump mechanism unusable by default:

LP: #1496317 : kexec fails with OOM killer with the current crashkernel=128 value

The root cause of this bug is that the initrd.img file that is used by kexec to reboot into a new kernel when the original one panics is getting bigger with kernel 4.2 on Ubuntu. Hence, it uses too much of the reserved crashkernel memory (default: 128MB). This triggers the “Out Of Memory (OOM)” killer and the kernel dump capture cannot complete.

One workaround for this issue is to increase the amount of reserved memory. 150MB seems to be sufficient, but you may need to go even higher. And while one solution could be to increase the default crashkernel= value, that only pushes the issue forward until we hit the limit once again.

Reduce the size of initrd.img

update-initramfs has an option in its configuration file (/etc/initramfs-tools/initramfs.conf) that lets us modify the modules that are included in the initrd.img file. Our current default is to add most of the modules:

# MODULES: [ most | netboot | dep | list ]
#
# most - Add most filesystem and all harddrive drivers.
#
# dep - Try and guess which modules to load.
#
# netboot - Add the base modules, network modules, but skip block devices.
#
# list - Only include modules from the 'additional modules' list
#

MODULES=most

By changing this configuration to MODULES=dep, we can sensibly reduce the size of the initrd.img:

MODULES=most : initrd.img-4.2.0-16-generic = 30MB
MODULES=dep  : initrd.img-4.2.0-16-generic = 12MB
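
To reproduce this comparison, regenerate the initrd after changing the setting and check its size (note that this also changes your regular boot initrd, so revert the setting afterwards if you were only testing):

sudo update-initramfs -u
ls -lh /boot/initrd.img-$(uname -r)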

Identifying this led to a discussion with the Ubuntu Kernel team about using a custom-crafted initrd.img for kdump. This would keep the file at a sensible size and avoid triggering the OOM killer.

Implementation

The current implementation of kdump-tools already provides a mechanism to specify which vmlinuz and initrd.img files to use when setting up kexec (from /etc/default/kdump-tools):

# ---------------------------------------------------------------
# Kdump Kernel:
# KDUMP_KERNEL - A full pathname to a kdump kernel.
# KDUMP_INITRD - A full pathname to the kdump initrd (if used).
# If these are not set, kdump-config will try to use the current kernel
# and initrd if it is relocatable. Otherwise, you will need to specify 
# these manually.
#KDUMP_KERNEL=
#KDUMP_INITRD=

If we use those variables, defined to point to a generic path that can be adapted according to the running kernel version, we have a way to specify a smaller initrd.img for kdump.

Building a smaller initrd.img

Kernel package hooks already exist in /etc/kernel/postinst.d and /etc/kernel/postrm.d to create the initrd.img. Using those as templates, we created new hooks that create smaller images in /var/lib/kdump and clean them up when the kernel version they pertain to is removed.

In order to create that smaller initrd.img, the content of the /etc/initramfs-tools directory needs to be replicated in /var/lib/kdump. This is done each time the hook is executed, to ensure that the content matches the original source; otherwise, the two could diverge if the original directory gets modified.

Each time a new kernel package is installed, the hook creates a kdump-specific initrd.img using MODULES=dep and stores it in /var/lib/kdump. When the kernel package is removed, the corresponding file is removed.
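
In essence, the hook boils down to something like this (a simplified sketch, not the actual packaged hook):

version="$1"                             # kernel version passed to kernel hooks
confdir=/var/lib/kdump/initramfs-tools
rm -rf "$confdir"
cp -a /etc/initramfs-tools "$confdir"    # replicate the original configuration
sed -i 's/^MODULES=.*/MODULES=dep/' "$confdir/initramfs.conf"
mkinitramfs -d "$confdir" -o "/var/lib/kdump/initrd.img-$version" "$version"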

Using the smaller initrd.img

As we outlined previously, the /etc/default/kdump-tools file can be used to point to a specific initrd.img/vmlinuz pair. So we can do :

KDUMP_KERNEL=/var/lib/kdump/vmlinuz
KDUMP_INITRD=/var/lib/kdump/initrd.img

When kexec is loaded by kdump-config, it will find the appropriate files and load them into memory for future use. But for that to happen, those new parameters need to point to the correct files. Here we use symbolic links to achieve our goal.

Linking to the smaller initrd.img

Using the hooks to create the proper symbolic links turned out to be overly complex. But since kdump-config runs at each boot, we can make this script responsible for symlink maintenance.

Symlink creation follows this simple flowchart:

[Flowchart: kdump-tools symlink workflow]

This ensures that the symbolic links always point to the files matching the version of the running kernel.

One drawback of this method is that, in the remote eventuality that the running kernel breaks the kernel crash dump functionality, we cannot automatically revert to the previous kernel in order to use a known-good configuration.

A future evolution of the kdump-config tool will add a function to specify which kernel version to use when creating the symbolic links. In the meantime, the links can be created manually with these simple commands:

$ export wanted_version="some version"
$ rm -f /var/lib/kdump/initrd.img
$ ln -s /var/lib/kdump/initrd.img-${wanted_version} /var/lib/kdump/initrd.img
$ rm -f /var/lib/kdump/vmlinuz
$ ln -s /boot/vmlinuz-${wanted_version} /var/lib/kdump/vmlinuz

For those of you interested in the nitty-gritty details, you can find the modifications in the following Git branch:

Update: new Git branch with cleaned-up commit history

https://github.com/karibou/makedumpfile-next/tree/smaller_initrd_final

Read more