Canonical Voices

Posts tagged with 'ubuntu'

Nicholas Skaggs

It's never been easier to write tests for your application! I wanted to share some details on the new documentation and other tidbits that will help you ensure your application has a nice testsuite. If you've used the SDK in the past, you understand how nice it can make your development workflow. Writing code and running it on your desktop, device, or emulator is a snap.

Fortunately, having a nice testsuite for your application can be just as easy. First, you will notice that all of the wizards inside the SDK now come with nice testsuites already in place, ready for you to simply add more tests. The setup and heavy lifting is done. See for yourself!

Secondly, developer.ubuntu.com has a great section on every level of testing, no matter which language you use with the SDK. You'll find API references for the tools and technology used, along with helpful guides to get you into the proper mindset.

For autopilot itself, there's also API documentation for the various 'helpers' that will make writing tests much easier for you. In addition, there's a guide to running autopilot tests. This has been made even easier by the addition of Akiva's Autopilot plugin inside the SDK. I'll be sharing details on this as soon as it's packaged, but you can see a sneak peek in this video.
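
If you want a taste of what running a suite looks like, here is a minimal sketch (hedged: 'myapp' is a placeholder for your project's suite name, and autopilot3 is the Python 3 command-line runner):

$ cd myapp/tests/autopilot
## list the tests autopilot can see, then run the whole suite
$ autopilot3 list myapp
$ autopilot3 run myapp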

Finally, you will find a guide on how to structure your functional tests. These are the most demanding to write, and it's important to ensure you write your tests in a maintainable way. Don't forget about the guide on writing good functional tests either.

No matter what language or level you write tests for, the guides are there to help you. Why not try adding a test or two to your project? If you are new, check out one of the wizards and try adding a simple testcase. Then apply the same knowledge (and templated code!) to your own project. Happy test writing!

Read more
Prakash

I had been thinking about my Touch Table project for a long time. My research on existing solutions was a bit disappointing: mostly insanely expensive, large, or platform-locked, they did not fit my vision of an [Android or Linux-powered] ‘desktop’ that would allow me to fit it into my existing workflow, rather than hope that applications would support it (like the Microsoft Surface).

Read more at http://www.ikeahackers.net/2015/06/hemnes-multitouch-table.html

Read more
Shuduo

In case you want to play with Snappy but don’t have a Raspberry Pi 2 or other hardware…

1. sudo apt-get install virtualbox
2. Download the Snappy image: http://cdimage.ubuntu.com/ubuntu-snappy/15.04/20150423/ubuntu-15.04-snappy-amd64-generic.img.xz
3. unxz ubuntu-15.04-snappy-amd64-generic.img.xz
4. VBoxManage convertdd ubuntu-15.04-snappy-amd64-generic.img snappy.vdi --format VDI
5. Launch the VirtualBox GUI app and create a new VM: OS type Linux, version Ubuntu (64-bit), memory 512MB; for the hard drive, choose “use an existing virtual hard disk file” and select the snappy.vdi we just converted from the img file.
6. In Settings → Network, change the Network Adapter from NAT to Bridged Adapter.
7. Start the VM. You can use a browser to reach the Snappy App Store at “webdm.local:4200”, or log in from the console or via ssh with username/password ‘ubuntu/ubuntu’ and do fun Snappy things like update/rollback.
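
Once the VM is up, here is a quick sketch of the sort of thing you can do (hedged: it assumes mDNS resolution of webdm.local works on your network; the VM's IP address works just as well):

$ ssh ubuntu@webdm.local            ## password: ubuntu
$ sudo snappy update                ## fetch and apply updates
$ sudo snappy rollback ubuntu-core  ## undo the last update if it misbehaves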

Read more
bmichaelsen

I would walk 500 miles and I would walk 500 more
The Proclaimers, 500 Miles

So I recently noted that GitHub reports I have made 1337 commits to LibreOffice since I joined Canonical in February 2011. Looking at those stats, it seems I also deleted a net 155,634 lines from the codebase over that time.

LibreOffice commits

Even though I can't find that mail, I seem to remember that Michael Stahl, when joining the LibreOffice project, proclaimed his goal to be contributing ‘a net negative number of lines of code’.1) Now I have not looked into the details of the above stats — they might very well turn out to be caused by some bulk change. Which would be lame, unless it's the killing of the old build system, for which I think I can claim some credit. But in general I really love the idea of ‘contributing a net negative number of lines of code’.

So, at the last LibreOffice Hackfest in Cambridge 2), I pushed a set of commits refactoring the UNO bindings of Writer tables. It all started so innocently. I was actually aiming to do something completely different: namely, to give the UNO cursors in Writer (SwUnoCrsr) somewhat saner resource management and drag them screaming and kicking out of the 1980s. However, once in unotbl.cxx, I found more of “determined Real Programmer can write FORTRAN programs in any language” and copypasta there than I could bear. I thought: “This UNO stuff has decent test coverage, you could refactor it a bit quickly.”

Of course I was wrong on both sides of that statement: On the one hand, when I started, coverage was 70.1% LOC on that file, which is not really as high as I expected. On the other hand, I did not end with “a bit quickly”; rather, I went on to refactor away:
dc -e "`git log --author Michaelsen -p dc8697e554417d31501a0d90d731403ede223370^..HEAD sw/source/core/unocore/unotbl.cxx|grep ^+|wc -l` `git log --author Michaelsen -p dc8697e554417d31501a0d90d731403ede223370^..HEAD sw/source/core/unocore/unotbl.cxx|grep ^-|wc -l` - p"
-1015

… a thousand lines. On discovering the lacking test coverage, I quickly added some more tests — bringing coverage to 77.52% LOC at least now.3) And yes, I also silently fixed the one regression I thereby discovered I had introduced, which nobody seemed to have noticed so far. One thing I noticed in this little refactoring spree is that while C++11's features might look tame compared to more modern programming languages in metrics like avoiding boilerplate, they still outclass what we had before. Beyond the simplifying refactoring, features like lambdas are really nice for non-interactive (test-driven) debugging, including quickly asserting on the state of variables some 10 stack frames up or down without going into major contortions in test code.

1) By the way, a quick:
dc -e "`git log --author Stahl -p |grep ^+|wc -l` `git log --author Stahl -p |grep ^-|wc -l` - p"
-108686

confirms Michael is more than living up to his personal goals.
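
For readers who don't speak dc: both one-liners above simply subtract the number of removed lines from the number of added lines in the diffs. An equivalent, more verbose shell sketch of the footnote's query:

added=$(git log --author Stahl -p | grep -c '^+')
removed=$(git log --author Stahl -p | grep -c '^-')
echo $((added - removed))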

2) Speaking of the Hackfest: the other thing I did there was helping/observing Sam Tuke getting set up for his first code contribution. While we have made great progress in making this easier than it used to be, we could still be a lot better there. Sadly though, I didn't see a shortcut or simplification we could implement right away.

3) And along with that, it brought coverage of unochart.cxx from an abysmal 4.4% LOC to at least 35.31% LOC as collateral damage.

addendum: Note that the writer tables core also increased coverage quite a bit from 54.6% LOC to 65% LOC.


Read more
Ben Howard

With Ubuntu 12.04.2, the kernel team introduced the idea of the "hardware enablement kernel" (HWE), originally intended to support new hardware for bare metal server and desktop. In fact, the documentation indicates that HWE images are not suitable for Virtual or Cloud Computing environments.  The thought was that cloud and virtual environments provide stable hardware and that the newer kernel features would not be needed.

Time has proven this assumption painfully wrong. Take, for example, the need for drivers in virtual environments. Several of the cloud providers that we have engaged with have requested the use of the HWE kernel by default. On GCE, the HWE kernels provide support for their NVMe disks and multiqueue NICs. Azure has benefited from having an updated Hyper-V driver stack, resulting in better performance. When we engaged with VMware Air, the 12.04 kernel lacked the necessary drivers.

Perhaps more germane to our Cloud users is that containers are using kernel features. 12.04 users need to use the HWE kernel in order to make use of Docker. The new Ubuntu Fan project will be enabled for 14.04 via the HWE-V kernel for Ubuntu 14.04.3. If you use Ubuntu as your container host, you will likely consider using an HWE kernel.

And with that there has been a steady chorus of people requesting that we provide HWE image builds for AWS. The problem has never been the base builds; building the base bits is fairly easy. The hard part is that adding the HWE variants takes each daily and release build from 96 images for AWS to 288 (needless to say, that is quite a problem). Over the last few weeks -- largely in my spare time -- I've been working out what it would take to deliver HWE images for AWS.

I am happy to announce that as of today, we are now building HWE-U (3.16) and HWE-V (3.19) Ubuntu 14.04 images for AWS. To be clear, we are not making any behavioral changes to the standard Ubuntu 14.04 images. Unless users opt into using an HWE image on AWS they will continue to get the 3.13 kernel. However, for those who want newer kernels, they now have the choice.

For the time being, only amd64 and i386 builds are being published. Over the next few weeks, we expect the HWE images to reach full feature parity, including release promotions and indexing. And I fully expect that the HWE-V version of 14.04 will include our recent Fan project once the SRUs complete.

Check them out at http://cloud-images.ubuntu.com/trusty/current/hwe-u and http://cloud-images.ubuntu.com/trusty/current/hwe-v .
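
If you would rather find them from the command line, something along these lines should work (a hedged sketch: 099720109477 is Canonical's AWS account ID, but the name filter is my assumption about how the HWE images are labelled):

$ aws ec2 describe-images --region us-east-1 \
    --owners 099720109477 \
    --filters "Name=name,Values=*hwe-v*trusty*" \
    --query 'Images[].[Name,ImageId]' --output text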

As always, feedback is welcome.

Read more
Ben Howard

[UPDATE] The Image IDs have been updated with the latest builds, which now include Docker 1.6.2, the latest LXD and of course the Ubuntu Fan driver.

This week, Dustin Kirkland announced the Ubuntu Fan Project.  To steal from the description, "The Fan is not a software-defined network, and relies on neither distributed databases nor consensus protocols.  Rather, routes are calculated deterministically and traffic carries no additional overhead beyond routine IP tunneling.  Canonical engineers have already demonstrated The Fan operating at 5Gbps between two Docker containers on separate hosts."

My team at Canonical is responsible for the production of these images. Once the official SRUs land, I anticipate that we will publish an official stream over at cloud-images.ubuntu.com. But until then, check back here for images and updates. As always, if you have feedback, please hop into #server on Freenode or send email.

GCE Images

Images for GCE have been published to the "ubuntu-os-cloud-devel" project.

The Images are:
  • daily-ubuntu-docker-lxd-1404-trusty-v20150620
  • daily-ubuntu-docker-lxd-1504-vivid-v20150621
To launch an instance, you might run:
$ gcloud compute instances create \
    --image-project ubuntu-os-cloud-devel \
    --image <IMAGE> <NAME>

You need to make sure that IPIP traffic (IP protocol 4) is enabled:
$ gcloud compute firewall-rules create fan2 --allow 4 --source-ranges 10.0.0.0/8

Amazon AWS Images

The AWS images are HVM-only, AMD64 builds. 


Version     Region          HVM-SSD       HVM-Instance
14.04-LTS   eu-central-1    ami-7e94ac63  ami-8e93ab93
14.04-LTS   sa-east-1       ami-f943c1e4  ami-e742c0fa
14.04-LTS   ap-northeast-1  ami-543c9b54  ami-b4298eb4
14.04-LTS   eu-west-1       ami-4ae2a73d  ami-48e7a23f
14.04-LTS   us-west-1       ami-fbd126bf  ami-6bd3242f
14.04-LTS   us-west-2       ami-63585c53  ami-875357b7
14.04-LTS   ap-southeast-2  ami-7de69c47  ami-1de19b27
14.04-LTS   ap-southeast-1  ami-aca4a0fe  ami-2a9b9f78
14.04-LTS   us-east-1       ami-95877efe  ami-e58b728e
15.04       eu-central-1    ami-9a94ac87  ami-ae93abb3
15.04       sa-east-1       ami-1340c20e  ami-0743c11a
15.04       ap-northeast-1  ami-9c3c9b9c  ami-42379042
15.04       eu-west-1       ami-a2e2a7d5  ami-e4e7a293
15.04       us-west-1       ami-4bd0270f  ami-1dd32459
15.04       us-west-2       ami-f9585cc9  ami-1dd32459
15.04       ap-southeast-2  ami-5de69c67  ami-01e19b3b
15.04       ap-southeast-1  ami-74a5a126  ami-c89b9f9a
15.04       us-east-1       ami-29f90042  ami-8d8a73e6

It is important to note that these images are only usable inside of a VPC. Newer AWS users are in VPC by default, but older users may need to create and update their VPC. For example:
$ ec2-authorize --cidr <CIDR_RANGE> --protocol 4 <SECURITY_GROUP>


Read more
pitti

Almost every new autopkgtest release brings some small improvements, but 3.14 got some reboot related changes worth pointing out.

First of all, I simplified and unified the implementation of rebooting across all runners that support it (ssh, lxc, and qemu). If you use a custom setup script for adt-virt-ssh you might have to update it: previously, the setup script needed to respond to a reboot command that triggered the reboot, waited for the testbed to go down, and waited for it to come back up. This has been split up: adt-run itself now issues the actual reboot system command on the testbed, and the setup script only covers the “wait for it to go down and come back up” part. The latter now has a sensible default implementation: it simply waits for the ssh port to become unavailable, and then waits for ssh to respond again; most testbeds should be fine with that. You only need to provide the new wait-reboot function in your ssh setup script if you need to do anything else (such as re-enabling ssh after the reboot). Please consult the manpage and the updated SKELETON for details.
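
As a rough illustration, the relevant hook of a custom setup script might look like this (a hedged sketch: the dispatch-on-first-argument convention follows my reading of the shipped SKELETON, and $SSH_HOST is a placeholder; trust the manpage over this):

case "$1" in
    wait-reboot)
        ## wait for the testbed to drop off the network
        while ssh -o ConnectTimeout=5 "$SSH_HOST" true 2>/dev/null; do sleep 1; done
        ## do whatever your testbed needs here (e.g. re-enable ssh), then
        ## wait for sshd to answer again
        while ! ssh -o ConnectTimeout=5 "$SSH_HOST" true 2>/dev/null; do sleep 3; done
        ;;
esac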

The ssh runner gained a new --reboot option to indicate that the remote testbed can be rebooted. This will automatically declare the reboot testbed capability and thus you can now run rebooting tests without having to use a setup script. This is very useful for running tests on real iron.
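
For instance, something like this should run a package's tests on a remote machine that is allowed to reboot (hedged: the hostname and login are placeholders):

$ adt-run mypackage_1.0-1.dsc --- ssh -H testbed.example.com -l ubuntu --reboot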

Finally, on testbeds which support rebooting, your tests will now find a new /tmp/autopkgtest-reboot-prepare command. Like /tmp/autopkgtest-reboot it takes an arbitrary “marker”, saves the current state, restores it after the reboot and restarts your test with the marker; however, it will not trigger the actual reboot but expects the test to do that itself. This is useful if you want to test a piece of software which performs a reboot as part of its operation, such as a system-image upgrade. Another use case is testing kernel crashes, kexec, or another “nonstandard” way of rebooting the testbed. README.package-tests shows an example of how this looks.
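
A minimal sketch of such a test, following the pattern from README.package-tests (hedged: it assumes the ADT_REBOOT_MARK environment variable that autopkgtest of this era sets when re-starting a test after reboot):

#!/bin/sh
set -e
case "${ADT_REBOOT_MARK:-}" in
    "")
        ## first run: save state, then trigger the reboot ourselves,
        ## as e.g. a system-image upgrade would
        /tmp/autopkgtest-reboot-prepare after-upgrade
        reboot
        ;;
    after-upgrade)
        ## we land here when the test is re-started after the reboot
        echo "testbed back up, continuing checks"
        ;;
esac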

3.14 is now available in Debian unstable and Ubuntu wily. As usual, for older releases you can just grab the deb and install it, it works on all supported Debian and Ubuntu releases.

Enjoy, and let me know if you run into troubles or have questions!

Read more
Dustin Kirkland

652 Linux containers running on a Laptop?  Are you kidding me???

A couple of weeks ago, at the OpenStack Summit in Vancouver, Canonical released the results of some scalability testing of Linux containers (LXC) managed by LXD.

Ryan Harper and James Page presented their results -- some 536 Linux containers on a very modest little Intel server (16GB of RAM), versus 37 KVM virtual machines.

Ryan has published the code he used for the benchmarking, and I've used it to reproduce the test on my dev laptop (Thinkpad x230, 16GB of RAM, Intel i7-3520M).

I managed to pack a whopping 652 Ubuntu 14.04 LTS (Trusty) containers on my Ubuntu 15.04 (Vivid) laptop!

The system load peaked at 1056 (!!!), but I was using merely 56% of 15.4GB of system memory.  Amazingly, my Unity desktop and Byobu command line were still perfectly responsive, as were the containers that I ssh'd into.  (Aside: makes me wonder if the Linux system load average is accounting for container processes correctly...)

Check out the process tree for a few hundred system containers here!

As for KVM, I managed to launch 31 virtual machines without KSM enabled, and 65 virtual machines with KSM enabled and working hard.  So that puts the density at somewhere between 10x and 21x as many containers as virtual machines on the same laptop.
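
For reference, KSM is toggled through sysfs; a minimal sketch if you want to compare both runs yourself:

## enable kernel samepage merging before the KVM run
$ echo 1 | sudo tee /sys/kernel/mm/ksm/run
## see how many pages are currently being shared
$ cat /sys/kernel/mm/ksm/pages_sharing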

You can now repeat these tests, if you like.  Please share your results with #LXD on Google+ or Twitter!

I'd love to see someone try this in AWS, anywhere from an m3.small to an r3.8xlarge, and share your results ;-)

Density test instructions

## Install lxd
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master
$ sudo apt-get update
$ sudo apt-get install -y lxd bzr
$ cd /tmp
## At this point, it's a good idea to logout/login or reboot
## for your new group permissions to get applied
## Grab the tests, disable the tools download
$ bzr branch lp:~raharper/+junk/density-check
$ cd density-check
$ mkdir lxd_tools
## Periodically squeeze your cache
$ sudo bash -x -c 'while true; do sleep 30; \
echo 3 | sudo tee /proc/sys/vm/drop_caches; \
free; done' &
## Run the LXD test
$ ./density-check-lxd --limit=mem:512m --load=idle release=trusty arch=amd64
## Run the KVM test
$ ./density-check-kvm --limit=mem:512m --load=idle release=trusty arch=amd64

As for the speed-of-launch test, I'll cover that in a follow-up post!

Can you contain your excitement?

Cheers!
Dustin

Read more
Daniel Holbach

Out of nowhere, the Ukrainian translations team stepped up and translated 70% (the threshold at which we call translations ‘complete enough to be official’) of the Ubuntu Packaging Guide into Ukrainian. This all happened within just a couple of days.

All I can say is: amazing work, and Дуже дякую (thanks a lot)! Keep it up!

ukrainian-packaging-guide

We are going to prepare an upload to Debian and Ubuntu in the coming days as well. Again: fantastic work everyone.

Call for help

This post of course can’t go out without a call for help.

Thanks again, translations community, you are all heroes. It's you who make Ubuntu welcoming to everyone!

Read more
Timo Jyrinki

I recently obtained Dell's newest Ubuntu developer offering, the XPS 13 (2015, model 9343). I opted for the FullHD non-touch display, mostly because of the better battery life, no actual need for a higher resolution, and the matte screen, which is great outdoors. Touch would have been "nice-to-have", but in my work I don't really need it.

The other specifications include an i7-5600U CPU, 8GB RAM, a 256GB SSD [edit: lshw], and of course Ubuntu 14.04 LTS pre-installed as an OEM-specific installation. It was not possible to order it directly from the Dell site, as Finland is reportedly not an online market for Dell... The wholesale company however managed to get two models on their lists, so it's now possible to order via retailers. [edit: here are some country-specific direct web order links: US, DE, FR, SE, NL]

In this blog post I take a quick look at how I started using it, and make a few observations on the pre-installed Ubuntu. I was personally interested in using the pre-installed Ubuntu the way a non-Debian/Ubuntu developer would, but Dell has also provided instructions for Ubuntu 15.04, Debian 7.0 and Debian 8.0 for advanced users, among others. Even if you don't use the pre-installed Ubuntu, the benefit of buying an Ubuntu laptop is obviously the smaller cost, and on the other hand contributing to free software (by paying for the hardware enablement engineering done by or purchased by Dell).

Unboxing

The Black Box. (and white cat)

Opened box.

First time lid opened, no dust here yet!
First time boot up, transitioning from the boot logo to a first time Ubuntu video.
A small clip from the end of the welcoming video.
First time setup. Language, Dell EULA, connecting to WiFi, location, keyboard, user+password.
Creating recovery media. I opted not to do this as I had happened to read that it's highly recommended to install upgrades first, including to this tool.
Finalizing setup.
Ready to log in!
It's alive!
Not so recent 14.04 LTS image... lots of updates.

Problems in the First Batch

Unfortunately the first batch of XPS 13s with Ubuntu is going to ship with some problems. They're easy to fix if you know how, but it's sad that they're there to begin with in the factory image. There is no word yet on when a fixed batch will start shipping; July, maybe?

First of all, installing software updates stops with an error. You need to run the following command via Dash → Terminal once: sudo apt-get install -f (it suggests upgrading libc-dev-bin, libc6-dbg, libc6-dev and udev). After that you can continue running Software Updater as usual, maybe rebooting in between.

Secondly, the fixed touchpad driver is included but not enabled by default. You need to enable the only non-enabled ”Additional Driver”, as seen in the picture below or as instructed on YouTube.

Dialog enabling the touchpad driver.

Clarification: you can safely ignore the two paragraphs below, they're just for advanced users like me who want to play with upgraded driver stacks.

Optionally, since I'm interested in the latest graphics drivers, especially in the case of brand-new hardware like Intel Broadwell, I upgraded my Ubuntu to use the 14.04.2 Hardware Enablement stack (matching 14.10 hardware support): sudo apt install --install-recommends libgles2-mesa-lts-utopic libglapi-mesa-lts-utopic linux-generic-lts-utopic xserver-xorg-lts-utopic libgl1-mesa-dri-lts-utopic libegl1-mesa-drivers-lts-utopic libgl1-mesa-glx-lts-utopic:i386

Even though it's much better than a plain Ubuntu 14.10 would be, since many of the Dell fixes continue to be in use, some functionality might get worse compared to the pre-installed stack. The only thing I have noticed though is the internal microphone not working anymore out-of-the-box, requiring a kernel patch as mentioned in Dell's notes. This is no surprise, since the real eventual upstream support involves switching from HDA to I2S, and during the 14.10 kernel work that was not nearly done. If you're excited about new drivers, I'd recommend waiting until August when the 15.04-based 14.04.3 stack is available (same package names, but 'vivid' instead of 'utopic'). [edit: I couldn't resist when I saw linux-generic-lts-vivid (the 3.19 kernel) is already in the archives. 14.04.2 + that gives me a working microphone again!]

Conclusion

Dell XPS 13 Developer Edition with Ubuntu 14.04 LTS is an extremely capable laptop + OS combination nearing perfection, but not quite there because of the software problems in the launch pre-install image. The laptop looks great, feels like a quality product should and is very compact for the screen size.

I've moved all my work over to it and everything so far is working smoothly in my day-to-day tasks. I'm staying on Ubuntu 14.04 LTS and using my previous LXC configuration to run the latest Ubuntu and Debian development versions. I've also made some interesting changes already, like LUKS In-Place Conversion, converting the pre-installed Ubuntu into a whole-disk-encrypted one (not recommended for the faint-hearted; GRUB reconfiguration is a bit of a pain).

I look happily forward to working a few productive years with this one!

Read more
bmichaelsen

But I believe in this and it’s been tested by research
— The Clash, Death or Glory

Thanks to Norbert’s efforts, the LibreOffice project now has a Jenkins setup that not only gives us visibility into how healthy our master branch is, with the results being reported to the ESC regularly, but also allows everyone to easily test commits and branches on all major LibreOffice platforms (Linux, OS X, Windows) just by uploading a change to gerrit. Doing so is really easy once you are set up:

./logerrit submit                      # a little helper script in our repo
git push logerrit HEAD:refs/for/master # alternative: plain old git
git review                             # alternative: needs to install the git-review addon

Each of the above commands alone sends your work for review and test building to gerrit. The last one needs additional setup, which is however really helpful and worth it for people working with gerrit from the command line regularly.

So, what if you have a branch that you want to testbuild? Well, just pushing the branch to gerrit as suggested above still works: gerrit will then create a change for every commit, mark them as depending on each other, and testbuild every commit. This is great for a small branch of a handful of commits, but it will be annoying and somewhat wasteful for a branch with more than 10-15 commits. In the latter case you might not want a manual review for each commit, nor to occupy our builders for each of them. So what’s the alternative, if you have a branch ${mybranch} and want at least the final commit to be tested to build fine everywhere?

git checkout -b ${mybranch}-ci ${mybranch} # switch to branch ${mybranch}-ci
git rebase -i remotes/logerrit/master      # rebase the branch on master interactively

Now your favourite editor comes up showing the commits of the branch. As your favourite editor will be vim, you can then type:

:2,$s/^pick/s/ | x

to squash all the commits of the branch into one. Then do:

git checkout -                                   # go back to whatever branch we where on before
git push logerrit ${mybranch}-ci:refs/for/master # push squashed branch to gerrit for testbuilding
git branch -D ${mybranch}-ci                     # optional: delete squashed branch locally

Now only wait for the builder on Jenkins to report back. This allowed me to find out that our compiler on OS X didn't consider this new struct a POD type, while our compilers on Linux and Windows were fine with it (see: “Why does C++ require a user-provided default constructor to default-construct a const object?” for the gory details). Testbuilding on gerrit allowed me to fix this before pushing something broken on one platform to master, which would have spoiled the nifty ability to test your commit before pushing for everyone else: duly testing your commit on gerrit only to find that the master you built upon was broken by someone else on some platform is not fun.

The above allows you to ensure the end of your branch builds fine on all platforms. But what about the intermediate commits and our test-suites? Well, you can test that each and every commit passes tests quite easily locally:

git rebase -i remotes/logerrit/master --exec 'make check'

This rebases your branch on master (even if it's already up to date) and builds and runs all the tests on each commit along the way. In case of a test breakage, git stops and lets you fix things (just like with traditional troubles on rebases, such as changes not applying cleanly).

Note: gerrit will close the squashed branch change if you push the branch to master: the squashed commit message ends with the Change-Id of the final commit of the branch, so once that commit is pushed, gerrit closes the review for the squashed change.

Another note: If the above git commands are too verbose for you (they are for me), consider using gitsh and aliases. Combined they help quite a lot in reducing redundant typing when working with git.


Read more
Daniel Holbach

Daniel McGuire is unstoppable. The work I mentioned yesterday was great; here’s some more, showing what would happen when the user selects “Playing Music”.

help app - playing music

 

More feedback we received so far:

  • Kevin Feyder suggested using a different icon for the app.
  • Michał Prędotka asked if we were planning to add more icons/pictures and the answer is “yes, we’d love to if it doesn’t clutter up the interface too much”. We are going to start a call for help with the content soon.
  • Robin of ubuntufun.de asked the same thing as Michał and wondered where the translations were. We are going to look into that. He generally likes the Ubuntu-like style.

Do you have any more feedback? Anything you’d like to see look or work differently? Anything you’d like to help with?

Read more
Daniel Holbach

Some of you might have noticed the Help app in the store, which has been around for a couple of weeks now. We are trying to make it friendlier and easier to use. Maybe you can comment and share your ideas/thoughts.

Apart from fixing actual bugs and adding more and more useful content, we also wanted the app to look friendlier and be more intuitive and useful.

The latest trunk lp:help-app can be seen as version 0.3 in the store or if you run

bzr branch lp:help-app
less help-app/HACKING

you can run and check it out locally.

Here’s the design Daniel McGuire suggested going forward.

help-mockup

What are your thoughts? If you look at the content we currently have, how else would you expect the app to look or work?

Thanks a lot Daniel for your work on this! :-)

Read more
Michael Hall

Ubuntu is sponsoring the South East Linux Fest this year in Charlotte, North Carolina, and as part of that event we will have a room to use all day Friday, June 12, for an UbuCon. UbuCon is a mini-conference with presentations centered around Ubuntu, the project and its community.

I’m recruiting speakers to fill the last three hour-long slots. If anybody is willing and able to attend the conference and wants to give a presentation to a room full of enthusiastic Ubuntu users, please email me at mhall119@ubuntu.com. Topics can be anything Ubuntu-related: design, development, client, cloud, using it, community, etc.

Read more
Dustin Kirkland


In November of 2006, Canonical held an "all hands" event, which included a team building exercise.  Several teams recorded "Ubuntu commercials".

On one of the teams, Mark "Borat" Shuttleworth amusingly proffered,
"Ubuntu make wonderful things possible, for example, Linux appliance, with Ubuntu preinstalled, we call this -- the fridge!"

Nine years later, that tongue-in-cheek parody is no longer a joke.  It's a "cold" hard reality!

GE Appliances, FirstBuild, and Ubuntu announced a collaboration around a smart refrigerator, available today for $749, running Snappy Ubuntu Core on a Raspberry Pi 2, with multiple USB ports and available in-fridge accessories.  We had one in our booth at IoT World in San Francisco this week!

While the fridge prediction is indeed pretty amazing, the line that strikes me most is actually "Ubuntu make(s) wonderful things possible!"

With emphasis on "things".  As in, "Internet of Things."  The possibilities are absolutely endless in this brave new world of Snappy Ubuntu.  And that is indeed wonderful.

So what are you making with Ubuntu?!?

:-Dustin

Read more
Daniel Holbach

It’d be a bit of a stretch to call UOS the Snappy Online Summit, but Snappy definitely was the talk of the town this time around. It was also picked up by tech news sites, which did not always depict Ubuntu’s plans accurately. :-)

Anyway… if you missed some of the sessions, you can always go back, watch the videos and check the notes. Here are the links to the sessions which already happened:

Which leaves us with today, 7th May 2015! You can still join these sessions today – we’ll be glad to hear your input and ideas! :-)

  • 14:00 UTC: Ubuntu Core Brainstorm – Calling all Snappy pioneers
    Snappy and Ubuntu Core are still hot off the press, but it’s already clear that they’re going to bring a lot of opportunities and will make the lives of developers a lot easier. Let’s get together, brainstorm and find out where Snappy can be used in the future, which communities/tools/frameworks it can join forces with, which software should be ported to it, and which crazy nice tutorials/demos can easily be put together. Anything goes; join us, no matter if on IRC or in the hangout!
  • 16:00 UTC: Snappy Q&A
    Everything you always wanted to know about Snappy and Ubuntu Core. Bring your questions here! Bring your friends as well. We’ll make sure to have all the relevant experts here.
  • 18:00 UTC: Replace ifupdown with networkd on snappy / cloud / server for 16.04
    What the title says. Networkers, we’ll need you here. :-)

The above are just my suggestions, obviously there’s loads of other good stuff on the schedule today! See you later!

Read more
Michael Hall

Ubuntu has been talking a lot about convergence lately; it’s something that we believe is going to be revolutionary, and we want to be at the forefront of it. We love the idea of it, but so far we haven’t really had much experience with the reality of it.

I got my first taste of that reality two weeks ago, while at a work sprint in London. While Canonical has an office in London, it had other teams sprinting there, so the Desktop sprint I was at was instead held at a hotel. We planned to visit the office one day that week; it would be my first visit to any Canonical office, as well as my first time working at an actual office in several years. However, we also planned to meet up with the UK LoCo for release drinks that evening. This meant that we had to decide between leaving our laptops at the hotel, thus not having them to work on at the office, or taking them with us but having to carry them around the pub all evening.

I chose to leave my laptop behind, but I did take my phone (Nexus 4 running Ubuntu) with me. After getting a quick tour of the office, I found a vacant seat at a desk, and pulled out my phone. Most of my day job can be done with the apps on my phone: I have email, I have a browser, I have a terminal with ssh, I can respond to our community everywhere they are active.

I spent the next couple of hours doing work, actual work, on my phone. The only problem was that I was doing it on a small screen, and I was burning through my battery. At one point I looked up and realized that the vacant desk I was sitting at was equipped with a laptop docking station. It also had a USB hub and an HDMI monitor cable available. If I had had a SlimPort cable for my phone, I might have been able to plug it into this docking station and both power my phone and get a bigger screen to work with.

If I could have done that, I would have achieved the full reality of convergence, and it would have been just as if I had brought my laptop with me. Only this time I would have been able to simply slide it into my pocket when it was time to leave for drinks. It was tantalizingly close; I got a little taste of what it’s going to be like, and now I’m craving more of it.

Read more
Daniel Holbach

Not sure if you saw Mark’s blog post earlier, but I’ll make sure to be watching the keynote at http://ubuntuonair.com/ at 14:00 UTC today. :-)

Read more
Michael Hall

A couple of years ago the Ubuntu download page introduced a way for users to make a financial contribution to the ongoing development of Ubuntu and its surrounding projects and community. Later, a program was established within Canonical to make the money donated specifically for supporting the community available directly to members of the community, who would use it to benefit the wider project.

During the last month, at the request of members of the Ubuntu community and the Community Council, we have undertaken a review of this program. While conducting a more thorough analysis of what was donated to us and when, it was discovered that we made an error in our initial reporting, which has unfortunately affected the accuracy of all subsequent reports as well.

What Happened?

Our first report, published in May of 2014, combined the amounts donated to the community slider and the amounts disbursed to the community during the previous four financial quarters. In that report we listed the amount donated from April 2013 to June 2013 as a total of $34,353.63. However, when looking over all of the quarterly donations going back to the start of the program, we realized that this amount actually covered donations made from April 2013 all the way to October 2013.

This means that the figure contains both the amount donated during the Apr-Jun quarter and a duplicate of the amounts listed as donated for the Jul-Sep quarter and part of the Oct-Dec quarter. The actual amount donated during just the Apr-Jun 2013 quarter was $15,726.72. As a result of this, and the fact that it affected the carry-over balances for all subsequent reports, I have gone back and corrected all of them to reflect the correct figures.

Now for the questions:

Where are the updated reports?

The reports have not moved; you can still access them from the previously published URLs, and they are also listed on a new Reports page on the community website. The original report data has been preserved in a copy which is linked to at the top of each revised report.

Where did the money go?

No money has been lost or taken away from the program; this change is only a correction to the actual state of things. We had originally overstated the amount that was donated, due to an error when reading the raw donation data at the time the first report was written.

How could a mistake like this happen?

The information we get is a summary of a summary of the raw data. At some point in the process the wrong number was put in the wrong place. All of these reports are manually written and verified, which often catches errors such as this, but in the very first report this error was missed.

Are these numbers trustworthy?

I understand that a reduction in the balance number, in conjunction with questions being raised about the operation of the program, will lead some people to question the honesty of this change. But the fact remains that we were asked to investigate this, we did find a discrepancy, and correcting it publicly is the right thing for us to do, regardless of how it may look.

Is the community funding program in trouble?

Absolutely not. Even with this correction, there has been more money donated to the community slider than we have been able to use. There’s still a lot more good that can be done; if you think you have a good use for some of it, please fill out a request.

Read more
Daniel Holbach

Next week we are going to have another Ubuntu Online Summit (5-7 May 2015). This is (among many other things) a great time for you to get involved with, learn about and help shape Ubuntu Snappy.

As I said in my last blog post I’m very impressed to see the general level of interest in Ubuntu Snappy given how new it is. It’ll be great to see who is joining the sessions and who is going to get involved.

For those of you who are new to it: Ubuntu Online Summit is an open event where we’ll plan the next Ubuntu release in hangouts and on IRC. You can

  • tune in
  • ask questions
  • bring up ideas
  • get to know the team
  • help out :-)

This is the preliminary schedule. Sessions might still move around a bit, but be sure to register for the event and subscribe to the blueprint/session – that way you are going to be notified of ongoing work and discussion.

Tuesday, 5th May 2015

Wednesday, 6th May 2015

Thursday, 7th May 2015

Please note that we are likely going to add more sessions, so you should definitely keep your eyes open and check the schedule every now and then.

I’m looking forward to seeing you all and seeing us shape what Snappy is going to be! See you next week!

Read more