Canonical Voices

Posts tagged with 'canonical'

Ben Howard

With Ubuntu 12.04.2, the kernel team introduced the idea of the "hardware enablement kernel" (HWE), originally intended to support new hardware for bare-metal servers and desktops. In fact, the documentation indicates that HWE images are not suitable for Virtual or Cloud Computing environments. The thought was that cloud and virtual environments provide stable hardware and that the newer kernel features would not be needed.

Time has proven this assumption painfully wrong. Take, for example, the need for drivers in virtual environments. Several of the Cloud providers that we have engaged with have requested the use of the HWE kernel by default. On GCE, the HWE kernels provide support for their NVMe disks and multiqueue NICs. Azure has benefited from having an updated Hyper-V driver stack, resulting in better performance. When we engaged with VMware Air, the 12.04 kernel lacked the necessary drivers.

Perhaps more germane to our Cloud users is that containers rely on newer kernel features: 12.04 users need the HWE kernel in order to make use of Docker, and the new Ubuntu Fan project will be enabled for 14.04 via the HWE-V kernel in Ubuntu 14.04.3. If you use Ubuntu as your container host, you will likely want to consider an HWE kernel.

And with that there has been a steady chorus of people requesting that we provide HWE image builds for AWS. The problem has never been the base builds; building the base bits is fairly easy. The hard part is that by adding these builds, each daily and release build goes from 96 images for AWS to 288 (needless to say, that is quite a problem). Over the last few weeks -- largely in my spare time -- I've been working out what it would take to deliver HWE images for AWS.

I am happy to announce that as of today, we are now building HWE-U (3.16) and HWE-V (3.19) Ubuntu 14.04 images for AWS. To be clear, we are not making any behavioral changes to the standard Ubuntu 14.04 images. Unless users opt into using an HWE image on AWS, they will continue to get the 3.13 kernel. However, for those who want newer kernels, they now have the choice.
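
For those who would rather switch an existing 14.04 instance than launch a new image, the HWE kernels can also be installed from the archive. A minimal sketch, assuming the standard LTS-enablement meta-package names:

$ sudo apt-get update
## HWE-U (3.16):
$ sudo apt-get install linux-generic-lts-utopic
## or HWE-V (3.19):
$ sudo apt-get install linux-generic-lts-vivid
$ sudo reboot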

For the time being, only amd64 and i386 builds are being published. Over the next few weeks, we expect the HWE images to reach full feature parity, including release promotions and indexing. And I fully expect that the HWE-V version of 14.04 will include our recent Fan project once the SRUs complete.

Check them out at http://cloud-images.ubuntu.com/trusty/current/hwe-u and http://cloud-images.ubuntu.com/trusty/current/hwe-v .

As always, feedback is welcome.

Read more
Ben Howard

[UPDATE] The Image IDs have been updated with the latest builds, which now include Docker 1.6.2, the latest LXD and, of course, the Ubuntu Fan driver.

This week, Dustin Kirkland announced the Ubuntu Fan Project.  To steal from the description, "The Fan is not a software-defined network, and relies on neither distributed databases nor consensus protocols.  Rather, routes are calculated deterministically and traffic carries no additional overhead beyond routine IP tunneling.  Canonical engineers have already demonstrated The Fan operating at 5Gbps between two Docker containers on separate hosts."

My team at Canonical is responsible for the production of these images. Once the official SRUs land, I anticipate that we will publish an official stream over at cloud-images.ubuntu.com. But until then, check back here for images and updates. As always, if you have feedback, please hop into #server on Freenode or send email.

GCE Images

Images for GCE have been published to the "ubuntu-os-cloud-devel" project.

The Images are:
  • daily-ubuntu-docker-lxd-1404-trusty-v20150620
  • daily-ubuntu-docker-lxd-1504-vivid-v20150621
To launch an instance, you might run:
$ gcloud compute instances create \
    --image-project ubuntu-os-cloud-devel \
    --image <IMAGE> <NAME>
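
For example, to launch the trusty image listed above (the instance name "fan-demo" here is arbitrary):

$ gcloud compute instances create fan-demo \
    --image-project ubuntu-os-cloud-devel \
    --image daily-ubuntu-docker-lxd-1404-trusty-v20150620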

You need to make sure that IPIP traffic is enabled:
$ gcloud compute firewall-rules create fan2 --allow 4 --source-ranges 10.0.0.0/8

Amazon AWS Images

The AWS images are HVM-only, amd64 builds.


Version     Region           HVM-SSD        HVM-Instance
14.04-LTS   eu-central-1     ami-7e94ac63   ami-8e93ab93
            sa-east-1        ami-f943c1e4   ami-e742c0fa
            ap-northeast-1   ami-543c9b54   ami-b4298eb4
            eu-west-1        ami-4ae2a73d   ami-48e7a23f
            us-west-1        ami-fbd126bf   ami-6bd3242f
            us-west-2        ami-63585c53   ami-875357b7
            ap-southeast-2   ami-7de69c47   ami-1de19b27
            ap-southeast-1   ami-aca4a0fe   ami-2a9b9f78
            us-east-1        ami-95877efe   ami-e58b728e
15.04       eu-central-1     ami-9a94ac87   ami-ae93abb3
            sa-east-1        ami-1340c20e   ami-0743c11a
            ap-northeast-1   ami-9c3c9b9c   ami-42379042
            eu-west-1        ami-a2e2a7d5   ami-e4e7a293
            us-west-1        ami-4bd0270f   ami-1dd32459
            us-west-2        ami-f9585cc9   ami-1dd32459
            ap-southeast-2   ami-5de69c67   ami-01e19b3b
            ap-southeast-1   ami-74a5a126   ami-c89b9f9a
            us-east-1        ami-29f90042   ami-8d8a73e6

It is important to note that these images are only usable inside of a VPC. Newer AWS users are in VPC by default, but older users may need to create and update their VPC. For example:
$ ec2-authorize --cidr <CIDR_RANGE> --protocol 4 <SECURITY_GROUP>
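
For example, to mirror the GCE firewall rule above and allow IPIP (protocol 4) traffic from the 10.0.0.0/8 range (the security group name here is just a placeholder):

$ ec2-authorize --cidr 10.0.0.0/8 --protocol 4 my-fan-group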


Read more
Dustin Kirkland

652 Linux containers running on a Laptop?  Are you kidding me???

A couple of weeks ago, at the OpenStack Summit in Vancouver, Canonical released the results of some scalability testing of Linux containers (LXC) managed by LXD.

Ryan Harper and James Page presented their results -- some 536 Linux containers on a very modest little Intel server (16GB of RAM), versus 37 KVM virtual machines.

Ryan has published the code he used for the benchmarking, and I've used it to reproduce the test on my dev laptop (Thinkpad x230, 16GB of RAM, Intel i7-3520M).

I managed to pack a whopping 652 Ubuntu 14.04 LTS (Trusty) containers on my Ubuntu 15.04 (Vivid) laptop!


The system load peaked at 1056 (!!!), but I was using merely 56% of 15.4GB of system memory.  Amazingly, my Unity desktop and Byobu command line were still perfectly responsive, as were the containers that I ssh'd into.  (Aside: makes me wonder if the Linux system load average is accounting for container processes correctly...)


Check out the process tree for a few hundred system containers here!

As for KVM, I managed to launch 31 virtual machines without KSM enabled, and 65 virtual machines with KSM enabled and working hard.  So that puts somewhere between 10x and 21x as many containers as virtual machines on the same laptop.
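
If you want to reproduce the KSM half of that comparison, kernel samepage merging is toggled through the standard sysfs knobs:

## Enable kernel samepage merging
$ echo 1 | sudo tee /sys/kernel/mm/ksm/run
## Watch how many pages are currently being shared
$ cat /sys/kernel/mm/ksm/pages_sharing
## Disable it again
$ echo 0 | sudo tee /sys/kernel/mm/ksm/run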

You can now repeat these tests, if you like.  Please share your results with #LXD on Google+ or Twitter!

I'd love to see someone try this in AWS, anywhere from an m3.small to an r3.8xlarge, and share your results ;-)

Density test instructions

## Install lxd
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-git-master
$ sudo apt-get update
$ sudo apt-get install -y lxd bzr
$ cd /tmp
## At this point, it's a good idea to logout/login or reboot
## for your new group permissions to get applied
## Grab the tests, disable the tools download
$ bzr branch lp:~raharper/+junk/density-check
$ cd density-check
$ mkdir lxd_tools
## Periodically squeeze your cache
$ sudo bash -x -c 'while true; do sleep 30; \
echo 3 | sudo tee /proc/sys/vm/drop_caches; \
free; done' &
## Run the LXD test
$ ./density-check-lxd --limit=mem:512m --load=idle release=trusty arch=amd64
## Run the KVM test
$ ./density-check-kvm --limit=mem:512m --load=idle release=trusty arch=amd64

As for the speed-of-launch test, I'll cover that in a follow-up post!

Can you contain your excitement?

Cheers!
Dustin

Read more
Michael Hall

Ubuntu is sponsoring the South East Linux Fest this year in Charlotte, North Carolina, and as part of that event we will have a room to use all day Friday, June 12, for an UbuCon. UbuCon is a mini-conference with presentations centered around Ubuntu the project and its community.

I’m recruiting speakers to fill the last three hour-long slots. If anybody is willing and able to attend the conference and wants to give a presentation to a room full of enthusiastic Ubuntu users, please email me at mhall119@ubuntu.com. Topics can be anything Ubuntu related: design, development, client, cloud, using it, community, etc.

Read more
Dustin Kirkland


In November of 2006, Canonical held an "all hands" event, which included a team building exercise.  Several teams recorded "Ubuntu commercials".

On one of the teams, Mark "Borat" Shuttleworth amusingly proffered,
"Ubuntu make wonderful things possible, for example, Linux appliance, with Ubuntu preinstalled, we call this -- the fridge!"


Nine years later, that tongue-in-cheek parody is no longer a joke.  It's a "cold" hard reality!

GE Appliances, FirstBuild, and Ubuntu announced a collaboration around a smart refrigerator, available today for $749, running Snappy Ubuntu Core on a Raspberry Pi 2, with multiple USB ports and available in-fridge accessories.  We had one in our booth at IoT World in San Francisco this week!

While the fridge prediction is indeed pretty amazing, the line that strikes me most is actually "Ubuntu make(s) wonderful things possible!"

With emphasis on "things".  As in, "Internet of Things."  The possibilities are absolutely endless in this brave new world of Snappy Ubuntu.  And that is indeed wonderful.

So what are you making with Ubuntu?!?

:-Dustin

Read more
Michael Hall

Ubuntu has been talking a lot about convergence lately; it’s something that we believe is going to be revolutionary, and we want to be at the forefront of it. We love the idea of it, but so far we haven’t really had much experience with the reality of it.

I got my first taste of that reality two weeks ago, while at a work sprint in London. While Canonical has an office in London, it had other teams sprinting there, so the Desktop sprint I was at was instead held at a hotel. We planned to visit the office one day that week; it would be my first visit to any Canonical office, as well as my first time working at an actual office in several years. However, we also planned to meet up with the UK LoCo for release drinks that evening. This meant that we had to decide between leaving our laptops at the hotel, thus not having them to work on at the office, or taking them with us, but having to carry them around the pub all evening.

I chose to leave my laptop behind, but I did take my phone (Nexus 4 running Ubuntu) with me. After getting a quick tour of the office, I found a vacant seat at a desk, and pulled out my phone. Most of my day job can be done with the apps on my phone: I have email, I have a browser, I have a terminal with ssh, I can respond to our community everywhere they are active.

I spent the next couple of hours doing work, actual work, on my phone. The only problem I had was that I was doing it on a small screen, and I was burning through my battery. At one point I looked up and realized that the vacant desk I was sitting at was equipped with a laptop docking station. It also had a USB hub and an HDMI monitor cable available. If I had a SlimPort cable for my phone, I might have been able to plug it into this docking station and both power my phone and get a bigger screen to work with.

If I could have done that, I would have achieved the full reality of convergence, and it would have been just as if I had brought my laptop with me. Only with this, I was able to simply slide it into my pocket when it was time to leave for drinks. It was tantalizingly close; I got a little taste of what it’s going to be like, and now I’m craving more of it.

Read more
Michael Hall

A couple of years ago the Ubuntu download page introduced a way for users to make a financial contribution to the ongoing development of Ubuntu and its surrounding projects and community. Later a program was established within Canonical to make the money donated specifically for supporting the community available directly to members of the community, who would use it to benefit the wider project.

During the last month, at the request of members of the Ubuntu community and the Community Council, we have undertaken a review of this program. While conducting a more thorough analysis of what was donated to us and when, it was discovered that we made an error in our initial reporting, which has unfortunately affected the accuracy of all subsequent reports as well.

What Happened?

Our first report, published in May of 2014, combined the amounts donated to the community slider and the amounts dispersed to the community during the previous four financial quarters. In that report we listed the amount donated from April 2013 to June 2013 as being a total of $34,353.63. However, when looking over all of the quarterly donations going back to the start of the program, we realized that this amount actually covered donations made from April 2013 all the way to October 2013.

This means that the figure contains not only the amount donated during the Apr-Jun quarter, but also duplicates the amounts listed as donated for the Jul-Sep quarter and part of the Oct-Dec quarter. The actual amount donated during just the Apr-Jun 2013 quarter was $15,726.72. As a result of this, and the fact that it affected the carry-over balances for all subsequent reports, I have gone back and corrected all of these to reflect the correct figures.

Now for the questions:

Where are the updated reports?

The reports have not moved, you can still access them from the previously published URLs, and they are also listed on a new Reports page on the community website. The original report data has been preserved in a copy which is linked to at the top of each revised report.

Where did the money go?

No money has been lost or taken away from the program; this change is only a correction to the actual state of things. We had originally overstated the amount that was donated, due to an error when reading the raw donation data at the time the first report was written.

How could a mistake like this happen?

The information we get is a summary of a summary of the raw data. At some point in the process the wrong number was put in the wrong place. All of these reports are manually written and verified, which often catches errors such as this, but in the very first report this error was missed.

Are these numbers trustworthy?

I understand that a reduction in the balance number, in conjunction with questions being raised about the operation of the program, will lead some people to question the honesty of this change. But the fact remains that we were asked to investigate this, we did find a discrepancy, and correcting it publicly is the right thing for us to do, regardless of how it may look.

Is the community funding program in trouble?

Absolutely not. Even with this correction, there has been more money donated to the community slider than we have been able to use. There’s still a lot more good that can be done; if you think you have a good use for some of it, please fill out a request.

Read more
Ben Howard

I am pleased to announce initial Vagrant images for Snappy Ubuntu Core [1, 2]. These images are bit-for-bit the same as the KVM images, but have a Cloud-init configuration that allows Snappy to work within the Vagrant workflow.

Vagrant enables a cross-platform developer experience on MacOS, Windows or Linux [3].

Note: due to the way that Snappy works, shared file systems within Vagrant are not possible at this time. We are working on getting shared file system support enabled, but it will take us a little while to get going.

If you want to use the Vagrant packaged in the Ubuntu archives, run the following in a terminal:

  • sudo apt-get -y install vagrant
  • cd <WORKSPACE>
  • vagrant init http://goo.gl/DO7a9W 
  • vagrant up
  • vagrant ssh
If you use Vagrant from [4] (i.e., on Windows or Mac, or to get the latest Vagrant), then you can run:
  • vagrant init ubuntu/ubuntu-15.04-snappy-core-edge-amd64
  • vagrant up
  • vagrant ssh

These images are a work in progress. If you encounter any issues, please report them to "snappy-devel@lists.ubuntu.com" or ping me (utlemming) on Launchpad.net.

---

[1] http://cloud-images.ubuntu.com/snappy/15.04/core/edge/current/core-edge-amd64-vagrant.box
[2] https://atlas.hashicorp.com/ubuntu/boxes/ubuntu-15.04-snappy-core-edge-amd64
[3] https://docs.vagrantup.com/v2/why-vagrant/index.html
[4] https://www.vagrantup.com/downloads.html

Read more
Michael Hall

Way back at the dawn of the open source era, Richard Stallman wrote the Four Freedoms, which defined what it meant for software to be free. These are:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute copies so you can help your neighbor.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

For nearly three decades now they have been the foundation for our movement, the motivation for many of us, and the guiding principle for the decisions we make about what software to use.

But outside of our little corner of humanity, these freedoms are not seen as particularly important. In fact, the vast majority of people are not only happy to use software that violates them, but will often prefer to do so. I don’t even feel the need to provide supporting evidence for this claim, as I’m sure all of you have been on one side or the other of a losing argument about why using open source software is important.

The problem, it seems, is that people who don’t plan on exercising any of these freedoms, whether from lack of interest or lack of ability, don’t place the same value on them as those of us who do. That’s why software developers are more likely than non-developers to prefer open source: they might actually use those freedoms at some point.

But the people who don’t see a personal value in free software are missing a larger, more important freedom. One implied by the first four, though not specifically stated. A fifth freedom if you will, which I define as:

  • Freedom 4: The freedom to have the program improved by a person or persons of your choosing, and make that improvement available back to you and to the public.

Because even though the vast majority of proprietary software users will never be interested in studying or changing the source of the software they use, they will likely all, at some point in time, ask someone else if they can fix it. Who among us hasn’t had a friend or relative ask us to fix their Windows computer? And the true answer is that, without having the four freedoms (and implied fifth), only Microsoft can truly “fix” their OS, the rest of us can only try and undo the damage that’s been done.

So the next time you’re trying to convince someone of the importance of free and open software, and they chime in with the fact that they don’t want to change it, try pointing out that by using proprietary code they’re limiting their options for getting it fixed when it inevitably breaks.

Read more
Kyle Nitzsche

AptBrowser QML/C++ App

I've made a QML/C++ app called aptBrowser as an exercise in:

  • QML declarative GUI that drives
  • C++ backend threads

That is, the GUI provides five buttons that kick off C++ threads that do the backend work and provide the results back to QML.

So the GUI is always responsive (non-blocking).

What aptbrowser does

The user enters a Debian package name (and is told if it is not valid) and taps one of five buttons that do the following:
  • Show the packages this package depends on ("Depends")
  • Show the packages this package recommends ("Recommends")
  • Show the packages that depend on this package ("Parent Depends")
  • Show the packages that recommend this package ("Parent Recommends")
  • Show the apt-cache policy for this package ("Policy")
The data for all but the last ("Policy") are returned as flickable lists of buttons. When you click any one, it becomes the current package, and the GUI and displayed data adjust appropriately.

When you click any of the buttons, the orange indicator square to its left turns purple and starts spinning; when the C++ backend returns data, its indicator turns orange again and stops spinning.

Note that the Parent Depends and Parent Recommends actions can take a long time. This has nothing to do with this app. This is simply how long it takes to first get a package's parents and then, for each, find its type of relationship (depends or recommends) to our package of interest. Querying the apt cache is time consuming.
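
For the curious, each button corresponds roughly to a query you can run yourself with apt-cache. This sketch (using "bash" as the package of interest) is purely illustrative of the queries, not necessarily how the app's backend is implemented:

$ apt-cache depends bash | grep ' Depends:'      ## Depends
$ apt-cache depends bash | grep ' Recommends:'   ## Recommends
$ apt-cache rdepends bash                        ## parents (relationship type still unknown)
$ apt-cache policy bash                          ## Policy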

Where is aptbrowser

Store

Because the app queries the apt cache, it must run unconfined at the moment, and therefore it cannot go into the store.

The click

This is an armhf click package for framework ubuntu-sdk.14.10 (compiled against vivid).

    The source 


    • bzr branch lp:aptbrowser




    Read more
    Michael Hall

    A couple of weeks ago I had the opportunity to attend the thirteenth Southern California Linux Expo, more commonly known as SCaLE 13x. It was my first time back in five years, since I attended 9x, and my first time as a speaker. I had a blast at SCaLE, and a wonderful time with UbuCon. If you couldn’t make it this year, it should definitely be on your list of shows to attend in 2016.

    UbuCon

    Thanks to the efforts of Richard Gaskin, we had a room all day Friday to hold an UbuCon. For those of you who haven’t attended an UbuCon before, it’s basically a series of presentations by members of the Ubuntu community on how to use it, contribute to it, or become involved in the community around it. SCaLE was one of the pioneering host conferences for these, and this year they provided a double-sized room for us to use, which we still filled to capacity.

    I was given the chance to give not one but two talks during UbuCon, one on community and one on the Ubuntu phone. We also had presentations from my former manager and good friend Jono Bacon, current coworkers Jorge Castro and Marco Ceppi, and inspirational community members Philip Ballew and Richard Gaskin.

    I’d like thank Richard for putting this all together, and for taking such good care of those of us speaking (he made sure we always had mints and water). UbuCon was a huge success because of the amount of time and work he put into it. Thanks also to Canonical for providing us, on rather short notice, a box full of Ubuntu t-shirts to give away. And of course thanks to the SCaLE staff and organizers for providing us the room and all of the A/V equipment in it to use.

    The room was recorded all day, so each of these sessions can be watched now on youtube. My own talks are at 4:00:00 and 5:00:00.

    Ubuntu Booth

    In addition to UbuCon, we also had an Ubuntu booth in the SCaLE expo hall, which was registered and operated by members of the Ubuntu California LoCo team. These guys were amazing, they ran the booth all day over all three days, managed the whole setup and tear down, and did an excellent job talking to everybody who came by and explaining everything from Ubuntu’s cloud offerings, to desktops and even showing off Ubuntu phones.

    Our booth wouldn’t have happened without the efforts of Luis Caballero, Matt Mootz, Jose Antonio Rey, Nathan Haines, Ian Santopietro, George Mulak, and Daniel Gimpelevich, so thank you all so much! We also had great support from Carl Richell at System76, who let us borrow 3 of their incredible laptops running Ubuntu to show off our desktop; Canonical, who loaned us 2 Nexus 4 phones running Ubuntu as well as one of the Orange Box cloud demonstration boxes; and Michael Newsham from TierraTek, who sent us a fanless PC and NAS, which we used to display a constantly-repeating video (from Canonical’s marketing team) showing the Ubuntu phone’s Scopes on a television monitor provided to us by Eäär Oden at Video Resources. Oh, and of course Stuart Langridge, who gave up his personal, first-edition Bq Ubuntu phone for the entire weekend so we could show it off at the booth.

    Like Ubuntu itself, this booth was not the product of just one organization’s work, but the combination of efforts and resources from many different, but connected, individuals and groups. We are what we are, because of who we all are. So thank you all for being a part of making this booth amazing.

    Read more
    Ben Howard

    Back when we announced that the Ubuntu 14.04 LTS Cloud Images on Azure were using the Hardware Enablement Kernel (HWE), the immediate feedback was "what about 12.04?"


    Well, the next Ubuntu 12.04 Cloud Images on Microsoft Azure will start using the HWE kernel. We have been working with Microsoft to validate using the 3.13 kernel on 12.04 and are pleased with the results and the stability. We spent a lot of time thinking about and testing this change, and in consultation with the Ubuntu Kernel, Foundations and Cloud Image teams, we feel this change will give the best experience on Microsoft Azure.

    By default, the HWE kernel is used on official images for Ubuntu 12.04 on VMware Air, Google Compute Engine, and now Microsoft Azure. 

    Any 12.04 Image published to Azure with a serial later than 20140225 will default to the new HWE kernel. 

    Users who want to upgrade their existing instance can simply run:
    • sudo apt-get update
    • sudo apt-get install linux-image-hwe-generic linux-cloud-tools-generic-lts-trusty
    • reboot
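
    After rebooting, you can confirm that the instance is running the new kernel:

    • uname -r

    which should now report a 3.13 (lts-trusty) kernel rather than the 12.04 default.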

    Read more
    Dustin Kirkland

    Gratuitous picture of my pets, the day after we rescued them
    The PetName libraries (Shell, Python, Golang) can generate infinite combinations of human readable UUIDs


    Some Background

    In March 2014, when I first started looking after MAAS as a product manager, I raised a minor feature request in Bug #1287224, noting that the random, 5-character hostnames that MAAS generates are not ideal. You can't read them or pronounce them or remember them easily. I'm talking about hostnames like: sldna, xwknd, hwrdz or wkrpb. From that perspective, they're not very friendly. Certainly not very Ubuntu.

    We're not alone, in that respect. Amazon generates forgettable instance names like i-15a4417c, as do most virtual machine and container systems.


    Meanwhile, there is a reasonably well-known concept -- Zooko's Triangle -- which says that names should be:
    • Human-meaningful: The quality of meaningfulness and memorability to the users of the naming system. Domain names and nicknaming are naming systems that are highly memorable
    • Decentralized: The lack of a centralized authority for determining the meaning of a name. Instead, measures such as a Web of trust are used.
    • Secure: The quality that there is one, unique and specific entity to which the name maps. For instance, domain names are unique because there is just one party able to prove that they are the owner of each domain name.
    And, of course we know what XKCD has to say on a somewhat similar matter :-)

    So I proposed a few different ways of automatically generating those names, modeled mostly after Ubuntu's own beloved code-naming scheme -- Adjective Animal. To get the number of combinations high enough to model any reasonable MAAS user, though, we used Adjective Noun instead of Adjective Animal.

    I collected an Adjective list and a Noun list from a blog run by moms, in the interest of having a nice, soft, friendly, non-offensive source of words.

    For the most part, the feature served its purpose. We now get memorable, pronounceable names. However, we get a few oddballs in there from time to time. Most are humorous. But some combinations would prove, in fact, to be inappropriate, or perhaps even offensive to some people.

    Accepting that, I started thinking about other solutions.

    In the meantime, I realized that Docker had recently launched something similar, their NamesGenerator, which pairs an Adjective with a Famous Scientist's Last Name (except they have explicitly blacklisted boring_wozniak, because "Steve Wozniak is not boring", of course!).


    Similarly, GitHub itself now also "suggests" random repo names.



    I liked one part of the Docker approach better -- the use of proper names, rather than random nouns.

    On the other hand, their approach is hard-coded into the Docker Golang source itself, and not easily usable or portable elsewhere.

    Moreover, there are only a few dozen Adjectives (57) and Names (76), yielding only about 4K combinations (4332) -- which is not nearly enough for MAAS's purposes, where we're shooting for 16M+ with minimal collisions (ie, covering a Class A network).

    Introducing the PetName Libraries

    I decided to scrap the Nouns list, and instead build a Names list. I started with Last Names (like Docker), but instead focused on First Names, and built a list of about 6,000 names from public census data.  I also built a new list of nearly 38,000 Adjectives.

    The combination actually works pretty well! While smelly-Susan isn't particularly charming, it's certainly not an ad hominem attack targeted at any particular Susan! That 6,000 x 38,000 gives us well over 228 million unique combinations!

    Moreover, I also thought about how I could actually make it infinitely extensible... The simple rules of English allow Adjectives to modify Nouns, while Adverbs can recursively modify other Adverbs or Adjectives.   How convenient!

    So I built a word list of Adverbs (13,000) as well, and added support for specifying the "number" of words in a PetName.
    1. If you want 1, you get a random Name 
    2. If you want 2, you get a random Adjective followed by a Name 
    3. If you want 3 or more, you get N-2 Adverbs, an Adjective and a Name 
    Oh, and the separator is now optional, and can be any character or string, with a default of a hyphen, "-".
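
    A rough sketch of that word-selection logic fits in a few lines of shell. This assumes hypothetical word-list files (one word per line) named adverbs.txt, adjectives.txt and names.txt; the real petname packages ship their own curated lists:

    ## emit N-2 adverbs, then an adjective, then a name
    WORDS=${1:-2}        ## number of words requested
    SEP=${2:--}          ## separator, defaults to a hyphen
    out=""
    i=2
    while [ "$i" -lt "$WORDS" ]; do
        out="$out$(shuf -n1 adverbs.txt)$SEP"
        i=$((i + 1))
    done
    [ "$WORDS" -ge 2 ] && out="$out$(shuf -n1 adjectives.txt)$SEP"
    echo "$out$(shuf -n1 names.txt)"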

    In fact:
    • 2 words will generate over 228 million unique combinations, over 2^27 combinations
    • 3 words will generate over 2.8 trillion unique combinations, over 2^41 combinations (more than 32-bit space)
    • 4 words can generate over 2^55 combinations
    • 5 words can generate over 2^68 combinations (more than 64-bit space)
    Interestingly, you need 10 words to cover 128-bit space!  So it's

    unstoutly-clashingly-assentingly-overimpressibly-nonpermissibly-unfluently-chimerically-frolicly-irrational-wonda

    versus

    b9643037-4a79-412c-b7fc-80baa7233a31
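
    A quick sanity check of that 128-bit claim, using the word-list sizes above (13,000 adverbs, 38,000 adjectives, 6,000 names):

    $ python3 -c 'from math import log2; print(8*log2(13000) + log2(38000) + log2(6000))'
    137.09...

    Eight adverbs, an adjective and a name give about 137 bits, while 9 words only reach about 123 bits -- hence 10 words.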

    Shell

    So once the algorithm was spec'd out, I built and packaged a simple shell utility and text word lists, called petname. The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

    $ sudo apt-add-repository ppa:petname/ppa
    $ sudo apt-get update

    And:
    $ sudo apt-get install petname
    $ petname
    itchy-Marvin
    $ petname -w 3
    listlessly-easygoing-Radia
    $ petname -s ":" -w 5
    onwardly:unflinchingly:debonairly:vibrant:Chandler

    Python

    That's only really useful from the command line, though. In MAAS, we'd want this in a native Python library. So it was really easy to create python-petname. The packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

    $ sudo apt-add-repository ppa:python-petname/ppa
    $ sudo apt-get update

    And:
    $ sudo apt-get install python-petname
    $ python-petname
    flaky-Megan
    $ python-petname -w 4
    mercifully-grimly-fruitful-Salma
    $ python-petname -s "" -w 2
    filthyLaurel

    Using it in your own Python code looks as simple as this:

    $ python
    >>> import petname
    >>> foo = petname.Generate(3, "_")
    >>> print(foo)
    boomingly_tangible_Mikayla

    Golang


    In the way that NamesGenerator is useful to Docker, I thought a Golang library might be useful for us in LXD (and perhaps even usable by Docker or others too), so I created golang-petname. Of course you can use "go get" to fetch the Golang package:

    $ export GOPATH=$HOME/go
    $ mkdir -p $GOPATH
    $ export PATH=$PATH:$GOPATH/bin
    $ go get github.com/dustinkirkland/golang-petname

    And also, the packages are already in Ubuntu 15.04 (Vivid). On any other version of Ubuntu, you can use the PPA:

    $ sudo apt-add-repository ppa:golang-petname/ppa
    $ sudo apt-get update

    And:
    $ sudo apt-get install golang-petname
    $ golang-petname
    quarrelsome-Cullen
    $ golang-petname -words=1
    Vivian
    $ golang-petname -separator="|" -words=10
    snobbily|oracularly|contemptuously|discordantly|lachrymosely|afterwards|coquettishly|politely|elaborate|Samir

    Using it in your own Golang code looks as simple as this:

    package main

    import (
        "flag"
        "fmt"
        "math/rand"
        "time"

        "github.com/dustinkirkland/golang-petname"
    )

    func main() {
        flag.Parse()
        // Seed the RNG so each run produces a different name
        rand.Seed(time.Now().UnixNano())
        // Two words with no separator, eg "quarrelsomeCullen"
        fmt.Println(petname.Generate(2, ""))
    }
    Gratuitous picture of my pets, 7 years later.
    Cheers,
    happily-hacking-Dustin

    Read more
    jdstrand

    Most of this has been discussed on mailing lists, blog entries, etc, while developing Ubuntu Touch, but I wanted to write up something that ties together these conversations for Snappy. This will provide background for the conversations surrounding hardware access for snaps that will be happening soon on the snappy-devel mailing list.

    Background

    Ubuntu Touch has several goals that all apply to Snappy:

    • we want system-image upgrades
    • we want to replace the distro archive model with an app store model for Snappy systems
    • we want developers to be able to get their apps to users quickly
    • we want a dependable application lifecycle
    • we want the system to be easy to understand and to develop on
    • we want the system to be secure
    • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

    Snappy adds a few things to the above (that pertain to this conversation):

    • we want the system to be bulletproof (transactional updates with rollbacks)
    • we want the system to be easy to use for system builders
    • we want the system to be easy to use and understand for admins

    Let’s look at what all these mean more closely.

    system-image upgrades

    • we want system-image upgrades
    • we want the system to be bulletproof (transactional updates with rollbacks)

    We want system-image upgrades so updates are fast, reliable and so people (users, admins, snappy developers, system builders, etc) always know what they have and can depend on it being there. In addition, if an upgrade goes bad, we want a mechanism to be able to rollback the system to a known good state. In order to achieve this, apps need to work within the system and live in their own area and not modify the system in unpredictable ways. The Snappy FHS is designed for this and the security policy enforces that apps follow it. This protects us from malware, sure, but at least as importantly, it protects us from programming errors and well-intentioned clever people who might accidentally break the Snappy promise.

    app store

    • we want to replace the distro archive model with an app store model
    • we want developers to be able to get their apps to users quickly

    Ubuntu is a fantastic distribution and we have a wonderfully rich archive of software that is refreshed on a cadence. However, the traditional distro model has a number of drawbacks, and arguably the most important one is that software developers have an extremely high barrier to overcome to get their software into users' hands on their own time-frame. The app store model greatly helps developers and users desiring new software because it gives developers the freedom and ability to get their software out there quickly and easily, which is why Ubuntu Touch is doing this now.

    In order to enable developers in the Ubuntu app store, we've developed a system where a developer can upload software and have it available to users in seconds with no human review, intervention or snags. We also want users to be able to trust what's in Ubuntu's store, so we've created store policies that understand the Ubuntu snappy system such that apps do not require any manual review so long as the developer follows the rules. However, the Ubuntu Core system itself is completely flexible -- people can install apps that are tightly confined, loosely confined, unconfined, whatever (more on this, below). In this manner, people can develop snaps for their own needs and distribute them however they want.

    It is the Ubuntu store policy that dictates what is in the store. The existing store policy is in place to improve the situation; it is based on our experiences with the traditional distro model and attempts to build an app store-like experience on top of it (eg, MyApps).

    application lifecycle

    • dependable application lifecycle

    This has not been discussed as much with Snappy for Ubuntu Core, but Touch needs to have a good application lifecycle model such that apps cannot run unconstrained and unpredictably in the background. In other words, we want to avoid problems with battery drain and slow systems on Touch. I think we’ve done a good job so far on Touch, and this story is continuing to evolve.

    (I mention application lifecycle in this conversation for completeness and because application lifecycle and security work together via the app’s application id)

    security

    • we want the system to be secure
    • we want an app trust model where users are in control and express that control in tasteful, easy to understand ways

    Everyone wants a system that they trust and that is secure, and security is one of the core tenets of Snappy systems. For Ubuntu Touch, we've created a system that is secure, that is easy to use and understand by users, and that still honors relevant, meaningful Linux traditions. For Snappy, we'll be adding several additional security features (eg, seccomp, controlled abstract socket communication, firewalling, etc).

    Our security story and app store policies give us something that is between Apple and Google. We have a strong security story that has a number of similarities to Apple, but a lightweight store policy akin to Google Play. In addition to that, our trust model is that apps not needing manual review are untrusted by the OS and have limited access to the system. On Touch we use tasteful, contextual prompting so the user may trust the apps to do things beyond what the OS allows on its own (simple example, app needs access to location, user is prompted at the time of use if the app can access it, user answers and the decision is remembered next time).

    Snappy for Ubuntu Core is different not only because the UI supports a CLI, but also because we’ve defined a Snappy for Ubuntu Core user that is able to run the ‘snappy’ command as someone who is an admin, a system builder, a developer and/or someone otherwise knowledgeable enough to make a more informed trust decision. (This will come up again later, below)

    easy to use

    • we want the system to be easy to understand and to develop on
    • we want the system to be easy to use for system builders
    • we want the system to be easy to use and understand for admins

    We want a system that is easy to use and understand. It is key that developers are able to develop on it, system builders are able to get their work done, and admins can install and use the apps from the store.

    For Ubuntu Touch, we’ve made a system that is easy to understand and to develop on with a simple declarative permissions model. We’ll refine that for Snappy and make it easy to develop on too. Remember, the security policy is there not just so we can be ‘super secure’ but because it is what gives us the assurances needed for system upgrades, a safe app store and an altogether bulletproof system.

    As mentioned, the system we have designed is super flexible. Specifically, the underlying system supports:

    1. apps working wholly within the security policy (aka, ‘common’ security policy groups and templates)
    2. apps declaring specific exceptions to the security policy
    3. apps declaring to use restricted security policy
    4. apps declaring to run (effectively) unconfined
    5. apps shipping hand-crafted policy (that can be strict or lenient)

    (Keep in mind the Ubuntu App Store policy will auto-accept apps falling under ‘1’ and trigger manual review for the others)

    The above all works today (though it isn't always friendly -- we're working on that) and the developer is in control. As such, Snappy developers have a plethora of options and can create snaps with security policy for their needs. When the developer wants to ship the app and make it available to all Snappy users via the Ubuntu App Store, then the developer may choose to work within the system to have automated reviews or choose not to and manage the process via manual reviews/commercial relationship with Canonical.

    Moving forward

    The above works really well for Ubuntu Touch, but today there is too much friction with regard to hardware access. We will make this experience better without compromising on any of our goals. How do we put this all together, today, so people can get stuff done with snappy without sacrificing on our goals, making it harder on ourselves in the future or otherwise opening Pandora's box? We don't want to relax our security policy, because we can't make the bulletproof assurances we are striving for and it would be hard to tighten the security. We could also add some temporary security policy that adds only certain accesses (eg, serial devices) but, while useful, this is too inflexible. We also don't want to have apps declare the accesses themselves to automatically add the necessary security policy, because this (potentially) privileged access is then hidden from the Snappy for Ubuntu Core user.

    The answer is simple when we remember that the Snappy for Ubuntu Core user (ie, the one who is able to run the snappy command) is knowledgeable enough to make the trust decision for giving an app access to hardware. In other words, let the admin/developer/system builder be in control.

    immediate term

    The first thing we are going to do is unblock people and adjust snappy to give the snappy core user the ability to add specific device access to snap-specific security policy. In essence you’ll install a snap, then run a command to give the snap access to a particular device, then you’re done. This simple feature will unblock developers and snappy users immediately while still supporting our trust-model and goals fully. Plus it will be worth implementing since we will likely always want to support this for maximum flexibility and portability (since people can use traditional Linux APIs).
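
    As a sketch, that flow could look something like the following (the subcommand and device names are illustrative, not final; the actual interface is what will be discussed on the mailing list):

    $ sudo snappy install myapp
    $ sudo snappy hw-assign myapp /dev/ttyUSB0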

    The user experience for this will be discussed and refined on the mailing list in the coming days.

    short term

    After that, we’ll build on this and explore ways to make the developer and user experience better through integration with the OEM part and ways of interacting with the underlying system so that the user doesn’t have to necessarily know the device name to add, but can instead be given smart choices (this can have tie-ins to the web interface for snappy too). We’ll want to be thinking about hotpluggable devices as well.

    Since this all builds on the concept of the immediate term solution, it also supports our trust-model and goals fully and is relatively easy to implement.

    future

    Once we have the above in place, we should have a reasonable experience for snaps needing traditional device access. This will give us time to evaluate how people are accessing hardware and see if we can make things even better by using frameworks and/or a hardware abstraction layer. In this manner, snaps can program to an easy to use API and the system can mediate access to the underlying hardware via that API.


    Filed under: canonical, security, ubuntu, ubuntu-server, uncategorized

    Read more
    Ben Howard

    One of the perennial problems in the Cloud is knowing which image is the most current and where to find it. Some Clouds provide a nice GUI console, an API, or some combination. But what has been missing is a "dashboard" showing Ubuntu across multiple Clouds.


    Screenshot: https://cloud-images.ubuntu.com/locator
    In that light, I am pleased to announce that we have a new beta Cloud Image Finder. This page shows where official Ubuntu images are available. As with all betas, we have some kinks to work out, like gathering up links for our Cloud Partners (so clicking an Image ID launches an image). I envision that in the future this locator page will be the default landing page for our cloud images.



    The need for this page became painfully apparent yesterday as I was working through the fallout of the Ghost vulnerability (aka CVE-2015-0235). The Cloud Image team had spent a good amount of time pushing our images to AWS, Azure, GCE and Joyent, and then notifying our partners like Brightbox, DreamCompute, CloudSigma and VMware of new builds. I realized that we needed a single place for our users to just look and see where the builds are available. And so I hacked up the EC2 Locator page to display other clouds.

    Please note: this new page only shows stable releases. We push a lot of images and did not want to confuse things by showing betas, alphas, dailies or development builds. Rather, this page will only show images that have been put through the complete QA process and are ready for production workloads.

    This new locator page is backed by Simple Streams, which is our machine-formatted data service. Simple Streams provides a way of locating images uniformly across clouds. Essentially, our new Locator Page is just a viewer of the Simple Streams data.
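
    For example, a minimal sketch of pulling the same data from the command line, assuming the sstream-query tool from the simplestreams package and the standard release stream index:

    $ sudo apt-get install simplestreams
    $ sstream-query http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson release=trusty arch=amd64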

    Hopefully our users will find this new page useful. Feedback is always welcome. Please feel free to drop me a line (utlemming @ ubuntu dot com). 

    Read more
    Ben Howard

    A few years ago, when our fine friends on the kernel team introduced the idea of the "hardware enablement" (HWE) kernel, those of us in the Cloud world looked at it as a curiosity. We thought that, by and large, the HWE kernel would not be needed or wanted for virtual Cloud instances.

    And we were wrong.

    So wrong, in fact, that the HWE kernel has found its way into the Vagrant Cloud Images, VMware's vCHS, and Google's Compute Engine as the default kernel for the Certified Images. The main reason for these requests is that virtual hardware moves at a fairly quick pace. Unlike traditional hardware, virtual hardware can be fixed and patched at the speed that software can be deployed.

    The feedback in regards to Azure has been the same: users and Microsoft have asked for the HWE kernel consistently. Microsoft has validated that the HWE kernel (3.16) running Ubuntu 14.04 on Windows Azure passes their validation testing. In our testing, we have validated that the 3.16 kernel works quite well on Azure.

    For Azure users, the 3.16 HWE kernel brings SMB 2.1 copy-file support and updated LIS drivers.

    Therefore, starting with the latest Windows Azure image [1], all the Ubuntu 14.04 images will track the latest hardware enablement kernel. That means that all the goodness in Ubuntu 14.10's kernel will be the default for 14.04 users launching our official images on Windows Azure.

    If you want to install the new HWE kernel on your existing instance(s), simply run:

    • sudo apt-get update
    • sudo apt-get install linux-image-virtual-lts-utopic linux-lts-utopic-cloud-tools-common walinuxagent
    • sudo reboot


    [1] b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB

    Read more
    Dustin Kirkland


    With the recent introduction of Snappy Ubuntu, there are now several different ways to extend and update (apt-get vs. snappy) multiple flavors of Ubuntu (Core, Desktop, and Server).

    We've put together this matrix with a few examples of where we think Traditional Ubuntu (apt-get) and Transactional Ubuntu (snappy) might make sense in your environment.  Note that this is, of course, not a comprehensive list.

    • Traditional apt-get:
      • Ubuntu Core: Minimal Docker and LXC images
      • Ubuntu Desktop: Desktop, Laptop, Personal Workstations
      • Ubuntu Server: Baremetal, MAAS, OpenStack, General Purpose Cloud Images
    • Transactional snappy:
      • Ubuntu Core: Minimal IoT Devices and Micro-Services Architecture Cloud Images
      • Ubuntu Desktop: Touch, Phones, Tablets
      • Ubuntu Server: Comfy, Human Developer Interaction (over SSH) in an atomically updated environment

    I've presupposed a few of the questions you might ask, while you're digesting this new landscape...

    Q: I'm looking for the smallest possible Ubuntu image that still supports apt-get...
    A: You want our Traditional Ubuntu Core. This is often useful in building Docker and LXC containers.

    Q: I'm building the next wearable IoT device/drone/robot, and perhaps deploying a fleet of atomically updated micro-services to the cloud...
    A: You want Snappy Ubuntu Core.

    Q: I want to install the best damn Linux on my laptop, desktop, or personal workstation, with industry best security practices, 30K+ freely available open source packages, and extensive support for hardware devices and proprietary add-ons...
    A: You want the same Ubuntu Desktop that we've been shipping for 10+ years, on time, every time ;-)

    Q: I want that same converged, tasteful Ubuntu experience on my personal smart devices, like my Phones and Tablets...
    A: You want Ubuntu Touch, which is a very graphical human interface focused expression of Snappy Ubuntu.

    Q: I'm deploying Linux onto bare metal servers at scale in the data center, perhaps building IaaS clouds using OpenStack or PaaS clouds using CloudFoundry? And I'm launching general purpose Linux server instances in public clouds (like AWS, Azure, or GCE) and private clouds...
    A: You want the traditional apt-get Ubuntu Server.

    Q: I'm developing and debugging applications, services, or frameworks for Snappy Ubuntu devices or cloud instances?
    A: You want Comfy Ubuntu Server, which is a command line human interface extension of Snappy Ubuntu, with a number of conveniences and amenities (ssh, byobu, manpages, editors, etc.) that won't be typically included in the minimal Snappy Ubuntu Core build. [*Note that the Comfy images will be available very soon]

    Cheers,
    :-Dustin

    Read more
    Dustin Kirkland


    Forget about The Year of the Linux Desktop...This is The Year of the Linux Countertop!

    I'm talking about Linux on every form of Internet-connected embedded devices.  The Internet-of-Things is already upon us.  Sensors, smart watches, TVs, thermostats, security cameras, drones, printers, routers, switches, robots -- you name it.  

    And with that backdrop, we are thrilled to introduce Snappy Ubuntu for Devices.  Ubuntu is now a possibility, on almost any device, anywhere.  Now that's exciting!

    This is the same Snappy Ubuntu, with its atomic, transactional updates that we launched on each major public cloud last month -- extended and updated for 64-bit Intel, AMD and ARM devices.


    Now, if you want a detailed, developer's look at building a Snappy Ubuntu image and running it on a BeagleBone, you're in luck!  I shot this little instructional video (using Cheese, GTK-RecordMyDesktop, and OpenShot).  Enjoy!


    A transcript of the video follows...


    1. What is Snappy Ubuntu?
      • A few weeks ago, we introduced a new flavor of Ubuntu that we call “Snappy” -- an atomically, transactionally updated Operating System -- and showed how to launch, update, rollback, and install apps in cloud instances of Snappy Ubuntu in Amazon EC2, Microsoft Azure, and Google Compute Engine public clouds.
      • And now we’re showing how that same Snappy Ubuntu experience is the perfect operating system for today’s Cambrian Explosion of smart devices that some people are calling “the Internet of Things”!
      • Snappy Ubuntu Core bundles only the essentials of a modern, appstore-powered Linux OS stack, and hence leaves room, both in size and in flexibility, to build, maintain and monetize your very own device solution without having to care about the overhead of inventing and maintaining your own OS and tools from scratch. Snappy Ubuntu Core comes right in time for you to put your very own stake into the still unconquered world of things.
      • We think you’ll love Snappy on your smart devices for many of the same reasons that there are already millions of Ubuntu machine instances in hundreds of public and private clouds, as well as the millions of your own Ubuntu desktops, tablets, and phones!
    2. Unboxing the BeagleBone
      • Our target hardware for this Snappy Ubuntu demo is the BeagleBone Black -- an inexpensive, open platform for hardware and software developers.
      • I paid $55 for the board, and $8 for a USB to TTL Serial Cable
      • The board is about the size of a credit card, has a 1GHz ARM Cortex A8 processor, 512MB RAM, and on board ethernet.
      • While Snappy Ubuntu will run on most any armhf or amd64 hardware (including the Intel NUC), the BeagleBone is perhaps the most developer friendly solution.
    3. The easiest way to get your Snappy Ubuntu running on your Beaglebone
      • The world of Devices has so many opportunities that it won’t be possible to give everyone the perfect vertical stack centrally. Hence Canonical is trying to enable all of you and provide you with the elements that get you started doing your innovation as quickly as possible. Since there will be many devices that won’t need a screen and input devices, we have developed “webdm”. webdm gives you the ability to manage your snappy device and consume apps without any development effort.
      • To install, you simply download our prebuilt webdm .img and dd it to your SD card.
      • After that, all you have to do is connect your BeagleBone to a DHCP-enabled local network and power it on.
      • After 1-2 minutes, go to http://webdm.local:8080 and you can get on with installing apps from the snappy appstore without any further effort.
      • Of course, we are still in beta and will continue to give you more features and a greater experience over time; we will not only make the UI better, but also work on various customization options that allow you to deliver your own app store powered product without investing your development resources in something that has already been solved.
    4. Downloading Snappy and writing to an sdcard
      • Now we’re going to build a Snappy Ubuntu image to run on our device.
      • Soon, we’ll publish a library of Snappy Ubuntu images for many popular devices, but for this demo, we’re going to roll our own using the tool, ubuntu-device-flash.
      • ls -halF mysnappy.img
      • sudo dd if=mysnappy.img of=/dev/mmcblk0 bs=1M oflag=dsync
    5. Hooking up the BeagleBone
      • Insert the microsd card
      • Network cable
      • USB debug
      • Power/USB
    6. Booting Snappy and command line experience
      • Okay, so we’re ready for our first boot of Snappy!
      • Let’s attach to the USB/serial console using screen
      • Now, I’ll attach the power, and if you watch very carefully, you might get to see a few boot messages.
      • snappy help
      • ifconfig
      • ssh ubuntu@10.0.0.105
    7. WebDM experience
      • snappy info
      • Shows we have the webdm framework installed
      • point browser to http://10.0.0.105:8080
      • Configuration
      • Store
    8. Conclusion
      • Hey how cool is that!  Snappy Ubuntu running on devices :-)
      • I’ve spent plenty of time and money geeking out over my Nest and Dropcam and Netatmo and WeMo lightswitches, playing with their APIs and hooking them up to If-This-Then-That.
      • But I’m really excited about a world where those types of devices are as accessible to me as my Ubuntu servers and desktops!
      • And from what I’ve shown you here, with THIS, I think we can safely say that we’ve blown right past the year of the Linux desktop.
      • This is the year of the Linux countertop!

    Cheers,
    Dustin

    Read more
    Michael Hall

    For a long time now Canonical has provided Ubuntu LoCo Teams with material to use in the promotion of Ubuntu. This has come in the form of CDs and DVDs for Ubuntu releases, as well as conference packs for booths and shows.

    We’ve also been sent several packages, when requested by an Ubuntu Member, to LoCo Teams for their own events, such as release parties or global jams.

    Ubuntu Mauritius Team 14.10 Global Jam

    This cycle we are extending this offer to any LoCo team that is hosting an in-person Global Jam event. It doesn’t matter how many people are going, or what you’re planning on doing for your jam. The Jam Packs will include DVDs, stickers, pens and other giveaways for your attendees, as well as an Ubuntu t-shirt for the organizers (or as a giveaway, if you choose).

    Since there are only a few weeks before Global Jam weekend, and these will be shipped from London, please take your country’s customs process into consideration before ordering. Countries in North America and Europe shouldn’t have a problem, but if you’ve experienced long customs delays in the past, please consider waiting and making your request for the next Global Jam.

    To get an Ubuntu Global Jam Pack for your event, all you need to do is the following:

    • Register your Global Jam event on the LoCo Team Portal
      • Your event must be in-person, and have a venue associated with it
    • Fill out the community donation request form
      • Include a link to your LoCo Team Portal event in your request
    • Promote your event, before and after
      • Blog about it, post pictures, and share your excitement on social media
        • Use the #ubuntu hashtag when available

    You can find all kinds of resources, activities and advice for running your Global Jam event on the Ubuntu Wiki, where we’ve collected the cumulative knowledge from all across the community over many years. And you can get live help and advice any time on the #ubuntu-locoteams IRC channel on Freenode.

    Read more
    Ben Howard


    Snappy Launches


    When we launched Snappy, we introduced it on Microsoft Azure [1], Google’s GCE [2], Amazon’s AWS [3] and our KVM images [4]. Immediately our developers were asking questions like, "where are the Vagrant images?", which we launched yesterday [5].

    The final remaining question was "where are the images for <insert hypervisor>?". We had inquiries about VirtualBox, VMware Desktop/Fusion, interest in VMware Air, Citrix XenServer, etc.

    OVA to the rescue

    OVA is an industry standard for cross-hypervisor image support. The OVA spec [6] allows you to import a single image to:

    • VMware products
      • ESXi
      • Desktop
      • Fusion
      • VSphere
    • VirtualBox
    • Citrix XenServer
    • Microsoft SCVMM
    • Red Hat Enterprise Virtualization
    • SuSE Studio
    • Oracle VM

    Okay, so where can I get the OVA images?

    To get the latest OVA image, download it from [7]. From there, you will need to follow your hypervisor's instructions on importing OVA images.

    Or if you want a short URL, http://goo.gl/xM89p7
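
    For example, on VirtualBox the import can be driven entirely from the command line (file name per [7]):

    $ wget http://cloud-images.ubuntu.com/snappy/devel/core/current/devel-core-amd64-cloud.ova
    $ VBoxManage import devel-core-amd64-cloud.ova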


    ---

    [1] http://www.ubuntu.com/cloud/tools/snappy#snappy-azure
    [2] http://www.ubuntu.com/cloud/tools/snappy#snappy-google
    [3] http://www.ubuntu.com/cloud/tools/snappy#snappy-amazon
    [4] http://www.ubuntu.com/cloud/tools/snappy#snappy-local
    [5] https://blog.utlemming.org/2015/01/snappy-images-for-vagrant.htm
    [6] https://en.wikipedia.org/wiki/Open_Virtualization_Format
    [7] http://cloud-images.ubuntu.com/snappy/devel/core/current/devel-core-amd64-cloud.ova

    Read more