Canonical Voices

Posts tagged with 'ubuntu'

Daniel Holbach

Tomorrow is a special anniversary: on 2005-09-05 I joined Canonical – that's right, it's going to be a decade.

A lot of what we as the Ubuntu community experienced and went through I wrote up some time ago, and it's well-documented in blog posts, articles, LoCo event reports and pictures from Ubuntu Allhands events, so don't expect any of that here.

For me personally it's been a ride I never could have expected. A decade in a single company doesn't seem to happen very often these days, and I would never have dreamed of what we are delivering to the world today. I'm happy and proud to have been part of it all.

I still remember the days when I joined. I had just finished my studies, and working next to people who could each easily be described as a wunderkind gave me quite a healthy case of impostor syndrome. It's easy to underestimate how much I learned here – not just technically or in terms of other abilities, but also as a person. I got to work on things I never imagined I could do, and I'm happy I was involved in so many different projects.

One thing made this whole ride even more special: the people. I made lots of friends along the way – that’s one of the primary reasons I still feel like I work at a very very special place.

Big hugs everyone and thanks for accompanying me this far! :-)

Read more
Daniel Holbach

Thanks to Nathan Haines and José Antonio Rey, we have the Ubuntu Free Culture Showcase again. It's Ubuntu's way of acknowledging that there's not just "free software", but a wider movement that wants to make sharing the fruits of our labour an obvious and straightforward reality.

You still have some time to submit your works for the competition. The winners are going to get their free culture works included in Ubuntu itself. Please share this with all your producer and artist friends who are into free culture.

Submission groups are as follows:

Find all other relevant information here.

Read more
Daniel Holbach

In the flurry of uploads for the C++ ABI transition and other frantic work (Thursday is Feature Freeze day), this gem may have gone unnoticed:

snapcraft (0.1) wily; urgency=low

  * Initial release

What does this mean? If you're on wily, you can easily try out snapcraft and get started turning software into snaps. We have some initial docs available on the developer site which should help you find your way around.

This is a 0.1 release, so there are bugs and there might be bigger changes coming your way, but there will also be more docs, more plugins and more good stuff in general. If you’re curious, you might want to sign up for the daily build (just add the ppa:snappy-dev/snapcraft-daily PPA).
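Getting onto the daily build is the usual PPA dance; a quick sketch (the snapcraft package name matches the changelog entry above):

$ sudo add-apt-repository ppa:snappy-dev/snapcraft-daily
$ sudo apt-get update
$ sudo apt-get install snapcraft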

Here’s a brilliant example of what snapcraft can do for you: packaging a Java app was never this easy.

If you’re more into client apps, check out Ted’s article on how to create a QML snap.

As you can easily see: the future is on its way, and upstreams and app developers will have a much easier time sharing their software.

As I said above: snapcraft is still a 0.1 release. If you want to give us feedback, file bugs or propose merges, you can find snapcraft in Launchpad.

Read more
Dustin Kirkland


Canonical is delighted to sponsor ContainerCon 2015, a Linux Foundation event in Seattle next week, August 17-19, 2015. It's quite exciting to see the A-list of sponsors, many of them newcomers to this particular technology, teeming with energy around containers.

From chroots to BSD Jails and Solaris Zones, the concepts behind containers were established decades ago, and in fact traverse the spectrum of server operating systems. At Canonical, we've been working on containers in Ubuntu for more than half a decade, providing a home and resources for stewardship and maintenance of the upstream Linux Containers (LXC) project since 2010.

Last year, we publicly shared our designs for LXD -- a new stratum on top of LXC that brings the advantages of a traditional hypervisor into the faster, more efficient world of containers.

Those designs are now reality: the open source Golang code is readily available on GitHub, Ubuntu packages are available in a PPA for all supported releases of Ubuntu, and the packages are already in the Ubuntu 15.10 beta development tree. You can launch your first LXD containers in seconds by following this simple guide.

LXD is a persistent daemon that provides a clean RESTful interface to manage (start, stop, clone, migrate, etc.) any of the containers on a given host.
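If you haven't driven it yet, the lxc command line client mirrors that API closely; a minimal sketch, assuming the stable PPA and the public ubuntu image remote:

$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
$ sudo apt-get update && sudo apt-get install lxd
$ lxc launch ubuntu:14.04 web1    # create and start a system container
$ lxc list                        # see it running
$ lxc stop web1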

Hosts running LXD are handily federated into clusters of container hypervisors, and can work as Nova Compute nodes in OpenStack, for example, delivering Infrastructure-as-a-Service cloud technology at lower costs and greater speeds.

Here, LXD and Docker are quite complementary technologies. LXD furnishes a dynamic platform for "system containers" -- containers that behave like physical or virtual machines, supplying all of the functionality of a full operating system (minus the kernel, which is shared with the host). Such "machine containers" are the core of IaaS clouds, where users focus on instances with compute, storage, and networking that behave like traditional datacenter hardware.

LXD runs perfectly well along with Docker, which supplies a framework for "application containers" -- containers that enclose individual processes that often relate to one another as pools of micro services and deliver complex web applications.

Moreover, the Zen of LXD is the fact that the underlying container implementation is actually decoupled from the RESTful API that drives LXD functionality. We are most excited to discuss next week at ContainerCon our work with Microsoft around the LXD RESTful API, as a cross-platform container management layer.

Ben Armstrong, a Principal Program Manager Lead at Microsoft on the core virtualization and container technologies, has this to say:
“As Microsoft is working to bring Windows Server Containers to the world – we are excited to see all the innovation happening across the industry, and have been collaborating with many projects to encourage and foster this environment. Canonical’s LXD project is providing a new way for people to look at and interact with container technologies. Utilizing ‘system containers’ to bring the advantages of container technology to the core of your cloud infrastructure is a great concept. We are looking forward to seeing the results of our engagement with Canonical in this space.”
Finally, if you're in Seattle next week, we hope you'll join us for the technical sessions we're leading at ContainerCon 2015, including: "Putting the D in LXD: Migration of Linux Containers", "Container Security - Past, Present, and Future", and "Large Scale Container Management with LXD and OpenStack". Details are below.
Date: Monday, August 17 • 2:20pm - 3:10pm
Title: Large Scale Container Management with LXD and OpenStack
Speaker: Stéphane Graber
Abstract: http://sched.co/3YK6
Location: Grand Ballroom B
Schedule: http://sched.co/3YK6
Date: Wednesday, August 19 • 10:25am - 11:15am
Title: Putting the D in LXD: Migration of Linux Containers
Speaker: Tycho Andersen
Abstract: http://sched.co/3YTz
Location: Willow A
Schedule: http://sched.co/3YTz
Date: Wednesday, August 19 • 3:00pm - 3:50pm
Title: Container Security - Past, Present and Future
Speaker: Serge Hallyn
Abstract: http://sched.co/3YTl
Location: Ravenna
Schedule: http://sched.co/3YTl
Cheers,
Dustin

Read more
Daniel Holbach

If you haven’t heard of it yet: every Tuesday at 15:00 UTC we have the Ubuntu Community Q&A session. It’s always up on http://ubuntuonair.com and you can watch old sessions on the YouTube channel. For casual Ubuntu users it’s a great way to get to know people who are working in the inner circles of Ubuntu and who can answer questions, clear up misunderstandings or get specialists on the show.

Since Jono went to XPRIZE, our team at Canonical has been running them, and I really enjoy these sessions. What I liked even more were the sessions where we had guests and got to talk about more specific topics. In the past few weeks we had Olli Ries on, quite a few UbuCon organisers, some testing/QA heroes and many more.

If you have anyone you’d like to see interviewed or any specific topics you’d like to see covered, please drop a comment below and we’ll do our best to get them on in the next weeks!

Read more
Dustin Kirkland

The Golden Ratio is one of the oldest and most visible irrational numbers known to humanity.  Pi is perhaps more famous, but the Golden Ratio is found in more of our art, architecture, and culture throughout human history.

I think of the Golden Ratio as sort of "Pi in 1 dimension".  Whereas Pi is the ratio of a circle's circumference to its diameter, the Golden Ratio is the ratio of a whole to its larger part, when that ratio equals the ratio of the larger part to the smaller one.
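In symbols (a short derivation, for the algebraically inclined): if the whole is a+b with larger part a, then

\[
\frac{a+b}{a} \;=\; \frac{a}{b} \;=\; \varphi
\quad\Longrightarrow\quad
\varphi^2 = \varphi + 1
\quad\Longrightarrow\quad
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.6180339887\ldots
\]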

Visually, this diagram from Wikipedia helps explain it:


We find the Golden Ratio in the architecture of antiquity, from the Egyptians to the Greeks to the Romans, right up to the Renaissance and even modern times.



While the bases of the pyramids are squares, the Golden Ratio can be observed as the ratio of the base to the hypotenuse of a basic triangular cross section, like so:


The floor plan of the Parthenon has a width/depth ratio matching the Golden Ratio...



For the first 300 years of printing, nearly all books were printed on pages whose length to width ratio matched that of the Golden Ratio.

Leonardo da Vinci used the Golden Ratio throughout his works.  I'm told that his Vitruvian Man displays the Golden Ratio...


From school, you probably remember that the Golden Ratio is approximately 1.6 (and change).
There's a strong chance that your computer or laptop monitor has a 16:10 aspect ratio.  Does 1280x800 or 1680x1050 sound familiar?



That 1.6 number is only an approximation, of course.  The Golden Ratio is in fact an irrational number and can be calculated to much greater precision through several different representations, including its continued fraction and nested radical forms:

φ = 1 + 1/(1 + 1/(1 + 1/(1 + ...)))  =  √(1 + √(1 + √(1 + ...)))
You can plug that number into your computer's calculator and crank out a dozen or so significant digits.


However, if you want to go much farther than that, Alexander Yee has created a program called y-cruncher, which has been used to calculate most of the famous constants to world record precision.  (Sorry, free software readers of this blog -- y-cruncher is not open source code...)

I came across y-cruncher a few weeks ago when I was working on the mprime post, demonstrating how you can easily put any workload into a Docker container and then produce both Juju Charms and Ubuntu Snaps that package easily.  While I opted to use mprime in that post, I saved y-cruncher for this one :-)

Also, while doing some network benchmark testing of The Fan Networking among Docker containers, I experimented for the first time with some of Amazon's biggest instances, which have dedicated 10gbps network links.  While I had a couple of those instances up, I did some small scale benchmarking of y-cruncher.

Presently, none of the mathematical constant records are even remotely approachable with CPU and memory alone.  All of them require multiple terabytes of disk, which act as a sort of swap space for temporary files, as bits are moved in and out of memory while the CPU crunches.  As such, approaching these records is overwhelmingly I/O bound -- not CPU or memory bound, as you might imagine.

After a variety of tests, I settled on the AWS d2.2xlarge instance size as the most affordable way to break the previous Golden Ratio record (1 trillion digits, by Alexander Yee on his gaming PC in 2010).  I say "affordable" in that I could have cracked that record "2x faster" with a d2.4xlarge or d2.8xlarge; however, I would have paid much more (4x) for the total instance hours.  This was purely an economic decision :-)


Let's geek out on technical specifications for a second...  So what's in a d2.2xlarge?
  • 8x Intel Xeon CPUs (E5-2676 v3 @ 2.4GHz)
  • 60GB of Memory
  • 6x 2TB HDDs
First, I arranged all 6 of those 2TB disks into a RAID0 with mdadm, and formatted it with xfs (which performed better than ext4 or btrfs in my cursory tests).

$ sudo mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=6 /dev/xvd?
$ sudo mkfs.xfs /dev/md0
$ sudo mount /dev/md0 /mnt
$ df -h /mnt
/dev/md0 11T 34M 11T 1% /mnt

Here's a brief look at raw read performance with hdparm:

$ sudo hdparm -tT /dev/md0
Timing cached reads: 21126 MB in 2.00 seconds = 10576.60 MB/sec
Timing buffered disk reads: 1784 MB in 3.00 seconds = 593.88 MB/sec

The beauty of RAID0 here is that each of the 6 disks can be read from and written to simultaneously, perfectly in parallel.  600 MB/sec is a pretty quick read rate by any measure!  In fact, when I tested the d2.8xlarge, I put all 24x 2TB disks into the same RAID0 and saw nearly 2.4 GB/sec read performance across that 48TB array!

With /dev/md0 mounted on /mnt and writable by my ubuntu user, I kicked off y-cruncher with these parameters:

Program Version:       0.6.8 Build 9461 (Linux - x64 AVX2 ~ Airi)
Constant: Golden Ratio
Algorithm: Newton's Method
Decimal Digits: 2,000,000,000,000
Hexadecimal Digits: 1,660,964,047,444
Threading Mode: Thread Spawn (1 Thread/Task) ? / 8
Computation Mode: Swap Mode
Working Memory: 61,342,174,048 bytes ( 57.1 GiB )
Logical Disk Usage: 8,851,913,469,608 bytes ( 8.05 TiB )

Byobu was very handy here: in the bottom status bar I could track my CPU load, memory usage, disk usage, and disk I/O, while connecting and disconnecting from the running session multiple times over the 4 days of the run.


And approximately 79 hours later, it finished successfully!

Start Date:            Thu Jul 16 03:54:11 2015
End Date: Sun Jul 19 11:14:52 2015

Computation Time: 221548.583 seconds
Total Time: 285640.965 seconds

CPU Utilization: 315.469 %
Multi-core Efficiency: 39.434 %

Last Digits:
5027026274 0209627284 1999836114 2950866539 8538613661 : 1,999,999,999,950
2578388470 9290671113 7339871816 2353911433 7831736127 : 2,000,000,000,000

Amazingly, another person (whom I don't know), named Ron Watkins, performed the exact same computation and published his results within 24 hours, on July 22nd/23rd.  As such, Ron and I are "sharing" credit for the Golden Ratio record.


Now, let's talk about the economics here, which I think are the most interesting part of this post.

Look at the chart of records published on the y-cruncher page: the vast majority of them have been calculated on physical PCs -- most of them seemingly gaming PCs running Windows.

What's different about my approach is that I used Linux in the Cloud -- specifically Ubuntu in AWS.  I paid hourly (actually, my employer, Canonical, reimbursed me for that expense, thanks!).  It took right at 160 hours to run the initial calculation (79 hours) plus the verification calculation (81 hours), at the current rate of $1.38/hour for a d2.2xlarge -- a grand total of $220!

$220 is a small fraction of the cost of 6x 2TB disks, 60 GB of memory, or 8 Xeon cores, not to mention the electricity and cooling required to run a system of this size (~750W) for 160 hours.

If we say the first trillion digits were already known from the previous record, that comes out to approximately 4.5 billion record-digits per dollar, and 12.5 billion record-digits per hour!
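The back-of-the-envelope math, for the skeptical (the second trillion digits are the new ones, and the initial run took roughly 80 hours):

$ echo "160 * 1.38" | bc                      # total instance cost, USD
220.80
$ echo "scale=2; 10^12 / 220 / 10^9" | bc     # new digits per dollar, in billions
4.54
$ echo "scale=2; 10^12 / 80 / 10^9" | bc      # new digits per hour, in billions
12.50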

Hopefully you find this as fascinating as I do!

Cheers,
:-Dustin

Read more
Dustin Kirkland

tl;dr:  Your Ubuntu-based container is not a copyright violation.  Nothing to see here.  Carry on.
I am speaking for my employer, Canonical, when I say you are not violating our policies if you use Ubuntu with Docker in sensible, secure ways.  Some have claimed otherwise, but that’s simply sensationalist and untrue.

Canonical publishes Ubuntu images for Docker specifically so that they will be useful to people. You are encouraged to use them! We see no conflict between our policies and the common sense use of Docker.

Going further, we distribute Ubuntu in many different signed formats -- ISOs, root tarballs, VMDKs, AMIs, IMGs, Docker images, among others.  We take great pride in this work, and provide them to the world at large, on ubuntu.com, in public clouds like AWS, GCE, and Azure, as well as in OpenStack and on DockerHub.  These images, and their signatures, are mirrored by hundreds of organizations all around the world. We would not publish Ubuntu in the DockerHub if we didn’t hope it would be useful to people using the DockerHub. We’re delighted for you to use them in your public clouds, private clouds, and bare metal deployments.

Any Docker user will recognize these, as the majority of all Dockerfiles start with these two words....

FROM ubuntu

In fact, we gave away hundreds of these t-shirts at DockerCon.


We explicitly encourage distribution and redistribution of Ubuntu images and packages! We also embrace a very wide range of community remixes and modifications. We go further than any other commercially supported Linux vendor to support developers and community members scratching their itches. There are dozens of such derivatives and many more commercial initiatives based on Ubuntu - we are definitely not trying to create friction for people who want to get stuff done with Ubuntu.

Our policy exists to ensure that when you receive something that claims to be Ubuntu, you can trust that it will work to the same standard, regardless of where you got it from. And people everywhere tell us they appreciate that - when they get Ubuntu on a cloud or as a VM, it works, and they can trust it.  That concept is actually hundreds of years old, and we’ll talk more about that in a minute....


So, what do I mean by “sensible use” of Docker? In short - secure use of Docker. If you are using a Docker container then you are effectively giving the producer of that container ‘root’ on your host. We can safely assume that people sharing an Ubuntu docker based container know and trust one another, and their use of Ubuntu is explicitly covered as personal use in our policy. If you trust someone to give you a Docker container and have root on your system, then you can handle the risk that they inadvertently or deliberately compromise the integrity or reliability of your system.

Our policy distinguishes between personal use, which we can generalise to any group of collaborators who share root passwords, and third party redistribution, which is what people do when they exchange OS images with strangers.

Third party redistribution is more complicated because, when things go wrong, there’s a real question as to who is responsible for it. Here’s a real example: a school district buys laptops for all their students with free software. A local supplier takes their preferred Linux distribution and modifies parts of it (like the kernel) to work on their hardware, and sells them all the PCs. A month later, a distro kernel update breaks all the school laptops. In this case, the Linux distro who was not involved gets all the bad headlines, and the free software advocates who promoted the whole idea end up with egg on their faces.

We’ve seen such cases in real hardware, and in public clouds and other, similar environments.  Digital Ocean very famously published some modified and very broken Ubuntu images, outside of Canonical's policies.  That's inherently wrong, and easily avoidable.

So we simply say, if you’re going to redistribute Ubuntu to third parties who are trusting both you and Ubuntu to get it right, come and talk to Canonical and we’ll work out how to ensure everybody gets what they want and need.

Here’s a real exercise I hope you’ll try...

  1. Head over to your local purveyor of fine wines and liquors.
  2. Pick up a nice bottle of Champagne, Single Malt Scotch Whisky, Kentucky Straight Bourbon Whiskey, or my favorite -- a rare bottle of Lambic Oude Gueze.
  3. Carefully check the label, looking for a seal of Appellation d'origine contrôlée.
  4. In doing so, that bottle should earn your confidence that it was produced according to strict quality, format, and geographic standards.
  5. Before you pop the cork, check the seal, to ensure it hasn’t been opened or tampered with.  Now, drink it however you like.
  6. Pour that Champagne over orange juice (if you must).  Toss a couple ice cubes in your Scotch (if that’s really how you like it).  Pour that Bourbon over a Coke (if that’s what you want).
  7. Enjoy however you like -- straight up or mixed to taste -- with your own guests in the privacy of your home.  Just please don’t pour those concoctions back into the bottle, shove a cork in, put them back on the shelf at your local liquor store and try to pass them off as Champagne/Scotch/Bourbon.


Rather, if that’s really what you want to do -- distribute a modified version of Ubuntu -- simply contact us and ask first (thanks for sharing that link, mjg59).  We have some amazing tools that can help you avoid that situation entirely, or at the very least let us help you do it well, as a service to everyone.

Believe it or not, we’re really quite reasonable people!  Canonical has a lengthy, public track record, donating infrastructure and resources to many derivative Ubuntu distributions.  Moreover, we’ve successfully contracted mutually beneficial distribution agreements with numerous organizations and enterprises. The result is happy users and happy companies.

FROM ubuntu,
Dustin

The one and only Champagne region of France

Read more
Dustin Kirkland


As you probably remember from grade school math class, primes are numbers that are only divisible by 1 and themselves.  2, 3, 5, 7, and 11 are the first 5 prime numbers, for example.

Many computer operations, such as public-key cryptography, depend entirely on prime numbers.  In fact, RSA encryption, invented in 1978, uses the product of two very large primes as its modulus for encryption and decryption.  The security of asymmetric encryption is tightly coupled with the computational difficulty of factoring large numbers.  I actually use prime numbers as the status update intervals in Byobu, in order to improve performance and distribute the update spikes.

Euclid proved around 300 BC that there are infinitely many prime numbers.  But the Prime Number Theorem (proven in the 19th century) says that the probability that a given number is prime is inversely proportional to its number of digits.  That means larger prime numbers are notoriously harder to find, and it only gets harder as they get bigger!
What's the largest known prime number in the world?

Well, it has 17,425,170 decimal digits!  If you wanted to print it out, size 11 font, it would take 6,543 pages -- or 14 reams of paper!

That number is actually one less than a very large power of 2: 2^57,885,161 - 1.  It was discovered by Curtis Cooper on January 25, 2013, on an Intel Core2 Duo.

Actually, each of the last 14 record largest prime numbers discovered (between 1996 and today) has been of that form, 2^P - 1.  Numbers of that form are called Mersenne Prime Numbers, named after Friar Marin Mersenne, a French priest who studied them in the 1600s.
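You can check the small ones yourself; a quick illustration with bc and factor (note that a prime exponent alone doesn't guarantee a Mersenne prime):

$ for p in 2 3 5 7 11 13; do echo -n "2^$p-1 = "; echo "2^$p - 1" | bc; done
2^2-1 = 3
2^3-1 = 7
2^5-1 = 31
2^7-1 = 127
2^11-1 = 2047
2^13-1 = 8191
$ factor 2047 8191
2047: 23 89
8191: 8191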


Friar Mersenne's work continues today in the form of the Great Internet Mersenne Prime Search, and the mprime program, which has been used to find those 14 huge prime numbers since 1996.

mprime is a massively parallel, CPU-scavenging utility, much like SETI@home or the Protein Folding Project.  It runs in the background, consuming spare resources, working on its little piece of the problem.  mprime is open source code, and also distributed as a statically compiled binary.  And it will make a fine example of how to package a service into a Docker container, a Juju charm, and a Snappy snap.


Docker Container

First, let's build the Docker container, which will serve as our fundamental building block.  You'll first need to download the mprime tarball from here.  Extract it, and the directory structure should look a little like this (or you can browse it here):

├── license.txt
├── local.txt
├── mprime
├── prime.log
├── prime.txt
├── readme.txt
├── results.txt
├── stress.txt
├── undoc.txt
├── whatsnew.txt
└── worktodo.txt

And then create a Dockerfile that copies the files we need into the image.  Here's our example.

FROM ubuntu
MAINTAINER Dustin Kirkland email@example.com
COPY ./mprime /opt/mprime/
COPY ./license.txt /opt/mprime/
COPY ./prime.txt /opt/mprime/
COPY ./readme.txt /opt/mprime/
COPY ./stress.txt /opt/mprime/
COPY ./undoc.txt /opt/mprime/
COPY ./whatsnew.txt /opt/mprime/
CMD ["/opt/mprime/mprime", "-w/opt/mprime/"]

Now, build your Docker image with:

$ sudo docker build -t kirkland/mprime .
Sending build context to Docker daemon 36.02 MB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
...
Successfully built de2e817b195f

Then publish the image to Dockerhub.

$ sudo docker push kirkland/mprime

You can see that image, which I've publicly shared here: https://registry.hub.docker.com/u/kirkland/mprime/



Now you can run this image anywhere you can run Docker.

$ sudo docker run -d kirkland/mprime

And verify that it's running:

$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c9233f626c85 kirkland/mprime:latest "/opt/mprime/mprime 24 seconds ago Up 23 seconds furious_pike
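To peek at what the worker is doing, docker logs works as usual, using the container ID from the output above:

$ sudo docker logs c9233f626c85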

Juju Charm

So now, let's create a Juju Charm that uses this Docker container.  Actually, we're going to create a subordinate charm.  Subordinate services in Juju are often monitoring and logging services, things that run along side primary services.  Something like mprime is a good example of something that could be a subordinate service, attached to one or many other services in a Juju model.

Our directory structure for the charm looks like this (or you can browse it here):

└── trusty
    └── mprime
        ├── config.yaml
        ├── copyright
        ├── hooks
        │   ├── config-changed
        │   ├── install
        │   ├── juju-info-relation-changed
        │   ├── juju-info-relation-departed
        │   ├── juju-info-relation-joined
        │   ├── start
        │   ├── stop
        │   └── upgrade-charm
        ├── icon.png
        ├── icon.svg
        ├── metadata.yaml
        ├── README.md
        └── revision
3 directories, 15 files

The three key files we should look at here are metadata.yaml, hooks/install and hooks/start:

$ cat metadata.yaml
name: mprime
summary: Search for Mersenne Prime numbers
maintainer: Dustin Kirkland
description: |
  A Mersenne prime is a prime of the form 2^P-1.
  The first Mersenne primes are 3, 7, 31, 127
  (corresponding to P = 2, 3, 5, 7).
  There are only 48 known Mersenne primes, and
  the 13 largest known prime numbers in the world
  are all Mersenne primes.
  This charm uses a Docker image that includes the
  statically built, 64-bit Linux binary mprime
  which will consume considerable CPU and Memory,
  searching for the next Mersenne prime number.
  See http://www.mersenne.org/ for more details!
tags:
  - misc
subordinate: true
requires:
  juju-info:
    interface: juju-info
    scope: container

And:

$ cat hooks/install
#!/bin/bash
apt-get install -y docker.io
docker pull kirkland/mprime

And:

$ cat hooks/start
#!/bin/bash
service docker restart
docker run -d kirkland/mprime
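The stop hook isn't shown in the post; a plausible minimal sketch (my guess, not necessarily the charm's actual code) simply removes any running kirkland/mprime containers:

$ cat hooks/stop
#!/bin/bash
# find containers running the kirkland/mprime image and force-remove them
docker ps | grep kirkland/mprime | awk '{print $1}' | xargs -r docker rm -f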

Now, we can add the mprime service to any other running Juju service.  As an example here, I'll bootstrap, deploy the Apache2 charm, and attach mprime to it.

$ juju bootstrap
$ juju deploy apache2
$ juju deploy cs:~kirkland/mprime
$ juju add-relation apache2 mprime

Looking at our services, we can see everything deployed and running here:

$ juju status
services:
  apache2:
    charm: cs:trusty/apache2-14
    exposed: false
    service-status:
      current: unknown
      since: 20 Jul 2015 11:55:59-05:00
    relations:
      juju-info:
      - mprime
    units:
      apache2/0:
        workload-status:
          current: unknown
          since: 20 Jul 2015 11:55:59-05:00
        agent-status:
          current: idle
          since: 20 Jul 2015 11:56:03-05:00
          version: 1.24.2
        agent-state: started
        agent-version: 1.24.2
        machine: "1"
        public-address: 23.20.147.158
        subordinates:
          mprime/0:
            workload-status:
              current: unknown
              since: 20 Jul 2015 11:58:52-05:00
            agent-status:
              current: idle
              since: 20 Jul 2015 11:58:56-05:00
              version: 1.24.2
            agent-state: started
            agent-version: 1.24.2
            upgrading-from: local:trusty/mprime-1
            public-address: 23.20.147.158
  mprime:
    charm: local:trusty/mprime-1
    exposed: false
    service-status: {}
    relations:
      juju-info:
      - apache2
    subordinate-to:
    - apache2

Snappy Ubuntu Core Snap

Finally, let's build a Snap.  Snaps are applications that run in Ubuntu's transactional, atomic OS, Snappy Ubuntu Core.

We need the simple directory structure below (or you can browse it here):

├── meta
│   ├── icon.png
│   ├── icon.svg
│   ├── package.yaml
│   └── readme.md
└── start.sh
1 directory, 5 files

The package.yaml describes what we're actually building, and what capabilities the service needs.  It looks like this:

name: mprime
vendor: Dustin Kirkland
architecture: [amd64]
icon: meta/icon.png
version: 28.5-11
frameworks:
  - docker
services:
  - name: mprime
    description: "Search for Mersenne Prime Numbers"
    start: start.sh
    caps:
      - docker_client
      - networking

And the start.sh launches the service via Docker.

#!/bin/sh
PATH=$PATH:/apps/docker/current/bin/
# remove any stale container from a previous run, then start fresh
docker rm -v -f mprime
docker run --name mprime -d kirkland/mprime
# block so the service stays running as long as the container does
docker wait mprime

Now, we can build the snap like so:

$ snappy build .
Generated 'mprime_28.5-11_amd64.snap' snap
$ ls -halF *snap
-rw-rw-r-- 1 kirkland kirkland 9.6K Jul 20 12:38 mprime_28.5-11_amd64.snap

First, let's install the Docker framework, upon which we depend:

$ snappy-remote --url ssh://snappy-nuc install docker
=======================================================
Installing docker from the store
Installing docker
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

And now, we can install our locally built Snap.
$ snappy-remote --url ssh://snappy-nuc install mprime_28.5-11_amd64.snap
=======================================================
Installing mprime_28.5-11_amd64.snap from local environment
Installing /tmp/mprime_28.5-11_amd64.snap
2015/07/20 17:44:26 Signature check failed, but installing anyway as requested
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
mprime 2015-07-20 28.5-11 sideload
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

Alternatively, you can install the snap directly from the Ubuntu Snappy store, where I've already uploaded the mprime snap:

$ snappy-remote --url ssh://snappy-nuc install mprime.kirkland
=======================================================
Installing mprime.kirkland from the store
Installing mprime.kirkland
Name Date Version Developer
ubuntu-core 2015-04-23 2 ubuntu
docker 2015-07-20 1.6.1.002
mprime 2015-07-20 28.5-11 kirkland
webdm 2015-04-23 0.5 sideload
generic-amd64 2015-04-23 1.1
=======================================================

Conclusion

How long until this Docker image, Juju charm, or Ubuntu Snap finds a Mersenne Prime?  Almost certainly never :-)  I want to be clear: that was never the point of this exercise!

Rather, I hope you learned how easy it is to run a Docker image inside either a Juju charm or an Ubuntu snap.  And maybe you learned something about prime numbers along the way ;-)

Join us in #docker, #juju, and #snappy on irc.freenode.net.

Cheers,
Dustin

Read more
bmichaelsen

They sentenced me to twenty years of boredom
For trying to change the system from within
— Leonard Cohen, I’m your man, First we take Manhattan

Advance warning: This blog post talks about C++ coding style, and given the “expressiveness” (aka a severe infection with TimTowTdi) this is bound to contain significant amounts of bikeshedding, personal opinion/preference. As such, be invited to ignore all this as the ramblings of a raging lunatic.

Anyone who has observed me spotting a Pimpl in code will know that I am not a fan of this idiom. Its intent is to reduce build times by using a design pattern to move implementation details out of headers — a workaround for C++’s misfeature of, by default, needing a recompile even when only implementation details change, without any change to the public interface. Now I personally always thought a pure abstract base class to be a more “native” and less ugly way to tell this to the compiler. However, without real testing, such gut feelings are rarely good advisors in a complex language like C++.

So I did some testing on the real-life performance of a pure abstract base class vs. a Pimpl (each of course in a different compilation unit, to prevent the compiler from optimizing away what we want to measure) — and, for reference, a class with functions that can be completely inlined. These are the three test implementations, inline first:

-- header (hxx) --
class InlineClass final
{
	public:
		InlineClass(int nFirst, int nSecond)
			: m_nFirst(nFirst), m_nSecond(nSecond), m_nResult(0)
		{};
		void Add()
			{ m_nResult = m_nFirst + m_nSecond; };
		int GetResult() const
			{ return m_nResult; };
	private:
		const int m_nFirst;
		const int m_nSecond;
		int m_nResult;
};

Pimpl, as suggested by Effective Modern C++ when using C++11, but not C++14:

-- header (hxx) --
#include <memory>
class PimplClass final
{
	public:
		PimplClass(int nFirst, int nSecond);
		~PimplClass();
		void Add();
		int GetResult() const;
	private:
		struct Impl;
		std::unique_ptr<Impl> m_pImpl;
};
-- implementation (cxx) --
#include "pimpl.hxx"
struct PimplClass::Impl
{
	Impl(int nFirst, int nSecond)
		: m_nFirst(nFirst), m_nSecond(nSecond), m_nResult(0)
	{};
	const int m_nFirst;
	const int m_nSecond;
	int m_nResult;
};
PimplClass::PimplClass(int nFirst, int nSecond)
	: m_pImpl(std::unique_ptr<Impl>(new Impl(nFirst, nSecond)))
{}
PimplClass::~PimplClass()
	{}
void PimplClass::Add()
	{ m_pImpl->m_nResult = m_pImpl->m_nFirst + m_pImpl->m_nSecond; }
int PimplClass::GetResult() const
	{ return m_pImpl->m_nResult; }

Pure abstract base class:

-- header (hxx) --
#include <memory>
struct AbcClass
{
	static std::shared_ptr<AbcClass> Create(int nFirst, int nSecond);
	virtual ~AbcClass() {};
	virtual void Add() =0;
	virtual int GetResult() const =0;
};
-- implementation (cxx) --
#include "abc.hxx"
#include <memory>
struct AbcClassImpl final : public AbcClass
{
	AbcClassImpl(int nFirst, int nSecond)
		: m_nFirst(nFirst), m_nSecond(nSecond)
	{};
	virtual void Add() override
		{ m_nResult = m_nFirst + m_nSecond; };
	virtual int GetResult() const override
		{ return m_nResult; };
	const int m_nFirst;
	const int m_nSecond;
	int m_nResult;
};
std::shared_ptr<AbcClass> AbcClass::Create(int nFirst, int nSecond)
	{ return std::shared_ptr<AbcClass>(new AbcClassImpl(nFirst, nSecond)); }

Comparing these we find:

implementation   lines added for GetResult()   source entropy   added source entropy for GetResult()   runtime
inline           2                             187              17                                      100%
Pimpl            3                             316              26                                      168% (174%)
pure ABC         3                             295 (273)        19 (16)                                 158%

So the abstract base class has less complex source code (entropy)1, needs less additional entropy to expand and is still faster in the end on common hardware (Intel i5-4200U) with common compiler optimization switches (-O2)2.

Additionally, in a non-trivial code base you might actually need to use virtual functions for your implementation anyway, as you are deriving from or implementing an existing interface. In the Pimpl case, this means two indirections (resolving the virtual function, and then resolving the m_pImpl pointer in that function on top of that). In the abstract base class case that’s not happening, and in addition it means that you can spare yourself the pure virtual declarations in the *.hxx (the virtual ... =0 ones), as those are already declared in the class being derived from. In LibreOffice, this is true for any class implementing UNO interfaces. So the first numbers are actually biased against an abstract base class for real-world code bases — the numbers in parentheses show the results when an interface is already defined elsewhere.

So unless the synthetic example used here is some kind of weird corner case, this suggests abstract base classes are the better alternative to a Pimpl once the class goes beyond being a plain value type with completely inlineable accessor member functions.

Thanks for bearing with me on this rant about one of my personal pet peeves here!

1 entropy is measured as cat abc.[hc]xx|gzip|wc -c or cat pimpl.[hc]xx|sed -e 's/Pimpl/Abc/g'|gzip|wc -c.
2 Here is the code run for that comparison:

constexpr int repeats = 100000;

int pimplrun(long count)
//int abcrun(long count)
{
        std::vector< std::shared_ptr<PimplClass /* AbcClass */ > > vInstances;
        vInstances.reserve(count);
        while(--count)
                vInstances.emplace_back(std::make_shared<PimplClass>(4711, 4711));
                //vInstances.emplace_back(AbcClass::Create(4711, 4711));
        int result(0);
        count = vInstances.size();
        while(--count)
                for(auto pInstance : vInstances)
                {
                        pInstance->Add();
                        result += pInstance->GetResult();
                }
        return result;
}

Instances are stored in shared pointers as anything that a Pimpl is considered for would be “heavy” enough to be handled by reference instead of by value.
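For reference, a plausible way to build and time one of the variants (the file and binary names here are my assumption, not the post's actual harness):

$ g++ -O2 -std=c++11 -o bench main.cxx pimpl.cxx    # likewise for abc.cxx or the inline variant
$ time ./bench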

Update 1: Out of curiosity, I looked a bit deeper at this with callgrind. This is what I found for running the above (with 1000 repeats) and --cache-sim=yes:

I1 cache: 32768 B, 64 B, 8-way
D1 cache: 32768 B, 64 B, 8-way
LL cache: 3145728 B, 64 B, 12-way

event   inline        ABC           Pimpl
Ir      23,356,163    38,652,092    38,620,878
Dr      5,066,041     14,109,098    12,107,992
Dw      3,060,033     5,094,790     5,099,991
I1ir    34            127           29
D1mr    499,952       253,006       999,013
D1mw    501,636       998,312       500,097
ILmr    28            126           24
DLmr    2             845           0
DLmw    0             1,285         250

I don’t know exactly what to derive from that, but what is clear is that this cannot be explained purely by instruction counts (Ir). So you need --cache-sim=yes, which gives the additional event counts. Actually, Pimpl looks slightly better on most stats, so as it is slower in real life, the misses on the first-level data cache (D1mr) might have quite an impact?

Update 2: This post made it to reddit, so I looked into some of the feedback from there. A common suggestion was to use for(auto& pInstance : vInstances) instead of for(auto pInstance : vInstances) in the benchmarking function. This had no significant impact on walltime measurements, nor did it make the callgrind event counts show a clearer picture. I also played around with the order of linked objects to see if it had any impact (via cache locality etc.). While runtime measurements fluctuated quite a bit (even when using the same binary), the order was always the same: inlining quickest, then abstract base class, and Pimpl slowest.


Read more
Nicholas Skaggs

Ubuntu SDK Autopilot Plugin

Those of you who have developed an application using the Ubuntu SDK understand how nice it is to have a tool that supports your whole workflow. You can code, build, run and iterate on your code easily, right from inside the SDK. However, to test your application, it was necessary to open a terminal and execute some commands. Leaving the Ubuntu SDK is an interruption to your workflow! It's even enough to throw you off your coding zen! It certainly may have dissuaded you from running tests. Seeing as testing should be a positive experience, this certainly won't do!
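(Those terminal commands were typically along these lines; a sketch assuming a test suite named myapp.tests:)

$ cd ~/myapp
$ autopilot3 list myapp.tests   # enumerate the suite
$ autopilot3 run myapp.tests    # run it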


Thankfully Akiva thought the same thing. Thus he created a new plugin for the SDK. I'd like to celebrate and thank him for making all of our lives easier. Thanks Akiva! A big thank you to Benjamin from the SDK team as well for reviewing and helping get the plugin in shape.

The plugin scans your project for autopilot tests, and then creates a run configuration for them. From there, it's as easy as hitting the run button to run the tests. See for yourself!

To learn more about how to install the plugin, or how it works, check out the documentation on running autopilot tests found on developer.ubuntu.com.

Go forth and test all the things! Try out the plugin in your existing workflow. I'd love to hear feedback. If you are interested in making the plugin better, or expanding it to include other things, get in touch. As always, code is welcome!

Read more
bmichaelsen

When I’m drivin’ free, the world’s my home
When I’m mobile

— The Who, Who’s Next, Going Mobile

As you might have noticed, work has started to integrate LibreOffice with the document viewer of Ubuntu core apps. Here is a screenshot of how the current code renders documents on a mobile device:

Ubuntu core apps: LibreOffice and document viewer

Kudos for integrating this go entirely to Stefano Verzegnassi; all I did was provide a tiny piece of example code. It loads a document and saves a rendered version of it to a PNG file. The relevant part of that piece of C++ code is small enough to fit in one picture, shown here including build instructions et al., showing how easy it is to use LibreOfficeKit from outside LibreOffice now:

 libreoffice2png source code

Thus the doc viewer was quickly integrated with LibreOffice in a basic way. This proof of concept isn’t finished, however: it just renders the whole document into one buffer. For small documents this is reasonable; for bigger documents, tiled rendering — which LibreOfficeKit nicely supports from the API, by allowing you to render any part of a document into a buffer — needs to be implemented on the client side. The code for this can be found on Launchpad, so if you are just curious how this works, you are invited to have a look. If you are interested in helping out with moving this forward towards a nice all-around document viewer reading and rendering everything LibreOffice can, you are most welcome!

Update: A picture says more than a thousand words, but a video tells a whole story. Stefano created this awesome video, which you shouldn’t miss:


Read more
Michael Hall

Picture by Aaron Honeycutt

The next Ubuntu Global Jam is coming up next month, the weekend of August 7th through the 9th. Last cycle we introduced the Ubuntu Global Jam Packs, and they were such a big hit that we’re bringing them back this cycle.

Jam Packs are a miniaturized version of the conference packs that Canonical has long offered to LoCo Teams who show off Ubuntu at events. These smaller packs are designed specifically for LoCo Teams to use during their own Global Jam events, to help promote Ubuntu in their area and encourage participation with the team.

What’s in the Global Jam Pack?

The Global Jam Pack contains a number of give-away items to use during your team’s Global Jam event. This cycle the packs will contain:

  • 20 DVDs
  • 20 sticker sheets
  • 20 pens
  • 20 notebooks

There will also be one XL t-shirt for the person who is organizing the event.

Who can request a Global Jam Pack?

The Global Jam Pack is available to any LoCo team that is running a Global Jam event. It doesn’t matter if your team has verified status or not, if you are hosting a Global Jam event, you can request a Jam Pack for it.

How do I request a Global Jam Pack?

The first thing you need to do is plan a Global Jam event for your LoCo team. Global Jams happen one weekend each cycle, and are a chance for you to meet up with Ubuntu contributors in your area to work together on improving some aspect of Ubuntu. They don’t require a lot of setup, just pick a day, time and location for everybody to show up.

Once you know when and where you will be holding your event, you need to register it in the LoCo Team Portal, making sure it’s listed as being part of the Ubuntu Global Jam parent event. You can use your event page on the portal to advertise your event, and allow people to register their intention to attend.

Next you will need to fill out a community donations request for your Jam Pack. In there you will be asked for your name and shipping address. In the field for describing your request, be sure to include the link to your team’s Global Jam event.

Need help?

If you need help or advice in organizing a Global Jam event, join #ubuntu-locoteams on Freenode IRC to talk to folks from the community who have experience running them. We’ve also documented some great advice to help you with organization on our wiki, including a list of suggested topics for you to work on during your event.

Read more
Daniel Holbach

snappy

Snappy is evolving, becoming more robust, and gaining loads of new users. This week will see a new stable release of Snappy. For us that’s reason enough to invite you all to our first Snappy Open House today.

Starting at 14:00 UTC today (2015-07-07), we are going to be on Ubuntu-on-Air, introducing the team, talking about what’s new, and about testing Snappy. If you want to get involved, or just want to get to know Snappy, this is a great opportunity.

Hope to see you later on!

Read more
Nicholas Skaggs

It's never been easier to write tests for your application! I wanted to share some details on the new documentation and other tidbits that will help you ensure your application has a nice testsuite. If you've used the SDK in the past, you understand how nice it can make your development workflow. Writing code and running it on your desktop, device, or emulator is a snap.

Fortunately, having a nice testsuite for your application can be just as easy. First, you will notice that all of the wizards inside the SDK now come with nice testsuites already in place. They are ready for you to simply add more tests. The setup and heavy lifting is done. See for yourself!


Secondly, developer.ubuntu.com has a great section on every level of testing; no matter which language you use with the SDK. You'll find API references for the tools and technology used, along with helpful guides to get you in the proper mindset.

For autopilot itself, there's also API documentation for the various 'helpers' that will make writing tests much easier for you. In addition, there's a guide to running autopilot tests. This has been made even easier by the addition of Akiva's Autopilot plugin inside the SDK. I'll be sharing details on this as soon as it's packaged, but you can see a sneak peek in this video.

Finally, you will find a guide on how to structure your functional tests. These are the most demanding to write, and it's important to ensure you write your tests in a maintainable way. Don't forget about the guide on writing good functional tests either.

No matter what language or level you write tests for, the guides are there to help you. Why not try adding a test or two to your project? If you are new, check out one of the wizards and try adding a simple testcase. Then apply the same knowledge (and templated code!) to your own project. Happy test writing!

Read more
Prakash

I had been thinking of my Touch Table project for a long time. My research on existing solutions was a bit disappointing: mostly insanely expensive, large, or platform locked, they did not fit my vision of a [Android or Linux-powered] ‘desktop’ that would allow me to fit it into my existing workflow, rather than hope that applications would support it (like the Microsoft Surface).

Read more at http://www.ikeahackers.net/2015/06/hemnes-multitouch-table.html

 




Read more
Shuduo

In case you want to play snappy but don’t have a Raspberry Pi 2 or other hardware…

1. sudo apt-get install virtualbox
2. Download the snappy image: http://cdimage.ubuntu.com/ubuntu-snappy/15.04/20150423/ubuntu-15.04-snappy-amd64-generic.img.xz
3. unxz ubuntu-15.04-snappy-amd64-generic.img.xz
4. VBoxManage convertdd ubuntu-15.04-snappy-amd64-generic.img snappy.vdi --format VDI (steps 1-4 are condensed into a shell sequence after this list)
5. Launch the VirtualBox GUI app and create a new VM: OS type Linux, version Ubuntu (64-bit), memory 512MB; for the hard drive, use an existing virtual hard disk file and select the snappy.vdi we just converted from the img file.
6. In Settings -> Network, change the Network Adapter from NAT to Bridged Adapter.
7. Start the VM. You can then use a browser to access the Snappy App Store at "webdm.local:4200", or log in from the console or via ssh with username/password 'ubuntu/ubuntu' to do fun snappy things like update/rollback.
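Condensed into a copy-and-paste shell sequence, steps 1 through 4 are:

$ sudo apt-get install virtualbox
$ wget http://cdimage.ubuntu.com/ubuntu-snappy/15.04/20150423/ubuntu-15.04-snappy-amd64-generic.img.xz
$ unxz ubuntu-15.04-snappy-amd64-generic.img.xz
$ VBoxManage convertdd ubuntu-15.04-snappy-amd64-generic.img snappy.vdi --format VDI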

Read more
bmichaelsen

I would walk 500 miles and I would walk 500 more

— The Proclaimers, 500 Miles

So I recently noted that GitHub reported I have 1337 commits on LibreOffice since I joined Canonical in February 2011. Looking at those stats, it seems I also deleted a net 155,634 lines from the codebase over that time.

LibreOffice commits

Even though I can’t find that mail, I seem to remember that Michael Stahl, when joining the LibreOffice project, proclaimed his goal to be to contribute ‘a net negative number of lines of code.’1) Now, I have not looked into the details of the above stats — they might very likely turn out to be caused by some bulk change. Which would be lame, unless it’s the killing of the old build system, for which I think I can claim some credit. But in general I really love the idea of ‘contributing a net negative number of lines of code’.

So, at the last LibreOffice Hackfest in Cambridge 2), I pushed a set of commits refactoring the UNO bindings of Writer tables. It all started so innocently. I was actually aiming to do something completely different: namely, give the UNO cursors in Writer (SwUnoCrsr) somewhat saner resource management and drag them screaming and kicking out of the 1980s. However, once in unotbl.cxx, I found more “a determined Real Programmer can write FORTRAN programs in any language” and copypasta there than I could bear. I thought: “This UNO stuff has decent test coverage, you could refactor it a bit quickly.”

Of course I was wrong on both counts: On the one hand, when I started, the coverage was 70.1% LOC on that file, which is not really as high as I expected. On the other hand, it did not end with “a bit quickly”; rather, I went on to refactor away:
dc -e "`git log --author Michaelsen -p dc8697e554417d31501a0d90d731403ede223370^..HEAD sw/source/core/unocore/unotbl.cxx|grep ^+|wc -l` `git log --author Michaelsen -p dc8697e554417d31501a0d90d731403ede223370^..HEAD sw/source/core/unocore/unotbl.cxx|grep ^-|wc -l` - p"
-1015

… a thousand lines. On discovering the lacking test coverage, I quickly added some more tests — bringing coverage to at least 77.52% LOC now.3) And yes, I also silently fixed the one regression I discovered I had thereby introduced, which nobody seemed to have noticed so far. One thing I noticed in this little refactoring spree is that while C++11’s features might look tame compared to more modern programming languages in metrics like avoiding boilerplate, it still outclasses what we had before. Beyond the simplifying refactoring, features like lambdas are really nice for non-interactive (test-driven) debugging, including quickly asserting on the state of variables some 10 stack frames up or down without going into major contortions in test code.

1) By the way, a quick:
dc -e "`git log --author Stahl -p |grep ^+|wc -l` `git log --author Stahl -p |grep ^-|wc -l` - p"
-108686

confirms Michael is more than living up to his personal goals.

2) Speaking of the Hackfest: the other thing I did there was helping/observing Sam Tuke getting set up for his first code contribution. While we have made great progress in making this easier than it used to be, we could still be a lot better at it. Sadly though, I didn’t see a shortcut or simplification we could implement right away.

3) And along with that, brought coverage of unochart.cxx from an abysmal 4.4% LOC to at least 35.31% LOC as collateral damage.

addendum: Note that the writer tables core also increased coverage quite a bit from 54.6% LOC to 65% LOC.


Read more
Ben Howard

With Ubuntu 12.04.2, the kernel team introduced the idea of the "hardware enablement kernel" (HWE), originally intended to support new hardware for bare metal server and desktop. In fact, the documentation indicates that HWE images are not suitable for Virtual or Cloud Computing environments.  The thought was that cloud and virtual environments provide stable hardware and that the newer kernel features would not be needed.

Time has proven this assumption painfully wrong. Take, for example, the need for drivers in virtual environments. Several of the Cloud providers that we have engaged with have requested the use of the HWE kernel by default. On GCE, the HWE kernels provide support for their NVMe disks and multiqueue NIC support. Azure has benefited from an updated Hyper-V driver stack, resulting in better performance. When we engaged with VMware Air, the 12.04 kernel lacked the necessary drivers.

Perhaps more germane to our Cloud users is that containers use kernel features. 12.04 users need the HWE kernel in order to make use of Docker. The new Ubuntu Fan project will be enabled for 14.04 via the HWE-V kernel for Ubuntu 14.04.3. If you use Ubuntu as your container host, you will likely want to consider an HWE kernel.
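(For the curious: on a running 14.04 system, opting into an HWE kernel stack is a single meta-package away; linux-generic-lts-utopic gives HWE-U and linux-generic-lts-vivid gives HWE-V:)

$ sudo apt-get update
$ sudo apt-get install linux-generic-lts-vivid    # the HWE-V (3.19) stack
$ sudo reboot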

And with that there has been a steady chorus of people requesting that we provide HWE image builds for AWS. The problem has never been the base builds; building the base bits is fairly easy. The hard part is that by adding these builds, each daily and release build goes from 96 images for AWS to 288 (needless to say, that is quite a problem). Over the last few weeks -- largely in my spare time -- I've been working out what it would take to deliver HWE images for AWS.

I am happy to announce that as of today, we are now building HWE-U (3.16) and HWE-V (3.19) Ubuntu 14.04 images for AWS. To be clear, we are not making any behavioral changes to the standard Ubuntu 14.04 images. Unless users opt into using an HWE image on AWS they will continue to get the 3.13 kernel. However, for those who want newer kernels, they now have the choice.

For the time being, only amd64 and i386 builds are being published. Over the next few weeks, we expect the HWE images to reach full feature parity, including release promotions and indexing. And I fully expect that the HWE-V version of 14.04 will include our recent Fan project once the SRUs complete.

Check them out at http://cloud-images.ubuntu.com/trusty/current/hwe-u and http://cloud-images.ubuntu.com/trusty/current/hwe-v .

As always, feedback is welcome.

Read more
Ben Howard

[UPDATE] The image IDs have been updated with the latest builds, which now include Docker 1.6.2, the latest LXD, and of course the Ubuntu Fan driver.

This week, Dustin Kirkland announced the Ubuntu Fan Project.  To steal from the description, "The Fan is not a software-defined network, and relies on neither distributed databases nor consensus protocols.  Rather, routes are calculated deterministically and traffic carries no additional overhead beyond routine IP tunneling.  Canonical engineers have already demonstrated The Fan operating at 5Gbps between two Docker containers on separate hosts."

My team at Canonical is responsible for the production of these images. Once the official SRUs land, I anticipate that we will publish an official stream over at cloud-images.ubuntu.com. But until then, check back here for images and updates. As always, if you have feedback, please hop into #server on FreeNode or send email.

GCE Images

Images for GCE have been published to the "ubuntu-os-cloud-devel" project.

The Images are:
  • daily-ubuntu-docker-lxd-1404-trusty-v20150620
  • daily-ubuntu-docker-lxd-1504-vivid-v20150621
To launch an instance, you might run:
$ gcloud compute instances create \
    --image-project ubuntu-os-cloud-devel \
    --image <IMAGE> <NAME>

You need to make sure that IPIP traffic is enabled:
$ gcloud compute firewall-rules create fan2 --allow 4 --source-ranges 10.0.0.0/8

Amazon AWS Images

The AWS images are HVM-only, AMD64 builds. 


Version     Region           HVM-SSD        HVM-Instance
14.04-LTS   eu-central-1     ami-7e94ac63   ami-8e93ab93
            sa-east-1        ami-f943c1e4   ami-e742c0fa
            ap-northeast-1   ami-543c9b54   ami-b4298eb4
            eu-west-1        ami-4ae2a73d   ami-48e7a23f
            us-west-1        ami-fbd126bf   ami-6bd3242f
            us-west-2        ami-63585c53   ami-875357b7
            ap-southeast-2   ami-7de69c47   ami-1de19b27
            ap-southeast-1   ami-aca4a0fe   ami-2a9b9f78
            us-east-1        ami-95877efe   ami-e58b728e
15.04       eu-central-1     ami-9a94ac87   ami-ae93abb3
            sa-east-1        ami-1340c20e   ami-0743c11a
            ap-northeast-1   ami-9c3c9b9c   ami-42379042
            eu-west-1        ami-a2e2a7d5   ami-e4e7a293
            us-west-1        ami-4bd0270f   ami-1dd32459
            us-west-2        ami-f9585cc9   ami-1dd32459
            ap-southeast-2   ami-5de69c67   ami-01e19b3b
            ap-southeast-1   ami-74a5a126   ami-c89b9f9a
            us-east-1        ami-29f90042   ami-8d8a73e6

It is important to note that these images are only usable inside of a VPC. Newer AWS users are in VPC by default, but older users may need to create and update their VPC. For example:
$ ec2-authorize --cidr <CIDR_RANGE> --protocol 4 <SECURITY_GROUP>


Read more