Canonical Voices

Posts tagged with 'cloud'

Louis

Introduction

Once in a while, I get to tackle issues that have little or no documentation beyond the product’s official documentation and source code. You may know from experience that product documentation is not always sufficient to get a complete configuration working. This article intends to flesh out a solution for customizing disk configurations using Curtin.

This article takes for granted that you are familiar with Maas install mechanisms and that you already know how to customize installations and deploy workloads using Juju.

While my colleagues in the Maas development team have done a tremendous job at keeping the Maas documentation accurate (see Maas documentation), it only covers the basics of Maas’s preseed customization, especially where Curtin is concerned.

Curtin is Maas’s fastpath installer, which is meant to replace Debian’s installer (familiarly known as d-i). It does a complete machine installation much faster than the standard Debian method. But while d-i is well known and it is easy to find examples of its use on the web, Curtin does not have the same notoriety and, hence, not as much documentation.

Theory of operation

When the fastpath installer is used to install a Maas unit (which is now the default), it sends the content of the files prefixed with curtin_ to the unit being installed. The curtin_userdata file contains cloud-config type commands that will be applied by cloud-init when the unit is installed. If we want to apply a specific partitioning scheme to all of our units, we can modify this file and every unit will get those commands applied to it when it installs.

But what if only one or a few servers have a specific disk layout that requires custom partitioning? In the following example, I will suppose that we have one server, named curtintest, which has a one terabyte (1 TB) disk and that we want to partition this disk with the following partition table:

  • Partition #1 has the /boot file system and is bootable
  • Partition #2 has the root (/) file system
  • Partition #3 has a 31 GB file system
  • Partition #4 has 32 GB of swap space
  • Partition #5 has the remaining disk space

Since only one server has such a disk, the partitioning should be specific to that curtintest server only.

Setting up Curtin development environment

To get to a working Maas partitioning setup, it is preferable to use Curtin’s development environment to test the curtin commands. Using a Maas deployment to test each command quickly becomes tedious and time consuming. There is a description of how to set the environment up in Curtin’s README.txt, but here are the details.

Aside from putting all the files under a single directory, the steps described here are the same as the ones in the README.txt file:

$ mkdir -p download
$ DLDIR=$(pwd)/download
$ rel="trusty"
$ arch=amd64
$ burl="http://cloud-images.ubuntu.com/$rel/current/"
$ for f in $rel-server-cloudimg-${arch}-root.tar.gz $rel-server-cloudimg-${arch}-disk1.img; do wget "$burl/$f" -O $DLDIR/$f; done
$ ( cd $DLDIR && qemu-img convert -O qcow2 $rel-server-cloudimg-${arch}-disk1.img $rel-server-cloudimg-${arch}-disk1.qcow2)
$ BOOTIMG="$DLDIR/$rel-server-cloudimg-${arch}-disk1.qcow2"
$ ROOTTGZ="$DLDIR/$rel-server-cloudimg-${arch}-root.tar.gz"
$ mkdir src
$ bzr init-repo src/curtin
$ (cd src/curtin && bzr  branch lp:curtin trunk.dist )
$ (cd src/curtin && bzr  branch trunk.dist trunk)
$ cd src/curtin/trunk

You now have an environment you can use with Curtin to automate installations. You can test it with the following command, which starts a VM and runs "curtin install" in it. Once you get the prompt, log in with:

username: ubuntu
password: passw0rd

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ -- curtin install "PUBURL/${ROOTTGZ##*/}"

Using Curtin in the development environment

To explore Curtin inside its environment without running an installation, simply remove -- curtin install "PUBURL/${ROOTTGZ##*/}" from the end of the statement above. Once logged in, you will find the Curtin executable in /curtin/bin:

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# /curtin/bin/curtin --help
usage: main.py [-h] [--showtrace] [--verbose] [--log-file LOG_FILE]
               {block-meta,curthooks,extract,hook,in-target,install,net-meta,pack,swap}
               ...

positional arguments:
{block-meta,curthooks,extract,hook,in-target,install,net-meta,pack,swap}

optional arguments:
-h, --help            show this help message and exit
--showtrace
--verbose, -v
--log-file LOG_FILE

Each of Curtin’s commands has its own help:

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# /curtin/bin/curtin install --help
usage: main.py install [-h] [-c FILE] [--set key=val] [source [source ...]]

positional arguments:
source what to install

optional arguments:
-h, --help show this help message and exit
-c FILE, --config FILE
read configuration from cfg
--set key=val define a config variable

 

Creating Maas’s Curtin preseed commands

Now that we have our Curtin development environment available, we can use it to come up with a set of commands that will be fed to Curtin by Maas when a unit is created.

Maas uses preseed files located in /etc/maas/preseeds on the Maas server. The curtin_userdata preseed file is the one that we will use as a reference to build our set of partitioning commands.  During the testing phase, we will use the -c option of curtin install along with a configuration file that will mimic the behavior of curtin_userdata.

We will also need to add a fake 1TB disk to Curtin’s development environment so we can use it as a partitioning target. So in the development environment, issue the following command:

$ qemu-img create -f qcow2 boot.disk 1000G
Formatting 'boot.disk', fmt=qcow2 size=1073741824000 encryption=off cluster_size=65536 lazy_refcounts=off

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ

username: ubuntu
password: passw0rd

ubuntu@ubuntu:~$ sudo -s
root@ubuntu:~# cat /proc/partitions

major minor  #blocks  name

 253        0    2306048 vda
 253        1    2305024 vda1
 253       16        426 vdb
 253       32 1048576000 vdc
  11        0    1048575 sr0

We can see that the 1000G /dev/vdc is indeed present. Let’s now start to craft the conffile that will receive our partitioning commands. To test the syntax, we will use two simple commands:

root@ubuntu:~# cat << EOF > conffile 
partitioning_commands:
  builtin: []
  01_partition_make_label: ["/sbin/parted", "/dev/vdc", "-s", "'","mklabel","msdos","'"]
  02_partition_make_part: ["/sbin/parted", "/dev/vdc", "-s", "'","mkpart","primary","1049K","538M","'"] 
sources:
  01_primary: http://192.168.0.13:9923//trusty-server-cloudimg-amd64-root.tar.gz
EOF

The sources: statement is only there to avoid having to repeat the SOURCE portion of the curtin command and is not to be used in the final Maas configuration. The URL is the address of the server from which you are running the Curtin development environment.

WARNING

The builtin: [] statement is VERY important. It overrides Curtin’s native builtin behavior, which is to partition the disk using "block-meta simple". If it is removed, Curtin will overwrite the partitioning with its default configuration. This comes straight from Scott Moser, the main developer behind Curtin.

Now let’s run the Curtin command:

root@ubuntu:~# /curtin/bin/curtin install -c conffile

Curtin will run its installation sequence and you will see output that should be familiar if you have installed units with Maas previously. The command will most probably exit with an error, complaining that install-grub received an argument that was not a block device. We do not need to worry about that at the moment.

Once completed, have a look at the partitioning of the /dev/vdc device:

root@ubuntu:~# parted /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4

The partitioning commands were successful and we have the /dev/vdc disk properly configured. Now that we know that the mechanism works, let’s try with a complete configuration file. I have found that it is preferable to start with a fresh 1TB disk:

root@ubuntu:~# poweroff

$ rm -f boot.disk

$ qemu-img create -f qcow2 boot.disk 1000G
Formatting 'boot.disk', fmt=qcow2 size=1073741824000 encryption=off cluster_size=65536 lazy_refcounts=off

$ sudo ./tools/launch $BOOTIMG --publish $ROOTTGZ

ubuntu@ubuntu:~$ sudo -s

root@ubuntu:~# cat << EOF > conffile 
partitioning_commands:
  builtin: [] 
  01_partition_announce: ["echo", "'### Partitioning disk ###'"]
  01_partition_make_label: ["/sbin/parted", "/dev/vda", "-s", "'","mklabel","msdos","'"]
  02_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","1049k","538M","'"]
  02_partition_set_flag: ["/sbin/parted", "/dev/vda", "-s", "'","set","1","boot","on","'"]
  04_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","538M","4538M","'"]
  05_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","extended","4538M","1000G","'"]
  06_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","25.5G","57G","'"]
  07_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","57G","89G","'"]
  08_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","89G","1000G","'"]
  09_partition_announce: ["echo", "'### Creating filesystems ###'"]
  10_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda1"]
  11_partition_label_fs: ["/sbin/e2label", "/dev/vda1", "cloudimg-boot"]
  12_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda2"]
  13_partition_label_fs: ["/sbin/e2label", "/dev/vda2", "cloudimg-rootfs"]
  14_partition_mount_fs: ["sh", "-c", "mount /dev/vda2 $TARGET_MOUNT_POINT"]
  15_partition_mkdir: ["sh", "-c", "mkdir $TARGET_MOUNT_POINT/boot"]
  16_partition_mount_fs: ["sh", "-c", "mount /dev/vda1 $TARGET_MOUNT_POINT/boot"]
  17_partition_announce: ["echo", "'### Filling /etc/fstab ###'"]
  18_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-rootfs / ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
  19_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-boot /boot ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
  20_partition_make_swap: ["sh", "-c", "mkswap /dev/vda6"]
  21_partition_make_fstab: ["sh", "-c", "echo '/dev/vda6 none swap sw 0 0' >> $OUTPUT_FSTAB"]
sources:
  01_primary: http://192.168.0.13:9923//trusty-server-cloudimg-amd64-root.tar.gz
EOF

You will note that I have added a few statements like ["echo", "'### Partitioning disk ###'"] that display some logging during execution. Those are not strictly necessary.
Now let’s try a second test with the complete configuration file:

root@ubuntu:~# /curtin/bin/curtin install -c conffile

root@ubuntu:~# parted /dev/vdc print
Model: Virtio Block Device (virtblk)
Disk /dev/vdc: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4         boot
2      538MB   4538MB  4000MB  primary   ext4
3      4538MB  1000GB  995GB   extended               lba
5      25.5GB  57.0GB  31.5GB  logical
6      57.0GB  89.0GB  32.0GB  logical
7      89.0GB  1000GB  911GB   logical

We now have a correctly partitioned disk in our development environment. All we need to do now is to carry that over to Maas to see if it works as expected.

Customization of Curtin execution in Maas

The section "How preseeds work in MAAS" gives a good outline of how to select the name of a preseed file to restrict its usage to specific sub-groups of nodes. In our case, we want our partitioning to apply to only one node: curtintest. So, following the description in the section "User provided preseeds", we need to use the following template:

{prefix}_{node_arch}_{node_subarch}_{release}_{node_name}

The filename that we choose needs to end with our hostname, curtintest. The other elements are:

  • prefix: curtin_userdata
  • node_arch: amd64
  • node_subarch: generic
  • release: trusty
  • node_name: curtintest

So according to that, our filename must be curtin_userdata_amd64_generic_trusty_curtintest.
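
As a quick sanity check, the filename can be assembled from the template pieces in the shell; this is purely illustrative (the variable names below are just for this example) and not something Maas requires:

$ prefix=curtin_userdata; node_arch=amd64; node_subarch=generic; release=trusty; node_name=curtintest
$ echo "${prefix}_${node_arch}_${node_subarch}_${release}_${node_name}"
curtin_userdata_amd64_generic_trusty_curtintest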

On the MAAS server, we do the following:

root@maas17:~# cd /etc/maas/preseeds

root@maas17:~# cp curtin_userdata curtin_userdata_amd64_generic_trusty_curtintest

We now edit this newly created file and add our previously crafted Curtin configuration just after the following block:

{{if third_party_drivers and driver}}
  early_commands:
  {{py: key_string = ''.join(['\\x%x' % x for x in map(ord, driver['key_binary'])])}}
  driver_00_get_key: /bin/echo -en '{{key_string}}' > /tmp/maas-{{driver['package']}}.gpg
  driver_01_add_key: ["apt-key", "add", "/tmp/maas-{{driver['package']}}.gpg"]
  driver_02_add: ["add-apt-repository", "-y", "deb {{driver['repository']}} {{node.get_distro_series()}} main"]
  driver_03_update_install: ["sh", "-c", "apt-get update --quiet && apt-get --assume-yes install {{driver['package']}}"]
  driver_04_load: ["sh", "-c", "depmod && modprobe {{driver['module']}}"]
  {{endif}}

The complete section should look just like this:

{{if third_party_drivers and driver}}
  early_commands:
  {{py: key_string = ''.join(['\\x%x' % x for x in map(ord, driver['key_binary'])])}}
   driver_00_get_key: /bin/echo -en '{{key_string}}' > /tmp/maas-{{driver['package']}}.gpg
   driver_01_add_key: ["apt-key", "add", "/tmp/maas-{{driver['package']}}.gpg"]
   driver_02_add: ["add-apt-repository", "-y", "deb {{driver['repository']}} {{node.get_distro_series()}} main"]
   driver_03_update_install: ["sh", "-c", "apt-get update --quiet && apt-get --assume-yes install {{driver['package']}}"]
   driver_04_load: ["sh", "-c", "depmod && modprobe {{driver['module']}}"]
  {{endif}}
  partitioning_commands:
   builtin: []
   01_partition_announce: ["echo", "'### Partitioning disk ###'"]
   01_partition_make_label: ["/sbin/parted", "/dev/vda", "-s", "'","mklabel","msdos","'"]
   02_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","1049k","538M","'"]
   02_partition_set_flag: ["/sbin/parted", "/dev/vda", "-s", "'","set","1","boot","on","'"]
   04_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","primary","538M","4538M","'"]
   05_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","extended","4538M","1000G","'"]
   06_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","25.5G","57G","'"]
   07_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","57G","89G","'"]
   08_partition_make_part: ["/sbin/parted", "/dev/vda", "-s", "'","mkpart","logical","89G","1000G","'"]
   09_partition_announce: ["echo", "'### Creating filesystems ###'"]
   10_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda1"]
   11_partition_label_fs: ["/sbin/e2label", "/dev/vda1", "cloudimg-boot"]
   12_partition_make_fs: ["/sbin/mkfs", "-t", "ext4", "/dev/vda2"]
   13_partition_label_fs: ["/sbin/e2label", "/dev/vda2", "cloudimg-rootfs"]
   14_partition_mount_fs: ["sh", "-c", "mount /dev/vda2 $TARGET_MOUNT_POINT"]
   15_partition_mkdir: ["sh", "-c", "mkdir $TARGET_MOUNT_POINT/boot"]
   16_partition_mount_fs: ["sh", "-c", "mount /dev/vda1 $TARGET_MOUNT_POINT/boot"]
   17_partition_announce: ["echo", "'### Filling /etc/fstab ###'"]
   18_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-rootfs / ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
   19_partition_make_fstab: ["sh", "-c", "echo 'LABEL=cloudimg-boot /boot ext4 defaults 0 0' >> $OUTPUT_FSTAB"]
   20_partition_make_swap: ["sh", "-c", "mkswap /dev/vda6"]
   21_partition_make_fstab: ["sh", "-c", "echo '/dev/vda6 none swap sw 0 0' >> $OUTPUT_FSTAB"]

Now that Maas is properly configured for curtintest, complete the test by deploying a charm in a Juju environment where curtintest is properly commissioned. In this example, curtintest is the only available node, so Maas will systematically pick it up:

caribou@avogadro:~$ juju status
environment: maas17
machines:
  "0":
    agent-state: started
    agent-version: 1.24.0
    dns-name: state-server.maas
    instance-id: /MAAS/api/1.0/nodes/node-2555c398-1bf9-11e5-a7c4-525400214658/
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=1024M
    state-server-member-status: has-vote
services: {}
networks:
  maas-eth0:
    provider-id: maas-eth0
    cidr: 192.168.100.0/24

caribou@avogadro:~$ juju deploy mysql
Added charm "cs:trusty/mysql-25" to the environment.

Once the mysql charm has been deployed, connect to the unit to confirm that the partitioning was successful:

caribou@avogadro:~$ juju ssh mysql/0
ubuntu@curtintest:~$ sudo -s
root@curtintest:~# parted /dev/vda print
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 1074GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
 
Number  Start   End     Size    Type      File system  Flags
1      1049kB  538MB   537MB   primary   ext4         boot
2      538MB   4538MB  4000MB  primary   ext4
3      4538MB  1000GB  995GB   extended               lba
5      25.5GB  57.0GB  31.5GB  logical
6      57.0GB  89.0GB  32.0GB  logical
7      89.0GB  1000GB  911GB   logical
ubuntu@curtintest:~$ swapon -s
Filename Type Size Used Priority
/dev/vda6 partition 31249404 0 -1

Conclusion

Customizing disks and partitions using Curtin is possible but currently not sufficiently documented. I hope that this write-up will be helpful. Sustained development on Curtin is under way to improve these functionalities, so things will definitely get better.

Read more
Ben Howard

With Ubuntu 12.04.2, the kernel team introduced the idea of the "hardware enablement kernel" (HWE), originally intended to support new hardware for bare metal servers and desktops. In fact, the documentation indicates that HWE images are not suitable for Virtual or Cloud Computing environments. The thought was that cloud and virtual environments provide stable hardware and that the newer kernel features would not be needed.

Time has proven this assumption painfully wrong. Take for example the need for drivers in virtual environments. Several of the Cloud providers that we have engaged with have requested the use of the HWE kernel by default. On GCE, the HWE kernels provide support for their NVMe disks and multiqueue NICs. Azure has benefited from an updated Hyper-V driver stack, resulting in better performance. When we engaged with VMware Air, the 12.04 kernel lacked the necessary drivers.

Perhaps more germane to our cloud users is that containers depend on newer kernel features: 12.04 users need the HWE kernel in order to make use of Docker, and the new Ubuntu Fan project will be enabled for 14.04 via the HWE-V kernel in Ubuntu 14.04.3. If you use Ubuntu as your container host, you will likely want to consider an HWE kernel.

And with that there has been a steady chorus of people requesting that we provide HWE image builds for AWS. The problem has never been the base builds; building the base bits is fairly easy. The hard part is that by adding base builds, each daily and release build goes from 96 images for AWS to 288 (needless to say that is quite a problem). Over the last few weeks -- largely in my spare time -- I've been working out what it would take to deliver HWE images for AWS.

I am happy to announce that as of today, we are now building HWE-U (3.16) and HWE-V (3.19) Ubuntu 14.04 images for AWS. To be clear, we are not making any behavioral changes to the standard Ubuntu 14.04 images. Unless users opt into using an HWE image on AWS they will continue to get the 3.13 kernel. However, for those who want newer kernels, they now have the choice.

For the time being, only amd64 and i386 builds are being published. Over the next few weeks, we expect the HWE images to reach full feature parity, including release promotions and indexing. And I fully expect that the HWE-V version of 14.04 will include our recent Fan project once the SRUs complete.

Check them out at http://cloud-images.ubuntu.com/trusty/current/hwe-u and http://cloud-images.ubuntu.com/trusty/current/hwe-v .

As always, feedback is always welcome.

Read more
Prakash

One of the drivers toward the cloud is that it is supposed to be green, but is Amazon Web Services itself green?

Amazon Web Services has been under fire in recent weeks from a group of activist customers who are calling for the company to be more transparent in its usage of renewable energy.

In response, rather than divulge additional details about the source of power for its massive cloud infrastructure, the company has argued that using the cloud is much more energy efficient than customers powering their own data center operations.

But the whole discussion has raised the question: How green is the cloud?

Let's find out: http://www.networkworld.com/article/2936654/iaas/how-green-is-amazon-s-cloud.html

Read more
Prakash

The latest Kilo release of the OpenStack software, made available Thursday, sports new identity (ID) federation capability that, in theory, will let a customer in California use her local OpenStack cloud for everyday work, but if the load spikes, allocate jobs to other OpenStack clouds either locally or far, far away.

“With Kilo, for the first time, you can log in on one dashboard and deploy across multiple clouds from many vendors worldwide,” Mark Collier, COO of the OpenStack Foundation, said in an interview.

Read More: http://fortune.com/2015/04/30/openstack-federation-cloud/

Read more
Prakash

Chinese e-commerce giant Alibaba is upping its investment in cloud computing in the United States, making it more of a competitor to Amazon, Google, and Microsoft than ever before.
Alibaba’s cloud division, Aliyun, has signed a series of new partnerships with the likes of Intel and data center company Equinix to localize its cloud offerings without having to build its own new data centers, CNBC’s Arjun Kharpal reports.

The slew of partnerships, which Alibaba is calling its Marketplace Alliance Program, focuses on expanding its cloud services globally, not just the US. Besides Equinix and Intel, it also signed a deal with Singtel in Singapore.

Read More: http://www.businessinsider.in/Alibaba-boosts-its-investment-in-the-cloud-wars-against-Amazon-Google-and-Microsoft/articleshow/47590813.cms

Read more
Robbie Williamson

A buffer overflow in the virtual floppy disk controller of QEMU has been discovered. An attacker could use this issue to cause QEMU to crash or execute arbitrary code in the host’s QEMU process.

This issue is mitigated in a couple ways on Ubuntu when using libvirt to manage QEMU virtual machines, which includes OpenStack’s use of QEMU. The QEMU process in the host environment is owned by a special libvirt-qemu user which helps to limit access to resources in the host environment. Additionally, the QEMU process is confined by an AppArmor profile that significantly lessens the impact of a vulnerability such as VENOM by reducing the host environment’s attack surface.

A fix for this issue has been committed in the upstream QEMU source code tracker. Ubuntu 12.04 LTS, Ubuntu 14.04 LTS, Ubuntu 14.10, and Ubuntu 15.04 are affected. To address the issue, ensure that qemu-kvm 1.0+noroms-0ubuntu14.22 (Ubuntu 12.04 LTS), qemu 2.0.0+dfsg-2ubuntu1.11 (Ubuntu 14.04 LTS), qemu 2.1+dfsg-4ubuntu6.6 (Ubuntu 14.10), qemu 1:2.2+dfsg-5expubuntu9.1 (Ubuntu 15.04) are installed.
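
As a quick way to confirm that the fixed package is installed, you can check the installed and candidate versions with apt; this is only a sketch, assuming an Ubuntu 14.04 LTS host (package names differ per release, e.g. qemu-kvm on 12.04 LTS). Note that running guests need to be restarted after the update for the fix to take effect.

  sudo apt-get update
  apt-cache policy qemu-system-x86
  sudo apt-get install --only-upgrade qemu-system-x86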

For reference, the Ubuntu Security Notices website is the best place to find information on security updates and the affected supported releases of Ubuntu.  Users can get notifications via email and RSS feeds from the USN site, as well as access the Ubuntu CVE Tracker.

Read more
Prakash

For the first time Amazon has revealed its numbers for AWS.

In its latest financial earnings report, Amazon said AWS grew 49 percent in 2014, pulling in $4.6 billion in revenue. After reaching $1.57 billion in the first quarter of this year, AWS is on track for $6.23 billion in sales by year’s end, the company said. Though its cloud business still accounted for only 7 percent of the company’s overall quarterly revenue of $22.72 billion, AWS is growing at a much faster rate than the rest of Amazon (AWS grew 49 percent, while the company’s core North American business grew 22 percent). And contrary to what the company has indicated in the past, its margins are significantly higher with AWS.

Read More: http://www.wired.com/2015/04/amazons-cloud-is-the-best-part-of-its-business-aws/

Read more
Ben Howard

I am pleased to announce initial Vagrant images [1, 2]. These images are bit-for-bit the same as the KVM images, but have a Cloud-init configuration that allows Snappy to work within the Vagrant workflow.

Vagrant enables a cross platform developer experience on MacOS, Windows or Linux [3].

Note: due to the way that Snappy works, shared file systems within Vagrant are not possible at this time. We are working on getting shared file system support enabled, but it will take us a little while.

If you want to use the Vagrant packaged in the Ubuntu archives, run the following in a terminal:

  • sudo apt-get -y install vagrant
  • cd <WORKSPACE>
  • vagrant init http://goo.gl/DO7a9W 
  • vagrant up
  • vagrant ssh
If you use Vagrant downloaded from [4] (e.g. on Windows or Mac, or to get the latest Vagrant), then you can run:
  • vagrant init ubuntu/ubuntu-15.04-snappy-core-edge-amd64
  • vagrant up
  • vagrant ssh

These images are a work in progress. If you encounter any issues, please report them to "snappy-devel@lists.ubuntu.com" or ping me (utlemming) on Launchpad.net

---

[1] http://cloud-images.ubuntu.com/snappy/15.04/core/edge/current/core-edge-amd64-vagrant.box
[2] https://atlas.hashicorp.com/ubuntu/boxes/ubuntu-15.04-snappy-core-edge-amd64
[3] https://docs.vagrantup.com/v2/why-vagrant/index.html
[4] https://www.vagrantup.com/downloads.html

Read more
Ben Howard

Back when we announced that the Ubuntu 14.04 LTS Cloud Images on Azure were using the Hardware Enablement Kernel (HWE), the immediate feedback was "what about 12.04?"


Well, the next Ubuntu 12.04 Cloud Images on Microsoft Azure will start using the HWE kernel. We have been working with Microsoft to validate using the 3.13 kernel on 12.04 and are pleased with the results and the stability. We spent a lot of time thinking about and testing this change, and in conference with the Ubuntu Kernel, Foundations and Cloud Image teams, feel this change will give the best experience on Microsoft Azure. 

By default, the HWE kernel is used on official images for Ubuntu 12.04 on VMware Air, Google Compute Engine, and now Microsoft Azure. 

Any 12.04 Image published to Azure with a serial later than 20140225 will default to the new HWE kernel. 

Users who want to upgrade their existing instance can simply run:
  • sudo apt-get update
  • sudo apt-get install linux-image-hwe-generic linux-cloud-tools-generic-lts-trusty
  • reboot

Read more
Ben Howard

One of the perennial problems in the Cloud is knowing what is the most current image and where to find it. Some Clouds provide a nice GUI console, an API, or some combination. But what has been missing is a "dashboard" showing Ubuntu across multiple Clouds.


Screenshot: https://cloud-images.ubuntu.com/locator
In that light, I am pleased to announce that we have a new beta Cloud Image Finder. This page shows where official Ubuntu images are available. As with all betas, we have some kinks to work out, like gathering up links for our Cloud Partners (so clicking an Image ID launches an image). I envision that in the future this locator page will be the default landing page for our Cloud Image page.



The need for this page became painfully apparent yesterday as I was working through the fallout of the Ghost Vulnerability (aka CVE-2015-0235). The Cloud Image team had spent a good amount of time pushing our images to AWS, Azure, GCE, Joyent and then notifying our partners like Brightbox, DreamCompute, CloudSigma and VMware of new builds. I realized that we needed a single place for our users to just look and see where the builds are available. And so I hacked up the EC2 Locator page to display other clouds.

Please note: this new page only shows stable releases. We push a lot of images and did not want to confuse things by showing betas, alphas, dailies or the development builds. Rather, this page will only show images that have been put through the complete QA process and are ready for production workloads.

This new locator page is backed by Simple Streams, our machine-readable data service. Simple Streams provides a way of locating images in a uniform way across clouds. Essentially, our new locator page is just a viewer of the Simple Streams data.
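
If you are curious about the data behind it, you can pull the stream index directly; a minimal sketch, assuming the released-image streams published under streams/v1 on cloud-images.ubuntu.com:

  curl -s http://cloud-images.ubuntu.com/releases/streams/v1/index.json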

Hopefully our users will find this new page useful. Feedback is always welcome. Please feel free to drop me a line (utlemming @ ubuntu dot com). 

Read more
Ben Howard

A few years ago, when our fine friends on the kernel team introduced the idea of the "hardware enablement" (HWE) kernel, those of us in the Cloud world looked at it as a curiosity. We thought that, by and large, the HWE kernel would not be needed or wanted for virtual cloud instances.

And we were wrong.

So wrong, in fact, that the HWE kernel has found its way into the Vagrant Cloud Images, VMware's vCHS, and Google Compute Engine as the default kernel for the Certified Images. The main reason for these requests is that virtual hardware moves at a fairly quick pace. Unlike traditional hardware, virtual hardware can be fixed and patched at the speed that software can be deployed.

The feedback regarding Azure has been the same: users and Microsoft have consistently asked for the HWE kernel. Microsoft has validated that the HWE kernel (3.16) running Ubuntu 14.04 on Windows Azure passes their validation testing. In our testing, we have validated that the 3.16 kernel works quite well on Azure.

For Azure users, the 3.16 HWE kernel brings SMB 2.1 file-copy support and updated LIS drivers.

Therefore, starting with the latest Windows Azure image [1], all the Ubuntu 14.04 images will track the latest hardware enablement kernel. That means that all the goodness in Ubuntu 14.10's kernel will be the default for 14.04 users launching our official images on Windows Azure.

If you want to install the new HWE (lts-utopic) kernel on your existing instance(s), simply run:

  • sudo apt-get update
  • sudo apt-get install linux-image-virtual-lts-utopic linux-lts-utopic-cloud-tools-common walinuxagent
  • sudo reboot


[1] b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB

Read more
Dustin Kirkland


With the recent introduction of Snappy Ubuntu, there are now several different ways to extend and update (apt-get vs. snappy) multiple flavors of Ubuntu (Core, Desktop, and Server).

We've put together this matrix with a few examples of where we think Traditional Ubuntu (apt-get) and Transactional Ubuntu (snappy) might make sense in your environment.  Note that this is, of course, not a comprehensive list.

Traditional apt-get
  • Ubuntu Core: Minimal Docker and LXC images
  • Ubuntu Desktop: Desktop, Laptop, Personal Workstations
  • Ubuntu Server: Baremetal, MAAS, OpenStack, General Purpose Cloud Images

Transactional snappy
  • Ubuntu Core: Minimal IoT Devices and Micro-Services Architecture Cloud Images
  • Ubuntu Desktop: Touch, Phones, Tablets
  • Ubuntu Server: Comfy, Human Developer Interaction (over SSH) in an atomically updated environment

I've presupposed a few of the questions you might ask, while you're digesting this new landscape...

Q: I'm looking for the smallest possible Ubuntu image that still supports apt-get...
A: You want our Traditional Ubuntu Core. This is often useful in building Docker and LXC containers.

Q: I'm building the next wearable IoT device/drone/robot, and perhaps deploying a fleet of atomically updated micro-services to the cloud...
A: You want Snappy Ubuntu Core.

Q: I want to install the best damn Linux on my laptop, desktop, or personal workstation, with industry-best security practices, 30K+ freely available open source packages, and extensive support for hardware devices and proprietary add-ons...
A: You want the same Ubuntu Desktop that we've been shipping for 10+ years, on time, every time ;-)

Q: I want that same converged, tasteful Ubuntu experience on my personal smart devices, like my phones and tablets...
A: You want Ubuntu Touch, which is a very graphical human interface focused expression of Snappy Ubuntu.

Q: I'm deploying Linux onto bare metal servers at scale in the data center, perhaps building IaaS clouds using OpenStack or PaaS clouds using Cloud Foundry, and I'm launching general purpose Linux server instances in public clouds (like AWS, Azure, or GCE) and private clouds...
A: You want the traditional apt-get Ubuntu Server.

Q: I'm developing and debugging applications, services, or frameworks for Snappy Ubuntu devices or cloud instances...
A: You want Comfy Ubuntu Server, which is a command line human interface extension of Snappy Ubuntu, with a number of conveniences and amenities (ssh, byobu, manpages, editors, etc.) that won't be typically included in the minimal Snappy Ubuntu Core build. [*Note that the Comfy images will be available very soon]

Cheers,
:-Dustin

Read more
Ben Howard


Snappy Launches


When we launched Snappy, we introduced it on Microsoft Azure [1], Google’s GCE [2], Amazon’s AWS [3] and our KVM images [4]. Immediately our developers were asking questions like, “where’s the Vagrant images”, which we launched yesterday [5].

The one remaining question was "where are the images for <insert hypervisor>?". We had inquiries about VirtualBox, VMware Desktop/Fusion, VMware Air, Citrix XenServer, etc.

OVA to the rescue

OVA is an industry standard for cross-hypervisor image support. The OVA spec [6] allows you to import a single image to:

  • VMware products
    • ESXi
    • Desktop
    • Fusion
    • VSphere
  • Virtualbox
  • Citrix XenServer
  • Microsoft SCVMM
  • Red Hat Enterprise Virtualization
  • SuSE Studio
  • Oracle VM

Okay, so where can I get the OVA images?

You can get the latest OVA image from [7]. From there, you will need to follow your hypervisor's instructions for importing OVA images.

Or if you want a short URL, http://goo.gl/xM89p7
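
As an example of what an import might look like, here is a minimal sketch assuming VirtualBox as the target hypervisor and the image from [7]; other hypervisors have their own import tooling:

  • wget http://cloud-images.ubuntu.com/snappy/devel/core/current/devel-core-amd64-cloud.ova
  • VBoxManage import devel-core-amd64-cloud.ova
  • VBoxManage startvm <imported-vm-name>   (the name is shown by "VBoxManage list vms")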


---

[1] http://www.ubuntu.com/cloud/tools/snappy#snappy-azure
[2] http://www.ubuntu.com/cloud/tools/snappy#snappy-google
[3] http://www.ubuntu.com/cloud/tools/snappy#snappy-amazon
[4] http://www.ubuntu.com/cloud/tools/snappy#snappy-local
[5] https://blog.utlemming.org/2015/01/snappy-images-for-vagrant.htm
[6] https://en.wikipedia.org/wiki/Open_Virtualization_Format
[7] http://cloud-images.ubuntu.com/snappy/devel/core/current/devel-core-amd64-cloud.ova

Read more
Prakash

Amazon EC2 was the most reliable compute and Google Cloud Storage was the most reliable storage.

The Laguna Beach, California-based company tracks the status of more than 30 clouds, from AWS to Zettagrid:

Service provider                   Outages   Downtime
Amazon EC2                         20        2.41 Hours
Amazon S3                          23        2.69 Hours
Google Compute Engine              72        4.41 Hours
Google Cloud Storage               8         14.23 Minutes
Microsoft Azure Virtual Machines   92        40 Hours
Microsoft Azure Object Storage     141       10.97 Hours

Read More: https://gigaom.com/2015/01/07/amazon-web-services-tops-list-of-most-reliable-public-clouds/

Read more
Dustin Kirkland


Awww snap!

That's right!  Snappy Ubuntu images are now on AWS, for your EC2 computing pleasure.

Enjoy this screencast as we start a Snappy Ubuntu instance in AWS, and install the xkcd-webserver package.


And a transcript of the commands follows below.

kirkland@x230:/tmp⟫ cat cloud.cfg
#cloud-config
snappy:
ssh_enabled: True
kirkland@x230:/tmp⟫ aws ec2 describe-images \
> --region us-east-1 \
> --image-ids ami-5c442634

{
"Images": [
{
"ImageType": "machine",
"Description": "ubuntu-core-devel-1418912739-141-amd64",
"Hypervisor": "xen",
"ImageLocation": "ucore-images/ubuntu-core-devel-1418912739-141-amd64.manifest.xml",
"SriovNetSupport": "simple",
"ImageId": "ami-5c442634",
"RootDeviceType": "instance-store",
"Architecture": "x86_64",
"BlockDeviceMappings": [],
"State": "available",
"VirtualizationType": "hvm",
"Name": "ubuntu-core-devel-1418912739-141-amd64",
"OwnerId": "649108100275",
"Public": false
}
]
}
kirkland@x230:/tmp⟫
kirkland@x230:/tmp⟫ # NOTE: This AMI will almost certainly have changed by the time you're watching this ;-)
kirkland@x230:/tmp⟫ clear
kirkland@x230:/tmp⟫ aws ec2 run-instances \
> --region us-east-1 \
> --image-id ami-5c442634 \
> --key-name id_rsa \
> --instance-type m3.medium \
> --user-data "$(cat cloud.cfg)"
{
"ReservationId": "r-c6811e28",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-d5d135bc"
}
],
"OwnerId": "357813986684",
"Instances": [
{
"KeyName": "id_rsa",
"PublicDnsName": null,
"ProductCodes": [],
"StateTransitionReason": null,
"LaunchTime": "2014-12-18T17:29:07.000Z",
"Monitoring": {
"State": "disabled"
},
"ClientToken": null,
"StateReason": {
"Message": "pending",
"Code": "pending"
},
"RootDeviceType": "instance-store",
"Architecture": "x86_64",
"PrivateDnsName": null,
"ImageId": "ami-5c442634",
"BlockDeviceMappings": [],
"Placement": {
"GroupName": null,
"AvailabilityZone": "us-east-1e",
"Tenancy": "default"
},
"AmiLaunchIndex": 0,
"VirtualizationType": "hvm",
"NetworkInterfaces": [],
"SecurityGroups": [
{
"GroupName": "default",
"GroupId": "sg-d5d135bc"
}
],
"State": {
"Name": "pending",
"Code": 0
},
"Hypervisor": "xen",
"InstanceId": "i-af43de51",
"InstanceType": "m3.medium",
"EbsOptimized": false
}
]
}
kirkland@x230:/tmp⟫
kirkland@x230:/tmp⟫ aws ec2 describe-instances --region us-east-1 | grep PublicIpAddress
"PublicIpAddress": "54.145.196.209",
kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@54.145.196.209
ssh: connect to host 54.145.196.209 port 22: Connection refused
255 kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@54.145.196.209
The authenticity of host '54.145.196.209 (54.145.196.209)' can't be established.
RSA key fingerprint is 91:91:6e:0a:54:a5:07:b9:79:30:5b:61:d4:a8:ce:6f.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '54.145.196.209' (RSA) to the list of known hosts.
Welcome to Ubuntu Vivid Vervet (development branch) (GNU/Linux 3.16.0-25-generic x86_64)

* Documentation: https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to the Ubuntu Core rolling development release.

* See https://ubuntu.com/snappy

It's a brave new world here in snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snappy --help'
for app installation and transactional updates.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ip-10-153-149-47:~$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=1923976k,nr_inodes=480994,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=385432k,mode=755)
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
/dev/xvda3 on /writable type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,mode=755)
tmpfs on /etc/fstab type tmpfs (rw,nosuid,noexec,relatime,mode=755)
/dev/xvda3 on /etc/systemd/system type ext4 (rw,relatime,discard,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
tmpfs on /etc/machine-id type tmpfs (ro,relatime,size=385432k,mode=755)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/xvda3 on /etc/hosts type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/sudoers.d type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /root type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /usr/share/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/snappy type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/click type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/initramfs-tools type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/writable type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ssh type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/tmp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/cache/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/apparmor.d/cache type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ufw type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/log type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/system-image type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /var/lib/sudo type tmpfs (rw,relatime,mode=700)
/dev/xvda3 on /var/lib/logrotate type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dhcp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dbus type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/cloud type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apps type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /mnt type tmpfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
/dev/xvda3 on /apps type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /home type ext4 (rw,relatime,discard,data=ordered)
/dev/xvdb on /mnt type ext3 (rw,relatime,data=ordered)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=385432k,mode=700,uid=1000,gid=1000)
ubuntu@ip-10-153-149-47:~$ mount | grep " / "
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
ubuntu@ip-10-153-149-47:~$ sudo touch /foo
touch: cannot touch ‘/foo’: Read-only file system
ubuntu@ip-10-153-149-47:~$ sudo apt-get update
Ubuntu Core does not use apt-get, see 'snappy --help'!
ubuntu@ip-10-153-149-47:~$ sudo snappy --help
Usage:snappy [-h] [-v]
{info,versions,search,update-versions,update,rollback,install,uninstall,tags,build,chroot,framework,fake-version,nap}
...

snappy command line interface

optional arguments:
-h, --help show this help message and exit
-v, --version Print this version string and exit

Commands:
{info,versions,search,update-versions,update,rollback,install,uninstall,tags,build,chroot,framework,fake-version,nap}
info
versions
search
update-versions
update
rollback undo last system-image update.
install
uninstall
tags
build
chroot
framework
fake-version ==SUPPRESS==
nap ==SUPPRESS==
ubuntu@ip-10-153-149-47:~$ sudo snappy info
release: ubuntu-core/devel
frameworks:
apps:
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
ubuntu@ip-10-153-149-47:~$ sudo snappy search docker
Part Version Description
docker 1.3.2.007 The docker app deployment mechanism
ubuntu@ip-10-153-149-47:~$ sudo snappy install docker
docker 4 MB [=============================================================================================================] OK
Part Tag Installed Available Fingerprint Active
docker edge 1.3.2.007 - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
docker edge 1.3.2.007 - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy search webserver
Part Version Description
go-example-webserver 1.0.1 Minimal Golang webserver for snappy
xkcd-webserver 0.3.1 Show random XKCD compic via a build-in webserver
ubuntu@ip-10-153-149-47:~$ sudo snappy install xkcd-webserver
xkcd-webserver 21 kB [=====================================================================================================] OK
Part Tag Installed Available Fingerprint Active
xkcd-webserver edge 0.3.1 - 3a9152b8bff494 *
ubuntu@ip-10-153-149-47:~$ exit
logout
Connection to 54.145.196.209 closed.
kirkland@x230:/tmp⟫ ec2-instances
i-af43de51 ec2-54-145-196-209.compute-1.amazonaws.com
kirkland@x230:/tmp⟫ ec2-terminate-instances i-af43de51
INSTANCE i-af43de51 running shutting-down
kirkland@x230:/tmp⟫

Cheers!
Dustin

Read more
Dustin Kirkland


As promised last week, we're now proud to introduce Ubuntu Snappy images on another of our public cloud partners -- Google Compute Engine.
In the video below, you can join us walking through the instructions we have published here.
Snap it up!
:-Dustin

Read more
Dustin Kirkland



A couple of months ago, I re-introduced an old friend -- Ubuntu JeOS (Just enough OS) -- the smallest, (merely 63MB compressed!) functional OS image that we can still call “Ubuntu”.  In fact, we call it Ubuntu Core.

That post was a prelude to something we’ve been actively developing at Canonical for most of 2014 -- Snappy Ubuntu Core!  Snappy Ubuntu combines the best of the ground-breaking image-based Ubuntu remix known as Ubuntu Touch for phones and tablets with the base Ubuntu server operating system trusted by millions of instances in the cloud.

Snappy introduces transactional updates and atomic, image based workflows -- old ideas implemented in databases for decades -- adapted to Ubuntu cloud and server ecosystems for the emerging cloud design patterns known as microservice architectures.

The underlying, base operating system is a very lean Ubuntu Core installation, running on a read-only system partition, much like your iOS, Android, or Ubuntu phone.  One or more “frameworks” can be installed through the snappy command, which is an adaptation of the click packaging system we developed for the Ubuntu Phone.  Perhaps the best sample framework is Docker.  Applications are also packaged and installed using snappy, but apps run within frameworks.  This means that any of the thousands of Docker images available in DockerHub are trivially installable as snap packages, running on the Docker framework in Snappy Ubuntu.

Take Snappy for a Drive


You can try Snappy for yourself in minutes!

You can download Snappy and launch it in a local virtual machine like this:

$ wget http://cdimage.ubuntu.com/ubuntu-core/preview/ubuntu-core-alpha-01.img
$ kvm -m 512 -redir :2222::22 -redir :4443::443 ubuntu-core-alpha-01.img

Then, SSH into it with password 'ubuntu':

$ ssh -p 2222 ubuntu@localhost

At this point, you might want to poke around the system.  Take a look at the mount points, and perhaps try to touch or modify some files.


$ sudo rm /sbin/init
rm: cannot remove ‘/sbin/init’: Permission denied
$ sudo touch /foo

touch: cannot touch ‘foo’: Permission denied
$ apt-get install docker
apt-get: command not found

Rather, let's have a look at the new snappy package manager:

$ sudo snappy --help



And now, let’s install the Docker framework:

$ sudo snappy install docker

At this point, we can do essentially anything available in the Docker ecosystem!

Now, we’ve created some sample Snappy apps using existing Docker containers.  For one example, let’s now install OwnCloud:

$ sudo snappy install owncloud

This will take a little while to install, but eventually, you can point a browser at your own private OwnCloud image, running within a Docker container, on your brand new Ubuntu Snappy system.

We can also update the entire system with a simple command and a reboot:
$ sudo snappy versions
$ sudo snappy update
$ sudo reboot

And we can rollback to the previous version!
$ sudo snappy rollback
$ sudo reboot

Here's a short screencast of all of the above...


While the downloadable image is available for your local testing today, you will very soon be able to launch Snappy Ubuntu instances in your favorite public (Azure, GCE, AWS) and private clouds (OpenStack).


Enjoy!
Dustin

Read more
Colin Ian King

Before I started some analysis on benchmarking various popular file systems on Linux, I was recommended to read "Systems Performance: Enterprise and the Cloud" by Brendan Gregg.

In today's modern server and cloud-based systems, the multi-layered complexity can make it hard to pinpoint performance issues and bottlenecks. This book is packed full of useful analysis techniques covering tracing, kernel internals, tools and benchmarking.

Getting a well balanced and tuned system depends on all of its different components, and the book has chapters covering CPU optimisation (cores, threading, caching and interconnects), memory optimisation (virtual memory, paging, swapping, allocators, busses), file system I/O, storage, networking (protocols, sockets, physical connections) and typical issues facing cloud computing.

The book is full of very useful examples and practical instructions on how to drill down and discover performance issues in a system and also includes some real-world case studies too.

It has helped me become even more focused on how to analyse performance issues and consider how to do deep system instrumentation to be able to understand where and why performance regressions occur.

All-in-all, a most systematic and well written book that I'd recommend to anyone running large complex servers and cloud computing environments.






Read more
Ben Howard

We are pleased to announce that Ubuntu 12.04 LTS, 14.04 LTS, and 14.10 are now in beta on Google Compute Engine [1, 2, 3].

These images support both traditional user-data as well as Google Compute Engine startup scripts. We have included the Google Cloud SDK pre-installed as well. Users coming from other clouds can expect the same great experience, while enjoying the features of Google Compute Engine.

From an engineering perspective, a lot of us are excited to see this launch. While we don't expect too many rough edges, it is a beta, so feedback is welcome. Please file bugs or join us in #ubuntu-server on Freenode to report any issues (ping me, utlemming, rcj or Odd_Bloke).

Finally, I wanted to thank those that have helped on this project. Launching a cloud is not an easy engineering task. You have to build infrastructure to support the new cloud, create tooling to build and publish, write QA stacks, and do packaging work. All of this spans multiple teams and disciplines. The support from Google and Canonical's Foundations and Kernel teams has been instrumental in this launch, as have the engineers on the Certified Public Cloud team.

Getting the Google Cloud SDK:

As part of the launch, Canonical and Google have been working together on packaging a version of the Google Cloud SDK. At this time, we are unable to bring it into the main archives. However, you can find it in our partner archive.

To install it run the following:

  • echo "deb http://archive.canonical.com/ubuntu $(lsb_release -c -s) partner" | sudo tee /etc/apt/sources.list.d/partner.list
  • sudo apt-get update
  • sudo apt-get -y install google-cloud-sdk


Then follow the instructions for using the Cloud SDK at [4].
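
For example, a first run might look something like the sketch below; the project ID and image name are placeholders, so check [4] and the output of "gcloud compute images list" for the real values:

  • gcloud auth login
  • gcloud config set project <your-project-id>
  • gcloud compute images list --project ubuntu-os-cloud
  • gcloud compute instances create ubuntu-trusty-test --zone us-central1-a --image-project ubuntu-os-cloud --image <image-name-from-the-list>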


[1] https://cloud.google.com/compute/docs/operating-systems#ubuntu
[2] http://googlecloudplatform.blogspot.co.uk/2014/11/curated-ubuntu-images-now-available-on.html
[3] http://insights.ubuntu.com/2014/11/03/certified-ubuntu-images-available-on-google-cloud-platform/
[4] https://cloud.google.com/sdk/gcloud/

Read more
Jane Silber

Today marks 10 years of Ubuntu and the release of the 21st version. That is an incredible milestone and one which is worthy of reflection and celebration. I am fortunate enough to be spending the day at our devices sprint with 200+ of the folks that have helped make this possible. There are of course hundreds of others in Canonical and thousands in the community who have helped as well. The atmosphere here includes a lot of reminiscing about the early days and re-telling of the funny stories, and there is a palpable excitement in the air about the future. That same excitement was present at a Canonical Cloud Summit in Brussels last week.

The team here is closing in on shipping our first phone, marking a new era in Ubuntu’s history. There has been excellent work recently to close bugs and improve quality, and our partner BQ is as pleased with the results as we are. We are on the home stretch to this milestone, and are still on track to have Ubuntu phones in the market this year. Further, there is an impressive array of further announcements and phones lined up for 2015.

But of course that’s not all we do – the Ubuntu team and community continue to put out rock solid, high quality Ubuntu desktop releases like clockwork – the 21st of which will be released today. And with the same precision, our PC OEM team continues to make that great work available on a pre-installed basis on millions of PCs across hundreds of machine configurations. That’s an unparalleled achievement, and we really have changed the landscape of Linux and open source over the last decade. The impact of Ubuntu can be seen in countless ways – from the individuals, schools, and enterprises who now use Ubuntu; to proliferation of Codes of Conduct in open source communities; to the acceptance of faster (and near continuous) release cycles for operating systems; to the unique company/community collaboration that makes Ubuntu possible; to the vast number of developers who have now grown up with Ubuntu and in an open source world; to the many, many, many technical innovations to come out of Ubuntu, from single-CD installation in years past to the more recent work on image-based updates.

Ubuntu Server also sprang from our early desktop roots, and has now grown into the leading solution for scale out computing. Ubuntu and our suite of cloud products and services is the premier choice for any customer or partner looking to operate at scale, and it is indeed a “scale-out” world. From easy to consume Ubuntu images on public clouds; to managed cloud infrastructure via BootStack; to standard on-premise, self-managed clouds via Ubuntu OpenStack; to instant solutions delivered on any substrate via Juju, we are the leaders in a highly competitive, dynamic space. The agility, reliability and superior execution that have brought us to today’s milestone remains a critical competency for our cloud team. And as we release Ubuntu 14.10 today, which includes the latest OpenStack, new versions of our tooling such as MaaS and Juju, and initial versions of scale-out solutions for big data and Cloud Foundry, we build on a ten year history of “firsts”.

All Ubuntu releases seem to have their own personality, and Utopic is a fitting way to commemorate the realisation of a decade of vision, hard work and collaboration. We are poised on the edge of a very different decade in Canonical’s history, one in which we’ll carry forward the applicable successes and patterns, but will also forge a new path in the twin worlds of converged devices and scale-out computing. Thanks to everyone who has contributed to the journey thus far. Now, on to Vivid and the next ten years!

Read more