Canonical Voices

Posts tagged with 'cloud'

Dustin Kirkland


Awww snap!

That's right!  Snappy Ubuntu images are now on AWS, for your EC2 computing pleasure.

Enjoy this screencast as we start a Snappy Ubuntu instance in AWS, and install the xkcd-webserver package.


And a transcript of the commands follows below.

kirkland@x230:/tmp⟫ cat cloud.cfg
#cloud-config
snappy:
ssh_enabled: True
kirkland@x230:/tmp⟫ aws ec2 describe-images \
> --region us-east-1 \
> --image-ids ami-5c442634

{
"Images": [
{
"ImageType": "machine",
"Description": "ubuntu-core-devel-1418912739-141-amd64",
"Hypervisor": "xen",
"ImageLocation": "ucore-images/ubuntu-core-devel-1418912739-141-amd64.manifest.xml",
"SriovNetSupport": "simple",
"ImageId": "ami-5c442634",
"RootDeviceType": "instance-store",
"Architecture": "x86_64",
"BlockDeviceMappings": [],
"State": "available",
"VirtualizationType": "hvm",
"Name": "ubuntu-core-devel-1418912739-141-amd64",
"OwnerId": "649108100275",
"Public": false
}
]
}
kirkland@x230:/tmp⟫
kirkland@x230:/tmp⟫ # NOTE: This AMI will almost certainly have changed by the time you're watching this ;-)
kirkland@x230:/tmp⟫ clear
kirkland@x230:/tmp⟫ aws ec2 run-instances \
> --region us-east-1 \
> --image-id ami-5c442634 \
> --key-name id_rsa \
> --instance-type m3.medium \
> --user-data "$(cat cloud.cfg)"
{
"ReservationId": "r-c6811e28",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-d5d135bc"
}
],
"OwnerId": "357813986684",
"Instances": [
{
"KeyName": "id_rsa",
"PublicDnsName": null,
"ProductCodes": [],
"StateTransitionReason": null,
"LaunchTime": "2014-12-18T17:29:07.000Z",
"Monitoring": {
"State": "disabled"
},
"ClientToken": null,
"StateReason": {
"Message": "pending",
"Code": "pending"
},
"RootDeviceType": "instance-store",
"Architecture": "x86_64",
"PrivateDnsName": null,
"ImageId": "ami-5c442634",
"BlockDeviceMappings": [],
"Placement": {
"GroupName": null,
"AvailabilityZone": "us-east-1e",
"Tenancy": "default"
},
"AmiLaunchIndex": 0,
"VirtualizationType": "hvm",
"NetworkInterfaces": [],
"SecurityGroups": [
{
"GroupName": "default",
"GroupId": "sg-d5d135bc"
}
],
"State": {
"Name": "pending",
"Code": 0
},
"Hypervisor": "xen",
"InstanceId": "i-af43de51",
"InstanceType": "m3.medium",
"EbsOptimized": false
}
]
}
kirkland@x230:/tmp⟫
kirkland@x230:/tmp⟫ aws ec2 describe-instances --region us-east-1 | grep PublicIpAddress
"PublicIpAddress": "54.145.196.209",
kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@54.145.196.209
ssh: connect to host 54.145.196.209 port 22: Connection refused
255 kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@54.145.196.209
The authenticity of host '54.145.196.209 (54.145.196.209)' can't be established.
RSA key fingerprint is 91:91:6e:0a:54:a5:07:b9:79:30:5b:61:d4:a8:ce:6f.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '54.145.196.209' (RSA) to the list of known hosts.
Welcome to Ubuntu Vivid Vervet (development branch) (GNU/Linux 3.16.0-25-generic x86_64)

* Documentation: https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to the Ubuntu Core rolling development release.

* See https://ubuntu.com/snappy

It's a brave new world here in snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snappy --help'
for app installation and transactional updates.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ip-10-153-149-47:~$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=1923976k,nr_inodes=480994,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=385432k,mode=755)
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
/dev/xvda3 on /writable type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,mode=755)
tmpfs on /etc/fstab type tmpfs (rw,nosuid,noexec,relatime,mode=755)
/dev/xvda3 on /etc/systemd/system type ext4 (rw,relatime,discard,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
tmpfs on /etc/machine-id type tmpfs (ro,relatime,size=385432k,mode=755)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/xvda3 on /etc/hosts type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/sudoers.d type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /root type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /usr/share/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/snappy type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/click type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/initramfs-tools type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/writable type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ssh type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/tmp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/cache/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/apparmor.d/cache type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ufw type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/log type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/system-image type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /var/lib/sudo type tmpfs (rw,relatime,mode=700)
/dev/xvda3 on /var/lib/logrotate type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dhcp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dbus type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/cloud type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apps type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /mnt type tmpfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
/dev/xvda3 on /apps type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /home type ext4 (rw,relatime,discard,data=ordered)
/dev/xvdb on /mnt type ext3 (rw,relatime,data=ordered)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=385432k,mode=700,uid=1000,gid=1000)
ubuntu@ip-10-153-149-47:~$ mount | grep " / "
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
ubuntu@ip-10-153-149-47:~$ sudo touch /foo
touch: cannot touch ‘/foo’: Read-only file system
ubuntu@ip-10-153-149-47:~$ sudo apt-get update
Ubuntu Core does not use apt-get, see 'snappy --help'!
ubuntu@ip-10-153-149-47:~$ sudo snappy --help
Usage: snappy [-h] [-v]
{info,versions,search,update-versions,update,rollback,install,uninstall,tags,build,chroot,framework,fake-version,nap}
...

snappy command line interface

optional arguments:
-h, --help show this help message and exit
-v, --version Print this version string and exit

Commands:
{info,versions,search,update-versions,update,rollback,install,uninstall,tags,build,chroot,framework,fake-version,nap}
info
versions
search
update-versions
update
rollback undo last system-image update.
install
uninstall
tags
build
chroot
framework
fake-version ==SUPPRESS==
nap ==SUPPRESS==
ubuntu@ip-10-153-149-47:~$ sudo snappy info
release: ubuntu-core/devel
frameworks:
apps:
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
ubuntu@ip-10-153-149-47:~$ sudo snappy search docker
Part Version Description
docker 1.3.2.007 The docker app deployment mechanism
ubuntu@ip-10-153-149-47:~$ sudo snappy install docker
docker 4 MB [=============================================================================================================] OK
Part Tag Installed Available Fingerprint Active
docker edge 1.3.2.007 - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
docker edge 1.3.2.007 - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy search webserver
Part Version Description
go-example-webserver 1.0.1 Minimal Golang webserver for snappy
xkcd-webserver 0.3.1 Show random XKCD compic via a build-in webserver
ubuntu@ip-10-153-149-47:~$ sudo snappy install xkcd-webserver
xkcd-webserver 21 kB [=====================================================================================================] OK
Part Tag Installed Available Fingerprint Active
xkcd-webserver edge 0.3.1 - 3a9152b8bff494 *
ubuntu@ip-10-153-149-47:~$ exit
logout
Connection to 54.145.196.209 closed.
kirkland@x230:/tmp⟫ ec2-instances
i-af43de51 ec2-54-145-196-209.compute-1.amazonaws.com
kirkland@x230:/tmp⟫ ec2-terminate-instances i-af43de51
INSTANCE i-af43de51 running shutting-down
kirkland@x230:/tmp⟫

Cheers!
Dustin

Read more
Dustin Kirkland


As promised last week, we're now proud to introduce Ubuntu Snappy images on another of our public cloud partners -- Google Compute Engine.
In the video below, you can join us walking through the instructions we have published here.
Snap it up!
:-Dustin

Read more
Dustin Kirkland



A couple of months ago, I re-introduced an old friend -- Ubuntu JeOS (Just enough OS) -- the smallest, (merely 63MB compressed!) functional OS image that we can still call “Ubuntu”.  In fact, we call it Ubuntu Core.

That post was a prelude to something we’ve been actively developing at Canonical for most of 2014 -- Snappy Ubuntu Core!  Snappy Ubuntu combines the best of the ground-breaking image-based Ubuntu remix known as Ubuntu Touch for phones and tablets with the base Ubuntu server operating system trusted by millions of instances in the cloud.

Snappy introduces transactional updates and atomic, image-based workflows -- old ideas implemented in databases for decades -- adapted to the Ubuntu cloud and server ecosystems for the emerging cloud design patterns known as microservice architectures.

The underlying, base operating system is a very lean Ubuntu Core installation, running on a read-only system partition, much like your iOS, Android, or Ubuntu phone.  One or more “frameworks” can be installed through the snappy command, which is an adaptation of the click packaging system we developed for the Ubuntu Phone.  Perhaps the best sample framework is Docker.  Applications are also packaged and installed using snappy, but apps run within frameworks.  This means that any of the thousands of Docker images available in DockerHub are trivially installable as snap packages, running on the Docker framework in Snappy Ubuntu.

Take Snappy for a Drive


You can try Snappy for yourself in minutes!

You can download Snappy and launch it in a local virtual machine like this:

$ wget http://cdimage.ubuntu.com/ubuntu-core/preview/ubuntu-core-alpha-01.img
$ kvm -m 512 -redir :2222::22 -redir :4443::443 ubuntu-core-alpha-01.img

Then, SSH into it with password 'ubuntu':

$ ssh -p 2222 ubuntu@localhost

At this point, you might want to poke around the system.  Take a look at the mount points, and perhaps try to touch or modify some files.


$ sudo rm /sbin/init
rm: cannot remove ‘/sbin/init’: Permission denied
$ sudo touch /foo
touch: cannot touch ‘/foo’: Permission denied
$ apt-get install docker
apt-get: command not found

Rather, let's have a look at the new snappy package manager:

$ sudo snappy --help



And now, let’s install the Docker framework:

$ sudo snappy install docker

At this point, we can do essentially anything available in the Docker ecosystem!

Now, we’ve created some sample Snappy apps using existing Docker containers.  As one example, let’s install OwnCloud:

$ sudo snappy install owncloud

This will take a little while to install, but eventually, you can point a browser at your own private OwnCloud image, running within a Docker container, on your brand new Ubuntu Snappy system.
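
Given the port redirections in the kvm invocation above, that means browsing from the host to the guest's HTTPS port -- a quick check, assuming the OwnCloud snap serves HTTPS on the guest's port 443:

$ xdg-open https://localhost:4443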

We can also update the entire system with a simple command and a reboot:
$ sudo snappy versions
$ sudo snappy update
$ sudo reboot

And we can rollback to the previous version!
$ sudo snappy rollback
$ sudo reboot

Here's a short screencast of all of the above...


While the downloadable image is available for your local testing today, you will very soon be able to launch Snappy Ubuntu instances in your favorite public (Azure, GCE, AWS) and private clouds (OpenStack).


Enjoy!
Dustin

Read more
Colin Ian King

Before I started some analysis on benchmarking various popular file systems on Linux, I was recommended to read "Systems Performance: Enterprise and the Cloud" by Brendan Gregg.

In today's modern server and cloud based systems, the multi-layered complexity can make it hard to pinpoint performance issues and bottlenecks. This book is packed full of useful analysis techniques covering tracing, kernel internals, tools and benchmarking.

Getting a well balanced and tuned system depends on all the different components, and the book has chapters covering CPU optimisation (cores, threading, caching and interconnects), memory optimisation (virtual memory, paging, swapping, allocators, busses), file system I/O, storage, networking (protocols, sockets, physical connections) and the typical issues facing cloud computing.

The book is full of very useful examples and practical instructions on how to drill down and discover performance issues in a system and also includes some real-world case studies too.

It has helped me become even more focused on how to analyse performance issues, and on how to do deep system instrumentation to understand where and why performance regressions occur.

All-in-all, a most systematic and well written book that I'd recommend to anyone running large complex servers and cloud computing environments.

Read more
Ben Howard

We are pleased to announce that Ubuntu 12.04 LTS, 14.04 LTS, and 14.10 are now in beta on Google Compute Engine [1, 2, 3].

These images support both the traditional user-data and the Google Compute Engine startup scripts, and the Google Cloud SDK comes pre-installed as well. Users coming from other clouds can expect the same great Ubuntu experience, while enjoying the features of Google Compute Engine.

From an engineering perspective, a lot of us are excited to see this launch. While we don't expect too many rough edges, it is a beta, so feedback is welcome. Please file bugs or join us in #ubuntu-server on Freenode to report any issues (ping me, utlemming, rcj or Odd_Bloke).

Finally, I wanted to thank those who have helped on this project. Launching a cloud is not an easy engineering task. You have to build infrastructure to support the new cloud, create tooling to build and publish, write QA stacks, and do packaging work. All of this spans multiple teams and disciplines. The support from Google and Canonical's Foundations and Kernel teams has been instrumental in this launch, as have the engineers on the Certified Public Cloud team.

Getting the Google Cloud SDK:

As part of the launch, Canonical and Google have been working together on packaging a version of the Google Cloud SDK. At this time, we are unable to bring it into the main archives. However, you can find it in our partner archive.

To install it run the following:

  • echo "deb http://archive.canonical.com/ubuntu $(lsb_release -c -s) partner" | sudo tee /etc/apt/sources.list.d/partner.list
  • sudo apt-get update
  • sudo apt-get -y install google-cloud-sdk


Then follow the instructions for using the Cloud SDK at [4].
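
As a quick illustration of the SDK in action, here is a minimal sketch; the project ID is a placeholder, and the image alias and zone are illustrative (confirm current names with gcloud compute images list):

$ gcloud auth login
$ gcloud config set project my-project-id
$ gcloud compute instances create ubuntu-beta-test --zone us-central1-a --image ubuntu-14-04
$ gcloud compute ssh ubuntu-beta-test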


[1] https://cloud.google.com/compute/docs/operating-systems#ubuntu
[2] http://googlecloudplatform.blogspot.co.uk/2014/11/curated-ubuntu-images-now-available-on.html
[3] http://insights.ubuntu.com/2014/11/03/certified-ubuntu-images-available-on-google-cloud-platform/
[4] https://cloud.google.com/sdk/gcloud/

Read more
Jane Silber

Today marks 10 years of Ubuntu and the release of the 21st version. That is an incredible milestone and one which is worthy of reflection and celebration. I am fortunate enough to be spending the day at our devices sprint with 200+ of the folks that have helped make this possible. There are of course hundreds of others in Canonical and thousands in the community who have helped as well. The atmosphere here includes a lot of reminiscing about the early days and re-telling of the funny stories, and there is a palpable excitement in the air about the future. That same excitement was present at a Canonical Cloud Summit in Brussels last week.

The team here is closing in on shipping our first phone, marking a new era in Ubuntu’s history. There has been excellent work recently to close bugs and improve quality, and our partner BQ is as pleased with the results as we are. We are on the home stretch to this milestone, and are still on track to have Ubuntu phones in the market this year. Further, there is an impressive array of further announcements and phones lined up for 2015.

But of course that’s not all we do – the Ubuntu team and community continue to put out rock solid, high quality Ubuntu desktop releases like clockwork – the 21st of which will be released today. And with the same precision, our PC OEM team continues to make that great work available on a pre-installed basis on millions of PCs across hundreds of machine configurations. That’s an unparalleled achievement, and we really have changed the landscape of Linux and open source over the last decade. The impact of Ubuntu can be seen in countless ways – from the individuals, schools, and enterprises who now use Ubuntu; to the proliferation of Codes of Conduct in open source communities; to the acceptance of faster (and near continuous) release cycles for operating systems; to the unique company/community collaboration that makes Ubuntu possible; to the vast number of developers who have now grown up with Ubuntu and in an open source world; to the many, many, many technical innovations to come out of Ubuntu, from single-CD installation in years past to the more recent work on image-based updates.

Ubuntu Server also sprang from our early desktop roots, and has now grown into the leading solution for scale out computing. Ubuntu and our suite of cloud products and services are the premier choice for any customer or partner looking to operate at scale, and it is indeed a “scale-out” world. From easy to consume Ubuntu images on public clouds; to managed cloud infrastructure via BootStack; to standard on-premise, self-managed clouds via Ubuntu OpenStack; to instant solutions delivered on any substrate via Juju, we are the leaders in a highly competitive, dynamic space. The agility, reliability and superior execution that have brought us to today’s milestone remain a critical competency for our cloud team. And as we release Ubuntu 14.10 today, which includes the latest OpenStack, new versions of our tooling such as MaaS and Juju, and initial versions of scale-out solutions for big data and Cloud Foundry, we build on a ten year history of “firsts”.

All Ubuntu releases seem to have their own personality, and Utopic is a fitting way to commemorate the realisation of a decade of vision, hard work and collaboration. We are poised on the edge of a very different decade in Canonical’s history, one in which we’ll carry forward the applicable successes and patterns, but will also forge a new path in the twin worlds of converged devices and scale-out computing. Thanks to everyone who has contributed to the journey thus far. Now, on to Vivid and the next ten years!

Read more
Robbie Williamson

The following is an update on Ubuntu’s response to the latest Internet emergency security issue, POODLE (CVE-2014-3566), in combination with an SSLv3 downgrade vulnerability.

Vulnerability Summary

“SSL 3.0 is an obsolete and insecure protocol. While for most practical purposes it has been replaced by its successors TLS 1.0, TLS 1.1, and TLS 1.2, many TLS implementations remain backwards compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. The protocol handshake provides for authenticated version negotiation, so normally the latest protocol version common to the client and the server will be used.” - https://www.openssl.org/~bodo/ssl-poodle.pdf

A vulnerability was discovered that affects the protocol negotiation between browsers and HTTP servers, where a man-in-the-middle (MITM) attacker is able to trigger a protocol downgrade (ie, force a downgrade to SSLv3; CVE to be assigned).  Additionally, a new attack was discovered against the CBC block cipher used in SSLv3 (POODLE, CVE-2014-3566).  Because of this new weakness in the CBC block cipher and the known weaknesses in the RC4 stream cipher (both used with SSLv3), attackers who successfully downgrade the victim’s connection to SSLv3 can now exploit the weaknesses of these ciphers to ascertain the plaintext of portions of the connection through brute force attacks.  For example, an attacker who is able to manipulate the encrypted connection is able to steal HTTP cookies.  Note that the protocol downgrade vulnerability exists in web browsers and is not implemented in the SSL libraries; therefore, the downgrade attack is currently known to exist only for HTTP.

OpenSSL will be updated to guard against illegal protocol negotiation downgrades (TLS_FALLBACK_SCSV).  When the server and client are updated to use TLS_FALLBACK_SCSV, the protocol cannot be downgraded to below the highest protocol that is supported between the two (so if the client and the server both support TLS 1.2, SSLv3 cannot be used even if the server offers SSLv3).

The recommended course of action is ultimately for sites to disable SSLv3 on their servers, and for browsers to disable SSLv3 by default since the SSLv3 protocol is known to be broken.  However, it will take time for sites to disable SSLv3, and some sites will choose not to, in order to support legacy browsers (eg, IE6).  As a result, immediately disabling SSLv3 in Ubuntu in the openssl libraries, in servers or in browsers, will break sites that still rely on SSLv3.

Ubuntu’s Response:

Unfortunately, this issue cannot be addressed in a single USN because this is a vulnerability in a protocol, and the Internet must respond accordingly (ie SSLv3 must be disabled everywhere).  Ubuntu’s response provides a path forward to transition users towards safe defaults:

  • Add TLS_FALLBACK_SCSV to openssl in a USN:  In progress, upstream openssl is bundling this patch with other fixes that we will incorporate
  • Follow Google’s lead regarding chromium and chromium content api (as used in oxide):
    • Add TLS_FALLBACK_SCSV support to chromium and oxide:  Done – Added by Google months ago.
    • Disable fallback to SSLv3 in next major version:  In Progress
    • Disable SSLv3 in future version:  In Progress
  • Follow Mozilla’s lead regarding Mozilla products:
    • Disable SSLv3 by default in Firefox 34:  In Progress – due Nov 25
    • Add TLS_FALLBACK_SCSV support in Firefox 35:  In Progress

Ubuntu currently will not:

  • Disable SSLv3 in the OpenSSL libraries at this time, so as not to break compatibility where it is needed
  • Disable SSLv3 in Apache, nginx, etc., so as not to break compatibility where it is needed (site operators who wish to disable it themselves can use the directives sketched after this list)
  • Preempt Google’s and Mozilla’s plans.  The timing of their response is critical to giving sites an opportunity to migrate away from SSLv3 to minimize regressions
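
For those who do choose to disable SSLv3 on their own servers, the directives are short. A sketch, using the stock Ubuntu file locations; verify against your installed versions:

  • Apache (e.g. in /etc/apache2/mods-available/ssl.conf):
    SSLProtocol all -SSLv3
  • nginx (in the http or server block of /etc/nginx/nginx.conf):
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;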

For more information on Ubuntu security notices that affect the current supported releases of Ubuntu, or to report a security vulnerability in an Ubuntu package, please visit http://www.ubuntu.com/usn/.


Read more
pitti

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static, manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored the entire knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: Essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific “daily” image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter that it gets is the name of the pristine image to base on. This was tested on Canonical’s Bootstack cloud, so it might need some adjustments on other clouds.

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the “most demanding” tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an “always need this constant to select a non-default network” option that is specific to Canonical Bootstack.

Finally, let’s run the packages from above using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, both due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Read more
Prakash Advani

An independent survey of 200 UK-based CIOs has revealed that they are only using about half of the cloud capacity they’ve bought and paid for, and that 90 percent of them see over-provisioning as a necessary evil.

Cloud provider ElasticHosts, which commissioned the survey, says: “Essentially, bad habits like over-provisioning and sacrificing peak performance are being carried from the on-premise world into the cloud, partly because people are willing to accept these limitations.”

Read More: http://www.zdnet.com/cloud-customers-are-still-paying-for-twice-as-much-as-they-need-7000033369/

Read more
Dustin Kirkland



In case you missed the recent Cloud Austin MeetUp, you have another chance to see the Ubuntu Orange Box live and in action here in Austin!

This time, we're at the OpenStack Austin MeetUp, next Wednesday, September 10, 2014, at 6:30pm at Tech Ranch Austin, 9111 Jollyville Rd #100, Austin, TX!

If you join us, you'll witness all of OpenStack Icehouse, deployed in minutes to real hardware. Not an all-in-one DevStack; not a minimum viable set of components.  Real, rich, production-quality OpenStack!  Ceilometer, Ceph, Cinder, Glance, Heat, Horizon, Keystone, MongoDB, MySQL, Nova, NTP, Quantum, and RabbitMQ -- intelligently orchestrated and rapidly scaled across 10 physical servers sitting right up front on the podium.  Of course, we'll go under the hood and look at how all of this comes together on the fabulous Ubuntu Orange Box.

And like any good open source software developer, I generally like to make things myself, and share them with others.  In that spirit, I'll also bring a couple of growlers of my own home brewed beer, Ubrewtu ;-)  Free as in beer, of course!
Cheers,
Dustin

Read more
Ben Howard

For years, the Ubuntu Cloud Images have been built on a timer (i.e. a cronjob or Jenkins). You can reasonably expect stable and LTS releases to be built twice a week, while our development build is built once a day. Each of these builds is given a serial in the form of YYYYMMDD.

While time-based building has proven to be reliable, different build serials may be functionally the same, just put together at different points in time. Many of the builds that we do for stable and LTS releases are therefore pointless.

When the whole Heartbleed fiasco hit, it put the Cloud Image team into overdrive, since it required manually triggering builds of the LTS releases. When we manually trigger builds, it takes roughly 12-16 hours to build, QA, test and release new Cloud Images. Sure, most of this is automated, but the process had to be started by a human. This got me thinking: there has to be a better way.

What if we build the Cloud Images when the package set changes?

With that, I changed the Ubuntu 14.10 (Utopic Unicorn) build process from time-based to archive trigger-based. Now, instead of building every day at 00:30 UTC, the build starts when the archive has been updated and the packages in the prior cloud image build are older than the archive versions. In the last three days, there were eight builds for Utopic. For a development version of Ubuntu, this just means that developers don't have to wait 24 hours for the latest package changes to land in a Cloud Image.

Over the next few weeks, I will be moving the 10.04 LTS, 12.04 LTS and 14.04 LTS build processes from time-based to archive trigger-based. While this might result in less frequent daily builds, the main advantage is that the daily builds will contain the latest package sets. And if you are trying to respond to the latest CVE, or waiting on a bug fix to land, it likely means that you'll have a fresh daily that you can use the following day.
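
As an aside, you can check which serial a given running instance was built from; assuming the image ships /etc/cloud/build.info (as Ubuntu cloud images of this era do), it reports the build name and the YYYYMMDD serial:

$ cat /etc/cloud/build.info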

Read more
Dustin Kirkland



I hope you'll join me at Rackspace on Tuesday, August 19, 2014, at the Cloud Austin Meetup, at 6pm, where I'll use our spectacular Orange Box to deploy Hadoop, scale it up, run a terasort, destroy it, deploy OpenStack, launch instances, and destroy it too.  I'll talk about the hardware (the Orange Box, Intel NUCs, Managed VLAN switch), as well as the software (Ubuntu, OpenStack, MAAS, Juju, Hadoop) that makes all of this work in 30 minutes or less!

Be sure to RSVP, as space is limited.

http://www.meetup.com/CloudAustin/events/194009002/

Cheers,
Dustin

Read more
Dustin Kirkland

Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!


Q: Why should I care about randomness? 

A: Because entropy is important!

  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above

Q: Where does entropy come from?

A: Hardware, typically.

  • Keyboards
  • Mice
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on

Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?

A: Pseudo random number generators are our only viable alternative.

  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always

Q: Are Linux PRNGs secure enough?

A: Yes, if they are properly seeded.

  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...

Q: And what exactly is a random seed?

A: Basically, it’s a small catalyst that primes the PRNG pump.

  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...

XKCD on random numbers

RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?

A: Yep, but computers are predictable, especially VMs.

  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • ie, The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool

Dilbert on random numbers


http://j.mp/1dHAK4V


Q: Surely you're just being paranoid about this, right?

A: I’m afraid not...

Analysis of the LRNG (2006)

  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT

Black Hat (2009)

  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu

Factorable.net (2012)

  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx

Dual_EC_DRBG Backdoor (2013)

  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB

Q: Ruh roh...so what can we do about it?

A: For starters, do a better job seeding our PRNGs.

  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating:
    • SSH host keys
    • SSL certificates
    • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded

Q: But how do we ensure that in cloud guests?

A: Run Ubuntu!


Sorry, shameless plug...

Q: And what is Ubuntu's solution?

A: Meet pollinate.

  • pollinate is a new security feature that seeds the PRNG.
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl (see the sketch after this list)
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate
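
Stripped of the challenge/response that the real client performs, the core idea reduces to something like this -- a conceptual sketch, not the exact pollinate invocation (recall that /dev/urandom is world-writable, and writes are mixed into the pool):

$ curl -s https://entropy.ubuntu.com/ > /dev/urandom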

Q: What about the back end?

A: Introducing pollen.

  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen

Q: Golang, did you say?  That sounds cool!

A: Indeed. Around 50 lines of code, cool!

pollen.go

Q: Is there a public entropy service available?

A: Hello, entropy.ubuntu.com.

  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys

Q: But what if I don't necessarily trust Canonical?

A: Then use a different entropy service :-)

  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL (see the sketch after this list)
    • In /etc/default/pollinate
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
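
For illustration, such a $POOL override might look like this; entropy.example.com is a hypothetical internal server:

# /etc/default/pollinate
POOL="https://entropy.ubuntu.com/ https://entropy.example.com/"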

Q: So does this increase the overall entropy on a system?

A: No, no, no, no, no!

  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail (see the check after this list)
  • Note that neither pollen nor pollinate directly affect this quantity estimate!!!
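
You can watch that conservative estimate yourself; the value fluctuates as entropy is consumed and credited:

$ watch -n1 cat /proc/sys/kernel/random/entropy_avail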

Q: Why the challenge/response in the protocol?

A: Think of it like the Heisenberg Uncertainty Principle.

  • The pollinate challenge (via an HTTP POST submission) affects the pollen server’s PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state

Q: What if pollinate gets crappy or compromised or no random seeds?

A: Functionally, it’s no better or worse than it was without pollinate in the mix.

  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise

Q: What about SSL compromises, or CA Man-in-the-Middle attacks?

A: We are mitigating that by bundling the public certificates in the client.


  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers

Q: What information gets logged by the pollen server?

A: The usual web server debug info.

  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]

Q: Have the code or design been audited?

A: Yes, but more feedback is welcome!

  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated

Q: Where can I find more information?

A: Read Up!


Stay safe out there!
:-Dustin

Read more
David Murphy (schwuk)

Ars Technica has a great write-up by Lee Hutchinson on our Orange Box demo and training unit.

You can’t help but have your attention grabbed by it!

As the comments are quick to point out – at the expense of the rest of the piece – the hardware isn’t the compelling story here. While you can buy your own, you can almost certainly hand-build an equivalent-or-better setup for less money [1], but Ars recognises this:

Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it’s making them to use as revenue driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud.

The Orange Box is about showing off MAAS & Juju, and – usually – OpenStack.

To see what Ars think of those, you should read the article.

I definitely echo Lee’s closing statement:

I wish my closet had an Orange Box in it. That thing is hella cool.


  [1] Or make one out of wood like my colleague Gavin did!

Read more
Prakash Advani

Hewlett-Packard has pledged to invest $1 billion in open cloud products and services over the next two years, along with community-driven, open-source cloud technologies.

“Just as the community spread the adoption of Linux in the enterprise, we believe OpenStack will do the same for the cloud,” said Hewlett-Packard CEO and President Meg Whitman, in a webcast announcing Helion Tuesday.

Read More

Read more
mark

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.


Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, and what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago, and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and it is now very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, so most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
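
As a minimal sketch of what such a script can look like as cloud-config user-data (the package choice and file contents here are illustrative), passed at launch exactly as in the AWS transcript earlier on this page:

#cloud-config
package_update: true
package_upgrade: true
packages:
  - nginx
runcmd:
  - echo "built from audited, versioned user-data" > /srv/provenance.txt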

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.

Read more
Mark Baker

Ubuntu 14.04 LTS

Today is a big day for Ubuntu and a big day for cloud computing: Ubuntu 14.04 LTS is released. Everyone involved with Ubuntu can’t help but be impressed and stirred by the significance of Ubuntu 14.04 LTS.

We are impressed because Ubuntu is gaining extensive traction beyond tech luminaries such as Netflix, Snapchat and the wider DevOps community; it is being adopted by mainstream enterprises such as BestBuy. Ubuntu is dominant in public cloud, with typically 60% market share of Linux workloads on the major cloud providers such as Amazon, Azure and Joyent. Ubuntu Server is also the fastest growing platform for scale-out web computing, having overtaken CentOS some six months ago. So Ubuntu Server is growing up, and we are proud of what it has become. We are stirred by how the adoption of Ubuntu, coupled with the adoption of cloud and scale-out computing, is set to grow enormously as it fast becomes an ‘enterprise’ technology.

Recently, 70% of CIOs stated that they are going to change their technology and sourcing relationships within the next two or three years. This is in large part due to their planned transition to cloud, be it on premise using technologies such as Ubuntu OpenStack, in a public cloud or, most commonly, using combinations of both. Since the beginning of Ubuntu Server we have been preparing for this time, the time when a wholesale technology infrastructure change occurs, and Ubuntu 14.04 arrives just as the change is starting to accelerate beyond the early adopters and technology companies. Enterprises now moving parts of their infrastructure to cloud can choose the technology best suited for the job: Ubuntu 14.04 LTS.

Ubuntu Server 14.04 LTS at a glance

  • Based on version 3.13 of the Linux kernel

  • Includes the Icehouse release of OpenStack

  • Both Ubuntu Server 14.04 LTS and OpenStack are supported until April 2019

  • Includes MAAS for automated hardware provisioning

  • Includes Juju for fast service deployment of 100+ common scale out applications such as MongoDB, Hadoop, node.js, Cloudfoundry, LAMP stack and Elastic Search (see the example after this list)

  • Ceph Firefly support

  • Open vSwitch 2.0.x

  • Docker included, and Docker’s own repository now populated with official Ubuntu 14.04 images

  • Optimised Ubuntu 14.04 images certified for use on all leading public cloud platforms – Amazon AWS, Microsoft Azure, Joyent Cloud, HP Cloud, Rackspace Cloud, CloudSigma and many others.

  • Runs on key hardware architectures: x86, x64, Avoton, ARM64, POWER Systems

  • 50+ systems certified at launch from leading hardware vendors such as HP, Dell, IBM, Cisco and SeaMicro.

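As a taste of the Juju workflow referenced in the list above -- a sketch, assuming a configured Juju environment, with mongodb as the example charm:

$ juju bootstrap
$ juju deploy mongodb
$ juju add-unit -n 2 mongodb
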
The advent of OpenStack, the switch to scale-out computing and the move towards public cloud providers present a perfect storm out of which Ubuntu is set to emerge as the technology used ubiquitously for the next decade. That is why we are impressed and stirred by Ubuntu 14.04. We hope you are too. Download 14.04 LTS here

Read more
John Zannos

Canonical and Cisco share a common vision around the direction of the cloud and the application-driven datacentre.  We believe both need to quickly respond to an application’s needs and be highly elastic.

Cisco’s announcement of an open approach with OpFlex is a great step towards an application-centric cloud and datacenter. Cisco’s Application Centric Infrastructure policy engine (APIC) makes the policy model APIs and documentation open to the marketplace. These policies will be freely usable by an emerging ecosystem that is adopting an open policy model. Canonical and Cisco are aligned in efforts to leverage open models to accelerate innovation in the cloud and datacenter.

Cisco’s ACI operational model will drive multi-vendor innovation, bringing greater agility, simplicity and scale.  Opening the ACI policy engine (APIC) to multi-vendor infrastructure is a positive step toward open source cloud and datacenter operations.  This aligns with Canonical’s open strategy for the cloud and datacenter.  Canonical is a firm believer in a strong and open ecosystem.  We take great pride that you can build an OpenStack cloud on Ubuntu from all the major participants in the OpenStack ecosystem (Cisco, Dell, HP, Mirantis and more).  The latest OpenStack Foundation survey of production OpenStack deployments found 55% of them on Ubuntu – more than twice as many deployments as the next operating system. We believe a healthy and open ecosystem is the best way to ensure great choice for our collective customers.

Canonical is pleased to be a member of Cisco’s OpFlex ecosystem.  Canonical and Cisco intend to collaborate in the standards process. As the standard is finalised, Cisco and Canonical will integrate their respective technologies to improve the customer experience. This includes alignment of Canonical’s Juju and KVM with Cisco’s ACI model.

Cisco and Canonical believe there are opportunities to leverage Ubuntu, Ubuntu OpenStack and Juju, Canonical’s service orchestration tool, with Cisco’s ACI policy-based model. We see many companies that use Cisco network and compute technology moving to Ubuntu and Ubuntu OpenStack. The collaboration between Canonical and Cisco towards an application-centric cloud and datacentre is an opportunity for our mutual customers.

Mark Baker

It is pretty well known that most of the OpenStack clouds running in production today are based on Ubuntu. Companies like Comcast, NTT, Deutsche Telekom, Bloomberg and HP all trust Ubuntu Server as the right platform to run OpenStack. A fair proportion of the Ubuntu OpenStack users out there also engage Canonical to provide them with technical support, not only for Ubuntu Server but for OpenStack itself. Canonical provides full enterprise-class support for both Ubuntu and OpenStack, and has been supporting some of the largest, most demanding customers and their OpenStack clouds since early 2011. This gives us a unique insight into what it takes to support a production OpenStack environment.

For example, between 1st January 2014 and the end of March, Canonical processed hundreds of OpenStack support tickets, averaging over 100 per month. During that time we closed 92 bugs whilst customers opened 99 new ones. These are bugs found by real customers running real clouds, and we are pleased that they are brought to our attention, especially the hard ones, as it helps make OpenStack better for everyone.

The types of support tickets we see are interesting, as core OpenStack itself represents only about 12% of the support traffic. The majority of problems arise in the interaction between OpenStack, the operating system and other infrastructure components: Fibre Channel drivers used by nova-volume, or QEMU/libvirt issues during upgrades, for example. Fixing these problems requires deep expertise in Ubuntu as well as OpenStack, which is why customers choose Canonical to support them.

In my next post I’ll dig a little deeper into supporting OpenStack and how this contributes to the OpenStack ecosystem.

Sally Radwan

A few years ago, the cloud team at Canonical decided that the future of cloud computing lies not only in what clouds are built on, but in what runs on them, and how quickly, securely and efficiently those services can be managed. This is when Juju was born. Our service orchestration tool was built for the cloud and inspired by the way IT architects visualise their infrastructure: boxes representing services, connected by lines representing interfaces or relationships. Juju’s GUI makes it simple to search for a ‘Charm’ and drag and drop it onto a canvas to deploy services instantly.

Today, we are announcing two new features for DevOps teams seeking ever faster and easier ways of deploying scalable infrastructure. The first is Juju Charm bundles, which allow you to deploy an entire cloud environment with one click. The second is Quickstart, which spins up an entire Juju environment and deploys the services necessary to run Juju, all with one command. Juju bundles and Quickstart are powerful tools on their own, but they offer enormous value when used together: Quickstart can be combined with bundles to rapidly launch Juju, start up the environment and deploy an entire application infrastructure, all in one action.
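
As a minimal sketch of that combined workflow (assuming Juju 1.x; the bundle reference below is illustrative, and juju-quickstart is packaged separately from the juju client):

sudo apt-get install juju-quickstart       # Quickstart ships as its own package
juju quickstart bundle:mediawiki/scalable  # bootstrap, launch the GUI and deploy the bundle in one action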

Already there are several bundles available that cover key technology areas: security, big data, SaaS, back office workloads, web servers, content management and the integration of legacy systems. New Charm bundles available today include:

Bundles for complex services:

  • Instant Hadoop: The Hadoop cluster bundle is a 7-node starter cluster, designed to deploy Hadoop in a way that’s easily scalable (see the scaling sketch after this list). The deployment has been tested with up to 2,000 nodes on AWS.

  • Instant Mongo: A 13-node starter MongoDB cluster (across three shards), with the capability to scale each of the three shards horizontally.

  • Instant Wiki: Two MediaWiki deployments: a simple example deployment with just MediaWiki and MySQL, and a load-balanced deployment with HAProxy and memcached, designed to be horizontally scalable.

  • A new bundle from import.io allows their SaaS platform to be integrated instantly with Juju. Navigate to any website using the import.io browser, template the data and then test your crawl. Finally, use the import.io charm to crawl your data directly into Elasticsearch.

  • Instant Security: Syncope + PostgreSQL, developed by Tirasa, is a bundle providing Apache Syncope with its internal storage up and running on PostgreSQL. Apache Syncope is an open source system for managing digital identities in enterprise environments.

  • Instant Enterprise Solutions: Credativ, experts in open source consultancy, show with their OpenERP bundle how any enterprise can instantly deploy an enterprise resource planning solution.

  • Instant High Performance Computing: HPCC (High Performance Computing Cluster) is a massively parallel processing computing platform that solves Big Data problems. The platform is open source and can now be instantly deployed via Juju.
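
As a hedged sketch of the horizontal scaling described for the Hadoop bundle above (the service name hadoop-slavecluster is an assumption; the actual name depends on how the bundle defines its services):

juju add-unit -n 10 hadoop-slavecluster    # grow the cluster by ten worker nodes
juju status hadoop-slavecluster            # watch the new units provision and join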

Francesco Chicchiriccò, CEO of Tirasa and VP of Apache Syncope, comments: “The immediate availability of an Apache Syncope Juju bundle dramatically shortens the product evaluation process and encourages adoption. With this additional facility to get started with Open Source Identity Management, we hope to increase the deployments of Apache Syncope in any environment.”

Bundles for developers:

These bundles provide ‘hello world’ skeleton applications, designed as templates for application developers: each simply provides a template, with configuration options, for an application (a minimal bundle definition is sketched after this list):

  • Instant Django: A Django bundle with Gunicorn and PostgreSQL, modelled after the Django ‘Getting Started’ guide, is provided for application developers.

  • Instant Rails: Two Rails bundles: one is a simple Rails/Postgres deployment; the ‘scalable’ bundle adds HAProxy, Memcached, Redis, Nagios (for monitoring) and Logstash/Kibana (for logging), providing an application developer with an entire scalable Rails stack.

  • Instant Wildfly (the community JBoss): The new Wildfly bundle from Technology Blueprint provides an out-of-the-box Wildfly application server running in standalone mode on OpenJDK 7. MySQL as a datasource is also supported via a MySQL relation.
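
To illustrate the template idea behind these developer bundles, here is a minimal sketch of a bundle definition in the deployer-style YAML in use at the time (the deployment name and charm choices are illustrative assumptions, not copied from any bundle above):

my-wiki:                          # name of the deployment (illustrative)
  series: precise
  services:
    mediawiki:
      charm: cs:precise/mediawiki
      num_units: 1                # scale this up for more web front ends
    mysql:
      charm: cs:precise/mysql
      num_units: 1
  relations:
    - [mediawiki:db, mysql:db]    # the same relation you would add by hand

Saved as bundles.yaml, a definition like this can be handed to Quickstart (juju quickstart bundles.yaml) to bootstrap an environment and deploy every service and relation it describes.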

Technology Blueprint, creators of the Wildfly bundle, also uses Juju to build its own cloud environments. The company’s system administrator, Saurabh Jha, comments: “Juju bundles are really beneficial for programmers and system administrators. Juju saves time, effort and cost. We’ve used it to create our environment on the fly. All we need is a quick command and the whole setup gets ready automatically. No more waiting to install and start those heavy applications/servers manually; a bundle takes care of that for us. We can code, deploy and host our application, and when we don’t need it, we can just destroy the environment. It’s that easy.”

You can browse and discover all new bundles on jujucharms.com.

Our entire ecosystem is hard at work too, charming up their applications and creating bundles around them. Upcoming bundles to look forward to include a GNU Cobol bundle, which will enable instant legacy integration; a telecoms bundle to instantly deploy and integrate Project Clearwater, an open source IMS; and many others. No doubt you have ideas for a bundle that would give an instant solution to a common problem; it has never been easier to see your ideas turn into reality.


If you would like to create your own charm or bundle, here is how to get started: http://developer.ubuntu.com/cloud/ or see a video about Charm Bundles: https://www.youtube.com/watch?v=eYpnQI6GZTA.

And if you’ve never used Juju before, here is an excellent series of blog posts that will guide you through spinning up a simple environment on AWS: http://insights.ubuntu.com/resources/article/deploying-web-applications-using-juju-part-33/.

Need help or advice? The Juju community is here to assist: https://juju.ubuntu.com/community.

Finally, for the more technically-minded, here is a slightly more geeky take on things by Canonical’s Rick Harding, including a video walkthrough of Quickstart.
